query_id (stringlengths 32-32) | query (stringlengths 6-5.38k) | positive_passages (listlengths 1-22) | negative_passages (listlengths 9-100) | subset (stringclasses, 7 values) |
---|---|---|---|---|
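Each row below pairs a query (identified by query_id) with a list of relevant passages (positive_passages) and non-relevant passages (negative_passages), where every passage is an object with docid, text, and title fields, plus a subset label. As a minimal sketch of how rows in this shape might be consumed once parsed, the snippet below builds (query, positive, negative) triples of the kind commonly used to train or evaluate a retriever; the example row is abbreviated from the first record shown here, and the to_training_triples helper is a hypothetical name introduced for illustration, not part of any particular library.

```python
from typing import Dict, Iterator, List, Tuple

# Abbreviated example row in the shape described by the schema above;
# ids and query are taken from the first record of this preview, and the
# passage texts are shortened to "..." for brevity.
example_rows: List[Dict] = [
    {
        "query_id": "5a75eac382923f64bed776dcee9b45d4",
        "query": "Deep Learning: A Critical Appraisal",
        "positive_passages": [
            {"docid": "1f700c0c55b050db7c760f0c10eab947", "text": "...", "title": ""},
        ],
        "negative_passages": [
            {"docid": "13867cdfb8ae697a1fa22d09e6966f0c", "text": "...", "title": ""},
        ],
        "subset": "scidocsrr",
    },
]


def to_training_triples(rows: List[Dict]) -> Iterator[Tuple[str, str, str]]:
    """Yield (query, positive_text, negative_text) triples, one per
    positive/negative pairing, as commonly used for contrastive training
    or for evaluating a passage retriever."""
    for row in rows:
        for pos in row["positive_passages"]:
            for neg in row["negative_passages"]:
                yield row["query"], pos["text"], neg["text"]


if __name__ == "__main__":
    for query, pos_text, neg_text in to_training_triples(example_rows):
        print(query, "| pos chars:", len(pos_text), "| neg chars:", len(neg_text))
```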
5a75eac382923f64bed776dcee9b45d4
|
Deep Learning: A Critical Appraisal
|
[
{
"docid": "1f700c0c55b050db7c760f0c10eab947",
"text": "Cathy O’Neil’s Weapons of Math Destruction is a timely reminder of the power and perils of predictive algorithms and model-driven decision processes. The book deals in some depth with eight case studies of the abuses she associates with WMDs: “weapons of math destruction.” The cases include the havoc wrought by value-added models used to evaluate teacher performance and by the college ranking system introduced by U.S. News and World Report; the collateral damage of online advertising and models devised to track and monetize “eyeballs”; the abuses associated with the recidivism models used in judicial decisions; the inequities perpetrated by the use of personality tests in hiring decisions; the burdens placed on low-wage workers by algorithm-driven attempts to maximize labor efficiency; the injustices written into models that evaluate creditworthiness; the inequities produced by insurance companies’ risk models; and the potential assault on the democratic process by the use of big data in political campaigns. As this summary suggests, O’Neil had plenty of examples to choose from when she wrote the book, but since the publication of Weapons of Math Destruction, two more problems associated with model-driven decision procedures have surfaced, making O’Neil’s work even more essential reading. The first—the role played by fake news, much of it circulated on Facebook, in the 2016 election—has led to congressional investigations. The second—the failure of algorithm-governed oversight to recognize and delete gruesome posts on the Facebook Live streaming service—has caused CEO Mark Zuckerberg to announce the addition of 3,000 human screeners to the Facebook staff. While O’Neil’s book may seem too polemical to some readers and too cautious to others, it speaks forcefully to the cultural moment we share. O’Neil weaves the story of her own credentials and work experience into her analysis, because, as she explains, her training as a mathematician and her experience in finance shaped the way she now understands the world. O’Neil earned a PhD in mathematics from Harvard; taught at Barnard College, where her research area was algebraic number theory; and worked for the hedge fund D. E. Shaw, which uses mathematical analysis to guide investment decisions. When the financial crisis of 2008 revealed that even the most sophisticated models were incapable of anticipating risks associated with “black swans”—events whose rarity make them nearly impossible to predict—O’Neil left the world of corporate finance to join the RiskMetrics Group, where she helped market risk models to financial institutions eager to rehabilitate their image. Ultimately, she became disillusioned with the financial industry’s refusal to take seriously the limitations of risk management models and left RiskMetrics. She rebranded herself a “data scientist” and took a job at Intent Media, where she helped design algorithms that would make big data useful for all kinds of applications. All the while, as O’Neil describes it, she “worried about the separation between technical models and real people, and about the moral repercussions of that separation” (page 48). O’Neil eventually left Intent Media to devote her energies to inWeapons of Math Destruction",
"title": ""
},
{
"docid": "a39834162b2072c69b03745cfdbe2f1a",
"text": "AI has seen great advances of many kinds recently, but there is one critical area where progress has been extremely slow: ordinary commonsense.",
"title": ""
}
] |
[
{
"docid": "13867cdfb8ae697a1fa22d09e6966f0c",
"text": "In this paper we deal with the ground optimization problem, that is the problem of routing and scheduling airplanes surface maneuvering operations. We consider the specific case study of Malpensa Terminal Maneuvering Area (Italy). Our objective function is the minimization of total tardiness. At first a routing problem is solved to assign a path to each aircraft in the terminal, then the scheduling problem of minimizing the average tardiness is addressed. We model the scheduling problem as a job-shop scheduling problem. We develop heuristic procedures based on the alternative graph formulation of the problem to construct and improve feasible solutions. Experimental results based on real data and analysis are reported.",
"title": ""
},
{
"docid": "39c1b53047e4314073312741a39c7e5c",
"text": "We propose a novel superpixel-based multi-view convolutional neural network for semantic image segmentation. The proposed network produces a high quality segmentation of a single image by leveraging information from additional views of the same scene. Particularly in indoor videos such as captured by robotic platforms or handheld and bodyworn RGBD cameras, nearby video frames provide diverse viewpoints and additional context of objects and scenes. To leverage such information, we first compute region correspondences by optical flow and image boundary-based superpixels. Given these region correspondences, we propose a novel spatio-temporal pooling layer to aggregate information over space and time. We evaluate our approach on the NYU-Depth-V2 and the SUN3D datasets and compare it to various state-of-the-art single-view and multi-view approaches. Besides a general improvement over the state-of-the-art, we also show the benefits of making use of unlabeled frames during training for multi-view as well as single-view prediction.",
"title": ""
},
{
"docid": "af9c94a8d4dcf1122f70f5d0b90a247f",
"text": "New cloud services are being developed to support a wide variety of real-life applications. In this paper, we introduce a new cloud service: industrial automation, which includes different functionalities from feedback control and telemetry to plant optimization and enterprise management. We focus our study on the feedback control layer as the most time-critical and demanding functionality. Today's large-scale industrial automation projects are expensive and time-consuming. Hence, we propose a new cloud-based automation architecture, and we analyze cost and time savings under the proposed architecture. We show that significant cost and time savings can be achieved, mainly due to the virtualization of controllers and the reduction of hardware cost and associated labor. However, the major difficulties in providing cloud-based industrial automation systems are timeliness and reliability. Offering automation functionalities from the cloud over the Internet puts the controlled processes at risk due to varying communication delays and potential failure of virtual machines and/or links. Thus, we design an adaptive delay compensator and a distributed fault tolerance algorithm to mitigate delays and failures, respectively. We theoretically analyze the performance of the proposed architecture when compared to the traditional systems and prove zero or negligible change in performance. To experimentally evaluate our approach, we implement our controllers on commercial clouds and use them to control: (i) a physical model of a solar power plant, where we show that the fault-tolerance algorithm effectively makes the system unaware of faults, and (ii) industry-standard emulation with large injected delays and disturbances, where we show that the proposed cloud-based controllers perform indistinguishably from the best-known counterparts: local controllers.",
"title": ""
},
{
"docid": "a8ca07bf7784d7ac1d09f84ac76be339",
"text": "AbstructEstimation of 3-D information from 2-D image coordinates is a fundamental problem both in machine vision and computer vision. Circular features are the most common quadratic-curved features that have been addressed for 3-D location estimation. In this paper, a closed-form analytical solution to the problem of 3-D location estimation of circular features is presented. Two different cases are considered: 1) 3-D orientation and 3-D position estimation of a circular feature when its radius is known, and 2) 3-D orientation and 3-D position estimation of a circular feature when its radius is not known. As well, extension of the developed method to 3-D quadratic features is addressed. Specifically, a closed-form analytical solution is derived for 3-D position estimation of spherical features. For experimentation purposes, simulated as well as real setups were employed. Simulated experimental results obtained for all three cases mentioned above verified the analytical method developed in this paper. In the case of real experiments, a set of circles located on a calibration plate, whose locations were known with respect to a reference frame, were used for camera calibration as well as for the application of the developed method. Since various distortion factors had to be compensated in order to obtain accurate estimates of the parameters of the imaged circle-an ellipse-with respect to the camera's image frame, a sequential compensation procedure was applied to the input grey-level image. The experimental results obtained once more showed the validity of the total process involved in the 3-D location estimation of circular features in general and the applicability of the analytical method developed in this paper in particular.",
"title": ""
},
{
"docid": "57fd4b59ffb27c35faa6a5ee80001756",
"text": "This paper describes a novel method for motion generation and reactive collision avoidance. The algorithm performs arbitrary desired velocity profiles in absence of external disturbances and reacts if virtual or physical contact is made in a unified fashion with a clear physically interpretable behavior. The method uses physical analogies for defining attractor dynamics in order to generate smooth paths even in presence of virtual and physical objects. The proposed algorithm can, due to its low complexity, run in the inner most control loop of the robot, which is absolutely crucial for safe Human Robot Interaction. The method is thought as the locally reactive real-time motion generator connecting control, collision detection and reaction, and global path planning.",
"title": ""
},
{
"docid": "9bc6a9eb27b9d717c9c390deaeeab502",
"text": "The widespread adoption of autonomous systems such as drones and assistant robots has created a need for real-time high-quality semantic scene segmentation. In this paper, we propose an efficient yet robust technique for on-the-fly dense reconstruction and semantic segmentation of 3D indoor scenes. To guarantee (near) real-time performance, our method is built atop an efficient super-voxel clustering method and a conditional random field with higher-order constraints from structural and object cues, enabling progressive dense semantic segmentation without any precomputation. We extensively evaluate our method on different indoor scenes including kitchens, offices, and bedrooms in the SceneNN and ScanNet datasets and show that our technique consistently produces state-of-the-art segmentation results in both qualitative and quantitative experiments.",
"title": ""
},
{
"docid": "03a2795c53de1a5d5d15d698a2372165",
"text": "Article history: Received 9 January 2016 Received in revised form 15 June 2016 Accepted 25 August 2016 Available online xxxx Communicated by W. Cary Huffman",
"title": ""
},
{
"docid": "d214ef50a5c26fb65d8c06ea7db3d07c",
"text": "We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization capabilities, and the possibility to generate a shape of arbitrary resolution without memory issues. We demonstrate these benefits and compare to strong baselines on the ShapeNet benchmark for two applications: (i) autoencoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing its potential for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.",
"title": ""
},
{
"docid": "201273e307d5c5fe5b5498937bd7e848",
"text": "The technology of Augmented Reality exists for several decades and it is attributed the potential to provide an ideal, efficient and intuitive way of presenting information. However it is not yet widely used. This is because the realization of such Augmented Reality systems requires to solve many principal problems from various areas. Much progress has been made in solving problems of core technologies, which enables us now to intensively explore the development of Augmented Reality applications. As an exemplary industrial use case for this exploration, I selected the order picking process in logistics applications. This thesis reports on the development of an application to support this task, by iteratively improving Augmented Reality-based metaphors. In such order picking tasks, workers collect sets of items from assortments in warehouses according to work orders. This order picking process has been subject to optimization for a long time, as it occurs a million times a day in industrial life. For this Augmented Reality application development, workers have been equipped with mobile hardware, consisting of a wearable computer (in a back-pack) and tracked head-mounted displays (HMDs). This thesis presents the iterative approach of exploring, evaluating and refining the Augmented Reality system, focusing on usability and utility. It starts in a simple laboratory setup and goes up to a realistic industrial setup in a factory hall. The Augmented Reality visualization shown in the HMD was the main subject of optimization in this thesis. Overall, the task was challenging, as workers have to be guided on different levels, from very coarse to very fine granularity and accuracy. The resulting setup consists of a combined and adaptive visualization to precisely and efficiently guide the user, even if the actual target of the augmentation is not always in the field of view of the HMD. A side-effect of this iterative evaluation and refinement of visualizations in an industrial setup is the report on many lessons learned and an advice on the way Augmented Reality user interfaces should be improved and refined.",
"title": ""
},
{
"docid": "3eee111e4521528031019f83786efab7",
"text": "Social media platforms such as Twitter and Facebook enable the creation of virtual customer environments (VCEs) where online communities of interest form around specific firms, brands, or products. While these platforms can be used as another means to deliver familiar e-commerce applications, when firms fail to fully engage their customers, they also fail to fully exploit the capabilities of social media platforms. To gain business value, organizations need to incorporate community building as part of the implementation of social media.",
"title": ""
},
{
"docid": "e1c0fc53db69eb0cc8778fd03498aa64",
"text": "An outlier is an observation that deviates so much from other observations that it seems to have been generated by a different mechanism. Outlier detection has many applications, such as data cleaning, fraud detection and network intrusion. The existence of outliers can indicate individuals or groups that exhibit a behavior that is very different from most of the individuals of the data set. Frequently, outliers are removed to improve accuracy of estimators, but sometimes, the presence of an outlier has a certain meaning, which explanation can be lost if the outlier is deleted. In this paper we study the effect of the presence of outliers on the performance of three well-known classifiers based on the results observed on four real world datasets. We use detection of outliers based on robust statistical estimators of the center and the covariance matrix for the Mahalanobis distance, detection of outliers based on clustering using the partitioning around medoids (PAM) algorithm, and two data mining techniques to detect outliers: Bay’s algorithm for distance-based outliers, and the LOF, a density-based local outlier algorithm.",
"title": ""
},
{
"docid": "6c8983865bf3d6bdbf120e0480345aac",
"text": "In the future Internet of Things (IoT), smart objects will be the fundamental building blocks for the creation of cyber-physical smart pervasive systems in a great variety of application domains ranging from health-care to transportation, from logistics to smart grid and cities. The implementation of a smart objects-oriented IoT is a complex challenge as distributed, autonomous, and heterogeneous IoT components at different levels of abstractions and granularity need to cooperate among themselves, with conventional networked IT infrastructures, and also with human users. In this paper, we propose the integration of two complementary mainstream paradigms for large-scale distributed computing: Agents and Cloud. Agent-based computing can support the development of decentralized, dynamic, cooperating and open IoT systems in terms of multi-agent systems. Cloud computing can enhance the IoT objects with high performance computing capabilities and huge storage resources. In particular, we introduce a cloud-assisted and agent-oriented IoT architecture that will be realized through ACOSO, an agent-oriented middleware for cooperating smart objects, and BodyCloud, a sensor-cloud infrastructure for large-scale sensor-based systems.",
"title": ""
},
{
"docid": "eb7990a677cd3f96a439af6620331400",
"text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"title": ""
},
{
"docid": "f58a66f2caf848341b29094e9d3b0e71",
"text": "Since student performance and pass rates in school reflect teaching level of the school and even all education system, it is critical to improve student pass rates and reduce dropout rates. Decision Tree (DT) algorithm and Support Vector Machine (SVM) algorithm in data mining, have been used by researchers to find important student features and predict the student pass rates, however they did not consider the coefficient of initialization, and whether there is a dependency between student features. Therefore, in this study, we propose a new concept: features dependencies, and use the grid search algorithm to optimize DT and SVM, in order to improve the accuracy of the algorithm. Furthermore, we added 10-fold cross-validation to DT and SVM algorithm. The results show the experiment can achieve better results in this work. The purpose of this study is providing assistance to students who have greater difficulties in their studies, and students who are at risk of graduating through data mining techniques.",
"title": ""
},
{
"docid": "fbf2a211d53603cbcb7441db3006f035",
"text": "This letter presents a new metamaterial-based waveguide technology referred to as ridge gap waveguides. The main advantages of the ridge gap waveguides compared to hollow waveguides are that they are planar and much cheaper to manufacture, in particular at high frequencies such as for millimeter and sub- millimeter waves. The latter is due to the fact that there are no mechanical joints across which electric currents must float. The gap waveguides have lower losses than microstrip lines, and they are completely shielded by metal so no additional packaging is needed, in contrast to the severe packaging problems associated with microstrip circuits. The gap waveguides are realized in a narrow gap between two parallel metal plates by using a texture or multilayer structure on one of the surfaces. The waves follow metal ridges in the textured surface. All wave propagation in other directions is prohibited (in cutoff) by realizing a high surface impedance (ideally a perfect magnetic conductor) in the textured surface at both sides of all ridges. Thereby, cavity resonances do not appear either within the band of operation. The present letter introduces the gap waveguide and presents some initial simulated results.",
"title": ""
},
{
"docid": "d939f9e7b3229b654d5a1d331376eca1",
"text": "Knowledge graph embedding aims to represent entities and relations of a knowledge graph in continuous vector spaces. It has increasingly drawn attention for its ability to encode semantics in low dimensional vectors as well as its outstanding performance on many applications, such as question answering systems and information retrieval tasks. Existing methods often handle each triple independently, without considering context information of a triple in the knowledge graph, such an information can be useful for inference of new knowledge. Moreover, the relations and paths between an entity pair also provide information for inference. In this paper, we define a novel context-dependent knowledge graph representation model named triple-context-based knowledge embedding, which is based on the notion of triple context used for embedding entities and relations. For each triple, the triple context is composed of two kinds of graph structured information: one is a set of neighboring entities along with their outgoing relations, the other is a set of relation paths which contain a pair of target entities. Our embedding method is designed to utilize the triple context of each triple while learning embeddings of entities and relations. The method is evaluated on multiple tasks in the paper. Experimental results reveal that our method achieves significant improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "c74bd8c04af6a63b73a30bf9637b5a2a",
"text": "Complex regional pain syndrome (CRPS) is a debilitating condition affecting the limbs that can be induced by surgery or trauma. This condition can complicate recovery and impair one's functional and psychological well-being. The wide variety of terminology loosely used to describe CRPS in the past has led to misdiagnosis of this condition, resulting in poor evidence-base regarding the treatment modalities available and their impact. The aim of this review is to report on the recent progress in the understanding of the epidemiology, pathophysiology and treatment of CRPS and to discuss novel approaches in treating this condition.",
"title": ""
},
{
"docid": "fee50f8ab87f2b97b83ca4ef92f57410",
"text": "Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. However, with their wide-spread usage there come problems concerning their proliferation. Ontology engineers or users frequently have a core ontology that they use, e.g., for browsing or querying data, but they need to extend it with, adapt it to, or compare it with the large set of other ontologies. For the task of detecting and retrieving relevant ontologies, one needs means for measuring the similarity between ontologies. We present a set of ontology similarity measures and a multiple-phase empirical evaluation.",
"title": ""
},
{
"docid": "9245316ec7a2d1cb98d9385e54e0874d",
"text": "A novel partial order is defined on the space of digraphs or hypergraphs, based on assessing the cost of producing a graph via a sequence of elementary transformations. Leveraging work by Knuth and Skilling on the foundations of inference, and the structure of Heyting algebras on graph space, this partial order is used to construct an intuitionistic probability measure that applies to either digraphs or hypergraphs. As logical inference steps can be represented as transformations on hypergraphs representing logical statements, this also yields an intuitionistic probability measure on spaces of theorems. The central result is also extended to yield intuitionistic probabilities based on more general weighted rule systems defined over bicartesian closed categories.",
"title": ""
}
] |
scidocsrr
|
f798f62e893ec045d2ba4cd9d7333882
|
Approximate Thin Plate Spline Mappings
|
[
{
"docid": "d529b4f1992f438bb3ce4373090f8540",
"text": "One conventional tool for interpolating surfaces over scattered data, the thin-plate spline, has an elegant algebra expressing the dependence of the physical bending energy of a thin metal plate on point constraints. For interpolation of a surface over a fixed set of nodes in the plane, the bending energy is a quadratic form in the heights assigned to the surface. The spline is the superposition of eigenvectors of the bending energy matrix, of successively larger physical scales, over a tilted flat plane having no bending energy at all. When these splines are paired, one representing the x-coordinate of another form and the other the y-coordinate, they aid greatly in the modeling of biological shape change as deformation. In this context, the pair becomes an interpolation map from RZ to R' relating two sets of landmark points. The spline maps decompose, in the same way as the spline surfaces, into a linear part (an affine transformation) together with the superposition of principal warps, which are geometrically independent, affine-free deformations of progressively smaller geometrical scales. The warps decompose an empirical deformation into orthogonal features more or less as a conventional orthogonal functional analysis decomposes the single scene. This paper demonstrates the decomposition of deformations by principal warps, extends the method to deal with curving edges between landmarks, relates this formalism to other applications of splines current in computer vision, and indicates how they might aid in the extraction of features for analysis, comparison, and diagnosis of biological and medical images.",
"title": ""
},
{
"docid": "5b0e088e2bddd0535bc9d2dfbfeb0298",
"text": "We had previously shown that regularization principles lead to approximation schemes that are equivalent to networks with one layer of hidden units, called regularization networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known radial basis functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends radial basis functions (RBF) to hyper basis functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of projection pursuit regression, and several types of neural networks. We propose to use the term generalized regularization networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call generalized regularization networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are (1) radial basis functions that can be generalized to hyper basis functions, (2) some tensor product splines, and (3) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions, and several perceptron-like neural networks with one hidden layer.",
"title": ""
}
] |
[
{
"docid": "bba76e1d3564a1d2c02726b6226e3adb",
"text": "Hand-drawn sketching on napkins or whiteboards is a common, accessible method for generating visual representations. This practice is shared by experts and non-experts and is probably one of the faster and more expressive ways to draft a visual representation of data. In order to better understand the types of and variations in what people produce when sketching data, we conducted a qualitative study. We asked people with varying degrees of visualization expertise, from novices to experts, to manually sketch representations of a small, easily understandable dataset using pencils and paper and to report on what they learned or found interesting about the data. From this study, we extract a data sketching representation continuum from numeracy to abstraction; a data report spectrum from individual data items to speculative data hypothesis; and show the correspondence between the representation types and the data reports from our results set. From these observations we discuss the participants’ representations in relation to their data reports, indicating implications for design and potentially fruitful directions for research.",
"title": ""
},
{
"docid": "a94d8b425aed0ade657aa1091015e529",
"text": "Generative models for source code are an interesting structured prediction problem, requiring to reason about both hard syntactic and semantic constraints as well as about natural, likely programs. We present a novel model for this problem that uses a graph to represent the intermediate state of the generated output. Our model generates code by interleaving grammar-driven expansion steps with graph augmentation and neural message passing steps. An experimental evaluation shows that our new model can generate semantically meaningful expressions, outperforming a range of strong baselines.",
"title": ""
},
{
"docid": "d42f5fdbcaf8933dc97b377a801ef3e0",
"text": "Bodyweight supported treadmill training has become a prominent gait rehabilitation method in leading rehabilitation centers. This type of locomotor training has many functional benefits but the labor costs are considerable. To reduce therapist effort, several groups have developed large robotic devices for assisting treadmill stepping. A complementary approach that has not been adequately explored is to use powered lower limb orthoses for locomotor training. Recent advances in robotic technology have made lightweight powered orthoses feasible and practical. An advantage to using powered orthoses as rehabilitation aids is they allow practice starting, turning, stopping, and avoiding obstacles during overground walking.",
"title": ""
},
{
"docid": "25b380a374ce765dc03725b571c4927a",
"text": "Poly(vinyl chloride) resins are produced by four basic processes: suspension, emulsion, bulk and solution polymerization. PVC suspensions resins are usually relatively dust-free and granular with varying degrees of particle porosity. PVC emulsion resins are small particle powders containing very little free monomer. Bulk PVC resins are similar to suspension PVC resins, though the particles tend to be more porous. Solution PVC resins are smaller in particle size than suspension PVC with high porosity particles containing essentially no free monomer. The variety of PVC resin products does not lend itself to broad generalizations concerning health hazards. In studying occupational hazards the particular PVC process and the product must be considered and identified in the study.",
"title": ""
},
{
"docid": "97d44ea66371f73922d0f16f5e3427e2",
"text": "Expanding a seed set into a larger community is a common procedure in link-based analysis. We show how to adapt recent results from theoretical computer science to expand a seed set into a community with small conductance and a strong relationship to the seed, while examining only a small neighborhood of the entire graph. We extend existing results to give theoretical guarantees that apply to a variety of seed sets from specified communities. We also describe simple and flexible heuristics for applying these methods in practice, and present early experiments showing that these methods compare favorably with existing approaches.",
"title": ""
},
{
"docid": "787377fc8e1f9da5ec2b6ea77bcc0725",
"text": "We show that the counting class LWPP [8] remains unchanged even if one allows a polynomial number of gap values rather than one. On the other hand, we show that it is impossible to improve this from polynomially many gap values to a superpolynomial number of gap values by relativizable proof techniques. The first of these results implies that the Legitimate Deck Problem (from the study of graph reconstruction) is in LWPP (and thus low for PP, i.e., PPLegitimate Deck = PP) if the weakened version of the Reconstruction Conjecture holds in which the number of nonisomorphic preimages is assumed merely to be polynomially bounded. This strengthens the 1992 result of Köbler, Schöning, and Torán [15] that the Legitimate Deck Problem is in LWPP if the Reconstruction Conjecture holds, and provides strengthened evidence that the Legitimate Deck Problem is not NP-hard. We additionally show on the one hand that our main LWPP robustness result also holds for WPP, and also holds even when one allows both the rejectionand acceptancegap-value targets to simultaneously be polynomial-sized lists; yet on the other hand, we show that for the #P-based analog of LWPP the behavior much differs in that, in some relativized worlds, even two target values already yield a richer class than one value does. 2012 ACM Subject Classification Theory of computation → Complexity classes",
"title": ""
},
{
"docid": "062e0c3c3b8fec66aa3c647a7e5cf028",
"text": "We face the complex problem of timely, accurate and mutually satisfactory mediation between job offers and suitable applicant profiles by means of semantic processing techniques. In fact, this problem has become a major challenge for all public and private recruitment agencies around the world as well as for employers and job seekers. It is widely agreed that smart algorithms for automatically matching, learning, and querying job offers and candidate profiles will provide a key technology of high importance and impact and will help to counter the lack of skilled labor and/or appropriate job positions for unemployed people. Additionally, such a framework can support global matching aiming at finding an optimal allocation of job seekers to available jobs, which is relevant for independent employment agencies, e.g. in order to reduce unemployment.",
"title": ""
},
{
"docid": "7850280ba2c29dc328b9594f4def05a6",
"text": "Electric traction motors in automotive applications work in operational conditions characterized by variable load, rotational speed and other external conditions: this complicates the task of diagnosing bearing defects. The objective of the present work is the development of a diagnostic system for detecting the onset of degradation, isolating the degrading bearing, classifying the type of defect. The developed diagnostic system is based on an hierarchical structure of K-Nearest Neighbours classifiers. The selection of the features from the measured vibrational signals to be used in input by the bearing diagnostic system is done by a wrapper approach based on a Multi-Objective (MO) optimization that integrates a Binary Differential Evolution (BDE) algorithm with the K-Nearest Neighbour (KNN) classifiers. The developed approach is applied to an experimental dataset. The satisfactory diagnostic performances obtain show the capability of the method, independently from the bearings operational conditions.",
"title": ""
},
{
"docid": "ffb87dc7922fd1a3d2a132c923eff57d",
"text": "It has been suggested that pulmonary artery pressure at the end of ejection is close to mean pulmonary artery pressure, thus contributing to the optimization of external power from the right ventricle. We tested the hypothesis that dicrotic notch and mean pulmonary artery pressures could be of similar magnitude in 15 men (50 +/- 12 yr) referred to our laboratory for diagnostic right and left heart catheterization. Beat-to-beat relationships between dicrotic notch and mean pulmonary artery pressures were studied 1) at rest over 10 consecutive beats and 2) in 5 patients during the Valsalva maneuver (178 beats studied). At rest, there was no difference between dicrotic notch and mean pulmonary artery pressures (21.8 +/- 12.0 vs. 21.9 +/- 11.1 mmHg). There was a strong linear relationship between dicrotic notch and mean pressures 1) over the 10 consecutive beats studied in each patient (mean r = 0.93), 2) over the 150 resting beats (r = 0.99), and 3) during the Valsalva maneuver in each patient (r = 0.98-0.99) and in the overall beats (r = 0.99). The difference between dicrotic notch and mean pressures was -0.1 +/- 1.7 mmHg at rest and -1.5 +/- 2.3 mmHg during the Valsalva maneuver. Substitution of the mean pulmonary artery pressure by the dicrotic notch pressure in the standard formula of the pulmonary vascular resistance (PVR) resulted in an equation relating linearly end-systolic pressure and stroke volume. The slope of this relation had the dimension of a volume elastance (in mmHg/ml), a simple estimate of volume elastance being obtained as 1.06(PVR/T), where T is duration of the cardiac cycle. In conclusion, dicrotic notch pressure was of similar magnitude as mean pulmonary artery pressure. These results confirmed our primary hypothesis and indicated that human pulmonary artery can be treated as if it is an elastic chamber with a volume elastance of 1.06(PVR/T).",
"title": ""
},
{
"docid": "495fd5ab98109dda187c3e157b6785cf",
"text": "Bike sharing systems consist of a fleet of bikes placed in a network of docking stations. These bikes can then be rented and returned to any of the docking stations after usage. Predicting unrealized bike demand at locations currently without bike stations is important for effectively designing and expanding bike sharing systems. We predict pairwise bike demand for New York City’s Citi Bike system. Since the system is driven by daily commuters we focus only on the morning rush hours between 7:00 AM to 11:00 AM during weekdays. We use taxi usage, weather and spatial variables as covariates to predict bike demand, and further analyze the influence of precipitation and day of week. We show that aggregating stations in neighborhoods can substantially improve predictions. The presented model can assist planners by predicting bike demand at a macroscopic level, between pairs of neigh-",
"title": ""
},
{
"docid": "c123d61a6a94e963d4fbf6075c496599",
"text": "Most metastatic tumors, such as those originating in the prostate, lung, and gastrointestinal tract, respond poorly to conventional chemotherapy. Novel treatment strategies for advanced cancer are therefore desperately needed. Dietary restriction of the essential amino acid methionine offers promise as such a strategy, either alone or in combination with chemotherapy or other treatments. Numerous in vitro and animal studies demonstrate the effectiveness of dietary methionine restriction in inhibiting growth and eventually causing death of cancer cells. In contrast, normal host tissues are relatively resistant to methionine restriction. These preclinical observations led to a phase I clinical trial of dietary methionine restriction for adults with advanced cancer. Preliminary findings from this trial indicate that dietary methionine restriction is safe and feasible for the treatment of patients with advanced cancer. In addition, the trial has yielded some preliminary evidence of antitumor activity. One patient with hormone-independent prostate cancer experienced a 25% reduction in serum prostate-specific antigen (PSA) after 12 weeks on the diet, and a second patient with renal cell cancer experienced an objective radiographic response. The possibility that methionine restriction may act synergistically with other cancer treatments such as chemotherapy is being explored. Findings to date support further investigation of dietary methionine restriction as a novel treatment strategy for advanced cancer.",
"title": ""
},
{
"docid": "a8287a99def9fec3a9a2fda06a95e36e",
"text": "The abstraction of a process enables certain primitive forms of communication during process creation and destruction such as wait(). However, the operating system provides more general mechanisms for flexible inter-process communication. In this paper, we have studied and evaluated three commonly-used inter-process communication devices pipes, sockets and shared memory. We have identified the various factors that could affect their performance such as message size, hardware caches and process scheduling, and constructed experiments to reliably measure the latency and transfer rate of each device. We identified the most reliable timer APIs available for our measurements. Our experiments reveal that shared memory provides the lowest latency and highest throughput, followed by kernel pipes and lastly, TCP/IP sockets. However, the latency trends provide interesting insights into the construction of each mechanism. We also make certain observations on the pros and cons of each mechanism, highlighting its usefulness for different kinds of applications.",
"title": ""
},
{
"docid": "565b07fee5a5812d04818fa132c0da4c",
"text": "PHP is the most popular scripting language for web applications. Because no native solution to compile or protect PHP scripts exists, PHP applications are usually shipped as plain source code which is easily understood or copied by an adversary. In order to prevent such attacks, commercial products such as ionCube, Zend Guard, and Source Guardian promise a source code protection. In this paper, we analyze the inner working and security of these tools and propose a method to recover the source code by leveraging static and dynamic analysis techniques. We introduce a generic approach for decompilation of obfuscated bytecode and show that it is possible to automatically recover the original source code of protected software. As a result, we discovered previously unknown vulnerabilities and backdoors in 1 million lines of recovered source code of 10 protected applications.",
"title": ""
},
{
"docid": "066e0f4902bb4020c6d3fad7c06ee519",
"text": "Automatic traffic light detection (TLD) plays an important role for driver-assistance system and autonomous vehicles. State-of-the-art TLD systems showed remarkable results by exploring visual information from static frames. However, traffic lights from different countries, regions, and manufactures are always visually distinct. The existing large intra-class variance makes the pre-trained detectors perform good on one dataset but fail on the others with different origins. One the other hand, LED traffic lights are widely used because of better energy efficiency. Based on the observation LED traffic light flashes in proportion to the input AC power frequency, we propose a hybrid TLD approach which combines the temporally frequency analysis and visual information using high-speed camera. Exploiting temporal information is shown to be very effective in the experiments. It is considered to be more robust than visual information-only methods.",
"title": ""
},
{
"docid": "f6e8f2f990ca60a5b659c1c7a19d0638",
"text": "OBJECTIVE\nTo develop an understanding of the stability of mental health during imprisonment through review of existing research evidence relating physical prison environment to mental state changes in prisoners.\n\n\nMETHOD\nA systematic literature search was conducted looking at changes in mental state and how this related to various aspects of imprisonment and the prison environment.\n\n\nRESULTS\nFifteen longitudinal studies were found, and from these, three broad themes were delineated: being imprisoned and aspects of the prison regime; stage of imprisonment and duration of sentence; and social density. Reception into prison results in higher levels of psychiatric symptoms that seem to improve over time; otherwise, duration of imprisonment appears to have no significant impact on mental health. Regardless of social density, larger prisons are associated with poorer mental state, as are extremes of social density.\n\n\nCONCLUSION\nThere are large gaps in the literature relating prison environments to changes in mental state; in particular, high-quality longitudinal studies are needed. Existing research suggests that although entry to prison may be associated with deterioration in mental state, it tends to improve with time. Furthermore, overcrowding, ever more likely as prison populations rise, is likely to place a particular burden on mental health services.",
"title": ""
},
{
"docid": "f7a228d688b9faa8cc3e27ce12affe9f",
"text": "Research into genome assembly algorithms has experienced a resurgence due to new challenges created by the development of next generation sequencing technologies. Several genome assemblers have been published in recent years specifically targeted at the new sequence data; however, the ever-changing technological landscape leads to the need for continued research. In addition, the low cost of next generation sequencing data has led to an increased use of sequencing in new settings. For example, the new field of metagenomics relies on large-scale sequencing of entire microbial communities instead of isolate genomes, leading to new computational challenges. In this article, we outline the major algorithmic approaches for genome assembly and describe recent developments in this domain.",
"title": ""
},
{
"docid": "f7a8116cefaaf6ab82118885efac4c44",
"text": "Entrepreneurs have created a number of new Internet-based platforms that enable owners to rent out their durable goods when not using them for personal consumption. We develop a model of these kinds of markets in order to analyze the determinants of ownership, rental rates, quantities, and the surplus generated in these markets. Our analysis considers both a short run, before consumers can revise their ownership decisions and a long run, in which they can. This allows us to explore how patterns of ownership and consumption might change as a result of these new markets. We also examine the impact of bringing-to-market costs, such as depreciation, labor costs and transaction costs and consider the platform’s pricing problem. An online survey of consumers broadly supports the modeling assumptions employed. For example, ownership is determined by individuals’ forward-looking assessments of planned usage. Factors enabling sharing markets to flourish are explored. JEL L1, D23, D47",
"title": ""
},
{
"docid": "d3e561a6ac610d84921664662b57ed33",
"text": "Antibiotic resistance is ancient and widespread in environmental bacteria. These are therefore reservoirs of resistance elements and reflective of the natural history of antibiotics and resistance. In a previous study, we discovered that multi-drug resistance is common in bacteria isolated from Lechuguilla Cave, an underground ecosystem that has been isolated from the surface for over 4 Myr. Here we use whole-genome sequencing, functional genomics and biochemical assays to reveal the intrinsic resistome of Paenibacillus sp. LC231, a cave bacterial isolate that is resistant to most clinically used antibiotics. We systematically link resistance phenotype to genotype and in doing so, identify 18 chromosomal resistance elements, including five determinants without characterized homologues and three mechanisms not previously shown to be involved in antibiotic resistance. A resistome comparison across related surface Paenibacillus affirms the conservation of resistance over millions of years and establishes the longevity of these genes in this genus.",
"title": ""
},
{
"docid": "35404fbbf92e7a995cdd6de044f2ec0d",
"text": "The ball on plate system is the extension of traditional ball on beam balancing problem in control theory. In this paper the implementation of a proportional-integral-derivative controller (PID controller) to balance a ball on a plate has been demonstrated. To increase the system response time and accuracy multiple controllers are piped through a simple custom serial protocol to boost the processing power, and overall performance. A single HD camera module is used as a sensor to detect the ball's position and two RC servo motors are used to tilt the plate to balance the ball. The result shows that by implementing multiple PUs (Processing Units) redundancy and high resolution can be achieved in real-time control systems.",
"title": ""
},
{
"docid": "79218f4dfecdef0bd7df21aa4854af75",
"text": "Multi-gigabit 60 GHz radios are expected to match QoS requirements of modern multimedia applications. Several published standards were defined based on performance evaluations over standard channel models. Unfortunately, those models, and most models available in the literature, do not take into account the behavior of 60 GHz channels at different carrier frequencies, thus no guidelines are provided for the selection of the best suitable frequency band for a given service. This paper analyzes the impact of changes in multipath profiles, due to both frequency and distance, on the BER performance achieved by IEEE 802.11ad 60 GHz radios. Our analysis is based on real experimental channel impulse responses recorded through an indoor measurement campaign in seven sub-bands from 54 to 65 GHz with a break at 60 GHz at distances from 1 to 5 m. The small-scale fading is modeled by Rician distributions with K-factors extracted from experimental data, which are shown to give good agreement with the empirical distributions. A strong dependence of performance on both frequency and distance due to the sole multipath is observed, which calls for an appropriate selection of the best suitable frequency band according to the service required by the current session over the 802.11ad link.",
"title": ""
}
] |
scidocsrr
|
384b6665577123dc3bf1e3e09be7dae1
|
Location-Based Mobile Games
|
[
{
"docid": "e5a3119470420024b99df2d6eb14b966",
"text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
}
] |
[
{
"docid": "258da62ca5b12f01de336c6db3acfd8c",
"text": "The explosive growth of the internet and electronic publishing has led to a huge number of scientific documents being available to users, however, they are usually inaccessible to those with visual impairments and often only partially compatible with software and modern hardware such as tablets and e-readers. In this paper we revisit Maxtract, a tool for analysing and converting documents into accessible formats, and combine it with two advanced segmentation techniques, statistical line identification and machine learning formula identification. We show how these advanced techniques improve the quality of both Maxtract's underlying document analysis and its output. We re-run and compare experimental results over a number of datasets, presenting a qualitative review of the improved output and drawing conclusions.",
"title": ""
},
{
"docid": "d07b01d5bf0ba3088434ec46b3b4c65d",
"text": "The recent increase in short messaging system (SMS) text messaging, often using abbreviated, non-conventional ‘textisms’ (e.g. ‘2nite’), in school-aged children has raised fears of negative consequences of such technology for literacy. The current research used a paradigm developed by Dixon and Kaminska, who showed that exposure to phonetically plausible misspellings (e.g. ‘recieve’) negatively affected subsequent spelling performance, though this was true only with adults, not children. The current research extends this work to directly investigate the effects of exposure to textisms, misspellings and correctly spelled words on adults’ spelling. Spelling of a set of key words was assessed both before and after an exposure phase where participants read the same key words, presented either as textisms (e.g. ‘2nite’), correctly spelled (e.g. ‘tonight’) or misspelled (e.g. ‘tonite’) words.Analysis showed that scores decreased from preto post-test following exposure to misspellings, whereas performance improved following exposure to correctly spelled words and, interestingly, to textisms. Data suggest that exposure to textisms, unlike misspellings, had a positive effect on adults’ spelling. These findings are interpreted in light of other recent research suggesting a positive relationship between texting and some literacy measures in school-aged children.",
"title": ""
},
{
"docid": "725e826f13a17fe73369e85733431e32",
"text": "This study aims to explore the determinants influencing usage intention in mobile social media from the user motivation and the Theory of Planned Behavior (TPB) perspectives. Based on TPB, this study added three motivations, namely entertainment, sociality, and information, into the TPB model, and further examined the moderating effect of posters and lurkers in the relationships of the proposed model. A structural equation modeling was used and 468 LINE users in Taiwan were investigated. The results revealed that entertainment, sociality, and information are positively associated with behavioral attitude. Moreover, behavioral attitude, subjective norms, and perceived behavioral control are positively associated with usage intention. Furthermore, posters likely post messages on the LINE because of entertainment, sociality, and information, but they are not significantly subject to subjective norms. In contrast, lurkers tend to read, not write messages on the LINE because of entertainment and information rather than sociality and perceived behavioral control.",
"title": ""
},
{
"docid": "4301aa3bb6a7d1ca9c0c17b8a12ebb37",
"text": "A CAPTCHA is a test that can, automatically, tell human and computer programs apart. It is a mechanism widely used nowadays for protecting web applications, interfaces, and services from malicious users and automated spammers. Usability and robustness are two fundamental aspects with CAPTCHA, where the usability aspect is the ease with which humans pass its challenges, while the robustness is the strength of its segmentation-resistance mechanism. The collapsing mechanism, which is removing the space between characters to prevent segmentation, has been shown to be reasonably resistant to known attacks. On the other hand, this mechanism drops considerably the human-solvability of text-based CAPTCHAs. Accordingly, an optimizer has previously been proposed that automatically enhances the usability of a CAPTCHA generation without sacrificing its robustness level. However, this optimizer has not yet been evaluated in terms of improving the usability. This paper, therefore, evaluates the usability of this optimizer by conducting an experimental study. The results of this evaluation showed that a statistically significant enhancement is found in the usability of text-based CAPTCHA generation. Keywords—text-based CAPTCHA; usability; security; optimization; experimentation; evaluation",
"title": ""
},
{
"docid": "f2489daf0e1bd0ecb50be00f1d36bcdc",
"text": "A fuzzy goal programming approach is applied in this paper for solving the vendor selection problem with multiple objectives, in which some of the parameters are fuzzy in nature. A vendor selection problem has been formulated as a fuzzy mixed integer goal programming vendor selection problem that includes three primary goals: minimizing the net cost, minimizing the net rejections, and minimizing the net late deliveries subject to realistic constraints regarding buyer's demand, vendors' capacity, vendors' quota flexibility, purchase value of items, budget allocation to individual vendor, etc. An illustration with a data set from a realistic situation is included to demonstrate the effectiveness of the proposed model. The proposed approach has the capability to handle realistic situations in a fuzzy environment and provides a better decision tool for the vendor selection decision in a supply chain.",
"title": ""
},
{
"docid": "b47d53485704f4237e57d220640346a7",
"text": "Features of consciousness difficult to understand in terms of conventional neuroscience have evoked application of quantum theory, which describes the fundamental behavior of matter and energy. In this paper we propose that aspects of quantum theory (e.g. quantum coherence) and of a newly proposed physical phenomenon of quantum wave function \"self-collapse\" (objective reduction: OR Penrose, 1994) are essential for consciousness, and occur in cytoskeletal microtubules and other structures within each of the brain's neurons. The particular characteristics of microtubules suitable for quantum effects include their crystal-like lattice structure, hollow inner core, organization of cell function and capacity for information processing. We envisage that conformational states of microtubule subunits (tubulins) are coupled to internal quantum events, and cooperatively interact (compute) with other tubulins. We further assume that macroscopic coherent superposition of quantum-coupled tubulin conformational states occurs throughout significant brain volumes and provides the global binding essential to consciousness. We equate the emergence of the microtubule quantum coherence with pre-conscious processing which grows (for up to 500 ms) until the mass energy difference among the separated states of tubulins reaches a threshold related to quantum gravity. According to the arguments for OR put forth in Penrose (1994), superpositioned states each have their own space-time geometries. When the degree of coherent mass energy difference leads to sufficient separation of space time geometry, the system must choose and decay (reduce, collapse) to a single universe state. In this way, a transient superposition of slightly differing space-time geometries persists until an abrupt quantum --, classical reduction occurs. Unlike the random, \"subjective reduction\" (SR, or R) of standard quantum theory caused by observation or environmental entanglement, the OR we propose in microtubules is a se(f-collapse and it results in particular patterns of microtubule-tubulin conformational states that regulate neuronal activities including synaptic functions. Possibilities and probabilities for post-reduction tubulin states are influenced by factors including attachments of microtubule-associated proteins (MAPs) acting as \"nodes\" which tune and \"orchestrate\" the quantum oscillations. We thus term the self-tuning OR process in microtubules \"orchestrated objective reduction\" (\"Orch OR\"), and calculate an estimate for the number of tubulins (and neurons) whose coherence for relevant time periods (e.g. 500ms) will elicit Orch OR. In providing a connection among (1) pre-conscious to conscious transition, (2) fundamental space time notions, (3) non-computability, and (4) binding of various (time scale and spatial) reductions into an instantaneous event (\"conscious now\"), we believe Orch OR in brain microtubules is the most specific and plausible model for consciousness yet proposed. * Corresponding author. Tel.: (520) 626-2116. Fax: (520) 626-2689. E-Mail: srh(cv ccit.arizona.edu. 0378-4754/96/$15.00 © 1996 Elsevier Science B.V. All rights reserved SSDI0378-4754(95 ) 0049-6 454 S. Hameroff, R. Penrose/Mathematics and Computers in Simulation 40 (1996) 453 480",
"title": ""
},
{
"docid": "92600ef3d90d5289f70b10ccedff7a81",
"text": "In this paper, the chicken farm monitoring system is proposed and developed based on wireless communication unit to transfer data by using the wireless module combined with the sensors that enable to detect temperature, humidity, light and water level values. This system is focused on the collecting, storing, and controlling the information of the chicken farm so that the high quality and quantity of the meal production can be produced. This system is developed to solve several problems in the chicken farm which are many human workers is needed to control the farm, high cost in maintenance, and inaccurate data collected at one point. The proposed methodology really helps in finishing this project within the period given. Based on the research that has been carried out, the system that can monitor and control environment condition (temperature, humidity, and light) has been developed by using the Arduino microcontroller. This system also is able to collect data and operate autonomously.",
"title": ""
},
{
"docid": "ea9f5956e09833c107d79d5559367e0e",
"text": "This research is to search for alternatives to the resolution of complex medical diagnosis where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than the neural diagnostic system. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm with backpropagation; offer an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to get an optimal size of a neural network. The MFNNCA was tested on several benchmarking classification problems including the cancer, heart disease and diabetes. Experimental results show that the MFNNCA can produce optimal neural network architecture with good generalization ability.",
"title": ""
},
{
"docid": "db70302a3d7e7e7e5974dd013e587b12",
"text": "In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we propose the SIPHON architecture---a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called \\emph{wormholes} distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, five physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50 000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware.",
"title": ""
},
{
"docid": "560a19017dcc240d48bb879c3165b3e1",
"text": "Battery management systems in hybrid electric vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state of charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose a method, based on extended Kalman filtering (EKF), that is able to accomplish these goals on a lithium ion polymer battery pack. We expect that it will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. In order to use EKF to estimate the desired quantities, we first require a mathematical model that can accurately capture the dynamics of a cell. In this paper we “evolve” a suitable model from one that is very primitive to one that is more advanced and works well in practice. The final model includes terms that describe the dynamic contributions due to open-circuit voltage, ohmic loss, polarization time constants, electro-chemical hysteresis, and the effects of temperature. We also give a means, based on EKF, whereby the constant model parameters may be determined from cell test data. Results are presented that demonstrate it is possible to achieve root-mean-squared modeling error smaller than the level of quantization error expected in an implementation. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d9df98fbd7281b67347df0f2643323fa",
"text": "Predefined categories can be assigned to the natural language text using for text classification. It is a “bag-of-word” representation, previous documents have a word with values, it represents how frequently this word appears in the document or not. But large documents may face many problems because they have irrelevant or abundant information is there. This paper explores the effect of other types of values, which express the distribution of a word in the document. These values are called distributional features. All features are calculated by tfidf style equation and these features are combined with machine learning techniques. Term frequency is one of the major factor for distributional features it holds weighted item set. When the need is to minimize a certain score function, discovering rare data correlations is more interesting than mining frequent ones. This paper tackles the issue of discovering rare and weighted item sets, i.e., the infrequent weighted item set mining problem. The classifier which gives the more accurate result is selected for categorization. Experiments show that the distributional features are useful for text categorization.",
"title": ""
},
{
"docid": "efde28bc545de68dbb44f85b198d85ff",
"text": "Blockchain technology is regarded as highly disruptive, but there is a lack of formalization and standardization of terminology. Not only because there are several (sometimes propriety) implementation platforms, but also because the academic literature so far is predominantly written from either a purely technical or an economic application perspective. The result of the confusion is an offspring of blockchain solutions, types, roadmaps and interpretations. For blockchain to be accepted as a technology standard in established industries, it is pivotal that ordinary internet users and business executives have a basic yet fundamental understanding of the workings and impact of blockchain. This conceptual paper provides a theoretical contribution and guidance on what blockchain actually is by taking an ontological approach. Enterprise Ontology is used to make a clear distinction between the datalogical, infological and essential level of blockchain transactions and smart contracts.",
"title": ""
},
{
"docid": "3e421ee43916a02afeb4987844d12b4f",
"text": "s from all researchers who are engaged with the seriously ill, frail elderly. Authors: John Muscedere1 MD, Sarah Grace Bebenek2 MSc, Denise Stockley PhD2, Laura Kinderman2 PhD, and Carol Barrie1 CPA, CA. 1Canadian Frailty Network, 2Queen’s University. Ultrasound of Thigh Muscle Can Predict Frailty in Elderly Patients S. Salim, L. Warkentin, A. Gallivan, T. Churchill, V. Baracos, R. Khadaroo. University of Alberta, Edmonton, AB, Canada. Background: Sarcopenia, defined as loss of muscle mass and function, has been associated with high morbidity and mortality in patients over 65 years. Yet, it is not part of the routine screening process in geriatric care. Computed tomography (CT) scan has been used as the gold-standard tool to identify sarcopenia. Unfortunately, the high cost, limited availability, and radiation exposure limits the use of CT scans. Thigh muscle ultrasound (US) may provide a feasible diagnostic modality to identify frail older patients. We hypothesize that thigh ultrasound is predictive of frailty and post-operative complications in high-risk elderly patients. Methods: Thirty-eight patients above the age of 65 years referred to Acute Care Surgery service were recruited. Using ultrasound, thigh muscle thickness was standardized to patient height. CT scan images at L3 were analyzed and the skeletal muscle index was calculated. Sarcopenia was defined as skeletal muscle index < 41cm2/m2 for females and <43cm2/m2 or < 53cm2/m2 for males (with BMI 25kg/ m2, respectively). Rockwood Clinical Frailty score (1-3 non-frail, >4 frail) was used to assess patient condition. Results: The mean age of our preliminary study group was 78 ± 8 years and 68% (n=26) were females. Sarcopenia was identified in 69% of the patients via CT. Sarcopenic patients had a greater number of in-hospital complications (48% vs. 16.6% in non-sarcopenic, p=.0001). There was no difference in duration of stay between sarcopenic and non-sarcopenic patients (14 vs. 11 days, p=.06). There were significant differences between sarcopenic and non-sarcopenic females in skeletal muscle surface area (113 ± 9 versus 91 ± 10 cm2, p < .001), and skeletal muscle index (35.2 versus 46.3 cm2/ m2, p< .001). CT scan skeletal muscle index of sarcopenic patients showed significant correlation with frailty score (r2=0.21, p<.05). US of rectus femoris in all females was significantly associated with frailty score (r2=0.19, p=.008). The receiver-operating characteristic (ROC) for thigh ultrasound was not able to distinguish sarcopenic patients (area ROC curve=0.6, p=.8). Conclusion: CT identified sarcopenia was associated with high-risk frail patients. US measured muscle thickness was predictive of frailty but not of CT identified sarcopenia. Validity and Reliability Testing of Two Acute Care Nutrition Support Tools J. McCullough1, H. Keller2, E. Vesnaver3, H. Marcus4, T. Lister5, R. Nasser6, L. Belley7. 1University of Waterloo, Waterloo, ON, Canada; 2Department of Kinesiology, University of Waterloo, Waterloo, ON, Canada; SchlegelUniversity of Waterloo, Research Institute for Aging, Waterloo, ON, Canada; 3Department of Family Relations and Applied Nutrition, University of Guelph, Guelph, ON, Canada; 4Grand River Hospital, Kitchener, ON, Canada; 5Vancouver Island Health Authority, Vancouver, BC, Canada; 6Regina Qu’Appelle Health Region, Regina, SK, Canada; 7Centre hospitalier de l’Université de Montréal, Montreal, QC, Canada. Abstract: Poor food intake is common with patients in acute care, which can affect their recovery. 
Research describes that many barriers to food intake and poor intake is associated with a longer hospital stay. Thus, identifying problems and intervening as soon as possible is important. The aim of this project was to test the validity and reliability of recently developed tools designed to monitor food intake and barriers experienced by patients. 120 patients over the age of 65 were recruited at four hospitals. Patients reported their food intake for a single meal on the My Meal Intake Tool (M-MIT) and reported mealtime barriers at a single meal on the Mealtime Audit Tool (MAT). Validity of the M-MIT was determined by comparing patient completed M-MIT with food intake estimations conducted by on-site dietitians. Sensitivity (SE) and specificity (SP) for solid food and individual fluid intake (≤ 50% vs. > 50%) was adequate (solids: SE 76%, SP 74%; juice: SE 74.1%, SP 88.1%; coffee/tea: SE 70.6%, SP 97.0%; milk: SE 64.3%, SP 83.3%). According to the MAT, the mean number of food intake barriers that patients experienced across the four hospitals was 2.93 ± 1.58 out of 18 potential barriers. Some of the most common barriers experienced included: meal tray not looking/smelling appetizing, food not served hot, tray not set up for patient, patient not provided snacks between meals, and the patient being disturbed during the meal. Inter-rater reliability testing of the MAT was conducted at",
"title": ""
},
{
"docid": "49cba878e4d36e08abd4acdfd48123a7",
"text": "Advances in data storage and image acquisition technologie s have enabled the creation of large image datasets. In this scenario, it is necess ary to develop appropriate information systems to efficiently manage these collect ions. The commonest approaches use the so-called Content-Based Image Retrieval (CBIR) systems . Basically, these systems try to retrieve images similar to a user-define d sp cification or pattern (e.g., shape sketch, image example). Their goal is to suppor t image retrieval based on contentproperties (e.g., shape, color, texture), usually encoded into feature vectors . One of the main advantages of the CBIR approach is the possibi lity of an automatic retrieval process, instead of the traditional keyword-bas ed approach, which usually requires very laborious and time-consuming previous annot ation of database images. The CBIR technology has been used in several applications su ch as fingerprint identification, biodiversity information systems, digital librar ies, crime prevention, medicine, historical research, among others. This paper aims to introduce the problems and challenges con cerned with the creation of CBIR systems, to describe the existing solutions and appl ications, and to present the state of the art of the existing research in this area.",
"title": ""
},
{
"docid": "4e734f8e7d3ac7249ce7eb4ad5833c95",
"text": "Conventional sports training emphasizes adequate training of muscle fibres, of cardiovascular conditioning and/or neuromuscular coordination. Most sports-associated overload injuries however occur within elements of the body wide fascial net, which are then loaded beyond their prepared capacity. This tensional network of fibrous tissues includes dense sheets such as muscle envelopes, aponeuroses, as well as specific local adaptations, such as ligaments or tendons. Fibroblasts continually but slowly adapt the morphology of these tissues to repeatedly applied challenging loading stimulations. Principles of a fascia oriented training approach are introduced. These include utilization of elastic recoil, preparatory counter movement, slow and dynamic stretching, as well as rehydration practices and proprioceptive refinement. Such training should be practiced once or twice a week in order to yield in a more resilient fascial body suit within a time frame of 6-24 months. Some practical examples of fascia oriented exercises are presented.",
"title": ""
},
{
"docid": "9e6b95131d4d78c8abe4eddb5c728ad5",
"text": "A solo attack may cause a big loss in computer and network systems, its prevention is, therefore, very inevitable. Precise detection is very important to prevent such losses. Such detection is a pivotal part of any security tools like intrusion detection system, intrusion prevention system, and firewalls etc. Therefore, an approach is provided in this paper to analyze denial of service attack by using a supervised neural network. The methodology used sampled data from Kddcup99 dataset, an attack database that is a standard for judgment of attack detection tools. The system uses multiple layered perceptron architecture and resilient backpropagation for its training and testing. The developed system is then applied to denial of service attacks. Moreover, its performance is also compared to other neural network approaches which results more accuracy and precision in detection rate.",
"title": ""
},
{
"docid": "cec6e899c23dd65881f84cca81205eb0",
"text": "A fuzzy graph (f-graph) is a pair G : ( σ, μ) where σ is a fuzzy subset of a set S and μ is a fuzzy relation on σ. A fuzzy graph H : ( τ, υ) is called a partial fuzzy subgraph of G : (σ, μ) if τ (u) ≤ σ(u) for every u and υ (u, v) ≤ μ(u, v) for every u and v . In particular we call a partial fuzzy subgraph H : ( τ, υ) a fuzzy subgraph of G : ( σ, μ ) if τ (u) = σ(u) for every u in τ * and υ (u, v) = μ(u, v) for every arc (u, v) in υ*. A connected f-graph G : ( σ, μ) is a fuzzy tree(f-tree) if it has a fuzzy spannin g subgraph F : (σ, υ), which is a tree, where for all arcs (x, y) not i n F there exists a path from x to y in F whose strength is more than μ(x, y). A path P of length n is a sequence of disti nct nodes u0, u1, ..., un such that μ(ui−1, ui) > 0, i = 1, 2, ..., n and the degree of membershi p of a weakest arc is defined as its strength. If u 0 = un and n≥ 3, then P is called a cycle and a cycle P is called a fuzzy cycle(f-cycle) if it cont ains more than one weakest arc . The strength of connectedness between two nodes x and y is efined as the maximum of the strengths of all paths between x and y and is denot ed by CONNG(x, y). An x − y path P is called a strongest x − y path if its strength equal s CONNG(x, y). An f-graph G : ( σ, μ) is connected if for every x,y in σ ,CONNG(x, y) > 0. In this paper, we offer a survey of selected recent results on fuzzy graphs.",
"title": ""
},
{
"docid": "5445892bdf8478cfacac9d599dead1f9",
"text": "The problem of determining feature correspondences across multiple views is considered. The term \"true multi-image\" matching is introduced to describe techniques that make full and efficient use of the geometric relationships between multiple images and the scene. A true multi-image technique must generalize to any number of images, be of linear algorithmic complexity in the number of images, and use all the images in an equal manner. A new space-sweep approach to true multi-image matching is presented that simultaneously determines 2D feature correspondences and the 3D positions of feature points in the scene. The method is illustrated on a seven-image matching example from the aerial im-",
"title": ""
},
{
"docid": "f8080213c06830bc140eaa21b82602ae",
"text": "This paper addresses the perceived benefits from gamification in the context of special education. It presents the findings of a study evaluating the effects of a specific gamification element (badges) on the engagement of five students with special learning needs, through online courses developed on the Moodle Learning Management System (LMS). The results indicate that this particular gamification element yielded positive effects on students’ engagement and on their overall attitude towards the educational process in general.",
"title": ""
},
{
"docid": "d57cac6e8c0013a6d4ba9c149c247d4b",
"text": "When humans look at an image, they see not just a pattern of color and texture, but the world behind the image. In the same way, computer vision algorithms must go beyond the pixels and reason about the underlying scene. In this dissertation, we propose methods to recover the basic spatial layout from a single image and begin to investigate its use as a foundation for scene understanding. Our spatial layout is a description of the 3D scene in terms of surfaces, occlusions, camera viewpoint, and objects. We propose a geometric class representation, a coarse categorization of surfaces according to their 3D orientations, and learn appearance-based models of geometry to identify surfaces in an image. These surface estimates serve as a basis for recovering the boundaries and occlusion relationships of prominent objects. We further show that simple reasoning about camera viewpoint and object size in the image allows accurate inference of the viewpoint and greatly improves object detection. Finally, we demonstrate the potential usefulness of our methods in applications to 3D reconstruction, scene synthesis, and robot navigation. Scene understanding from a single image requires strong assumptions about the world. We show that the necessary assumptions can be modeled statistically and learned from training data. Our work demonstrates the importance of robustness through a wide variety of image cues, multiple segmentations, and a general strategy of soft decisions and gradual inference of image structure. Above all, our work manifests the tremendous amount of 3D information that can be gleaned from a single image. Our hope is that this dissertation will inspire others to further explore how computer vision can go beyond pattern recognition and produce an understanding of the environment.",
"title": ""
}
] |
scidocsrr
|
c1dbdad7693b1ed25a6821b5261d4133
|
Honeypot detection in advanced botnet attacks
|
[
{
"docid": "ddd3d575bbe51459f38492e439dd2d67",
"text": "A “botnet” consists of a network of compromised computers controlled by an attacker (“botmaster”). Recently, botnets have become the root cause of many Internet attacks. To be well prepared for future attacks, it is not enough to study how to detect and defend against the botnets that have appeared in the past. More importantly, we should study advanced botnet designs that could be developed by botmasters in the near future. In this paper, we present the design of an advanced hybrid peer-to-peer botnet. Compared with current botnets, the proposed botnet is harder to be shut down, monitored, and hijacked. It provides robust network connectivity, individualized encryption and control traffic dispersion, limited botnet exposure by each bot, and easy monitoring and recovery by its botmaster. In the end, we suggest and analyze several possible defenses against this advanced botnet.",
"title": ""
}
] |
[
{
"docid": "7daf5ad71bda51eacc68f0a1482c3e7e",
"text": "Nearly every modern mobile device includes two cameras. With advances in technology the resolution of these sensors has constantly increased. While this development provides great convenience for users, for example with video-telephony or as dedicated camera replacement, the security implications of including high resolution cameras on such devices has yet to be considered in greater detail. With this paper we demonstrate that an attacker may abuse the cameras in modern smartphones to extract valuable information from a victim. First, we consider exploiting a front-facing camera to capture a user’s keystrokes. By observing facial reflections, it is possible to capture user input with the camera. Subsequently, individual keystrokes can be extracted from the images acquired with the camera. Furthermore, we demonstrate that these cameras can be used by an attacker to extract and forge the fingerprints of a victim. This enables an attacker to perform a wide range of malicious actions, including authentication bypass on modern biometric systems and falsely implicating a person by planting fingerprints in a crime scene. Finally, we introduce several mitigation strategies for the identified threats.",
"title": ""
},
{
"docid": "16a7142a595da55de7df5253177cbcb5",
"text": "The present study represents the first large-scale, prospective comparison to test whether aging out of foster care contributes to homelessness risk in emerging adulthood. A nationally representative sample of adolescents investigated by the child welfare system in 2008 to 2009 from the second cohort of the National Survey of Child and Adolescent Well-being Study (NSCAW II) reported experiences of housing problems at 18- and 36-month follow-ups. Latent class analyses identified subtypes of housing problems, including literal homelessness, housing instability, and stable housing. Regressions predicted subgroup membership based on aging out experiences, receipt of foster care services, and youth and county characteristics. Youth who reunified after out-of-home placement in adolescence exhibited the lowest probability of literal homelessness, while youth who aged out experienced similar rates of literal homelessness as youth investigated by child welfare but never placed out of home. No differences existed between groups on prevalence of unstable housing. Exposure to independent living services and extended foster care did not relate with homelessness prevention. Findings emphasize the developmental importance of families in promoting housing stability in the transition to adulthood, while questioning child welfare current focus on preparing foster youth to live.",
"title": ""
},
{
"docid": "20b1a9f9ea3a9a1798f611cbd44658c5",
"text": "The majority of colorectal cancers (CRCs) are classified as adenocarcinoma not otherwise specified (AC). Mucinous carcinoma (MC) is a distinct form of CRC and is found in 10–15% of patients with CRC. MC differs from AC in terms of both clinical and histopathological characteristics, and has long been associated with an inferior response to treatment compared with AC. The debate concerning the prognostic implications of MC in patients with CRC is ongoing and MC is still considered an unfavourable and unfamiliar subtype of the disease. Nevertheless, in the past few years epidemiological and clinical studies have shed new light on the treatment and management of patients with MC. Use of a multidisciplinary approach, including input from surgeons, pathologists, oncologists and radiologists, is beginning to lead to more-tailored approaches to patient management, on an individualized basis. In this Review, the authors provide insight into advances that have been made in the care of patients with MC. The prognostic implications for patients with colon or rectal MC are described separately; moreover, the predictive implications of MC regarding responses to commonly used therapies for CRC, such as chemotherapy, radiotherapy and chemoradiotherapy, and the potential for, and severity of, metastasis are also described.",
"title": ""
},
{
"docid": "227874c489b6599583f4f5a3698491ed",
"text": "Since the knee joint bears the full weight load of the human body and the highest pressure loads while providing flexible movement, it is the body part most vulnerable and susceptible to osteoarthritis. In exercise therapy, the early rehabilitation stages last for approximately six weeks, during which the patient works with the physical therapist several times each week. The patient is afterwards given instructions for continuing rehabilitation exercise by him/herself at home. This study develops a rehabilitation exercise assessment mechanism using three wearable sensors mounted on the chest, thigh and shank of the working leg in order to enable the patients with knee osteoarthritis to manage their own rehabilitation progress. In this work, time-domain, frequency-domain features and angle information of the motion sensor signals are used to classify the exercise type and identify whether their postures are proper or not. Three types of rehabilitation exercise commonly prescribed to knee osteoarthritis patients are: Short-Arc Exercise, Straight Leg Raise, and Quadriceps Strengthening Mini-squats. After ten subjects performed the three kinds of rehabilitation activities, three validation techniques including 10-fold cross-validation, within subject cross validation, and leave-one-subject cross validation are utilized to confirm the proposed mechanism. The overall recognition accuracy for exercise type classification is 97.29% and for exercise posture identification it is 88.26%. The experimental results demonstrate the feasibility of the proposed mechanism which can help patients perform rehabilitation movements and progress effectively. Moreover, the proposed mechanism is able to detect multiple errors at once, fulfilling the requirements for rehabilitation assessment.",
"title": ""
},
{
"docid": "8f660dd12e7936a556322f248a9e2a2a",
"text": "We develop and apply statistical topic models to software as a means of extracting concepts from source code. The effectiveness of the technique is demonstrated on 1,555 projects from SourceForge and Apache consisting of 113,000 files and 19 million lines of code. In addition to providing an automated, unsupervised, solution to the problem of summarizing program functionality, the approach provides a probabilistic framework with which to analyze and visualize source file similarity. Finally, we introduce an information-theoretic approach for computing tangling and scattering of extracted concepts, and present preliminary results",
"title": ""
},
{
"docid": "59a25ae61a22baa8e20ae1a5d88c4499",
"text": "This paper tackles a major privacy threat in current location-based services where users have to report their exact locations to the database server in order to obtain their desired services. For example, a mobile user asking about her nearest restaurant has to report her exact location. With untrusted service providers, reporting private location information may lead to several privacy threats. In this paper, we present a peer-to-peer (P2P)spatial cloaking algorithm in which mobile and stationary users can entertain location-based services without revealing their exact location information. The main idea is that before requesting any location-based service, the mobile user will form a group from her peers via single-hop communication and/or multi-hop routing. Then,the spatial cloaked area is computed as the region that covers the entire group of peers. Two modes of operations are supported within the proposed P2P s patial cloaking algorithm, namely, the on-demand mode and the proactive mode. Experimental results show that the P2P spatial cloaking algorithm operated in the on-demand mode has lower communication cost and better quality of services than the proactive mode, but the on-demand incurs longer response time.",
"title": ""
},
{
"docid": "97f6e18ea96e73559a05444d666f306f",
"text": "The increasingly ubiquitous availability of digital and networked tools has the potential to fundamentally transform the teaching and learning process. Research on the instructional uses of technology, however, has revealed that teachers often lack the knowledge to successfully integrate technology in their teaching and their attempts tend to be limited in scope, variety, and depth. Thus, technology is used more as “ef fi ciency aids and extension devices” (McCormick & Scrimshaw, 2001 , p. 31) rather than as tools that can “transform the nature of a subject at the most fundamental level” (p. 47). One way in which researchers have tried to better understand how teachers may better use technology in their classrooms has focused on the kinds of knowledge that teachers require Abstract In this chapter, we introduce a framework, called technological pedagogical content knowledge (or TPACK for short), that describes the kinds of knowledge needed by a teacher for effective technology integration. The TPACK framework emphasizes how the connections among teachers’ understanding of content, pedagogy, and technology interact with one another to produce effective teaching. Even as a relatively new framework, the TPACK framework has signi fi cantly in fl uenced theory, research, and practice in teacher education and teacher professional development. In this chapter, we describe the theoretical underpinnings of the framework, and explain the relationship between TPACK and related constructs in the educational technology literature. We outline the various approaches teacher educators have used to develop TPACK in preand in-service teachers, and the theoretical and practical issues that these professional development efforts have illuminated. We then review the widely varying approaches to measuring TPACK, with an emphasis on the interaction between form and function of the assessment, and resulting reliability and validity outcomes for the various approaches. We conclude with a summary of the key theoretical, pedagogical, and methodological issues related to TPACK, and suggest future directions for researchers, practitioners, and teacher educators.",
"title": ""
},
{
"docid": "838a79ec0376a23ac24a462a00d140dc",
"text": "Bounding the generalization error of learning algorithms has a long history, which yet falls short in explaining various generalization successes including those of deep learning. Two important difficulties are (i) exploiting the dependencies between the hypotheses, (ii) exploiting the dependence between the algorithm’s input and output. Progress on the first point was made with the chaining method, originating from the work of Kolmogorov, and used in the VC-dimension bound. More recently, progress on the second point was made with the mutual information method by Russo and Zou ’15. Yet, these two methods are currently disjoint. In this paper, we introduce a technique to combine chaining and mutual information methods, to obtain a generalization bound that is both algorithm-dependent and that exploits the dependencies between the hypotheses. We provide an example in which our bound significantly outperforms both the chaining and the mutual information bounds. As a corollary, we tighten Dudley’s inequality when the learning algorithm chooses its output from a small subset of hypotheses with high probability.",
"title": ""
},
{
"docid": "ac6410d8891491d050b32619dc2bdd50",
"text": "Due to the increase of generation sources in distribution networks, it is becoming very complex to develop and maintain models of these networks. Network operators need to determine reduced models of distribution networks to be used in grid management functions. This paper presents a novel method that synthesizes steady-state models of unbalanced active distribution networks with the use of dynamic measurements (time series) from phasor measurement units (PMUs). Since phasor measurement unit (PMU) measurements may contain errors and bad data, this paper presents the application of a Kalman filter technique for real-time data processing. In addition, PMU data capture the power system's response at different time-scales, which are generated by different types of power system events; the presented Kalman filter has been improved to extract the steady-state component of the PMU measurements to be fed to the steady-state model synthesis application. Performance of the proposed methods has been assessed by real-time hardware-in-the-loop simulations on a sample distribution network.",
"title": ""
},
{
"docid": "0b245fedd608d21389372faa192d62a0",
"text": "This paper explores the effectiveness of Data Mining (DM) classification techniques in detecting firms that issue fraudulent financial statements (FFS) and deals with the identification of factors associated to FFS. In accomplishing the task of management fraud detection, auditors could be facilitated in their work by using Data Mining techniques. This study investigates the usefulness of Decision Trees, Neural Networks and Bayesian Belief Networks in the identification of fraudulent financial statements. The input vector is composed of ratios derived from financial statements. The three models are compared in terms of their performances. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "20f4bcde35458104271e9127d8b7f608",
"text": "OBJECTIVES\nTo evaluate the effect of bulk-filling high C-factor posterior cavities on adhesion to cavity-bottom dentin.\n\n\nMETHODS\nA universal flowable composite (G-ænial Universal Flo, GC), a bulk-fill flowable base composite (SDR Posterior Bulk Fill Flowable Base, Dentsply) and a conventional paste-like composite (Z100, 3M ESPE) were bonded (G-ænial Bond, GC) into standardized cavities with different cavity configurations (C-factors), namely C=3.86 (Class-I cavity of 2.5mm deep, bulk-filled), C=5.57 (Class-I cavity of 4mm deep, bulk-filled), C=1.95 (Class-I cavity of 2.5mm deep, filled in three equal layers) and C=0.26 (flat surface). After one-week water storage, the restorations were sectioned in 4 rectangular micro-specimens and subjected to a micro-tensile bond strength (μTBS) test.\n\n\nRESULTS\nHighly significant differences were found between pairs of means of the experimental groups (Kruskal-Wallis, p<0.0001). Using the bulk-fill flowable base composite SDR (Dentsply), no significant differences in μTBS were measured among all cavity configurations (p>0.05). Using the universal flowable composite G-ænial Universal Flo (GC) and the conventional paste-like composite Z100 (3M ESPE), the μTBS to cavity-bottom dentin was not significantly different from that of SDR (Dentsply) when the cavities were layer-filled or the flat surface was build up in layers; it was however significantly lower when the Class-I cavities were filled in bulk, irrespective of cavity depth.\n\n\nSIGNIFICANCE\nThe filling technique and composite type may have a great impact on the adhesion of the composite, in particular in high C-factor cavities. While the bulk-fill flowable base composite provided satisfactory bond strengths regardless of filling technique and cavity depth, adhesion failed when conventional composites were used in bulk.",
"title": ""
},
{
"docid": "3746275fe4cfd6132d9b7a2a38639356",
"text": "A design procedure for circularly polarized waveguide slot linear arrays is presented. The array element, a circularly polarized radiator, consists of two closely spaced inclined radiating slots. Both the characterization of the isolated element and the evaluation of the mutual coupling between the array elements are performed by using a method of moments procedure. A number of traveling wave arrays with equiphase excitations are designed and then analyzed using a finite element method commercial software. A good circular polarization is achieved, the design goals on the far field pattern are fulfilled and high antenna efficiency can be obtained",
"title": ""
},
{
"docid": "4d69284c25e1a9a503dd1c12fde23faa",
"text": "Human pose estimation has been actively studied for decades. While traditional approaches rely on 2d data like images or videos, the development of Time-of-Flight cameras and other depth sensors created new opportunities to advance the field. We give an overview of recent approaches that perform human motion analysis which includes depthbased and skeleton-based activity recognition, head pose estimation, facial feature detection, facial performance capture, hand pose estimation and hand gesture recognition. While the focus is on approaches using depth data, we also discuss traditional image based methods to provide a broad overview of recent developments in these areas.",
"title": ""
},
{
"docid": "251f5f5af4aa9390f6e144956006097f",
"text": "As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting risk of criminal recidivism, concerns have been raised about the fairness of algorithmic decision making. Most prior works on algorithmic fairness normatively prescribe how fair decisions ought to be made. In contrast, here, we descriptively survey users for how they perceive and reason about fairness in algorithmic decision making. A key contribution of this work is the framework we propose to understand why people perceive certain features as fair or unfair to be used in algorithms. Our framework identifies eight properties of features, such as relevance, volitionality and reliability, as latent considerations that inform people’s moral judgments about the fairness of feature use in decision-making algorithms. We validate our framework through a series of scenario-based surveys with 576 people. We find that, based on a person’s assessment of the eight latent properties of a feature in our exemplar scenario, we can accurately (> 85%) predict if the person will judge the use of the feature as fair. Our findings have important implications. At a high-level, we show that people’s unfairness concerns are multi-dimensional and argue that future studies need to address unfairness concerns beyond discrimination. At a low-level, we find considerable disagreements in people’s fairness judgments. We identify root causes of the disagreements, and note possible pathways to resolve them.",
"title": ""
},
{
"docid": "a8c373535cfc4a574f0a91eca1eb10c3",
"text": "Changes in the media landscape have made simultaneous usage of the computer and television increasingly commonplace, but little research has explored how individuals navigate this media multitasking environment. Prior work suggests that self-insight may be limited in media consumption and multitasking environments, reinforcing a rising need for direct observational research. A laboratory experiment recorded both younger and older individuals as they used a computer and television concurrently, multitasking across television and Internet content. Results show that individuals are attending primarily to the computer during media multitasking. Although gazes last longer on the computer when compared to the television, the overall distribution of gazes is strongly skewed toward very short gazes only a few seconds in duration. People switched between media at an extreme rate, averaging more than 4 switches per min and 120 switches over the 27.5-minute study exposure. Participants had little insight into their switching activity and recalled their switching behavior at an average of only 12 percent of their actual switching rate revealed in the objective data. Younger individuals switched more often than older individuals, but other individual differences such as stated multitasking preference and polychronicity had little effect on switching patterns or gaze duration. This overall pattern of results highlights the importance of exploring new media environments, such as the current drive toward media multitasking, and reinforces that self-monitoring, post hoc surveying, and lay theory may offer only limited insight into how individuals interact with media.",
"title": ""
},
{
"docid": "eb391a570fdf79df51987016ad3abcbc",
"text": "BACKGROUND\nReliable and comparable analysis of risks to health is key for preventing disease and injury. Causal attribution of morbidity and mortality to risk factors has traditionally been in the context of individual risk factors, often in a limited number of settings, restricting comparability. Our aim was to estimate the contributions of selected major risk factors to global and regional burden of disease in a unified framework.\n\n\nMETHODS\nFor 26 selected risk factors, expert working groups undertook a comprehensive review of published work and other sources--eg, government reports and international databases--to obtain data on the prevalence of risk factor exposure and hazard size for 14 epidemiological regions of the world. Population attributable fractions were estimated by applying the potential impact fraction relation, and applied to the mortality and burden of disease estimates from the global burden of disease (GBD) database.\n\n\nFINDINGS\nChildhood and maternal underweight (138 million disability adjusted life years [DALY], 9.5%), unsafe sex (92 million DALY, 6.3%), high blood pressure (64 million DALY, 4.4%), tobacco (59 million DALY, 4.1%), and alcohol (58 million DALY, 4.0%) were the leading causes of global burden of disease. In the poorest regions of the world, childhood and maternal underweight, unsafe sex, unsafe water, sanitation, and hygiene, indoor smoke from solid fuels, and various micronutrient deficiencies were major contributors to loss of healthy life. In both developing and developed regions, alcohol, tobacco, high blood pressure, and high cholesterol were major causes of disease burden.\n\n\nINTERPRETATION\nSubstantial proportions of global disease burden are attributable to these major risks, to an extent greater than previously estimated. Developing countries suffer most or all of the burden due to many of the leading risks. Strategies that target these known risks can provide substantial and underestimated public-health gains.",
"title": ""
},
{
"docid": "b5b73560481ad29bed07ddf156531561",
"text": "IQ heritability, the portion of a population's IQ variability attributable to the effects of genes, has been investigated for nearly a century, yet it remains controversial. Covariance between relatives may be due not only to genes, but also to shared environments, and most previous models have assumed different degrees of similarity induced by environments specific to twins, to non-twin siblings (henceforth siblings), and to parents and offspring. We now evaluate an alternative model that replaces these three environments by two maternal womb environments, one for twins and another for siblings, along with a common home environment. Meta-analysis of 212 previous studies shows that our ‘maternal-effects’ model fits the data better than the ‘family-environments’ model. Maternal effects, often assumed to be negligible, account for 20% of covariance between twins and 5% between siblings, and the effects of genes are correspondingly reduced, with two measures of heritability being less than 50%. The shared maternal environment may explain the striking correlation between the IQs of twins, especially those of adult twins that were reared apart. IQ heritability increases during early childhood, but whether it stabilizes thereafter remains unclear. A recent study of octogenarians, for instance, suggests that IQ heritability either remains constant through adolescence and adulthood, or continues to increase with age. Although the latter hypothesis has recently been endorsed, it gathers only modest statistical support in our analysis when compared to the maternal-effects hypothesis. Our analysis suggests that it will be important to understand the basis for these maternal effects if ways in which IQ might be increased are to be identified.",
"title": ""
},
{
"docid": "11f8f9bcee6375f499a5db0435e10f30",
"text": "In the field of reverse engineering one often faces the problem of repairing triangulations with holes, intersecting triangles, Möbius-band-like structures or other artifacts. In this paper we present a novel approach for generating manifold triangle meshes from such incomplete or imperfect triangulations. Even for heavily damaged triangulations, representing closed surfaces with arbitrary genus, our algorithm results in correct manifold triangle meshes. The algorithm is based on a randomized optimization technique from probability calculus called simulated annealing.",
"title": ""
},
{
"docid": "8007efba73f42a8015262d502fb4b545",
"text": "In this paper we present a universal haptic drive (UHD), a device that enables rehabilitation of either arm (“ARM” mode) or wrist (“WRIST” mode) movement in two degrees-of-freedom. The mode of training depends on the selected mechanical configuration, which depends on locking/unlocking of a passive universal joint. Actuation of the device is accomplished by utilizing a series elastic actuation principle, which enables use of off-the-shelf mechanical and actuation components. A proportional force control scheme, needed for implementation of impedance control based movement training, was implemented. The device performance in terms of achievable lower and upper bound of viable impedance range was evaluated through adequately chosen sinusoidal movement in eight directions of a planar movement for the “ARM” mode and in eight directions of a combined wrist flexion/extension and forearm pronation/supination movement for the “WRIST” mode. Additionally, suitability of the universal haptic drive for movement training was tested in a series of training sessions conducted with a chronic stroke subject. The results have shown that reliable and repeatable performance can be achieved in both modes of operation for all tested directions.",
"title": ""
},
{
"docid": "e63836b5053b7f56d5ad5081a7ef79b7",
"text": "This paper presents interfaces for exploring large collections of fonts for design tasks. Existing interfaces typically list fonts in a long, alphabetically-sorted menu that can be challenging and frustrating to explore. We instead propose three interfaces for font selection. First, we organize fonts using high-level descriptive attributes, such as \"dramatic\" or \"legible.\" Second, we organize fonts in a tree-based hierarchical menu based on perceptual similarity. Third, we display fonts that are most similar to a user's currently-selected font. These tools are complementary; a user may search for \"graceful\" fonts, select a reasonable one, and then refine the results from a list of fonts similar to the selection. To enable these tools, we use crowdsourcing to gather font attribute data, and then train models to predict attribute values for new fonts. We use attributes to help learn a font similarity metric using crowdsourced comparisons. We evaluate the interfaces against a conventional list interface and find that our interfaces are preferred to the baseline. Our interfaces also produce better results in two real-world tasks: finding the nearest match to a target font, and font selection for graphic designs.",
"title": ""
}
] |
scidocsrr
|
2d2d778ca09e97ab75edc3745dedf165
|
Joint Matrix-Tensor Factorization for Knowledge Base Inference
|
[
{
"docid": "d3997f030d5d7287a4c6557681dc7a46",
"text": "This paper presents the first use of a computational model of natural logic—a system of logical inference which operates over natural language—for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.",
"title": ""
},
{
"docid": "e813c3ce6c9cbf9e9d61a14cdf609220",
"text": "Most information extraction (IE) systems identify facts that are explicitly stated in text. However, in natural language, some facts are implicit, and identifying them requires “reading between the lines”. Human readers naturally use common sense knowledge to infer such implicit information from the explicitly stated facts. We propose an approach that uses Bayesian Logic Programs (BLPs), a statistical relational model combining firstorder logic and Bayesian networks, to infer additional implicit information from extracted facts. It involves learning uncertain commonsense knowledge (in the form of probabilistic first-order rules) from natural language text by mining a large corpus of automatically extracted facts. These rules are then used to derive additional facts from extracted information using BLP inference. Experimental evaluation on a benchmark data set for machine reading demonstrates the efficacy of our approach.",
"title": ""
},
{
"docid": "78cda62ca882bb09efc08f7d4ea1801e",
"text": "Open Domain: There are nearly an unbounded number of classes, objects and relations Missing Data: Many useful facts are never explicitly stated No Negative Examples: Labeling positive and negative examples for all interesting relations is impractical Learning First-Order Horn Clauses from Web Text Stefan Schoenmackers Oren Etzioni Daniel S. Weld Jesse Davis Turing Center, University of Washington Katholieke Universiteit Leuven",
"title": ""
}
] |
[
{
"docid": "8bcb5def2a0b847a5d0800849443e5bc",
"text": "BACKGROUND\nMMPs play a crucial role in the process of cancer invasion and metastasis.\n\n\nMETHODS\nThe influence of NAC on invasion and MMP-9 production of human bladder cancer cell line T24 was investigated using an in vitro invasion assay, gelatin zymography, Western and Northern blot analyses and RT-PCR assays.\n\n\nRESULTS\nTPA increased the number of invading T24 cells through reconstituted basement membrane more than 10-fold compared to basal condition. NAC inhibited TPA-enhanced invasion dose-dependently. TPA increased the MMP-9 production by T24 cells without altering expression of TIMP-1 gene, while NAC suppressed TPA-enhanced production of MMP-9. Neither TPA nor NAC altered TIMP-1 mRNA level in T24 cells. In vitro experiments demonstrated that MMP-9 was directly inhibited by NAC but was not influenced by TPA.\n\n\nCONCLUSION\nNAC limits invasion of T24 human bladder cancer cells by inhibiting the MMP-9 production in addition to a direct inhibition of MMP-9 activity.",
"title": ""
},
{
"docid": "d0a2c8cf31e1d361a7c2b306dffddc25",
"text": "During the first years of the so called fourth industrial revolution, main attempts that tried to define the main ideas and tools behind this new era of manufacturing, always end up referring to the concept of smart machines that would be able to communicate with each and with the environment. In fact, the defined cyber physical systems, connected by the internet of things, take all the attention when referring to the new industry 4.0. But, nevertheless, the new industrial environment will benefit from several tools and applications that complement the real formation of a smart, embedded system that is able to perform autonomous tasks. And most of these revolutionary concepts rest in the same background theory as artificial intelligence does, where the analysis and filtration of huge amounts of incoming information from different types of sensors, assist to the interpretation and suggestion of the most recommended course of action. For that reason, artificial intelligence science suit perfectly with the challenges that arise in the consolidation of the fourth industrial revolution.",
"title": ""
},
{
"docid": "8549acfe9c5b30dc4196ea50139a35ed",
"text": "The development of methods and tools for the generation of visually appealing motion sequences using prerecorded motion capture data has become an important research area in computer animation. In particular, data-driven approaches have been used for reconstructing high-dimensional motion sequences from low-dimensional control signals. In this article, we contribute to this strand of research by introducing a novel framework for generating full-body animations controlled by only four 3D accelerometers that are attached to the extremities of a human actor. Our approach relies on a knowledge base that consists of a large number of motion clips obtained from marker-based motion capturing. Based on the sparse accelerometer input a cross-domain retrieval procedure is applied to build up a lazy neighborhood graph in an online fashion. This graph structure points to suitable motion fragments in the knowledge base, which are then used in the reconstruction step. Supported by a kd-tree index structure, our procedure scales to even large datasets consisting of millions of frames. Our combined approach allows for reconstructing visually plausible continuous motion streams, even in the presence of moderate tempo variations which may not be directly reflected by the given knowledge base.",
"title": ""
},
{
"docid": "59ce0a9af71c96d684ffb385df1f1f23",
"text": "STUDIES in animals have shown that the amygdala receives highly processed visual input1,2, contains neurons that respond selectively to faces3, and that it participates in emotion4,5 and social behaviour6. Although studies in epileptic patients support its role in emotion7, determination of the amygdala's function in humans has been hampered by the rarity of patients with selective amygdala lesions8. Here, with the help of one such rare patient, we report findings that suggest the human amygdala may be indispensable to: (1) recognize fear in facial expressions; (2) recognize multiple emotions in a single facial expression; but (3) is not required to recognize personal identity from faces. These results suggest that damage restricted to the amygdala causes very specific recognition impairments, and thus constrains the broad notion that the amygdala is involved in emotion.",
"title": ""
},
{
"docid": "9ece8dd1905fe0cba49d0fa8c1b21c62",
"text": "This paper describes the origins and history of multiple resource theory in accounting for di erences in dual task interference. One particular application of the theory, the 4-dimensional multiple resources model, is described in detail, positing that there will be greater interference between two tasks to the extent that they share stages (perceptual/cognitive vs response) sensory modalities (auditory vs visual), codes (visual vs spatial) and channels of visual information (focal vs ambient). A computational rendering of this model is then presented. Examples are given of how the model predicts interference di erences in operational environments. Finally, three challenges to the model are outlined regarding task demand coding, task allocation and visual resource competition.",
"title": ""
},
{
"docid": "6a7839b42c549e31740f70aa0079ad46",
"text": "Deep learning has improved performance on many natural language processing (NLP) tasks individually. However, general NLP models cannot emerge within a paradigm that focuses on the particularities of a single metric, dataset, and task. We introduce the Natural Language Decathlon (decaNLP), a challenge that spans ten tasks: question answering, machine translation, summarization, natural language inference, sentiment analysis, semantic role labeling, relation extraction, goal-oriented dialogue, semantic parsing, and commonsense pronoun resolution. We cast all tasks as question answering over a context. Furthermore, we present a new multitask question answering network (MQAN) that jointly learns all tasks in decaNLP without any task-specific modules or parameters more effectively than sequence-to-sequence and reading comprehension baselines. MQAN shows improvements in transfer learning for machine translation and named entity recognition, domain adaptation for sentiment analysis and natural language inference, and zero-shot capabilities for text classification. We demonstrate that the MQAN’s multi-pointer-generator decoder is key to this success and that performance further improves with an anti-curriculum training strategy. Though designed for decaNLP, MQAN also achieves state of the art results on the WikiSQL semantic parsing task in the single-task setting. We also release code for procuring and processing data, training and evaluating models, and reproducing all experiments for decaNLP.",
"title": ""
},
{
"docid": "33e88cb3ce4b17d3540b4dfc6d9ef08a",
"text": "We propose MAD-GAN, an intuitive generalization to the Generative Adversarial Networks (GANs) and its conditional variants to address the well known problem of mode collapse. First, MAD-GAN is a multi-agent GAN architecture incorporating multiple generators and one discriminator. Second, to enforce that different generators capture diverse high probability modes, the discriminator of MAD-GAN is designed such that along with finding the real and fake samples, it is also required to identify the generator that generated the given fake sample. Intuitively, to succeed in this task, the discriminator must learn to push different generators towards different identifiable modes. We perform extensive experiments on synthetic and real datasets and compare MAD-GAN with different variants of GAN. We show high quality diverse sample generations for challenging tasks such as image-to-image translation and face generation. In addition, we also show that MAD-GAN is able to disentangle different modalities when trained using highly challenging diverse-class dataset (e.g. dataset with images of forests, icebergs, and bedrooms). In the end, we show its efficacy on the unsupervised feature representation task.",
"title": ""
},
{
"docid": "83a5bab37e1e576f579aee47a9ddaba2",
"text": "This chapter reviews 218 published and unpublished research reports of pressure ulcer prevention and management by nurse researchers and researchers from other disciplines. The electronic databases MEDLINE (1966-July 2001), CINAHL (1982-June 2001), AMED (1985-July 2001), and EI Compedex Plus (1980-June 2001) were selected for the searches because of their focus on health and applied research. Moreover, evaluations of previous review articles and seminal studies that were published before 1966 are also included. Research conducted worldwide and published in English between 1930 and 2001 was included for review. Studies using descriptive, correlational, longitudinal, and randomized control trials were included. This review found that numerous gaps remain in our understanding of effective pressure ulcer prevention and management. Moreover, the majority of pressure ulcer care is derived from expert opinion rather than empirical evidence. Thus, additional research is needed to investigate pressure ulcer risk factors of ethnic minorities. Further studies are needed that examine the impact of specific preventive interventions (e.g., turning intervals based on risk stratification) and the cost-effectiveness of comprehensive prevention programs to prevent pressure ulcers. Finally, an evaluation is needed of various aspects of pressure ulcer management (e.g., use of support surfaces, use of adjunctive therapies) and healing of pressure ulcers.",
"title": ""
},
{
"docid": "2461a83b1da812bfdce3a802a2fed972",
"text": "Training large neural networks requires distributing learning across multiple workers, where the cost of communicating gradients can be a significant bottleneck. SIGNSGD alleviates this problem by transmitting just the sign of each minibatch stochastic gradient. We prove that it can get the best of both worlds: compressed gradients and SGD-level convergence rate. The relative `1/`2 geometry of gradients, noise and curvature informs whether SIGNSGD or SGD is theoretically better suited to a particular problem. On the practical side we find that the momentum counterpart of SIGNSGD is able to match the accuracy and convergence speed of ADAM on deep Imagenet models. We extend our theory to the distributed setting, where the parameter server uses majority vote to aggregate gradient signs from each worker enabling 1-bit compression of worker-server communication in both directions. Using a theorem by Gauss (1823) we prove that majority vote can achieve the same reduction in variance as full precision distributed SGD. Thus, there is great promise for sign-based optimisation schemes to achieve fast communication and fast convergence. Code to reproduce experiments is to be found at https://github.com/jxbz/signSGD.",
"title": ""
},
{
"docid": "a0f4b7f3f9f2a5d430a3b8acead2b746",
"text": "Imagining a scene described in natural language with realistic layout and appearance of entities is the ultimate test of spatial, visual, and semantic world knowledge. Towards this goal, we present the Composition, Retrieval and Fusion Network (Craft), a model capable of learning this knowledge from video-caption data and applying it while generating videos from novel captions. Craft explicitly predicts a temporal-layout of mentioned entities (characters and objects), retrieves spatio-temporal entity segments from a video database and fuses them to generate scene videos. Our contributions include sequential training of components of Craft while jointly modeling layout and appearances, and losses that encourage learning compositional representations for retrieval. We evaluate Craft on semantic fidelity to caption, composition consistency, and visual quality. Craft outperforms direct pixel generation approaches and generalizes well to unseen captions and to unseen video databases with no text annotations. We demonstrate Craft on Flintstones, a new richly annotated video-caption dataset with over 25000 videos. For a glimpse of videos generated by Craft, see https://youtu.be/688Vv86n0z8. Fred wearing a red hat is walking in the living room Retrieve Compose Retrieve Compose Retrieve Pebbles is sitting at a table in a room watching the television Retrieve Compose Retrieve Compose Compose Retrieve Retrieve Fuse",
"title": ""
},
{
"docid": "1c5f53fe8d663047a3a8240742ba47e4",
"text": "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLAHANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLAHANS. Our analysis and experiments show that with the assistance of CLAHANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLAHANS with that of existing clustering methods show that CLAHANS is the most efficient.",
"title": ""
},
{
"docid": "ea411e1666cf9f9e1220b0ec642d45de",
"text": "The night sky remains a largely unexplored frontier for biologists studying the behavior and physiology of free-ranging, nocturnal organisms. Conventional imaging tools and techniques such as night-vision scopes, infrared-reflectance cameras, flash cameras, and radar provide insufficient detail for the scale and resolution demanded by field researchers. A new tool is needed that is capable of imaging noninvasively in the dark at high-temporal and spatial resolution. Thermal infrared imaging represents the most promising such technology that is poised to revolutionize our ability to observe and document the behavior of free-ranging organisms in the dark. Herein we present several examples from our research on free-ranging bats that highlight the power and potential of thermal infrared imaging for the study of animal behavior, energetics and censusing of large colonies, among others. Using never-before-seen video footage and data, we have begun to answer questions that have puzzled biologists for decades, as well as to generate new hypotheses and insight. As we begin to appreciate the functional significance of the aerosphere as a dynamic environment that affects organisms at different spatial and temporal scales, thermal infrared imaging can be at the forefront of the effort to explore this next frontier.",
"title": ""
},
{
"docid": "16a95d66bcd74cdfc0e7369db90366b2",
"text": "The problem of authorship attribution – attributing texts to their original authors – has been an active research area since the end of the 19th century, attracting increased interest in the last decade. Most of the work on authorship attribution focuses on scenarios with only a few candidate authors, but recently considered cases with tens to thousands of candidate authors were found to be much more challenging. In this paper, we propose ways of employing Latent Dirichlet Allocation in authorship attribution. We show that our approach yields state-of-the-art performance for both a few and many candidate authors, in cases where these authors wrote enough texts to be modelled effectively.",
"title": ""
},
{
"docid": "54b6c687262c5d051529e5ed2d2bf8a1",
"text": "INTRODUCTION\nThe chick embryo is an emerging in vivo model in several areas of pre-clinical research including radiopharmaceutical sciences. Herein, it was evaluated as a potential test system for assessing the biodistribution and in vivo stability of radiopharmaceuticals. For this purpose, a number of radiopharmaceuticals labeled with (18)F, (125)I, (99m)Tc, and (177)Lu were investigated in the chick embryo and compared with the data obtained in mice.\n\n\nMETHODS\nChick embryos were cultivated ex ovo for 17-19 days before application of the radiopharmaceutical directly into the peritoneum or intravenously using a vein of the chorioallantoic membrane (CAM). At a defined time point after application of radioactivity, the embryos were euthanized by shock-freezing using liquid nitrogen. Afterwards they were separated from residual egg components for post mortem imaging purposes using positron emission tomography (PET) or single photon emission computed tomography (SPECT).\n\n\nRESULTS\nSPECT images revealed uptake of [(99m)Tc]pertechnetate and [(125)I]iodide in the thyroid of chick embryos and mice, whereas [(177)Lu]lutetium, [(18)F]fluoride and [(99m)Tc]-methylene diphosphonate ([(99m)Tc]-MDP) were accumulated in the bones. [(99m)Tc]-dimercaptosuccinic acid ((99m)Tc-DMSA) and the somatostatin analog [(177)Lu]-DOTATOC, as well as the folic acid derivative [(177)Lu]-DOTA-folate showed accumulation in the renal tissue whereas [(99m)Tc]-mebrofenin accumulated in the gall bladder and intestine of both species. In vivo dehalogenation of [(18)F]fallypride and of the folic acid derivative [(125)I]iodo-tyrosine-folate was observed in both species. In contrast, the 3'-aza-2'-[(18)F]fluorofolic acid ([(18)F]-AzaFol) was stable in the chick embryo as well as in the mouse.\n\n\nCONCLUSIONS\nOur results revealed the same tissue distribution profile and in vivo stability of radiopharmaceuticals in the chick embryo and the mouse. This observation is promising with regard to a potential use of the chick embryo as an inexpensive and simple test model for preclinical screening of novel radiopharmaceuticals.",
"title": ""
},
{
"docid": "1e6310e8b16625e8f8319c7386723e55",
"text": "Exploiting memory disclosure vulnerabilities like the HeartBleed bug may cause arbitrary reading of a victim's memory, leading to leakage of critical secrets such as crypto keys, personal identity and financial information. While isolating code that manipulates critical secrets into an isolated execution environment is a promising countermeasure, existing approaches are either too coarse-grained to prevent intra-domain attacks, or require excessive intervention from low-level software (e.g., hypervisor or OS), or both. Further, few of them are applicable to large-scale software with millions of lines of code. This paper describes a new approach, namely SeCage, which retrofits commodity hardware virtualization extensions to support efficient isolation of sensitive code manipulating critical secrets from the remaining code. SeCage is designed to work under a strong adversary model where a victim application or even the OS may be controlled by the adversary, while supporting large-scale software with small deployment cost. SeCage combines static and dynamic analysis to decompose monolithic software into several compart- ments, each of which may contain different secrets and their corresponding code. Following the idea of separating control and data plane, SeCage retrofits the VMFUNC mechanism and nested paging in Intel processors to transparently provide different memory views for different compartments, while allowing low-cost and transparent invocation across domains without hypervisor intervention.\n We have implemented SeCage in KVM on a commodity Intel machine. To demonstrate the effectiveness of SeCage, we deploy it to the Nginx and OpenSSH server with the OpenSSL library as well as CryptoLoop with small efforts. Security evaluation shows that SeCage can prevent the disclosure of private keys from HeartBleed attacks and memory scanning from rootkits. The evaluation shows that SeCage only incurs small performance and space overhead.",
"title": ""
},
{
"docid": "fb7a1d6fee1f3f763d10b7bccfbe7cd1",
"text": "Through the application of life course theory to the study of sexual orientation, this paper specifies a new paradigm for research on human sexual orientation that seeks to reconcile divisions among biological, social science, and humanistic paradigms. Recognizing the historical, social, and cultural relativity of human development, this paradigm argues for a moderate stance between essentialism and constructionism, identifying (a) the history of sexual orientation as an identity category emerging from the medical model of homosexuality in the late 1800s; (b) the presence of same-sex desire across species, history, and cultures, revealing its normality; (c) an underlying affective motivational force which organizes sexual desire within individuals, and (d) the assumption of a sexual identity in response to the identity and behavioral possibilities of a culture. This framework considers the biology of sexual desire while simultaneously acknowledging the socially constructed nature of identity and the historical foundations of sexual orientation as a meaningful index of human identity. The study of human sexual orientation is currently confronted with two significant problems. First, research on sexual orientation continues to be intellectually fragmented along disciplinary lines, primarily due to divergent epistemological, methodological, and metatheoretical perspectives. Second, as societal transformations fundamentally alter the life course of individuals who identify as gay, lesbian, or bisexual [Cohler & Hammack, in press], it becomes increasingly apparent Copyright © 2005 S. Karger AG, Basel",
"title": ""
},
{
"docid": "d74131a431ca54f45a494091e576740c",
"text": "In today’s highly competitive business environments with shortened product and technology life cycle, it is critical for software industry to continuously innovate. This goal can be achieved by developing a better understanding and control of the activities and determinants of innovation. Innovation measurement initiatives assess innovation capability, output and performance to help develop such an understanding. This study explores various aspects relevant to innovation measurement ranging from definitions, measurement frameworks and metrics that have been proposed in literature and used in practice. A systematic literature review followed by an online questionnaire and interviews with practitioners and academics were employed to identify a comprehensive definition of innovation that can be used in software industry. The metrics for the evaluation of determinants, inputs, outputs and performance were also aggregated and categorised. Based on these findings, a conceptual model of the key measurable elements of innovation was constructed from the findings of the systematic review. The model was further refined after feedback from academia and industry through interviews.",
"title": ""
},
{
"docid": "353fae3edb830aa86db682f28f64fd90",
"text": "The penetration of renewable resources in power system has been increasing in recent years. Many of these resources are uncontrollable and variable in nature, wind in particular, are relatively unpredictable. At high penetration levels, volatility of wind power production could cause problems for power system to maintain system security and reliability. One of the solutions being proposed to improve reliability and performance of the system is to integrate energy storage devices into the network. In this paper, unit commitment and dispatch schedule in power system with and without energy storage is examined for different level of wind penetration. Battery energy storage (BES) is considered as an alternative solution to store energy. The SCUC formulation and solution technique with wind power and BES is presented. The proposed formulation and model is validated with eight-bus system case study. Further, a discussion on the role of BES on locational pricing, economic, peak load shaving, and transmission congestion management had been made.",
"title": ""
},
{
"docid": "c757e54a14beec3b4930ad050a16d311",
"text": "The University Class Scheduling Problem (UCSP) is concerned with assigning a number of courses to classrooms taking into consideration constraints like classroom capacities and university regulations. The problem also attempts to optimize the performance criteria and distribute the courses fairly to classrooms depending on the ratio of classroom capacities to course enrollments. The problem is a classical scheduling problem and considered to be NP-complete. It has received some research during the past few years given its wide use in colleges and universities. Several formulations and algorithms have been proposed to solve scheduling problems, most of which are based on local search techniques. In this paper, we propose a complete approach using integer linear programming (ILP) to solve the problem. The ILP model of interest is developed and solved using the three advanced ILP solvers based on generic algorithms and Boolean Satisfiability (SAT) techniques. SAT has been heavily researched in the past few years and has lead to the development of powerful 0-1 ILP solvers that can compete with the best available generic ILP solvers. Experimental results indicate that the proposed model is tractable for reasonable-sized UCSP problems. Index Terms — University Class Scheduling, Optimization, Integer Linear Programming (ILP), Boolean Satisfiability.",
"title": ""
},
{
"docid": "9fa46e75dc28961fe3ce6fadd179cff7",
"text": "Task-oriented repetitive movements can improve motor recovery in patients with neurological or orthopaedic lesions. The application of robotics can serve to assist, enhance, evaluate, and document neurological and orthopaedic rehabilitation. ARMin II is the second prototype of a robot for arm therapy applicable to the training of activities of daily living. ARMin II has a semi-exoskeletal structure with seven active degrees of freedom (two of them coupled), five adjustable segments to fit in with different patient sizes, and is equipped with position and force sensors. The mechanical structure, the actuators and the sensors of the robot are optimized for patient-cooperative control strategies based on impedance and admittance architectures. This paper describes the mechanical structure and kinematics of ARMin II.",
"title": ""
}
] |
scidocsrr
|
21daeeae03a4140b3b6fcd538ae048df
|
Are hardware performance counters a cost effective way for integrity checking of programs
|
[
{
"docid": "552545ea9de47c26e1626efc4a0f201e",
"text": "For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. We propose a principle for the identification of nontriviality. We demonstrated this approach by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double-pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the \"alphabet\" used to describe those systems.",
"title": ""
}
] |
[
{
"docid": "f3c44f35a2942b3a2b52c0ad72b55aff",
"text": "An overview of Polish and foreign literature concerning the chemical composition of edible mushrooms both cultivated and harvested in natural sites in Poland and abroad is presented. 100 g of fresh mushrooms contains 5.3-14.8 g dry matter, 1.5-6.7 g of carbohydrates, 1.5-3.0 g of protein and 0.3-0.4 g of fat. Mushrooms are a high valued source of mineral constituents, particularly potassium, phosphorus and magnesium and of vitamins of the B group, chiefly vitamins B2 and B3 and also vitamin D. The aroma of the discussed raw materials is based on about 150 aromatic compounds. The mushrooms can be a source of heavy metals and radioactive substances. They are also characterized by the occurrence of numerous enzymes.",
"title": ""
},
{
"docid": "1450854a32ea6c18f4cc817f686aaf15",
"text": "This article reports on the development of two measures relating to historical trauma among American Indian people: The Historical Loss Scale and The Historical Loss Associated Symptoms Scale. Measurement characteristics including frequencies, internal reliability, and confirmatory factor analyses were calculated based on 143 American Indian adult parents of children aged 10 through 12 years who are part of an ongoing longitudinal study of American Indian families in the upper Midwest. Results indicate both scales have high internal reliability. Frequencies indicate that the current generation of American Indian adults have frequent thoughts pertaining to historical losses and that they associate these losses with negative feelings. Two factors of the Historical Loss Associated Symptoms Scale indicate one anxiety/depression component and one anger/avoidance component. The results are discussed in terms of future research and theory pertaining to historical trauma among American Indian people.",
"title": ""
},
{
"docid": "d7573e7b3aac75b49132076ce9fc83e0",
"text": "The prevalent use of social media produces mountains of unlabeled, high-dimensional data. Feature selection has been shown effective in dealing with high-dimensional data for efficient data mining. Feature selection for unlabeled data remains a challenging task due to the absence of label information by which the feature relevance can be assessed. The unique characteristics of social media data further complicate the already challenging problem of unsupervised feature selection, (e.g., part of social media data is linked, which makes invalid the independent and identically distributed assumption), bringing about new challenges to traditional unsupervised feature selection algorithms. In this paper, we study the differences between social media data and traditional attribute-value data, investigate if the relations revealed in linked data can be used to help select relevant features, and propose a novel unsupervised feature selection framework, LUFS, for linked social media data. We perform experiments with real-world social media datasets to evaluate the effectiveness of the proposed framework and probe the working of its key components.",
"title": ""
},
{
"docid": "903ca1121c9906452e36210338113b12",
"text": "During the last decade, Norway has carried out an ambitious climate policy by implementing a relatively high carbon tax already in 1991. The Norwegian carbon taxes are among the highest in the world. Data for the development in CO2 emissions provide a unique opportunity to evaluate carbon taxes as a policy tool for CO2 abatement. We combine a divisia index decomposition method and applied general equilibrium simulations to decompose the emission changes, with and without the carbon taxes, in the period 1990-1999. We find that despite significant price increases for some fueltypes, the carbon tax effect on emissions was modest. The taxes contributed to a reduction in onshore emissions of only 1.5 percent and total emissions of 2.3 percent. With zero tax, the total emissions would have increased by 21.1 percent over the period 1990-1999, as opposed to the observed growth of 18.7 percent. This surprisingly small effect relates to the extensive tax exemptions and relatively inelastic demand in the sectors in which the tax is actually implemented. The tax does not work on the levied sources, and is exempted in sectors where it could have worked.",
"title": ""
},
{
"docid": "6c09932a4747c7e2d15b06720b1c48d9",
"text": "A distributed ledger made up of mutually distrusting nodes would allow for a single global database that records the state of deals and obligations between institutions and people. This would eliminate much of the manual, time consuming effort currently required to keep disparate ledgers synchronised with each other. It would also allow for greater levels of code sharing than presently used in the financial industry, reducing the cost of financial services for everyone. We present Corda, a platform which is designed to achieve these goals. This paper provides a high level introduction intended for the general reader. A forthcoming technical white paper elaborates on the design and fundamental architectural decisions.",
"title": ""
},
{
"docid": "5ef0c7a1e7970c1f37e18447c0c3aaf8",
"text": "Most existing high-performance co-segmentation algorithms are usually complicated due to the way of co-labelling a set of images and the requirement to handle quite a few parameters for effective co-segmentation. In this paper, instead of relying on the complex process of co-labelling multiple images, we perform segmentation on individual images but based on a combined saliency map that is obtained by fusing single-image saliency maps of a group of similar images. Particularly, a new multiple image based saliency map extraction, namely geometric mean saliency (GMS) method, is proposed to obtain the global saliency maps. In GMS, we transmit the saliency information among the images using the warping technique. Experiments show that our method is able to outperform state-of-the-art methods on three benchmark co-segmentation datasets.",
"title": ""
},
{
"docid": "e444dcc97882005658aca256991e816e",
"text": "The terms superordinate, hyponym, and subordinate designate the hierarchical taxonomic relationship of words. They also represent categories and concepts. This relationship is a subject of interest for anthropology, cognitive psychology, psycholinguistics, linguistic semantics, and cognitive linguistics. Taxonomic hierarchies are essentially classificatory systems, and they are supposed to reflect the way that speakers of a language categorize the world of experience. A well-formed taxonomy offers an orderly and efficient set of categories at different levels of specificity (Cruse 2000:180). However, the terms and levels of taxonomic hierarchy used in each discipline vary. This makes it difficult to carry out cross-disciplinary readings on the hierarchical taxonomy of words or categories, which act as an interface in these cognitive-based cross-disciplinary ventures. Not only words— terms and concepts differ but often the nature of the problem is compounded as some terms refer to differing word classes, categories and concepts at the same time. Moreover, the lexical relationship of terms among these lexical hierarchies is far from clear. As a result two lines of thinking can be drawn from the literature: (1) technical terms coined for the hierarchical relationship of words are conflicting and do not reflect reality or environment, and (2) the relationship among these hierarchies of word levels and the underlying principles followed to explain them are uncertain except that of inclusion.",
"title": ""
},
{
"docid": "eb7ccd69c0bbb4e421b8db3b265f5ba6",
"text": "The discovery of Novoselov et al. (2004) of a simple method to transfer a single atomic layer of carbon from the c-face of graphite to a substrate suitable for the measurement of its electrical and optical properties has led to a renewed interest in what was considered to be before that time a prototypical, yet theoretical, two-dimensional system. Indeed, recent theoretical studies of graphene reveal that the linear electronic band dispersion near the Brillouin zone corners gives rise to electrons and holes that propagate as if they were massless fermions and anomalous quantum transport was experimentally observed. Recent calculations and experimental determination of the optical phonons of graphene reveal Kohn anomalies at high-symmetry points in the Brillouin zone. They also show that the Born– Oppenheimer principle breaks down for doped graphene. Since a carbon nanotube can be viewed as a rolled-up sheet of graphene, these recent theoretical and experimental results on graphene should be important to researchers working on carbon nanotubes. The goal of this contribution is to review the exciting news about the electronic and phonon states of graphene and to suggest how these discoveries help understand the properties of carbon nanotubes.",
"title": ""
},
{
"docid": "47ea90e34fc95a941bc127ad8ccd2ca9",
"text": "The ever increasing number of cyber attacks requires the cyber security and forensic specialists to detect, analyze and defend against the cyber threats in almost real-time. In practice, timely dealing with such a large number of attacks is not possible without deeply perusing the attack features and taking corresponding intelligent defensive actions—this in essence defines cyber threat intelligence notion. However, such an intelligence would not be possible without the aid of artificial intelligence, machine learning and advanced data mining techniques to collect, analyse, and interpret cyber attack evidences. In this introductory chapter we first discuss the notion of cyber threat intelligence and its main challenges and opportunities, and then briefly introduce the chapters of the book which either address the identified challenges or present opportunistic solutions to provide threat intelligence.",
"title": ""
},
{
"docid": "aa6f0cc6aa491a94a585d4b8e82490ea",
"text": "Convolutional Neural Networks (CNNs) are state-of-theart models for many image and video classification tasks. However, training on large-size training samples is currently computationally impossible. Hence when the training data is multi-gigapixel images, only small patches of the original images can be used as training input. Since there is no guarantee that each patch is discriminative, we advocate the use of Multiple Instance Learning (MIL) to combine evidence from multiple patches sampled from the same image. In this paper we propose a framework that integrates MIL with CNNs. In our algorithm, patches of the images or videos are treated as instances, where only the imageor video-level label is given. Our algorithm iteratively identifies discriminative patches in a high resolution image and trains a CNN on them. In the test phase, instead of using voting to the predict the label of the image, we train a logistic regression model to aggregate the patch-level predictions. Our method selects discriminative patches more robustly through the use of Gaussian smoothing. We apply our method to glioma (the most common brain cancer) subtype classification based on multi-gigapixel whole slide images (WSI) from The Cancer Genome Atlas (TCGA) dataset. We can classify Glioblastoma (GBM) and Low-Grade Glioma (LGG) with an accuracy of 97%. Furthermore, for the first time, we attempt to classify the three most common subtypes of LGG, a much more challenging task. We achieved an accuracy of 57.1% which is similar to the inter-observer agreement between experienced pathologists.",
"title": ""
},
{
"docid": "5e36d572d7af3990d7a1f1e040f79d26",
"text": "In horizontal collaborations, carriers form coalitions in order to perform parts of their logistics operations jointly. By exchanging transportation requests among each other, they can operate more efficiently and in a more sustainable way. Collaborative vehicle routing has been extensively discussed in the literature. We identify three major streams of research: (i) centralized collaborative planning, (ii) decentralized planning without auctions, and (ii) auction-based decentralized planning. For each of them we give a structured overview on the state of knowledge and discuss future research directions.",
"title": ""
},
{
"docid": "a448b5e4e4bd017049226f06ce32fa9d",
"text": "We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator’s action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphoto- realistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 com- pared to the most accurate prior approximation scheme, while being the fastest. We show that our models general- ize across datasets and across resolutions, and investigate a number of extensions of the presented approach.",
"title": ""
},
{
"docid": "1e6e562a92b0fc15839a5a5fe191479d",
"text": "Self-tuning optical systems are of growing importance in technological applications such as mode-locked fiber lasers. Such self-tuning paradigms require intelligent algorithms capable of inferring approximate models of the underlying physics and discovering appropriate control laws in order to maintain robust performance for a given objective. In this work, we demonstrate the first integration of a deep learning (DL) architecture with model predictive control (MPC) in order to self-tune a mode-locked fiber laser. Not only can our DL-MPC algorithmic architecture approximate the unknown fiber birefringence, it also builds a dynamical model of the laser and appropriate control law for maintaining robust, high-energy pulses despite a stochastically drifting birefringence. We demonstrate the effectiveness of this method on a fiber laser which is mode-locked by nonlinear polarization rotation. The method advocated can be broadly applied to a variety of optical systems that require robust controllers. © 2018",
"title": ""
},
{
"docid": "d4406b74040e9f06b1d05cefade12c6c",
"text": "Steganography is a science to hide information, it hides a message to another object, and it increases the security of data transmission and archiving it. In the process of steganography, the hidden object in which data is hidden the carrier object and the new object, is called the steganography object. The multiple carriers, such as text, audio, video, image and so can be mentioned for steganography; however, audio has been significantly considered due to the multiplicity of uses in various fields such as the internet. For steganography process, several methods have been developed; including work in the temporary and transformation, each of has its own advantages and disadvantages, and special function. In this paper we mainly review and evaluate different types of audio steganography techniques, advantages and disadvantages.",
"title": ""
},
{
"docid": "3ae6703f2ea27b1c3418ce623aa394a0",
"text": "A Hardware Trojan is a malicious, undesired, intentional modification of an electronic circuit or design, resulting in the incorrect behaviour of an electronic device when in operation – a back-door that can be inserted into hardware. A Hardware Trojan may be able to defeat any and all security mechanisms (software or hardware-based) and subvert or augment the normal operation of an infected device. This may result in modifications to the functionality or specification of the hardware, the leaking of sensitive information, or a Denial of Service (DoS) attack. Understanding Hardware Trojans is vital when developing next generation defensive mechanisms for the development and deployment of electronics in the presence of the Hardware Trojan threat. Research over the past five years has primarily focussed on detecting the presence of Hardware Trojans in infected devices. This report reviews the state-of-the-art in Hardware Trojans, from the threats they pose through to modern prevention, detection and countermeasure techniques. APPROVED FOR PUBLIC RELEASE",
"title": ""
},
{
"docid": "1dd4a95adcd4f9e7518518148c3605ac",
"text": "Kernel modules are an integral part of most operating systems (OS) as they provide flexible ways of adding new functionalities (such as file system or hardware support) to the kernel without the need to recompile or reload the entire kernel. Aside from providing an interface between the user and the hardware, these modules maintain system security and reliability. Malicious kernel level exploits (e.g. code injections) provide a gateway to a system's privileged level where the attacker has access to an entire system. Such attacks may be detected by performing code integrity checks. Several commodity operating systems (such as Linux variants and MS Windows) maintain signatures of different pieces of kernel code in a database for code integrity checking purposes. However, it quickly becomes cumbersome and time consuming to maintain a database of legitimate dynamic changes in the code, such as regular module updates. In this paper we present Mod Checker, which checks in-memory kernel modules' code integrity in real time without maintaining a database of hashes. Our solution applies to virtual environments that have multiple virtual machines (VMs) running the same version of the operating system, an environment commonly found in large cloud servers. Mod Checker compares kernel module among a pool of VMs within a cloud. We thoroughly evaluate the effectiveness and runtime performance of Mod Checker and conclude that Mod Checker is able to detect any change in a kernel module's headers and executable content with minimal or no impact on the guest operating systems' performance.",
"title": ""
},
{
"docid": "085307dca53722f902dd3651e7383521",
"text": "BACKGROUND\nExposure to drugs and toxins is a major cause for patients' visits to the emergency department (ED).\n\n\nMETHODS\nRecommendations for the use of clinical laboratory tests were prepared by an expert panel of analytical toxicologists and ED physicians specializing in clinical toxicology. These recommendations were posted on the world wide web and presented in open forum at several clinical chemistry and clinical toxicology meetings.\n\n\nRESULTS\nA menu of important stat serum and urine toxicology tests was prepared for clinical laboratories who provide clinical toxicology services. For drugs-of-abuse intoxication, most ED physicians do not rely on results of urine drug testing for emergent management decisions. This is in part because immunoassays, although rapid, have limitations in sensitivity and specificity and chromatographic assays, which are more definitive, are more labor-intensive. Ethyl alcohol is widely tested in the ED, and breath testing is a convenient procedure. Determinations made within the ED, however, require oversight by the clinical laboratory. Testing for toxic alcohols is needed, but rapid commercial assays are not available. The laboratory must provide stat assays for acetaminophen, salicylates, co-oximetry, cholinesterase, iron, and some therapeutic drugs, such as lithium and digoxin. Exposure to other heavy metals requires laboratory support for specimen collection but not for emergent testing.\n\n\nCONCLUSIONS\nImprovements are needed for immunoassays, particularly for amphetamines, benzodiazepines, opioids, and tricyclic antidepressants. Assays for new drugs of abuse must also be developed to meet changing abuse patterns. As no clinical laboratory can provide services to meet all needs, the National Academy of Clinical Biochemistry Committee recommends establishment of regional centers for specialized toxicology testing.",
"title": ""
},
{
"docid": "f7bed669e86a76f707e0f22e58f15de9",
"text": "A new stream cipher, Grain, is proposed. The design targets hardware environments where gate count, power consumption and memory is very limited. It is based on two shift registers and a nonlinear output function. The cipher has the additional feature that the speed can be increased at the expense of extra hardware. The key size is 80 bits and no attack faster than exhaustive key search has been identified. The hardware complexity and throughput compares favourably to other hardware oriented stream ciphers like E0 and A5/1.",
"title": ""
},
{
"docid": "328aad76b94b34bf49719b98ae391cfe",
"text": "We discuss methods for statistically analyzing the output from stochastic discrete-event or Monte Carlo simulations. Terminating and steady-state simulations are considered.",
"title": ""
},
{
"docid": "a208f2a2720313479773c00a74b1cbc6",
"text": "I present a web service for querying an embedding of entities in the Wikidata knowledge graph. The embedding is trained on the Wikidata dump using Gensim’s Word2Vec implementation and a simple graph walk. A REST API is implemented. Together with the Wikidata API the web service exposes a multilingual resource for over 600’000 Wikidata items and properties.",
"title": ""
}
] |
scidocsrr
|
f4899ed8b7e1888aa4aa000835866f58
|
Neuro-Symbolic Program Synthesis
|
[
{
"docid": "c183e77e531141ea04b7ea95149be70a",
"text": "Millions of computer end users need to perform tasks over large spreadsheet data, yet lack the programming knowledge to do such tasks automatically. We present a programming by example methodology that allows end users to automate such repetitive tasks. Our methodology involves designing a domain-specific language and developing a synthesis algorithm that can learn programs in that language from user-provided examples. We present instantiations of this methodology for particular domains of tasks: (a) syntactic transformations of strings using restricted forms of regular expressions, conditionals, and loops, (b) semantic transformations of strings involving lookup in relational tables, and (c) layout transformations on spreadsheet tables. We have implemented this technology as an add-in for the Microsoft Excel Spreadsheet system and have evaluated it successfully over several benchmarks picked from various Excel help forums.",
"title": ""
}
] |
[
{
"docid": "ed7832f6fbb1777ab3139cc8b5dd2d28",
"text": "Tree ensemble models such as random forests and boosted trees are among the most widely used and practically successful predictive models in applied machine learning and business analytics. Although such models have been used to make predictions based on exogenous, uncontrollable independent variables, they are increasingly being used to make predictions where the independent variables are controllable and are also decision variables. In this paper, we study the problem of tree ensemble optimization: given a tree ensemble that predicts some dependent variable using controllable independent variables, how should we set these variables so as to maximize the predicted value? We formulate the problem as a mixed-integer optimization problem. We theoretically examine the strength of our formulation, provide a hierarchy of approximate formulations with bounds on approximation quality and exploit the structure of the problem to develop two large-scale solution methods, one based on Benders decomposition and one based on iteratively generating tree split constraints. We test our methodology on real data sets, including two case studies in drug design and customized pricing, and show that our methodology can efficiently solve large-scale instances to near or full optimality, and outperforms solutions obtained by heuristic approaches. In our drug design case, we show how our approach can identify compounds that efficiently trade-off predicted performance and novelty with respect to existing, known compounds. In our customized pricing case, we show how our approach can efficiently determine optimal store-level prices under a random forest model that delivers excellent predictive accuracy.",
"title": ""
},
{
"docid": "3dd518c87372b51a9284e4b8aa2e4fb4",
"text": "Traditional background modeling and subtraction methods have a strong assumption that the scenes are of static structures with limited perturbation. These methods will perform poorly in dynamic scenes. In this paper, we present a solution to this problem. We first extend the local binary patterns from spatial domain to spatio-temporal domain, and present a new online dynamic texture extraction operator, named spatio- temporal local binary patterns (STLBP). Then we present a novel and effective method for dynamic background modeling and subtraction using STLBP. In the proposed method, each pixel is modeled as a group of STLBP dynamic texture histograms which combine spatial texture and temporal motion information together. Compared with traditional methods, experimental results show that the proposed method adapts quickly to the changes of the dynamic background. It achieves accurate detection of moving objects and suppresses most of the false detections for dynamic changes of nature scenes.",
"title": ""
},
{
"docid": "1728add8c17ff28fd9e580f4fb388155",
"text": "We study response selection for multi-turn conversation in retrieval based chatbots. Existing works either ignores relationships among utterances, or misses important information in context when matching a response with a highly abstract context vector finally. We propose a new session based matching model to address both problems. The model first matches a response with each utterance on multiple granularities, and distills important matching information from each pair as a vector with convolution and pooling operations. The vectors are then accumulated in a chronological order through a recurrent neural network (RNN) which models the relationships among the utterances. The final matching score is calculated with the hidden states of the RNN. Empirical study on two public data sets shows that our model can significantly outperform the state-of-the-art methods for response selection in multi-turn conversation.",
"title": ""
},
{
"docid": "446a15e1dae957f1e142454e4f32db5d",
"text": "Cyber attacks in the Internet are common knowledge for even nontechnical people. Same attack techniques can also be used against any military radio networks in the battlefield. This paper describes a test setup that can be used to test tactical radio networks against cyber vulnerabilities. The test setup created is versatile and can be adapted to any command and control system on any level of the OSI model. Test setup uses as much publicly or commercially available tools as possible. Need for custom made components is minimized to decrease costs, to decrease deployment time and to increase usability. With architecture described, same tools used in IP network testing can be used in tactical radio networks. Problems found in any level of the system can be fixed in co-operation with vendors of the system. Cyber testing should be adapted as part of acceptance tests of any new military communication system.",
"title": ""
},
{
"docid": "3f48f5be25ac5d040cc9d226588427b3",
"text": "Snake robots, sometimes called hyper-redundant mechanisms, can use their many degrees of freedom to achieve a variety of locomotive capabilities. These capabilities are ideally suited for disaster response because the snake robot can thread through tightly packed volumes, accessing locations that people and conventional machinery otherwise cannot. Snake robots also have the advantage of possessing a variety of locomotion capabilities that conventional robots do not. Just like their biological counterparts, snake robots achieve these locomotion capabilities using cyclic motions called gaits. These cyclic motions directly control the snake robot’s internal degrees of freedom which, in turn, causes a net motion, say forward, lateral and rotational, for the snake robot. The gaits described in this paper fall into two categories: parameterized and scripted. The parameterized gaits, as their name suggests, can be described by a relative simple parameterized function, whereas the scripted cannot. This paper describes the functions we prescribed for gait generation and our experiences in making these robots operate in real experiments. © Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2009",
"title": ""
},
{
"docid": "299f17dca15e2eab1692e82869fc2f6d",
"text": "the \"dark figure\" of crime—that is, about occurrences that by some criteria are called crime yet that are not registered in the statistics of whatever agency was the source of the data being used. Contending arguments arose about the dark figure between the \"realists\" who emphasized the virtues of completeness with which data represent the \"real crime\" that takes place and the \"institutionalists\" who emphasize that crime can have valid meaning only in terms of organized, legitimate social responses to it. This paper examines these arguments in the context of police and survey statistics as measures of crime in a population. It concludes that in exploring the dark figure of crime, the primary question is not how much of it",
"title": ""
},
{
"docid": "9dd66d538b0195b216c10cc47d3f7005",
"text": "This study presents a stochastic demand multi-product supplier selection model with service level and budget constraints using Genetic Algorithm. Recently, much attention has been given to stochastic demand due to uncertainty in the real world. Conflicting objectives also exist between profit, service level and resource utilization. In this study, the relationship between the expected profit and the number of trials as well as between the expected profit and the combination of mutation and crossover rates are investigated to identify better parameter values to efficiently run the Genetic Algorithm. Pareto optimal solutions and return on investment are analyzed to provide decision makers with the alternative options of achieving the proper budget and service level. The results show that the optimal value for the return on investment and the expected profit are obtained with a certain budget and service level constraint. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f9d91253c5c276bb020daab4a4127822",
"text": "Conveying a narrative with visualizations often requires choosing an order in which to present visualizations. While evidence exists that narrative sequencing in traditional stories can affect comprehension and memory, little is known about how sequencing choices affect narrative visualization. We consider the forms and reactions to sequencing in narrative visualization presentations to provide a deeper understanding with a focus on linear, 'slideshow-style' presentations. We conduct a qualitative analysis of 42 professional narrative visualizations to gain empirical knowledge on the forms that structure and sequence take. Based on the results of this study we propose a graph-driven approach for automatically identifying effective sequences in a set of visualizations to be presented linearly. Our approach identifies possible transitions in a visualization set and prioritizes local (visualization-to-visualization) transitions based on an objective function that minimizes the cost of transitions from the audience perspective. We conduct two studies to validate this function. We also expand the approach with additional knowledge of user preferences for different types of local transitions and the effects of global sequencing strategies on memory, preference, and comprehension. Our results include a relative ranking of types of visualization transitions by the audience perspective and support for memory and subjective rating benefits of visualization sequences that use parallelism as a structural device. We discuss how these insights can guide the design of narrative visualization and systems that support optimization of visualization sequence.",
"title": ""
},
{
"docid": "c03a0bd78edcb7ebde0321ca7479853d",
"text": "The evolution of speech can be studied independently of the evolution of language, with the advantage that most aspects of speech acoustics, physiology and neural control are shared with animals, and thus open to empirical investigation. At least two changes were necessary prerequisites for modern human speech abilities: (1) modification of vocal tract morphology, and (2) development of vocal imitative ability. Despite an extensive literature, attempts to pinpoint the timing of these changes using fossil data have proven inconclusive. However, recent comparative data from nonhuman primates have shed light on the ancestral use of formants (a crucial cue in human speech) to identify individuals and gauge body size. Second, comparative analysis of the diverse vertebrates that have evolved vocal imitation (humans, cetaceans, seals and birds) provides several distinct, testable hypotheses about the adaptive function of vocal mimicry. These developments suggest that, for understanding the evolution of speech, comparative analysis of living species provides a viable alternative to fossil data. However, the neural basis for vocal mimicry and for mimesis in general remains unknown.",
"title": ""
},
{
"docid": "1da19f806430077f7ad957dbeb0cb8d1",
"text": "BACKGROUND\nTo date, periorbital melanosis is an ill-defined entity. The condition has been stated to be darkening of the skin around the eyes, dark circles, infraorbital darkening and so on.\n\n\nAIMS\nThis study was aimed at exploring the nature of pigmentation in periorbital melanosis.\n\n\nMETHODS\nOne hundred consecutive patients of periorbital melanosis were examined and investigated to define periorbital melanosis. Extent of periorbital melanosis was determined by clinical examination. Wood's lamp examination was performed in all the patients to determine the depth of pigmentation. A 2-mm punch biopsy was carried out in 17 of 100 patients.\n\n\nRESULTS\nIn 92 (92%) patients periorbital melanosis was an extension of pigmentary demarcation line over the face (PDL-F).\n\n\nCONCLUSION\nPeriorbital melanosis and pigmentary demarcation line of the face are not two different conditions; rather they are two different manifestations of the same disease.",
"title": ""
},
{
"docid": "84b366294dbddcede8675ddd234ca1ea",
"text": "Binary Moment Diagrams (BMDs) provide a canonical representations for linear functions similar to the way Binary Decision Diagrams (BDDs) represent Boolean functions. Within the class of linear functions, we can embed arbitrary functions from Boolean variables to integer values. BMDs can thus model the functionality of data path circuits operating over word-level data. Many important functions, including integermultiplication, that cannot be represented efficiently at the bit level with BDDs have simple representations at the word level with BMDs. Furthermore, BMDs can represent Boolean functions with around the same complexity as BDDs. We propose a hierarchical approach to verifying arithmetic circuits, where componentmodules are first shownto implement their word-level specifications. The overall circuit functionality is then verified by composing the component functions and comparing the result to the word-level circuit specification. Multipliers with word sizes of up to 256 bits have been verified by this technique.",
"title": ""
},
{
"docid": "717bea69015f1c2e9f9909c3510c825a",
"text": "To assess the impact of anti-vaccine movements that targeted pertussis whole-cell vaccines, we compared pertussis incidence in countries where high coverage with diphtheria-tetanus-pertussis vaccines (DTP) was maintained (Hungary, the former East Germany, Poland, and the USA) with countries where immunisation was disrupted by anti-vaccine movements (Sweden, Japan, UK, The Russian Federation, Ireland, Italy, the former West Germany, and Australia). Pertussis incidence was 10 to 100 times lower in countries where high vaccine coverage was maintained than in countries where immunisation programs were compromised by anti-vaccine movements. Comparisons of neighbouring countries with high and low vaccine coverage further underscore the efficacy of these vaccines. Given the safety and cost-effectiveness of whole-cell pertussis vaccines, our study shows that, far from being obsolete, these vaccines continue to have an important role in global immunisation.",
"title": ""
},
{
"docid": "5bff5809ff470084497011a1860148e0",
"text": "A statistical meta-analysis of the technology acceptance model (TAM) as applied in various fields was conducted using 88 published studies that provided sufficient data to be credible. The results show TAM to be a valid and robust model that has been widely used, but which potentially has wider applicability. A moderator analysis involving user types and usage types was performed to investigate conditions under which TAM may have different effects. The study confirmed the value of using students as surrogates for professionals in some TAM studies, and perhaps more generally. It also revealed the power of meta-analysis as a rigorous alternative to qualitative and narrative literature review methods. # 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a114801b4a00d024d555378ffa7cc583",
"text": "UNLABELLED\nRectal prolapse is the partial or complete protrusion of the rectal wall into the anal canal. The most common etiology consists in the insufficiency of the diaphragm of the lesser pelvis and anal sphincter apparatus. Methods of surgical treatment involve perineal or abdominal approach surgical procedures. The aim of the study was to present the method of surgical rectal prolapse treatment, according to Mikulicz's procedure by means of the perineal approach, based on our own experience and literature review.\n\n\nMATERIAL AND METHODS\nThe study group comprised 16 patients, including 14 women and 2 men, aged between 38 and 82 years admitted to the department, due to rectal prolapse, during the period between 2000 and 2012. Nine female patients, aged between 68 and 82 years (mean age-76.3 years) with fullthickness rectal prolapse underwent surgery by means of Mikulicz's method with levator muscle and external anal sphincter plasty. The most common comorbidities amongst patients operated by means of Mikulicz's method included cardiovascular and metabolic diseases.\n\n\nRESULTS\nMean hospitalization was 14.4 days (ranging between 12 and 17 days). Despite advanced age and poor general condition of the patients, complications during the perioperative period were not observed. Good early and late functional results were achieved. The degree of anal sphincter continence was determined 6-8 weeks after surgery showing significant improvement, as compared to results obtained prior to surgery. One case of recurrence consisting in mucosal prolapse was noted, being treated surgically by means of Whitehead's method. Good treatment results were observed.\n\n\nCONCLUSION\nTransperineal rectosigmoidectomy using Mikulicz's method with levator muscle and external anal sphincter plasty seems to be an effective, minimally invasive and relatively safe procedure that does not require general anesthesia. It is recommended in case of patients with significant comorbidities and high surgical risk.",
"title": ""
},
{
"docid": "41fdf1b9313d4b0510e2d7ebe0a16c62",
"text": "With the development of Internet technology, online job-hunting plays an increasingly important role in job-searching. It is difficult for job hunters to solely rely on keywords retrieving to find positions which meet their needs. To solve this issue, we adopted item-based collaborative filtering algorithm for job recommendations. In this paper, we optimized the algorithm by combining position descriptions and resume information. Specifically, job preference prediction formula is optimized by historical delivery weight calculated by position descriptions and similar user weight calculated by resume information. The experiments tested on real data set have shown that our methods have a significant improvement on job recommendation results.",
"title": ""
},
{
"docid": "6778931314fbaa831264c91250614a0c",
"text": "We present a real-time indoor visible light positioning system based on the optical camera communication, where the coordinate data in the ON–OFF keying format is transmitted via light-emitting diode-based lights and captured using a smartphone camera. The position of the camera is estimated using a novel perspective-<inline-formula> <tex-math notation=\"LaTeX\">$n$ </tex-math></inline-formula>-point problem algorithm, which determines the position of a calibrated camera from <inline-formula> <tex-math notation=\"LaTeX\">$n~3\\text{D}$ </tex-math></inline-formula>-to-2D point correspondences. The experimental results show that the proposed system offers mean position errors of 4.81 and 6.58 cm for the heights of 50 and 80 cm, respectively.",
"title": ""
},
{
"docid": "9825e8a24aba301c4c7be3b8b4c4dde5",
"text": "Being a cross-camera retrieval task, person re-identification suffers from image style variations caused by different cameras. The art implicitly addresses this problem by learning a camera-invariant descriptor subspace. In this paper, we explicitly consider this challenge by introducing camera style (CamStyle) adaptation. CamStyle can serve as a data augmentation approach that smooths the camera style disparities. Specifically, with CycleGAN, labeled training images can be style-transferred to each camera, and, along with the original training samples, form the augmented training set. This method, while increasing data diversity against over-fitting, also incurs a considerable level of noise. In the effort to alleviate the impact of noise, the label smooth regularization (LSR) is adopted. The vanilla version of our method (without LSR) performs reasonably well on few-camera systems in which over-fitting often occurs. With LSR, we demonstrate consistent improvement in all systems regardless of the extent of over-fitting. We also report competitive accuracy compared with the state of the art. Code is available at: https://github.com/zhunzhong07/CamStyle",
"title": ""
},
{
"docid": "6c153f12481f365039f47c252acbe4ee",
"text": "DNA methylation has emerged as promising epigenetic markers for disease diagnosis. Both the differential mean (DM) and differential variability (DV) in methylation have been shown to contribute to transcriptional aberration and disease pathogenesis. The presence of confounding factors in large scale EWAS may affect the methylation values and hamper accurate marker discovery. In this paper, we propose a exible framework called methylDMV which allows for confounding factors adjustment and enables simultaneous characterization and identification of CpGs exhibiting DM only, DV only and both DM and DV. The proposed framework also allows for prioritization and selection of candidate features to be included in the prediction algorithm. We illustrate the utility of methylDMV in several TCGA datasets. An R package methylDMV implementing our proposed method is available at http://www.ams.sunysb.edu/~pfkuan/softwares.html#methylDMV.",
"title": ""
},
{
"docid": "2210207d9234801710fa2a9c59f83306",
"text": "\"Big Data\" as a term has been among the biggest trends of the last three years, leading to an upsurge of research, as well as industry and government applications. Data is deemed a powerful raw material that can impact multidisciplinary research endeavors as well as government and business performance. The goal of this discussion paper is to share the data analytics opinions and perspectives of the authors relating to the new opportunities and challenges brought forth by the big data movement. The authors bring together diverse perspectives, coming from different geographical locations with different core research expertise and different affiliations and work experiences. The aim of this paper is to evoke discussion rather than to provide a comprehensive survey of big data research.",
"title": ""
},
{
"docid": "5e09f6b6f1f1ef3e3c259d40f0259a7f",
"text": "ABSTRACT: We analyze survey responses from nearly 600 corporate tax executives to investigate firms’ incentives and disincentives for tax planning. While many researchers hypothesize that reputational concerns affect the degree to which managers engage in tax planning, this hypothesis is difficult to test with archival data. Our survey allows us to investigate reputational influences and indeed we find that reputational concerns are important – 69% of executives rate reputation as important and the factor ranks second in order of importance among all factors explaining why firms do not adopt a potential tax planning strategy. We also find that financial accounting incentives play a role. For example, 84% of publicly traded firms respond that top management at their company cares at least as much about the GAAP ETR as they do about cash taxes paid and 57% of public firms say that increasing earnings per share is an important outcome from a tax planning strategy.",
"title": ""
}
] |
scidocsrr
|
dd927cd0f575cf3b23b93ae604d38d8b
|
CLVQ: cross-language video question/answering system
|
[
{
"docid": "50d0b1e141bcea869352c9b96b0b2ad5",
"text": "In this paper we present the features of a Question/Answering (Q/A) system that had unparalleled performance in the TREC-9 evaluations. We explain the accuracy of our system through the unique characteristics of its architecture: (1) usage of a wide-coverage answer type taxonomy; (2) repeated passage retrieval; (3) lexico-semantic feedback loops; (4) extraction of the answers based on machine learning techniques; and (5) answer caching. Experimental results show the effects of each feature on the overall performance of the Q/A system and lead to general conclusions about Q/A from large text collections.",
"title": ""
}
] |
[
{
"docid": "551d642efa547b9d8c089b8ecb9530fb",
"text": "Using piezoelectric materials to harvest energy from ambient vibrations to power wireless sensors has been of great interest over the past few years. Due to the power output of the piezoelectric materials is relatively low, rechargeable battery is considered as one kind of energy storage to accumulate the harvested energy for intermittent use. Piezoelectric harvesting circuits for rechargeable batteries have two schemes: non-adaptive and adaptive ones. A non-adaptive harvesting scheme includes a conventional diode bridge rectifier and a passive circuit. In recent years, several researchers have developed adaptive schemes for the harvesting circuit. Among them, the adaptive harvesting scheme by Ottman et al. is the most promising. This paper is aimed to quantify the performances of adaptive and non-adaptive schemes and to discuss their performance characteristics.",
"title": ""
},
{
"docid": "b1e0fa6b41fb697db8dfe5520b79a8e6",
"text": "The problem of computing the minimum-angle bounding cone of a set of three-dimensional vectors has numero cations in computer graphics and geometric modeling. One such application is bounding the tangents of space cur vectors normal to a surface in the computation of the intersection of two surfaces. No optimal-time exact solution to this problem has been yet given. This paper presents a roadmap for a few strate provide optimal or near-optimal (time-wise) solutions to this problem, which are also simple to implement. Specifica worst-case running time is required, we provide an O ( logn)-time Voronoi-diagram-based algorithm, where n is the number of vectors whose optimum bounding cone is sought. Otherwise, i f one is willing to accept an, in average, efficient algorithm, we show that the main ingredient of the algorithm of Shirman and Abi-Ezzi [Comput. Graphics Forum 12 (1993) 261–272 implemented to run in optimal (n) expected time. Furthermore, if the vectors (as points on the sphere of directions) are to occupy no more than a hemisphere, we show how to simplify this ingredient (by reducing the dimension of the p without affecting the asymptotic expected running time. Both versions of this algorithm are based on computing (as an problem) the minimum spanning circle (respectively, ball) of a two-dimensional (respectively, three-dimensional) set o 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "13774d2655f2f0ac575e11991eae0972",
"text": "This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic convergence rate of the method by assuming a property based on the Kurdyka– Lojasiewicz inequality. The proposed algorithms are tested on nonnegative matrix and tensor factorization, as well as matrix and tensor recovery from incomplete observations. The tests include synthetic data and hyperspectral data, as well as image sets from the CBCL and ORL databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality. The MATLAB code of nonnegative matrix/tensor decomposition and completion, along with a few demos, are accessible from the authors’ homepages.",
"title": ""
},
{
"docid": "9f1336d17f5d8fd7e04bd151eabb6a97",
"text": "Immensely popular video sharing websites such as YouTube have become the most important sources of music information for Internet users and the most prominent platform for sharing live music. The audio quality of this huge amount of live music recordings, however, varies significantly due to factors such as environmental noise, location, and recording device. However, most video search engines do not take audio quality into consideration when retrieving and ranking results. Given the fact that most users prefer live music videos with better audio quality, we propose the first automatic, non-reference audio quality assessment framework for live music video search online. We first construct two annotated datasets of live music recordings. The first dataset contains 500 human-annotated pieces, and the second contains 2,400 synthetic pieces systematically generated by adding noise effects to clean recordings. Then, we formulate the assessment task as a ranking problem and try to solve it using a learning-based scheme. To validate the effectiveness of our framework, we perform both objective and subjective evaluations. Results show that our framework significantly improves the ranking performance of live music recording retrieval and can prove useful for various real-world music applications.",
"title": ""
},
{
"docid": "77946858c15b525f5b48499dea2fa1c4",
"text": "Porphyrias are a group of metabolic disorders in which there are defects in the normal pathway for the biosynthesis of heme, the critical prosthetic group for numerous hemoproteins. The clinical manifestations of the porphyrias can be highly varied, and patients may present to general physicians and be referred to a wide variety of subspecialists because of these manifestations. However, two major clinical forms are represented by the so-called \"acute\" porphyrias, in which patients suffer recurrent bouts of pain, especially pain in the abdomen, and the \"cutaneous\" porphyrias, in which patients have painful skin lesions. Knowledge of the factors chiefly responsible for regulating the rate of synthesis of heme has helped to explain how drugs and other factors may cause porphyria. Knowledge of the physical and chemical properties of porphyrins also forms an important part of the foundation for understanding the clinical manifestations of these diseases. Thus, the porphyrias can best be understood after reviewing the chemical properties of porphyrins and heme and the control of their biosynthesis.",
"title": ""
},
{
"docid": "894164566e284f0e4318d94cc6768871",
"text": "This paper investigates the problems of signal reconstruction and blind deconvolution for graph signals that have been generated by an originally sparse input diffused through the network via the application of a graph filter operator. Assuming that the support of the sparse input signal is unknown, and that the diffused signal is observed only at a subset of nodes, we address the related problems of: 1) identifying the input and 2) interpolating the values of the diffused signal at the non-sampled nodes. We first consider the more tractable case where the coefficients of the diffusing graph filter are known and then address the problem of joint input and filter identification. The corresponding blind identification problems are formulated, novel convex relaxations are discussed, and modifications to incorporate a priori information on the sparse inputs are provided.",
"title": ""
},
{
"docid": "f15cb62cb81b71b063d503eb9f44d7c5",
"text": "This study presents an improved krill herd (IKH) approach to solve global optimization problems. The main improvement pertains to the exchange of information between top krill during motion calculation process to generate better candidate solutions. Furthermore, the proposed IKH method uses a new Lévy flight distribution and elitism scheme to update the KH motion calculation. This novel meta-heuristic approach can accelerate the global convergence speed while preserving the robustness of the basic KH algorithm. Besides, the detailed implementation procedure for the IKH method is described. Several standard benchmark functions are used to verify the efficiency of IKH. Based on the results, the performance of IKH is superior to or highly competitive with the standard KH and other robust population-based optimization methods. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "72f7c13f21c047e4dcdf256fbbbe1b74",
"text": "Programming by Examples (PBE) has the potential to revolutionize end-user programming by enabling end users, most of whom are non-programmers, to create small scripts for automating repetitive tasks. However, examples, though often easy to provide, are an ambiguous specification of the user's intent. Because of that, a key impedance in adoption of PBE systems is the lack of user confidence in the correctness of the program that was synthesized by the system. We present two novel user interaction models that communicate actionable information to the user to help resolve ambiguity in the examples. One of these models allows the user to effectively navigate between the huge set of programs that are consistent with the examples provided by the user. The other model uses active learning to ask directed example-based questions to the user on the test input data over which the user intends to run the synthesized program. Our user studies show that each of these models significantly reduces the number of errors in the performed task without any difference in completion time. Moreover, both models are perceived as useful, and the proactive active-learning based model has a slightly higher preference regarding the users' confidence in the result.",
"title": ""
},
{
"docid": "319acfecfc47b3899c40fb09bed49d54",
"text": "We present a new automated method for efficient detection of security vulnerabilities in binary programs. This method starts with a bounded symbolic execution of the target program so as to explore as many paths as possible. Constraints of the explored paths are collected and solved for inputs. The inputs will then be fed to the following interleaved coverage-based fuzzing and concolic execution. As the paths explored by the bounded symbolic execution may cover some unique paths that can be rarely reached by random testing featured fuzzing and locality featured concolic execution, the efficiency and effectiveness of the overall exploration can be greatly enhanced. In particular, the bounded symbolic execution can effectively prevent the fuzzing guided exploration from converging to the less interesting but easy-to-fuzz branches.",
"title": ""
},
{
"docid": "8807ba2f1c0b380db3d9e5a389011b2b",
"text": "Many cloud-based applications employ a data centers as a central server to process data that is generated by edge devices, such as smartphones, tablets and wearables. This model places ever increasing demands on communication and computational infrastructure with inevitable adverse effect on Quality-of-Service and Experience. The concept of Edge Computing is predicated on moving some of this computational load towards the edge of the network to harness computational capabilities that are currently untapped in edge nodes, such as base stations, routers and switches. This position paper considers the challenges and opportunities that arise out of this new direction in the computing landscape.",
"title": ""
},
{
"docid": "40fef2ba4ae0ecd99644cf26ed8fa37f",
"text": "Plant has plenty use in foodstuff, medicine and industry. And it is also vitally important for environmental protection. However, it is an important and difficult task to recognize plant species on earth. Designing a convenient and automatic recognition system of plants is necessary and useful since it can facilitate fast classifying plants, and understanding and managing them. In this paper, a leaf database from different plants is firstly constructed. Then, a new classification method, referred to as move median centers (MMC) hypersphere classifier, for the leaf database based on digital morphological feature is proposed. The proposed method is more robust than the one based on contour features since those significant curvature points are hard to find. Finally, the efficiency and effectiveness of the proposed method in recognizing different plants is demonstrated by experiments. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "27b8e6f3781bd4010c92a705ba4d5fcc",
"text": "Maximum power point tracking (MPPT) strategies in photovoltaic (PV) systems ensure efficient utilization of PV arrays. Among different strategies, the perturb and observe (P&O) algorithm has gained wide popularity due to its intuitive nature and simple implementation. However, such simplicity in P&O introduces two inherent issues, namely, an artificial perturbation that creates losses in steady-state operation and a limited ability to track transients in changing environmental conditions. This paper develops and discusses in detail an MPPT algorithm with zero oscillation and slope tracking to address those technical challenges. The strategy combines three techniques to improve steady-state behavior and transient operation: 1) idle operation on the maximum power point (MPP); 2) identification of the irradiance change through a natural perturbation; and 3) a simple multilevel adaptive tracking step. Two key elements, which form the foundation of the proposed solution, are investigated: 1) the suppression of the artificial perturb at the MPP; and 2) the indirect identification of irradiance change through a current-monitoring algorithm, which acts as a natural perturbation. The zero-oscillation adaptive step P&O strategy builds on these mechanisms to identify relevant information and to produce efficiency gains. As a result, the combined techniques achieve superior overall performance while maintaining simplicity of implementation. Simulations and experimental results are provided to validate the proposed strategy, and to illustrate its behavior in steady and transient operations.",
"title": ""
},
{
"docid": "61f7693dd01e94867866963387e77fb6",
"text": "This paper seeks to identify and characterize healthrelated topics discussed on the Chinese microblogging website, Sina Weibo. We identified nearly 1 million messages containing health-related keywords, filtered from a dataset of 93 million messages spanning five years. We applied probabilistic topic models to this dataset and identified the prominent health topics. We show that a variety of health topics are discussed in Sina Weibo, and that four flu-related topics are correlated with monthly influenza case rates in China.",
"title": ""
},
{
"docid": "326b0cb75e92e216cbac8f3c648b0efc",
"text": "Scholarly content is increasingly being discussed, shared and bookmarked online by researchers. Altmetric is a start-up that focuses on tracking, collecting and measuring this activity on behalf of publishers and here we describe our approach and general philosophy. Over the past year we've seen sharing and discussion activity around approximately 750k articles. The average number of articles shared each day grows by 5- 10% a month. We look at examples of how people are interacting with papers online and at how publishers can collect and present the resulting data to deliver real value to their authors and readers. Introduction Scholars are increasingly visible on the web and social media 1. While the majority of their online activities may not be directly related to their research they are nevertheless discussing, sharing and bookmarking scholarly articles online in large numbers. We know this because our job at Altmetric is to track the attention paid to papers online. Founded in January 2011 and with investment from Digital Science we're a London based start-‐up that identifies, tracks and collects article level metrics on behalf of publishers. Article level metrics are quantitative or qualitative indicators of the impact that a single article has had. Examples of the former would be a count of the number of times the article has been downloaded, or shared on Twitter. Examples of the latter would be media coverage or a blog post from somebody well respected in the field. Tracking the conversations around papers Encouraging audiences to engage with articles online isn't anything new for many publishers. The Public Library of Science (PLoS), BioMed Central, Cell Press and Nature Publishing Group have all tried encouraging users to leave comments on papers with varying degrees of success but the response from users has generally been poor, with only a small fraction of papers ever receiving notable attention 2. A larger proportion of papers are discussed in some depth on academic blogs and a larger still proportion shared on social networks like Twitter, Facebook and Google+. Scholars seem to feel more comfortable sharing or discussing content in more informal environments tied to their personal identity and where",
"title": ""
},
{
"docid": "ce3ac7716734e2ebd814900d77ca3dfb",
"text": "The large pose discrepancy between two face images is one of the fundamental challenges in automatic face recognition. Conventional approaches to pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes a Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator enables DR-GAN to learn a representation that is both generative and discriminative, which can be used for face image synthesis and pose-invariant face recognition. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one unified identity representation along with an arbitrary number of synthetic face images. Extensive quantitative and qualitative evaluation on a number of controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art in both learning representations and rotating large-pose face images.",
"title": ""
},
{
"docid": "beff14cfa1d0e5437a81584596e666ea",
"text": "Graphene has exceptional optical, mechanical, and electrical properties, making it an emerging material for novel optoelectronics, photonics, and flexible transparent electrode applications. However, the relatively high sheet resistance of graphene is a major constraint for many of these applications. Here we propose a new approach to achieve low sheet resistance in large-scale CVD monolayer graphene using nonvolatile ferroelectric polymer gating. In this hybrid structure, large-scale graphene is heavily doped up to 3 × 10(13) cm(-2) by nonvolatile ferroelectric dipoles, yielding a low sheet resistance of 120 Ω/□ at ambient conditions. The graphene-ferroelectric transparent conductors (GFeTCs) exhibit more than 95% transmittance from the visible to the near-infrared range owing to the highly transparent nature of the ferroelectric polymer. Together with its excellent mechanical flexibility, chemical inertness, and the simple fabrication process of ferroelectric polymers, the proposed GFeTCs represent a new route toward large-scale graphene-based transparent electrodes and optoelectronics.",
"title": ""
},
{
"docid": "1ff317c5514dfc1179ee7c474187d4e5",
"text": "The emergence and spread of antibiotic resistance among pathogenic bacteria has been a rising problem for public health in recent decades. It is becoming increasingly recognized that not only antibiotic resistance genes (ARGs) encountered in clinical pathogens are of relevance, but rather, all pathogenic, commensal as well as environmental bacteria-and also mobile genetic elements and bacteriophages-form a reservoir of ARGs (the resistome) from which pathogenic bacteria can acquire resistance via horizontal gene transfer (HGT). HGT has caused antibiotic resistance to spread from commensal and environmental species to pathogenic ones, as has been shown for some clinically important ARGs. Of the three canonical mechanisms of HGT, conjugation is thought to have the greatest influence on the dissemination of ARGs. While transformation and transduction are deemed less important, recent discoveries suggest their role may be larger than previously thought. Understanding the extent of the resistome and how its mobilization to pathogenic bacteria takes place is essential for efforts to control the dissemination of these genes. Here, we will discuss the concept of the resistome, provide examples of HGT of clinically relevant ARGs and present an overview of the current knowledge of the contributions the various HGT mechanisms make to the spread of antibiotic resistance.",
"title": ""
},
{
"docid": "7490197babcd735c48e1c42af03c8473",
"text": "Clustering is one of the most fundamental tasks in data analysis and machine learning. It is central to many data-driven applications that aim to separate the data into groups with similar patterns. Moreover, clustering is a complex procedure that is affected significantly by the choice of the data representation method. Recent research has demonstrated encouraging clustering results by learning effectively these representations. In most of these works a deep auto-encoder is initially pre-trained to minimize a reconstruction loss, and then jointly optimized with clustering centroids in order to improve the clustering objective. Those works focus mainly on the clustering phase of the procedure, while not utilizing the potential benefit out of the initial phase. In this paper we propose to optimize an auto-encoder with respect to a discriminative pairwise loss function during the auto-encoder pre-training phase. We demonstrate the high accuracy obtained by the proposed method as well as its rapid convergence (e.g. reaching above 92% accuracy on MNIST during the pre-training phase, in less than 50 epochs), even with small networks.",
"title": ""
},
{
"docid": "8851c4383b10db7b0482eaf9417149ae",
"text": "There are many difficulties associated with developing correct multithreaded software, and many of the activities that are simple for single threaded software are exceptionally hard for multithreaded software. One such example is constructing unit tests involving multiple threads. Given, for example, a blocking queue implementation, writing a test case to show that it blocks and unblocks appropriately using existing testing frameworks is exceptionally hard. In this paper, we describe the MultithreadedTC framework which allows the construction of deterministic and repeatable unit tests for concurrent abstractions. This framework is not designed to test for synchronization errors that lead to rare probabilistic faults under concurrent stress. Rather, this framework allows us to demonstrate that code does provide specific concurrent functionality (e.g., a thread attempting to acquire a lock is blocked if another thread has the lock).\n We describe the framework and provide empirical comparisons against hand-coded tests designed for Sun's Java concurrency utilities library and against previous frameworks that addressed this same issue. The source code for this framework is available under an open source license.",
"title": ""
}
] |
scidocsrr
|
75fd6bebd88571163b0873865a562ff1
|
UHF RFID localization system based on a phased array antenna
|
[
{
"docid": "8ef924488d6e86bee065446f385405f7",
"text": "Due to their light weight, low power, and practically unlimited identification capacity, radio frequency identification (RFID) tags and associated devices offer distinctive advantages and are widely recognized for their promising potential in context-aware computing; by tagging objects with RFID tags, the environment can be sensed in a cost- and energy-efficient means. However, a prerequisite to fully realizing the potential is accurate localization of RFID tags, which will enable and enhance a wide range of applications. In this paper we show how to exploit the phase difference between two or more receiving antennas to compute accurate localization. Phase difference based localization has better accuracy, robustness and sensitivity when integrated with other measurements compared to the currently popular technique of localization using received signal strength. Using a software-defined radio setup, we show experimental results that support accurate localization of RFID tags and activity recognition based on phase difference.",
"title": ""
},
{
"docid": "154e25caf9eb954bb7658304dd37a8a2",
"text": "RFID is an automatic identification technology that enables tracking of people and objects. Both identity and location are generally key information for indoor services. An obvious and interesting method to obtain these two types of data is to localize RFID tags attached to devices or objects or carried by people. However, signals in indoor environments are generally harshly impaired and tags have very limited capabilities which pose many challenges for positioning them. In this work, we propose a classification and survey the current state-of-art of RFID localization by first presenting this technology and positioning principles. Then, we explain and classify RFID localization techniques. Finally, we discuss future trends in this domain.",
"title": ""
},
{
"docid": "7f92ead5b555e9447e44ad73392c25d1",
"text": "Multiple antenna systems are a useful way of overcoming the effects of multipath interference, and can allow more efficient use of spectrum. In order to test the effectiveness of various algorithms such as diversity combining, phased array processing, and adaptive array processing in an indoor environment, a channel model is needed which models both the time and angle of arrival in indoor environments. Some data has been collected indoors and some temporal models have been proposed, but no existing model accounts for both time and angle of arrival. This paper discusses existing models for the time of arrival, experimental data that were collected indoors, and a proposed extension of the Saleh-Valenzuela model [1], which accounts for the angle of arrival. Model parameters measured in two different buildings are compared with the parameters presented in the paper by Saleh and Valenzuela, and some statistical validation of the model is presented.",
"title": ""
}
] |
[
{
"docid": "839de75206c99c88fbc10f9f322235be",
"text": "This paper proposes a new fault-tolerant sensor network architecture for monitoring pipeline infrastructures. This architecture is an integrated wired and wireless network. The wired part of the network is considered the primary network while the wireless part is used as a backup among sensor nodes when there is any failure in the wired network. This architecture solves the current reliability issues of wired networks for pipelines monitoring and control. This includes the problem of disabling the network by disconnecting the network cables due to artificial or natural reasons. In addition, it solves the issues raised in recently proposed network architectures using wireless sensor networks for pipeline monitoring. These issues include the issues of power management and efficient routing for wireless sensor nodes to extend the life of the network. Detailed advantages of the proposed integrated network architecture are discussed under different application and fault scenarios.",
"title": ""
},
{
"docid": "f78d0dae400b331d6dcb4de9d10ca2f0",
"text": "How ontologies provide the semantics, as explained here with the help of Harry Potter and his owl Hedwig.",
"title": ""
},
{
"docid": "c08bbd6acd494d36afc60f9612fee0bb",
"text": "Guided wave imaging has shown great potential for structural health monitoring applications by providing a way to visualize and characterize structural damage. For successful implementation of delay-and-sum and other elliptical imaging algorithms employing guided ultrasonic waves, some degree of mode purity is required because echoes from undesired modes cause imaging artifacts that obscure damage. But it is also desirable to utilize multiple modes because different modes may exhibit increased sensitivity to different types and orientations of defects. The well-known modetuning effect can be employed to use the same PZT transducers for generating and receiving multiple modes by exciting the transducers with narrowband tone bursts at different frequencies. However, this process is inconvenient and timeconsuming, particularly if extensive signal averaging is required to achieve a satisfactory signal-to-noise ratio. In addition, both acquisition time and data storage requirements may be prohibitive if signals from many narrowband tone burst excitations are measured. In this paper, we utilize a chirp excitation to excite PZT transducers over a broad frequency range to acquire multi-modal data with a single transmission, which can significantly reduce both the measurement time and the quantity of data. Each received signal from a chirp excitation is post-processed to obtain multiple signals corresponding to different narrowband frequency ranges. Narrowband signals with the best mode purity and echo shape are selected and then used to generate multiple images of damage in a target structure. The efficacy of the proposed technique is demonstrated experimentally using an aluminum plate instrumented with a spatially distributed array of piezoelectric sensors and with simulated damage.",
"title": ""
},
{
"docid": "984bf4f0500e737159b847eab2fa5021",
"text": "We present efmaral, a new system for efficient and accurate word alignment using a Bayesian model with Markov Chain Monte Carlo (MCMC) inference. Through careful selection of data structures and model architecture we are able to surpass the fast_align system, commonly used for performance-critical word alignment, both in computational efficiency and alignment accuracy. Our evaluation shows that a phrase-based statistical machine translation (SMT) system produces translations of higher quality when using word alignments from efmaral than from fast_align, and that translation quality is on par with what is obtained using giza++, a tool requiring orders of magnitude more processing time. More generally we hope to convince the reader that Monte Carlo sampling, rather than being viewed as a slow method of last resort, should actually be the method of choice for the SMT practitioner and others interested in word alignment.",
"title": ""
},
{
"docid": "b540cb8f0f0825662d21a5e2ed100012",
"text": "Social media platforms are popular venues for fashion brand marketing and advertising. With the introduction of native advertising, users don’t have to endure banner ads that hold very little saliency and are unattractive. Using images and subtle text overlays, even in a world of ever-depreciating attention span, brands can retain their audience and have a capacious creative potential. While an assortment of marketing strategies are conjectured, the subtle distinctions between various types of marketing strategies remain under-explored. This paper presents a qualitative analysis on the influence of social media platforms on different behaviors of fashion brand marketing. We employ both linguistic and computer vision techniques while comparing and contrasting strategic idiosyncrasies. We also analyze brand audience retention and social engagement hence providing suggestions in adapting advertising and marketing strategies over Twitter and Instagram.",
"title": ""
},
{
"docid": "9fa9ab2f70d4d3bb87c2c6cd90790f94",
"text": "A robot capable of moving on a vertical wall of high-rise buildings can be used for rescue, wall inspection, fire fighting, etc.. A wall climbing robot using thrust force of propellers has been developed. The thrust force is inclined a little to the wall side to produce the frictional force between the wheels and wall surface. As the strong wind is predicted on the wall surface of buildings, the direction of thrust force is controlled to compensate the wind force acting on the robot. A frictional force augmentor is also considered, which is an airfoil to produce the lift force directed to the wall side by the cross wind. Its effect is tested in the wind tunnel. The overall performance of the robot is examined by computer simulation and a model was constructed and tested on the wall.<<ETX>>",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "28efe3b5fe479a1e95029f122f5b62f3",
"text": "Most of the current metric learning methods are proposed for point-to-point distance (PPD) based classification. In many computer vision tasks, however, we need to measure the point-to-set distance (PSD) and even set-to-set distance (SSD) for classification. In this paper, we extend the PPD based Mahalanobis distance metric learning to PSD and SSD based ones, namely point-to-set distance metric learning (PSDML) and set-to-set distance metric learning (SSDML), and solve them under a unified optimization framework. First, we generate positive and negative sample pairs by computing the PSD and SSD between training samples. Then, we characterize each sample pair by its covariance matrix, and propose a covariance kernel based discriminative function. Finally, we tackle the PSDML and SSDML problems by using standard support vector machine solvers, making the metric learning very efficient for multiclass visual classification tasks. Experiments on gender classification, digit recognition, object categorization and face recognition show that the proposed metric learning methods can effectively enhance the performance of PSD and SSD based classification.",
"title": ""
},
{
"docid": "5e614effaf8101b17f50a5d67eb5fae2",
"text": "of children are not new phenomena, both have been evident in families for centuries (Solomon 1973; Smith 1975; Dobash & Dobash 1979; Radbill 1980). Gordon (1988, as cited in Edleson 1999a) has suggested that levels of family violence have remained relatively constant over time, and that it is not so much the incidence of violence that has changed, rather that its level of visibility has shifted with the ‘ebb-and-flow pattern of concern about family violence’ (1988:2, as cited in Edleson 1999a:839) and the ever-expanding definitions of what constitutes ‘family violence’, ‘domestic violence’ and ‘child maltreatment’.",
"title": ""
},
{
"docid": "58331d0d42452d615b5a20da473ef5e2",
"text": "This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of “history of word” to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it introduces an improved attention scoring function that better utilizes the “history of word” concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.",
"title": ""
},
{
"docid": "bbdb4a930ef77f91e8d76dd3a7e0f506",
"text": "Fast and high-quality document clustering algorithms play an important role in providing intuitive navigation and browsing mechanisms by organizing large amounts of information into a small number of meaningful clusters. In particular, hierarchical clustering solutions provide a view of the data at different levels of granularity, making them ideal for people to visualize and interactively explore large document collections.In this paper we evaluate different partitional and agglomerative approaches for hierarchical clustering. Our experimental evaluation showed that partitional algorithms always lead to better clustering solutions than agglomerative algorithms, which suggests that partitional clustering algorithms are well-suited for clustering large document datasets due to not only their relatively low computational requirements, but also comparable or even better clustering performance. We present a new class of clustering algorithms called constrained agglomerative algorithms that combine the features of both partitional and agglomerative algorithms. Our experimental results showed that they consistently lead to better hierarchical solutions than agglomerative or partitional algorithms alone.",
"title": ""
},
{
"docid": "2528f271c739fe54f4ad77fba57f00e7",
"text": "Iron deficiency anemia (IDA) is a major public health problem especially in underdeveloped and developing countries. Zinc is the co-factor of several enzymes and plays a role in iron metabolism, so zinc deficiency is associated with IDA. In this study, it was aimed to investigate the relationship of symptoms of IDA and zinc deficiency in adult IDA patients. The study included 43 IDA patients and 43 healthy control subjects. All patients were asked to provide a detailed history and were subjected to a physical examination. The hematological parameters evaluated included hemoglobin (Hb); hematocrit (Ht); red blood cell (erythrocyte) count (RBC); and red cell indices mean corpuscular volume (MCV), mean corpuscular hemoglobin (МСН), mean corpuscular hemoglobin concentration (МСНС), and red cell distribution width (RDW). Anemia was defined according to the criteria defined by the World Health Organization (WHO). Serum zinc levels were measured in the flame unit of atomic absorption spectrophotometer. Symptoms attributed to iron deficiency or depletion, defined as fatigue, cardiopulmonary symptoms, mental manifestations, epithelial manifestations, and neuromuscular symptoms, were also recorded and categorized. Serum zinc levels were lower in anemic patients (103.51 ± 34.64 μ/dL) than in the control subjects (256.92 ± 88.54 μ/dL; <0.001). Patients with zinc level <99 μ/dL had significantly more frequent mental manifestations (p < 0.001), cardiopulmonary symptoms (p = 0.004), restless leg syndrome (p = 0.016), and epithelial manifestations (p < 0.001) than patients with zinc level > 100 μ/dL. When the serum zinc level was compared with pica, no statistically significant correlation was found (p = 0.742). Zinc is a trace element that functions in several processes in the body, and zinc deficiency aggravates IDA symptoms. Measurement of zinc levels and supplementation if necessary should be considered for IDA patients.",
"title": ""
},
{
"docid": "570e03101ae116e2f20ab6337061ec3f",
"text": "This study explored the potential for using seed cake from hemp (Cannabis sativa L.) as a protein feed for dairy cows. The aim was to evaluate the effects of increasing the proportion of hempseed cake (HC) in the diet on milk production and milk composition. Forty Swedish Red dairy cows were involved in a 5-week dose-response feeding trial. The cows were allocated randomly to one of four experimental diets containing on average 494 g/kg of grass silage and 506 g/kg of concentrate on a dry matter (DM) basis. Diets containing 0 g (HC0), 143 g (HC14), 233 g (HC23) or 318 g (HC32) HC/kg DM were achieved by replacing an increasing proportion of compound pellets with cold-pressed HC. Increasing the proportion of HC resulted in dietary crude protein (CP) concentrations ranging from 126 for HC0 to 195 g CP/kg DM for HC32. Further effects on the composition of the diet with increasing proportions of HC were higher fat and NDF and lower starch concentrations. There were no linear or quadratic effects on DM intake, but increasing the proportion of HC in the diet resulted in linear increases in fat and NDF intake, as well as CP intake (P < 0.001), and a linear decrease in starch intake (P < 0.001). The proportion of HC had significant quadratic effects on the yields of milk, energy-corrected milk (ECM) and milk protein, fat and lactose. The curvilinear response of all yield parameters indicated maximum production from cows fed diet HC14. Increasing the proportion of HC resulted in linear decreases in both milk protein and milk fat concentration (P = 0.005 and P = 0.017, respectively), a linear increase in milk urea (P < 0.001), and a linear decrease in CP efficiency (milk protein/CP intake; P < 0.001). In conclusion, the HC14 diet, corresponding to a dietary CP concentration of 157 g/kg DM, resulted in the maximum yields of milk and ECM by dairy cows in this study.",
"title": ""
},
{
"docid": "95b0ad5e4898cb1610f2a48c3828eb92",
"text": "Talent management is found to be important for modern organizations because of the advent of the Modern economy, new generations entering the human resource and the need for businesses to become more strategic and competitive, which implies new ways of managing resource and human capital. In this research, the relationship between Talent management, employee Retention and organizational trust is investigated. The aim of the article is to examine the effect of Talent management on employee Retention through organizational trust among staffs of Isfahan University in Iran. The research method is a descriptive survey. The statistical population consists of staffs of Isfahan University in Iran. The sample included 280 employees, which were selected randomly. Data have been collected by a researcher-developed questionnaire and sampling has been done through census and analyzed using SPSS and AMOS software. The validity of the instrument was achieved through content validity and the reliability through Cronbach Alpha. The results of hypothesis testing indicate that there is a significant relationship between Talent management, employee Retention and organizational trust. The study is significant in that it draws attention to the effects of talent management on organizational trust and employees Retention in organization.",
"title": ""
},
{
"docid": "3dd755e5041b2b61ef63f65c7695db27",
"text": "The class imbalance problem is encountered in a large number of practical applications of machine learning and data mining, for example, information retrieval and filtering, and the detection of credit card fraud. It has been widely realized that this imbalance raises issues that are either nonexistent or less severe compared to balanced class cases and often results in a classifier's suboptimal performance. This is even more true when the imbalanced data are also high dimensional. In such cases, feature selection methods are critical to achieve optimal performance. In this paper, we propose a new feature selection method, Feature Assessment by Sliding Thresholds (FAST), which is based on the area under a ROC curve generated by moving the decision boundary of a single feature classifier with thresholds placed using an even-bin distribution. FAST is compared to two commonly-used feature selection methods, correlation coefficient and RELevance In Estimating Features (RELIEF), for imbalanced data classification. The experimental results obtained on text mining, mass spectrometry, and microarray data sets showed that the proposed method outperformed both RELIEF and correlation methods on skewed data sets and was comparable on balanced data sets; when small number of features is preferred, the classification performance of the proposed method was significantly improved compared to correlation and RELIEF-based methods.",
"title": ""
},
{
"docid": "01b147cb417ceedf40dadcb3ee31a1b2",
"text": "BACKGROUND\nPurposeful and timely rounding is a best practice intervention to routinely meet patient care needs, ensure patient safety, decrease the occurrence of patient preventable events, and proactively address problems before they occur. The Institute for Healthcare Improvement (IHI) endorsed hourly rounding as the best way to reduce call lights and fall injuries, and increase both quality of care and patient satisfaction. Nurse knowledge regarding purposeful rounding and infrastructure supporting timeliness are essential components for consistency with this patient centred practice.\n\n\nOBJECTIVES\nThe project aimed to improve patient satisfaction and safety through implementation of purposeful and timely nursing rounds. Goals for patient satisfaction scores and fall volume were set. Specific objectives were to determine current compliance with evidence-based criteria related to rounding times and protocols, improve best practice knowledge among staff nurses, and increase compliance with these criteria.\n\n\nMETHODS\nFor the objectives of this project the Joanna Briggs Institute's Practical Application of Clinical Evidence System and Getting Research into Practice audit tool were used. Direct observation of staff nurses on a medical surgical unit in the United States was employed to assess timeliness and utilization of a protocol when rounding. Interventions were developed in response to baseline audit results. A follow-up audit was conducted to determine compliance with the same criteria. For the project aims, pre- and post-intervention unit-level data related to nursing-sensitive elements of patient satisfaction and safety were compared.\n\n\nRESULTS\nRounding frequency at specified intervals during awake and sleeping hours nearly doubled. Use of a rounding protocol increased substantially to 64% compliance from zero. Three elements of patient satisfaction had substantive rate increases but the hospital's goals were not reached. Nurse communication and pain management scores increased modestly (5% and 11%, respectively). Responsiveness of hospital staff increased moderately (15%) with a significant sub-element increase in toileting (41%). Patient falls decreased by 50%.\n\n\nCONCLUSIONS\nNurses have the ability to improve patient satisfaction and patient safety outcomes by utilizing nursing round interventions which serve to improve patient communication and staff responsiveness. Having a supportive infrastructure and an organized approach, encompassing all levels of staff, to meet patient needs during their hospital stay was a key factor for success. Hard-wiring of new practices related to workflow takes time as staff embrace change and understand how best practice interventions significantly improve patient outcomes.",
"title": ""
},
{
"docid": "43874ef421e71b3240eedc81ed665280",
"text": "3 Axiomatisation of the Binary Kleene Star 15 3.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 3.2 Completeness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 3.3 Irredundancy of the Axioms . . . . . . . . . . . . . . . . . . . . . . . 24 3.4 Negative Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 3.5 Extensions of BPA(A) . . . . . . . . . . . . . . . . . . . . . . . . . 27",
"title": ""
},
{
"docid": "2e384001b105d0b3ace839051cdddf88",
"text": "Conformal prediction is a relatively new framework in which the predictive models output sets of predictions with a bound on the error rate, i.e., in a classification context, the probability of excluding the correct class label is lower than a predefined significance level. An investigation of the use of decision trees within the conformal prediction framework is presented, with the overall purpose to determine the effect of different algorithmic choices, including split criterion, pruning scheme and way to calculate the probability estimates. Since the error rate is bounded by the framework, the most important property of conformal predictors is efficiency, which concerns minimizing the number of elements in the output prediction sets. Results from one of the largest empirical investigations to date within the conformal prediction framework are presented, showing that in order to optimize efficiency, the decision trees should be induced using no pruning and with smoothed probability estimates. The choice of split criterion to use for the actual induction of the trees did not turn out to have any major impact on the efficiency. Finally, the experimentation also showed that when using decision trees, standard inductive conformal prediction was as efficient as the recently suggested method cross-conformal prediction. This is an encouraging results since cross-conformal prediction uses several decision trees, thus sacrificing the interpretability of a single decision tree.",
"title": ""
},
{
"docid": "c45d51108897addb94c6102a753d4591",
"text": "Latent Dirichlet Allocation (LDA) has seen increasing use in the understanding of source code and its related artifacts in part because of its impressive modeling power. However, this expressive power comes at a cost: the technique includes several tuning parameters whose impact on the resulting LDA model must be carefully considered. An obvious example is the burn-in period; too short a burn-in period leaves excessive echoes of the initial uniform distribution. The aim of this work is to provide insights into the tuning parameter's impact. Doing so improves the comprehension of both, 1) researchers who look to exploit the power of LDA in their research and 2) those who interpret the output of LDA-using tools. It is important to recognize that the goal of this work is not to establish values for the tuning parameters because there is no universal best setting. Rather, appropriate settings depend on the problem being solved, the input corpus (in this case, typically words from the source code and its supporting artifacts), and the needs of the engineer performing the analysis. This work's primary goal is to aid software engineers in their understanding of the LDA tuning parameters by demonstrating numerically and graphically the relationship between the tuning parameters and the LDA output. A secondary goal is to enable more informed setting of the parameters. Results obtained using both production source code and a synthetic corpus underscore the need for a solid understanding of how to configure LDA's tuning parameters.",
"title": ""
},
{
"docid": "7feea3bcba08a889ba779a23f79556d7",
"text": "In this report, monodispersed ultra-small Gd2O3 nanoparticles capped with hydrophobic oleic acid (OA) were synthesized with average particle size of 2.9 nm. Two methods were introduced to modify the surface coating to hydrophilic for bio-applications. With a hydrophilic coating, the polyvinyl pyrrolidone (PVP) coated Gd2O3 nanoparticles (Gd2O3-PVP) showed a reduced longitudinal T1 relaxation time compared with OA and cetyltrimethylammonium bromide (CTAB) co-coated Gd2O3 (Gd2O3-OA-CTAB) in the relaxation study. The Gd2O3-PVP was thus chosen for its further application study in MRI with an improved longitudinal relaxivity r1 of 12.1 mM(-1) s(-1) at 7 T, which is around 3 times as that of commercial contrast agent Magnevist(®). In vitro cell viability in HK-2 cell indicated negligible cytotoxicity of Gd2O3-PVP within preclinical dosage. In vivo MR imaging study of Gd2O3-PVP nanoparticles demonstrated considerable signal enhancement in the liver and kidney with a long blood circulation time. Notably, the OA capping agent was replaced by PVP through ligand exchange on the Gd2O3 nanoparticle surface. The hydrophilic PVP grants the Gd2O3 nanoparticles with a polar surface for bio-application, and the obtained Gd2O3-PVP could be used as an in vivo indicator of reticuloendothelial activity.",
"title": ""
}
] |
scidocsrr
|
caeba50304535d1b67ad333cc1ca0e71
|
Mining Twitter big data to predict 2013 Pakistan election winner
|
[
{
"docid": "76ae2082a4ab35fa3046f3f0af54bfe2",
"text": "Electoral prediction from Twitter data is an appealing research topic. It seems relatively straightforward and the prevailing view is overly optimistic. This is problematic because while simple approaches are assumed to be good enough, core problems are not addressed. Thus, this paper aims to (1) provide a balanced and critical review of the state of the art; (2) cast light on the presume predictive power of Twitter data; and (3) depict a roadmap to push forward the field. Hence, a scheme to characterize Twitter prediction methods is proposed. It covers every aspect from data collection to performance evaluation, through data processing and vote inference. Using that scheme, prior research is analyzed and organized to explain the main approaches taken up to date but also their weaknesses. This is the first meta-analysis of the whole body of research regarding electoral prediction from Twitter data. It reveals that its presumed predictive power regarding electoral prediction has been somewhat exaggerated: although social media may provide a glimpse on electoral outcomes current research does not provide strong evidence to support it can currently replace traditional polls. Finally, future lines of work are suggested.",
"title": ""
},
{
"docid": "cd2fb4278f1c2da581708d961bd7aa93",
"text": "Twitter messages are increasingly used to determine consumer sentiment towards a brand. The existing literature on Twitter sentiment analysis uses various feature sets and methods, many of which are adapted from more traditional text classification problems. In this research, we introduce an approach to supervised feature reduction using n-grams and statistical analysis to develop a Twitter-specific lexicon for sentiment analysis. We augment this reduced Twitter-specific lexicon with brand-specific terms for brand-related tweets. We show that the reduced lexicon set, while significantly smaller (only 187 features), reduces modeling complexity, maintains a high degree of coverage over our Twitter corpus, and yields improved sentiment classification accuracy. To demonstrate the effectiveness of the devised Twitter-specific lexicon compared to a traditional sentiment lexicon, we develop comparable sentiment classification models using SVM. We show that the Twitter-specific lexicon is significantly more effective in terms of classification recall and accuracy metrics. We then develop sentiment classification models using the Twitter-specific lexicon and the DAN2 machine learning approach, which has demonstrated success in other text classification problems. We show that DAN2 produces more accurate sentiment classification results than SVM while using the same Twitter-specific lexicon. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "09da8d98929dded2b6ed30810e61f441",
"text": "The FG-NET aging database was released in 2004 in an attempt to support research activities related to facial aging. Since then a number of researchers used the database for carrying out research in various disciplines related to facial aging. Based on the analysis of published work where the FG-NET aging database was used, conclusions related to the type of research carried out in relation to the impact of the dataset in shaping up the research topic of facial aging, are presented. In particular we focus our attention on the topic of age estimation that proved to be the most popular among users of the FG-NET aging database. Through the review of key papers in age estimation and the presentation of benchmark results the main approaches/directions in facial aging are outlined and future trends, requirements and research directions are drafted.",
"title": ""
},
{
"docid": "46545de8429e6e7363a2b41676fc9e91",
"text": "BACKGROUND\nThe scapula osteocutaneous free flap is frequently used to reconstruct complex head and neck defects given its tissue versatility. Because of minimal atherosclerotic changes in its vascular pedicle, this flap also may be used as a second choice when other osseous flaps are not available because of vascular disease at a preferred donor site.\n\n\nMETHODS\nWe performed a retrospective chart review evaluating flap outcome as well as surgical and medical complications based upon the flap choice.\n\n\nRESULTS\nThe flap survival rate was 97%. The surgical complication rate was similar for the 21 first-choice flaps (57.1%) and the 12 second-choice flaps (41.7%; p = .481). However, patients having second-choice flaps had a higher rate of medical complications (66.7%) than those with first-choice flaps (28.6%; p = .066). Age and the presence of comorbidities were associated with increased medical complications. All patients with comorbidities that had a second-choice flap experienced medical complications, with most being severe.\n\n\nCONCLUSIONS\nThe scapula osteocutaneous free flap has a high success rate in head and neck reconstruction. Surgical complications occur frequently regardless of whether the flap is used as a first or second choice. However, medical complications are more frequent and severe in patients undergoing second-choice flaps.",
"title": ""
},
{
"docid": "2b9fa788e7ccacf14fcdc295ba387e25",
"text": "In this paper, two kinds of methods, namely additional momentum method and self-adaptive learning rate adjustment method, are used to improve the BP algorithm. Considering the diversity of factors which affect stock prices, Single-input and Multi-input Prediction Model (SIPM and MIPM) are established respectively to implement short-term forecasts for SDIC Electric Power (600886) shares and Bank of China (601988) shares in 2009. Experiments indicate that the improved BP model has superior performance to the basic BP model, and MIPM is also better than SIPM. However, the best performance is obtained by using MIPM and improved prediction model cohesively.",
"title": ""
},
{
"docid": "2c6c8703d7be507e15066d2a3fbd813c",
"text": "This paper presents a novel and effective audio based method on depression classification. It focuses on two important issues, \\emph{i.e.} data representation and sample imbalance, which are not well addressed in literature. For the former one, in contrast to traditional shallow hand-crafted features, we propose a deep model, namely DepAudioNet, to encode the depression related characteristics in the vocal channel, combining Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to deliver a more comprehensive audio representation. For the latter one, we introduce a random sampling strategy in the model training phase to balance the positive and negative samples, which largely alleviates the bias caused by uneven sample distribution. Evaluations are carried out on the DAIC-WOZ dataset for the Depression Classification Sub-challenge (DCC) at the 2016 Audio-Visual Emotion Challenge (AVEC), and the experimental results achieved clearly demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "875548b7dc303bef8efa8284216e010d",
"text": "BACKGROUND\nGigantomastia is a breast disorder marked by exaggerated rapid growth of the breasts, generally bilaterally. Since this disorder is very rare and has been reported only in sparse case reports its etiology has yet to be fully established. Treatment is aimed at improving the clinical and psychological symptoms and reducing the treatment side effects; however, the best therapeutic option varies from case to case.\n\n\nCASE PRESENTATION\nThe present report described a case of gestational gigantomastia in a 30-year-old woman, gravida 2, parity 1, 17 week pregnant admitted to Pars Hospital, Tehran, Iran, on May 2014. The patient was admitted to hospital at week 17 of pregnancy, although her breasts initially had begun to enlarge from the first trimester. The patient developed hypercalcemia in her 32nd week of pregnancy. The present report followed this patient from diagnosis until the completion of treatment.\n\n\nCONCLUSION\nAlthough gestational gigantomastia is a rare condition, its timely prognosis and careful examination of some conditions like hyperprolactinemia and hypercalcemia is essential in successful management of this condition.",
"title": ""
},
{
"docid": "287d1e603f7d677cff93aa0601a9bfef",
"text": "Frameworks are an object-oriented reuse technique that are widely used in industry but not discussed much by the software engineering research community. They are a way of reusing design that is part of the reason that some object-oriented developers are so productive. This paper compares and contrasts frameworks with other reuse techniques, and describes how to use them, how to evaluate them, and how to develop them. It describe the tradeo s involved in using frameworks, including the costs and pitfalls, and when frameworks are appropriate.",
"title": ""
},
{
"docid": "4053bbaf8f9113bef2eb3b15e34a209a",
"text": "With the recent availability of commodity Virtual Reality (VR) products, immersive video content is receiving a significant interest. However, producing high-quality VR content often requires upgrading the entire production pipeline, which is costly and time-consuming. In this work, we propose using video feeds from regular broadcasting cameras to generate immersive content. We utilize the motion of the main camera to generate a wide-angle panorama. Using various techniques, we remove the parallax and align all video feeds. We then overlay parts from each video feed on the main panorama using Poisson blending. We examined our technique on various sports including basketball, ice hockey and volleyball. Subjective studies show that most participants rated their immersive experience when viewing our generated content between Good to Excellent. In addition, most participants rated their sense of presence to be similar to ground-truth content captured using a GoPro Omni 360 camera rig.",
"title": ""
},
{
"docid": "70bce8834a23bc84bea7804c58bcdefe",
"text": "This study presents novel coplanar waveguide (CPW) power splitters comprising a CPW T-junction with outputs attached to phase-adjusting circuits, i.e., the composite right/left-handed (CRLH) CPW and the conventional CPW, to achieve a constant phase difference with arbitrary value over a wide bandwidth. To demonstrate the proposed technique, a 180/spl deg/ CRLH CPW power splitter with a phase error of less than 10/spl deg/ and a magnitude difference of below 1.5 dB within 2.4 to 5.22 GHz is experimentally demonstrated. Compared with the conventional 180/spl deg/ delay-line power splitter, the proposed structure possesses not only superior phase and magnitude performances but also a 37% size reduction. The equivalent circuit of the CRLH CPW, which represents the left-handed (LH), right-handed (RH), and lossy characteristics, is constructed and the results obtained are in good agreement with the full-wave simulation and measurement. Applications involving the wideband coplanar waveguide-to-coplanar stripline (CPW-to-CPS) transition and the tapered loop antenna are presented to stress the practicality of the 180/spl deg/ CRLH CPW power splitter. The 3-dB insertion loss bandwidth is measured as 98% for the case of a back-to-back CPW-to-CPS transition. The tapered loop antenna fed by the proposed transition achieves a measured 10-dB return loss bandwidth of 114%, and shows similar radiation patterns and 6-9 dBi antenna gain in its operating band.",
"title": ""
},
{
"docid": "4191648ada97ecc5a906468369c12bf4",
"text": "Dermoscopy is a widely used technique whose role in the clinical (and preoperative) diagnosis of melanocytic and non-melanocytic skin lesions has been well established in recent years. The aim of this paper is to clarify the correlations between the \"local\" dermoscopic findings in melanoma and the underlying histology, in order to help clinicians in routine practice.",
"title": ""
},
{
"docid": "577b0b3215fbd6a6b6fd0d8882967a1e",
"text": "Generating texts of different sentiment labels is getting more and more attention in the area of natural language generation. Recently, Generative Adversarial Net (GAN) has shown promising results in text generation. However, the texts generated by GAN usually suffer from the problems of poor quality, lack of diversity and mode collapse. In this paper, we propose a novel framework SentiGAN, which has multiple generators and one multi-class discriminator, to address the above problems. In our framework, multiple generators are trained simultaneously, aiming at generating texts of different sentiment labels without supervision. We propose a penalty based objective in the generators to force each of them to generate diversified examples of a specific sentiment label. Moreover, the use of multiple generators and one multi-class discriminator can make each generator focus on generating its own examples of a specific sentiment label accurately. Experimental results on four datasets demonstrate that our model consistently outperforms several state-of-the-art text generation methods in the sentiment accuracy and quality of generated texts.",
"title": ""
},
{
"docid": "a0c15895a455c07b477d4486d32582ef",
"text": "PURPOSE\nTo evaluate the efficacy of α-lipoic acid (ALA) in reducing scarring after trabeculectomy.\n\n\nMATERIALS AND METHODS\nEighteen adult New Zealand white rabbits underwent trabeculectomy. During trabeculectomy, thin sponges were placed between the sclera and Tenon's capsule for 3 minutes, saline solution, mitomycin-C (MMC) and ALA was applied to the control group (CG) (n=6 eyes), MMC group (MMCG) (n=6 eyes), and ALA group (ALAG) (n=6 eyes), respectively. After surgery, topical saline and ALA was applied for 28 days to the control and ALAGs, respectively. Filtrating bleb patency was evaluated by using 0.1% trepan blue. Hematoxylin and eosin and Masson trichrome staining for toxicity, total cellularity, and collagen organization; α-smooth muscle actin immunohistochemistry staining performed for myofibroblast phenotype identification.\n\n\nRESULTS\nClinical evaluation showed that all 6 blebs (100%) of the CG had failed, whereas there were only 2 failures (33%) in the ALAG and no failures in the MMCG on day 28. Histologic evaluation showed significantly lower inflammatory cell infiltration in the ALAGs and CGs than the MMCG. Toxicity change was more significant in the MMCG than the control and ALAGs. Collagen was better organized in the ALAG than control and MMCGs. In immunohistochemistry evaluation, ALA significantly reduced the population of cells expressing α-smooth muscle action.\n\n\nCONCLUSIONS\nΑLA prevents and/or reduces fibrosis by inhibition of inflammation pathways, revascularization, and accumulation of extracellular matrix. It can be used as an agent for delaying tissue regeneration and for providing a more functional-permanent fistula.",
"title": ""
},
{
"docid": "3309e09d16e74f87a507181bd82cd7f0",
"text": "The goal of this work is to overview and summarize the grasping taxonomies reported in the literature. Our long term goal is to understand how to reduce mechanical complexity of anthropomorphic hands and still preserve their dexterity. On the basis of a literature survey, 33 different grasp types are taken into account. They were then arranged in a hierarchical manner, resulting in 17 grasp types.",
"title": ""
},
{
"docid": "e2d8da3d28f560c4199991dbdffb8c2c",
"text": "Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as the and of. Other words that may seem visual can often be predicted reliably just from the language model e.g., sign after behind a red stop or phone following talking on a cell. In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets the new state-of-the-art by a significant margin.",
"title": ""
},
{
"docid": "1b82ef890fbbf033781ea65202b2f4b9",
"text": "We present a fast GPU-based streaming algorithm to perform collision queries between deformable models. Our approach is based on hierarchical culling and reduces the computation to generating different streams. We present a novel stream registration method to compact the streams and efficiently compute the potentially colliding pairs of primitives. We also use a deferred front tracking method to lower the memory overhead. The overall algorithm has been implemented on different GPUs and we have evaluated its performance on non-rigid and deformable simulations. We highlight our speedups over prior CPU-based and GPU-based algorithms. In practice, our algorithm can perform inter-object and intra-object computations on models composed of hundreds of thousands of triangles in tens of milliseconds.",
"title": ""
},
{
"docid": "1104df035599f5f890e9b8650ea336be",
"text": "A new digital programmable CMOS analog front-end (AFE) IC for measuring electroencephalograph or electrocardiogram signals in a portable instrumentation design approach is presented. This includes a new high-performance rail-to-rail instrumentation amplifier (IA) dedicated to the low-power AFE IC. The measurement results have shown that the proposed biomedical AFE IC, with a die size of 4.81 mm/sup 2/, achieves a maximum stable ac gain of 10 000 V/V, input-referred noise of 0.86 /spl mu/ V/sub rms/ (0.3 Hz-150 Hz), common-mode rejection ratio of at least 115 dB (0-1 kHz), input-referred dc offset of less than 60 /spl mu/V, input common mode range from -1.5 V to 1.3 V, and current drain of 485 /spl mu/A (excluding the power dissipation of external clock oscillator) at a /spl plusmn/1.5-V supply using a standard 0.5-/spl mu/m CMOS process technology.",
"title": ""
},
{
"docid": "c01fbc8bd278b06e0476c6fbffca0ad1",
"text": "Memristors can be optimally used to implement logic circuits. In this paper, a logic circuit based on Memristor Ratioed Logic (MRL) is proposed. Specifically, a hybrid CMOS-memristive logic family by a suitable combination of 4 memristor and a complementary inverter CMOS structure is presented. The proposed structure by having outputs of AND, OR and XOR gates of inputs at the same time, reducing the area and connections and fewer power consumption can be appropriate for implementation of more complex circuits. Circuit design of a single-bit Full Adder is considered as a case study. The Full Adder proposed is implemented using 10 memristors and 4 transistors comparing to 18 memristors and 8 transistors in the other related work.",
"title": ""
},
{
"docid": "0b705fc98638cf042e84417849259074",
"text": "G et al. [Gallego, G., G. Iyengar, R. Phillips, A. Dubey. 2004. Managing flexible products on a network. CORC Technical Report TR-2004-01, Department of Industrial Engineering and Operations Research, Columbia University, New York.] recently proposed a choice-based deterministic linear programming model (CDLP) for network revenue management (RM) that parallels the widely used deterministic linear programming (DLP) model. While they focused on analyzing “flexible products”—a situation in which the provider has the flexibility of using a collection of products (e.g., different flight times and/or itineraries) to serve the same market demand (e.g., an origin-destination connection)—their approach has broader implications for understanding choice-based RM on a network. In this paper, we explore the implications in detail. Specifically, we characterize optimal offer sets (sets of available network products) by extending to the network case a notion of “efficiency” developed by Talluri and van Ryzin [Talluri, K. T., G. J. van Ryzin. 2004. Revenue management under a general discrete choice model of consumer behavior. Management Sci. 50 15–33.] for the single-leg, choice-based RM problem. We show that, asymptotically, as demand and capacity are scaled up, only these efficient sets are used in an optimal policy. This analysis suggests that efficiency is a potentially useful approach for identifying “good” offer sets on networks, as it is in the case of single-leg problems. Second, we propose a practical decomposition heuristic for converting the static CDLP solution into a dynamic control policy. The heuristic is quite similar to the familiar displacement-adjusted virtual nesting (DAVN) approximation used in traditional network RM, and it significantly improves on the performance of the static LP solution. We illustrate the heuristic on several numerical examples.",
"title": ""
},
{
"docid": "9a30008cc270ac7a0bb1a0f12dca6187",
"text": "Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains an unsolved challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. Overall, we can train on and embed graphs that are four orders of magnitude larger than typical GCN implementations. We show how GCN embeddings can be used to make high-quality recommendations in various settings at Pinterest, which has a massive underlying graph with 3 billion nodes representing pins and boards, and 17 billion edges. According to offline metrics, user studies, as well as A/B tests, our approach generates higher-quality recommendations than comparable deep learning based systems. To our knowledge, this is by far the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.",
"title": ""
},
{
"docid": "1d8653708c06f27433dc57844550bb4c",
"text": "Because of the nonlinearity of digital PWM generator and the effect of power supply noise in power stage, the error is introduced into digital class D power amplifier. A method used to eliminate the error is presented in this paper, and it is easy to implement. Based on this method, a digital class D power amplifier is designed and simulated, the simulation results indicate this method can basically eliminate the error produced by digital PWM generator and power stage, and improve the performance of the system.",
"title": ""
},
{
"docid": "bd3016195482f7fbd41f03a25d1a9e83",
"text": "Evaluating in Massive Open Online Courses (MOOCs) is a difficult task because of the huge number of students involved in the courses. Peer grading is an effective method to cope with this problem, but something must be done to lessen the effect of the subjective evaluation. In this paper we present a matrix factorization approach able to learn from the order of the subset of exams evaluated by each grader. We tested this method on a data set provided by a real peer review process. By using a tailored graphical representation, the induced model could also allow the detection of peculiarities in the peer review process.",
"title": ""
}
] |
scidocsrr
|
f2be7280227d473ef0dbe3d6c97783ef
|
Study of a T-Shaped Slot With a Capacitor for High Isolation Between MIMO Antennas
|
[
{
"docid": "b3c9bc55f5a9d64a369ec67e1364c4fc",
"text": "This paper introduces a coupling element to enhance the isolation between two closely packed antennas operating at the same frequency band. The proposed structure consists of two antenna elements and a coupling element which is located in between the two antenna elements. The idea is to use field cancellation to enhance isolation by putting a coupling element which artificially creates an additional coupling path between the antenna elements. To validate the idea, a design for a USB dongle MIMO antenna for the 2.4 GHz WLAN band is presented. In this design, the antenna elements are etched on a compact low-cost FR4 PCB board with dimensions of 20times40times1.6 mm3. According to our measurement results, we can achieve more than 30 dB isolation between the antenna elements even though the two parallel individual planar inverted F antenna (PIFA) in the design share a solid ground plane with inter-antenna spacing (Center to Center) of less than 0.095 lambdao or edge to edge separations of just 3.6 mm (0.0294 lambdao). Both simulation and measurement results are used to confirm the antenna isolation and performance. The method can also be applied to different types of antennas such as non-planar antennas. Parametric studies and current distribution for the design are also included to show how to tune the structure and control the isolation.",
"title": ""
}
] |
[
{
"docid": "2232d02a700d412c61cab20b98b6a6c2",
"text": "Intranasal drug delivery (INDD) systems offer a route to the brain that bypasses problems related to gastrointestinal absorption, first-pass metabolism, and the blood-brain barrier; onset of therapeutic action is rapid, and the inconvenience and discomfort of parenteral administration are avoided. INDD has found several applications in neuropsychiatry, such as to treat migraine, acute and chronic pain, Parkinson disease, disorders of cognition, autism, schizophrenia, social phobia, and depression. INDD has also been used to test experimental drugs, such as peptides, for neuropsychiatric indications; these drugs cannot easily be administered by other routes. This article examines the advantages and applications of INDD in neuropsychiatry; provides examples of test, experimental, and approved INDD treatments; and focuses especially on the potential of intranasal ketamine for the acute and maintenance therapy of refractory depression.",
"title": ""
},
{
"docid": "8464328ecb1fcfbd6d727af489de5188",
"text": "Recent deep learning (DL) models have moved beyond static network architectures to dynamic ones, handling data where the network structure changes every example, such as sequences of variable lengths, trees, and graphs. Existing dataflow-based programming models for DL—both static and dynamic declaration—either cannot readily express these dynamic models, or are inefficient due to repeated dataflow graph construction and processing, and difficulties in batched execution. We present Cavs, a vertexcentric programming interface and optimized system implementation for dynamic DL models. Cavs represents dynamic network structure as a static vertex function F and a dynamic instance-specific graph G, and performs backpropagation by scheduling the execution of F following the dependencies in G. Cavs bypasses expensive graph construction and preprocessing overhead, allows for the use of static graph optimization techniques on pre-defined operations in F , and naturally exposes batched execution opportunities over different graphs. Experiments comparing Cavs to two state-of-the-art frameworks for dynamic NNs (TensorFlow Fold and DyNet) demonstrate the efficacy of this approach: Cavs achieves a near one order of magnitude speedup on training of various dynamic NN architectures, and ablations demonstrate the contribution of our proposed batching and memory management strategies.",
"title": ""
},
{
"docid": "9a2ab1d198468819f32a2b74334528ae",
"text": "This paper introduces GeoSpark an in-memory cluster computing framework for processing large-scale spatial data. GeoSpark consists of three layers: Apache Spark Layer, Spatial RDD Layer and Spatial Query Processing Layer. Apache Spark Layer provides basic Spark functionalities that include loading / storing data to disk as well as regular RDD operations. Spatial RDD Layer consists of three novel Spatial Resilient Distributed Datasets (SRDDs) which extend regular Apache Spark RDDs to support geometrical and spatial objects. GeoSpark provides a geometrical operations library that accesses Spatial RDDs to perform basic geometrical operations (e.g., Overlap, Intersect). System users can leverage the newly defined SRDDs to effectively develop spatial data processing programs in Spark. The Spatial Query Processing Layer efficiently executes spatial query processing algorithms (e.g., Spatial Range, Join, KNN query) on SRDDs. GeoSpark also allows users to create a spatial index (e.g., R-tree, Quad-tree) that boosts spatial data processing performance in each SRDD partition. Preliminary experiments show that GeoSpark achieves better run time performance than its Hadoop-based counterparts (e.g., SpatialHadoop).",
"title": ""
},
{
"docid": "7359794d213f095d3429f114748545c3",
"text": "Purpose: To investigate the impact of residual astigmatism on visual acuity (VA) after the implantation of a novel extended range of vision (ERV) intraocular lens (IOL) based on the correction of spherical and chromatic aberration. Method: The study enrolled 411 patients bilaterally implanted with the ERV IOL Tecnis Symfony. Visual acuity and subjective refraction were analyzed during the 4to 6-month follow-up. The sample of eyes was stratified for four groups according to the magnitude of postoperative refractive astigmatism and postoperative spherical equivalent. Results: The astigmatism analysis included 386 eyes of 193 patients with both eyes of each patient within the same cylinder range. Uncorrected VAs for distance, intermediate and near were better in the group of eyes with lower level of postoperative astigmatism, but even in eyes with residual cylinders up to 0.75 D, the loss of VA lines was clinically not relevant. The orientation of astigmatism did not seem to have an impact on the tolerance to the residual cylinder. The SE evaluation included 810 eyes of 405 patients, with both eyes of each patient in the same SE range. Uncorrected VAs for distance, intermediate and near, were very similar in all SE groups. Conclusion: Residual cylinders up to 0.75 D after the implantation of the Tecnis Symfony IOL have a very mild impact on monocular and binocular VA. The Tecnis Symfony IOL shows a good tolerance to unexpected refractive surprises and thus a better “sweet spot”.",
"title": ""
},
{
"docid": "fc0a8bffb77dd7498658eb1319edd566",
"text": "There continues to be debate about the long-term neuropsychological impact of mild traumatic brain injury (MTBI). A meta-analysis of the relevant literature was conducted to determine the impact of MTBI across nine cognitive domains. The analysis was based on 39 studies involving 1463 cases of MTBI and 1191 control cases. The overall effect of MTBI on neuropsychological functioning was moderate (d = .54). However, findings were moderated by cognitive domain, time since injury, patient characteristics, and sampling methods. Acute effects (less than 3 months postinjury) of MTBI were greatest for delayed memory and fluency (d = 1.03 and .89, respectively). In unselected or prospective samples, the overall analysis revealed no residual neuropsychological impairment by 3 months postinjury (d = .04). In contrast, clinic-based samples and samples including participants in litigation were associated with greater cognitive sequelae of MTBI (d = .74 and .78, respectively at 3 months or greater). Indeed, litigation was associated with stable or worsening of cognitive functioning over time. The implications and limitations of these findings are discussed.",
"title": ""
},
{
"docid": "91136fd0fd8e15ed1d6d6bf7add489f0",
"text": "Microelectromechanical Systems (MEMS) technology has already led to advances in optical imaging, scanning, communications and adaptive applications. Many of these efforts have been approached without the use of feedback control techniques that are common in macro-scale operations to ensure repeatable and precise performance. This paper examines control techniques and related issues of precision performance as applied to a one-degree-of-freedom electrostatic MEMS micro mirror.",
"title": ""
},
{
"docid": "81b82ae24327c7d5c0b0bf4a04904826",
"text": "AIM\nTo identify key predictors and moderators of mental health 'help-seeking behavior' in adolescents.\n\n\nBACKGROUND\nMental illness is highly prevalent in adolescents and young adults; however, individuals in this demographic group are among the least likely to seek help for such illnesses. Very little quantitative research has examined predictors of help-seeking behaviour in this demographic group.\n\n\nDESIGN\nA cross-sectional design was used.\n\n\nMETHODS\nA group of 180 volunteers between the ages of 17-25 completed a survey designed to measure hypothesized predictors and moderators of help-seeking behaviour. Predictors included a range of health beliefs, personality traits and attitudes. Data were collected in August 2010 and were analysed using two standard and three hierarchical multiple regression analyses.\n\n\nFINDINGS\nThe standard multiple regression analyses revealed that extraversion, perceived benefits of seeking help, perceived barriers to seeking help and social support were direct predictors of help-seeking behaviour. Tests of moderated relationships (using hierarchical multiple regression analyses) indicated that perceived benefits were more important than barriers in predicting help-seeking behaviour. In addition, perceived susceptibility did not predict help-seeking behaviour unless individuals were health conscious to begin with or they believed that they would benefit from help.\n\n\nCONCLUSION\nA range of personality traits, attitudes and health beliefs can predict help-seeking behaviour for mental health problems in adolescents. The variable 'Perceived Benefits' is of particular importance as it is: (1) a strong and robust predictor of help-seeking behaviour; and (2) a factor that can theoretically be modified based on health promotion programmes.",
"title": ""
},
{
"docid": "6f265af3f4f93fcce13563cac14b5774",
"text": "Inorganic pyrophosphate (PP(i)) produced by cells inhibits mineralization by binding to crystals. Its ubiquitous presence is thought to prevent \"soft\" tissues from mineralizing, whereas its degradation to P(i) in bones and teeth by tissue-nonspecific alkaline phosphatase (Tnap, Tnsalp, Alpl, Akp2) may facilitate crystal growth. Whereas the crystal binding properties of PP(i) are largely understood, less is known about its effects on osteoblast activity. We have used MC3T3-E1 osteoblast cultures to investigate the effect of PP(i) on osteoblast function and matrix mineralization. Mineralization in the cultures was dose-dependently inhibited by PP(i). This inhibition could be reversed by Tnap, but not if PP(i) was bound to mineral. PP(i) also led to increased levels of osteopontin (Opn) induced via the Erk1/2 and p38 MAPK signaling pathways. Opn regulation by PP(i) was also insensitive to foscarnet (an inhibitor of phosphate uptake) and levamisole (an inhibitor of Tnap enzymatic activity), suggesting that increased Opn levels did not result from changes in phosphate. Exogenous OPN inhibited mineralization, but dephosphorylation by Tnap reversed this effect, suggesting that OPN inhibits mineralization via its negatively charged phosphate residues and that like PP(i), hydrolysis by Tnap reduces its mineral inhibiting potency. Using enzyme kinetic studies, we have shown that PP(i) inhibits Tnap-mediated P(i) release from beta-glycerophosphate (a commonly used source of organic phosphate for culture mineralization studies) through a mixed type of inhibition. In summary, PP(i) prevents mineralization in MC3T3-E1 osteoblast cultures by at least three different mechanisms that include direct binding to growing crystals, induction of Opn expression, and inhibition of Tnap activity.",
"title": ""
},
{
"docid": "0c79db142f913564654f53b6519f2927",
"text": "For software process improvement -SPIthere are few small organizations using models that guide the management and deployment of their improvement initiatives. This is largely because a lot of these models do not consider the special characteristics of small businesses, nor the appropriate strategies for deploying an SPI initiative in this type of organization. It should also be noted that the models which direct improvement implementation for small settings do not present an explicit process with which to organize and guide the internal work of the employees involved in the implementation of the improvement opportunities. In this paper we propose a lightweight process, which takes into account appropriate strategies for this type of organization. Our proposal, known as a “Lightweight process to incorporate improvements” uses the philosophy of the Scrum agile",
"title": ""
},
{
"docid": "1b638147b80419c6a4c472b02cd9916f",
"text": "Herein, we report the development of highly water dispersible nanocomposite of conducting polyaniline and multiwalled carbon nanotubes (PANI-MWCNTs) via novel, `dynamic' or `stirred' liquid-liquid interfacial polymerization method using sulphonic acid as a dopant. MWCNTs were functionalized prior to their use and then dispersed in water. The nanocomposite was further subjected for physico-chemical characterization using spectroscopic (UV-Vis and FT-IR), FE-SEM analysis. The UV-VIS spectrum of the PANI-MWCNTs nanocomposite shows a free carrier tail with increasing absorption at higher wavelength. This confirms the presence of conducting emeraldine salt phase of the polyaniline and is further supported by FT-IR analysis. The FE-SEM images show that the thin layer of polyaniline is coated over the functionalized MWCNTs forming a `core-shell' like structure. The synthesized nanocomposite was found to be highly dispersible in water and shows beautiful colour change from dark green to blue with change in pH of the solution from 1 to 12 (i.e. from acidic to basic pH). The change in colour of the polyaniline-MWCNTs nanocomposite is mainly due to the pH dependent chemical transformation /change of thin layer of polyaniline.",
"title": ""
},
{
"docid": "2c61a29907ad3d2d6f1bbd090f33cd08",
"text": "Evolvability is the capacity to evolve. This paper introduces a simple computational model of evolvability and demonstrates that, under certain conditions, evolvability can increase indefinitely, even when there is no direct selection for evolvability. The model shows that increasing evolvability implies an accelerating evolutionary pace. It is suggested that the conditions for indefinitely increasing evolvability are satisfied in biological and cultural evolution. We claim that increasing evolvability is a large-scale trend in evolution. This hypothesis leads to testable predictions about biological and cultural evolution.",
"title": ""
},
{
"docid": "9cc2dfde38bed5e767857b1794d987bc",
"text": "Smartphones providing proprietary encryption schemes, albeit offering a novel paradigm to privacy, are becoming a bone of contention for certain sovereignties. These sovereignties have raised concerns about their security agencies not having any control on the encrypted data leaving their jurisdiction and the ensuing possibility of it being misused by people with malicious intents. Such smartphones have typically two types of customers, independent users who use it to access public mail servers and corporates/enterprises whose employees use it to access corporate emails in an encrypted form. The threat issues raised by security agencies concern mainly the enterprise servers where the encrypted data leaves the jurisdiction of the respective sovereignty while on its way to the global smartphone router. In this paper, we have analyzed such email message transfer mechanisms in smartphones and proposed some feasible solutions, which, if accepted and implemented by entities involved, can lead to a possible win-win situation for both the parties, viz., the smartphone provider who does not want to lose the customers and these sovereignties who can avoid the worry of encrypted data leaving their jurisdiction.",
"title": ""
},
{
"docid": "ec501a4ff57e812a68def82f185f4d19",
"text": "The photosynthetic light-harvesting apparatus moves energy from absorbed photons to the reaction center with remarkable quantum efficiency. Recently, long-lived quantum coherence has been proposed to influence efficiency and robustness of photosynthetic energy transfer in light-harvesting antennae. The quantum aspect of these dynamics has generated great interest both because of the possibility for efficient long-range energy transfer and because biology is typically considered to operate entirely in the classical regime. Yet, experiments to date show only that coherence persists long enough that it can influence dynamics, but they have not directly shown that coherence does influence energy transfer. Here, we provide experimental evidence that interaction between the bacteriochlorophyll chromophores and the protein environment surrounding them not only prolongs quantum coherence, but also spawns reversible, oscillatory energy transfer among excited states. Using two-dimensional electronic spectroscopy, we observe oscillatory excited-state populations demonstrating that quantum transport of energy occurs in biological systems. The observed population oscillation suggests that these light-harvesting antennae trade energy reversibly between the protein and the chromophores. Resolving design principles evident in this biological antenna could provide inspiration for new solar energy applications.",
"title": ""
},
{
"docid": "7bf5aaa12c9525909f39dc8af8774927",
"text": "Certain deterministic non-linear systems may show chaotic behaviour. Time series derived from such systems seem stochastic when analyzed with linear techniques. However, uncovering the deterministic structure is important because it allows constructing more realistic and better models and thus improved predictive capabilities. This paper provides a review of two main key features of chaotic systems, the dimensions of their strange attractors and the Lyapunov exponents. The emphasis is on state space reconstruction techniques that are used to estimate these properties, given scalar observations. Data generated from equations known to display chaotic behaviour are used for illustration. A compilation of applications to real data from widely di erent elds is given. If chaos is found to be present, one may proceed to build non-linear models, which is the topic of the second paper in this series.",
"title": ""
},
{
"docid": "658ff079f4fc59ee402a84beecd77b55",
"text": "Mitochondria are master regulators of metabolism. Mitochondria generate ATP by oxidative phosphorylation using pyruvate (derived from glucose and glycolysis) and fatty acids (FAs), both of which are oxidized in the Krebs cycle, as fuel sources. Mitochondria are also an important source of reactive oxygen species (ROS), creating oxidative stress in various contexts, including in the response to bacterial infection. Recently, complex changes in mitochondrial metabolism have been characterized in mouse macrophages in response to varying stimuli in vitro. In LPS and IFN-γ-activated macrophages (M1 macrophages), there is decreased respiration and a broken Krebs cycle, leading to accumulation of succinate and citrate, which act as signals to alter immune function. In IL-4-activated macrophages (M2 macrophages), the Krebs cycle and oxidative phosphorylation are intact and fatty acid oxidation (FAO) is also utilized. These metabolic alterations in response to the nature of the stimulus are proving to be determinants of the effector functions of M1 and M2 macrophages. Furthermore, reprogramming of macrophages from M1 to M2 can be achieved by targeting metabolic events. Here, we describe the role that metabolism plays in macrophage function in infection and immunity, and propose that reprogramming with metabolic inhibitors might be a novel therapeutic approach for the treatment of inflammatory diseases.",
"title": ""
},
{
"docid": "421261547adfa6c47c6ced492e7e3463",
"text": "Purpose – Conventional street lighting systems in areas with a low frequency of passersby are online most of the night without purpose. The consequence is that a large amount of power is wasted meaninglessly. With the broad availability of flexible-lighting technology like light-emitting diode lamps and everywhere available wireless internet connection, fast reacting, reliably operating, and power-conserving street lighting systems become reality. The purpose of this work is to describe the Smart Street Lighting (SSL) system, a first approach to accomplish the demand for flexible public lighting systems. Design/methodology/approach – This work presents the SSL system, a framework developed for a dynamic switching of street lamps based on pedestrians’ locations and desired safety (or “fear”) zones. In the developed system prototype, each pedestrian is localized via his/her smartphone, periodically sending location and configuration information to the SSL server. For street lamp control, each and every lamppost is equipped with a ZigBee-based radio device, receiving control information from the SSL server via multi-hop routing. Findings – This research paper confirms that the application of the proposed SSL system has great potential to revolutionize street lighting, particularly in suburban areas with low-pedestrian frequency. More important, the broad utilization of SSL can easily help to overcome the regulatory requirement for CO2 emission reduction by switching off lampposts whenever they are not required. Research limitations/implications – The paper discusses in detail the implementation of SSL, and presents results of its application on a small scale. Experiments have shown that objects like trees can interrupt wireless communication between lampposts and that inaccuracy of global positioning system position detection can lead to unexpected lighting effects. Originality/value – This paper introduces the novel SSL framework, a system for fast, reliable, and energy efficient street lamp switching based on a pedestrian’s location and personal desires of safety. Both safety zone definition and position estimation in this novel approach is accomplished using standard smartphone capabilities. Suggestions for overcoming these issues are discussed in the last part of the paper.",
"title": ""
},
{
"docid": "a93e0e98e6367606a8bb72000b0bbe8a",
"text": "Programming by Demonstration: a Machine Learning Approach",
"title": ""
},
{
"docid": "29734bed659764e167beac93c81ce0a7",
"text": "Fashion classification encompasses the identification of clothing items in an image. The field has applications in social media, e-commerce, and criminal law. In our work, we focus on four tasks within the fashion classification umbrella: (1) multiclass classification of clothing type; (2) clothing attribute classification; (3) clothing retrieval of nearest neighbors; and (4) clothing object detection. We report accuracy measurements for clothing style classification (50.2%) and clothing attribute classification (74.5%) that outperform baselines in the literature for the associated datasets. We additionally report promising qualitative results for our clothing retrieval and clothing object detection tasks.",
"title": ""
},
{
"docid": "688848d25ef154a797f85e03987b795f",
"text": "In this paper, we propose an omnidirectional mobile mechanism with surface contact. This mechanism is expected to perform on rough terrain and weak ground at disaster sites. In the discussion on the drive mechanism, we explain how a two axes orthogonal drive transmission system is important and we propose a principle drive mechanism for omnidirectional motion. In addition, we demonstrated that the proposed drive mechanism has potential for omnidirectional movement on rough ground by conducting experiments with prototypes.",
"title": ""
}
] |
scidocsrr
|
283c53f5be834dc2359326a83d3db634
|
Assessing the Corpus Size vs. Similarity Trade-off for Word Embeddings in Clinical NLP
|
[
{
"docid": "93f1ee5523f738ab861bcce86d4fc906",
"text": "Semantic role labeling (SRL) is one of the basic natural language processing (NLP) problems. To this date, most of the successful SRL systems were built on top of some form of parsing results (Koomen et al., 2005; Palmer et al., 2010; Pradhan et al., 2013), where pre-defined feature templates over the syntactic structure are used. The attempts of building an end-to-end SRL learning system without using parsing were less successful (Collobert et al., 2011). In this work, we propose to use deep bi-directional recurrent network as an end-to-end system for SRL. We take only original text information as input feature, without using any syntactic knowledge. The proposed algorithm for semantic role labeling was mainly evaluated on CoNLL-2005 shared task and achieved F1 score of 81.07. This result outperforms the previous state-of-the-art system from the combination of different parsing trees or models. We also obtained the same conclusion with F1 = 81.27 on CoNLL2012 shared task. As a result of simplicity, our model is also computationally efficient that the parsing speed is 6.7k tokens per second. Our analysis shows that our model is better at handling longer sentences than traditional models. And the latent variables of our model implicitly capture the syntactic structure of a sentence.",
"title": ""
}
] |
[
{
"docid": "0be24a284a7490b709bbbdfea458b211",
"text": "This article provides a meta-analytic review of the relationship between the quality of leader-member exchanges (LMX) and citizenship behaviors performed by employees. Results based on 50 independent samples (N = 9,324) indicate a moderately strong, positive relationship between LMX and citizenship behaviors (rho = .37). The results also support the moderating role of the target of the citizenship behaviors on the magnitude of the LMX-citizenship behavior relationship. As expected, LMX predicted individual-targeted behaviors more strongly than it predicted organizational targeted behaviors (rho = .38 vs. rho = .31), and the difference was statistically significant. Whether the LMX and the citizenship behavior ratings were provided by the same source or not also influenced the magnitude of the correlation between the 2 constructs.",
"title": ""
},
{
"docid": "2f92cde5a194a4cabdebebe2c7cc11ba",
"text": "The expressive power of neural networks is important for understanding deep learning. Most existing works consider this problem from the view of the depth of a network. In this paper, we study how width affects the expressiveness of neural networks. Classical results state that depth-bounded (e.g. depth-2) networks with suitable activation functions are universal approximators. We show a universal approximation theorem for width-bounded ReLU networks: width-(n+ 4) ReLU networks, where n is the input dimension, are universal approximators. Moreover, except for a measure zero set, all functions cannot be approximated by width-n ReLU networks, which exhibits a phase transition. Several recent works demonstrate the benefits of depth by proving the depth-efficiency of neural networks. That is, there are classes of deep networks which cannot be realized by any shallow network whose size is no more than an exponential bound. Here we pose the dual question on the width-efficiency of ReLU networks: Are there wide networks that cannot be realized by narrow networks whose size is not substantially larger? We show that there exist classes of wide networks which cannot be realized by any narrow network whose depth is no more than a polynomial bound. On the other hand, we demonstrate by extensive experiments that narrow networks whose size exceed the polynomial bound by a constant factor can approximate wide and shallow network with high accuracy. Our results provide more comprehensive evidence that depth may be more effective than width for the expressiveness of ReLU networks.",
"title": ""
},
{
"docid": "c70702e495108282ebba5cda9ea17a38",
"text": "Recent years have witnessed a widespread increase of interest in network representation learning (NRL). By far most research efforts have focused on NRL for homogeneous networks like social networks where vertices are of the same type, or heterogeneous networks like knowledge graphs where vertices (and/or edges) are of different types. There has been relatively little research dedicated to NRL for bipartite networks. Arguably, generic network embedding methods like node2vec and LINE can also be applied to learn vertex embeddings for bipartite networks by ignoring the vertex type information. However, these methods are suboptimal in doing so, since real-world bipartite networks concern the relationship between two types of entities, which usually exhibit different properties and patterns from other types of network data. For example, E-Commerce recommender systems need to capture the collaborative filtering patterns between customers and products, and search engines need to consider the matching signals between queries and webpages. This work addresses the research gap of learning vertex representations for bipartite networks. We present a new solution BiNE, short for Bipartite Network Embedding, which accounts for two special properties of bipartite networks: long-tail distribution of vertex degrees and implicit connectivity relations between vertices of the same type. Technically speaking, we make three contributions: (1) We design a biased random walk generator to generate vertex sequences that preserve the long-tail distribution of vertices; (2) We propose a new optimization framework by simultaneously modeling the explicit relations (i.e., observed links) and implicit relations (i.e., unobserved but transitive links); (3) We explore the theoretical foundations of BiNE to shed light on how it works, proving that BiNE can be interpreted as factorizing multiple matrices. We perform extensive experiments on five real datasets covering the tasks of link prediction (classification) and recommendation (ranking), empirically verifying the effectiveness and rationality of BiNE. Our experiment codes are available at: https://github.com/clhchtcjj/BiNE.",
"title": ""
},
{
"docid": "dfde9a2febe48e273d12131082071635",
"text": "Instagram, an online photo-sharing platform, has gained increasing popularity. It allows users to take photos, apply digital filters and share them with friends instantaneously by using mobile devices.Instagram provides users with the functionality to associate their photos with points of interest, and it thus becomes feasible to study the association between points of interest and Instagram photos. However, no previous work studies the association. In this paper, we propose to study the problem of mapping Instagram photos to points of interest. To understand the problem, we analyze Instagram datasets, and report our findings, which also characterize the challenges of the problem. To address the challenges, we propose to model the mapping problem as a ranking problem, and develop a method to learn a ranking function by exploiting the textual, visual and user information of photos. To maximize the prediction effectiveness for textual and visual information, and incorporate the users' visiting preferences, we propose three subobjectives for learning the parameters of the proposed ranking function. Experimental results on two sets of Instagram data show that the proposed method substantially outperforms existing methods that are adapted to handle the problem.",
"title": ""
},
{
"docid": "761ac681dc60cdb8bbcf2ac0b6b84afc",
"text": "When a digital library user searches for publications by an author name, she often sees a mixture of publications by different authors who have the same name. With the growth of digital libraries and involvement of more authors, this author ambiguity problem is becoming critical. Author disambiguation (AD) often tries to solve this problem by leveraging metadata such as coauthors, research topics, publication venues and citation information, since more personal information such as the contact details is often restricted or missing. In this paper, we study the problem of how to efficiently disambiguate author names given an incessant stream of published papers. To this end, we propose a “BatchAD+IncAD” framework for dynamic author disambiguation. First, we perform batch author disambiguation (BatchAD) to disambiguate all author names at a given time by grouping all records (each record refers to a paper with one of its author names) into disjoint clusters. This establishes a one-to-one mapping between the clusters and real-world authors. Then, for newly added papers, we periodically perform incremental author disambiguation (IncAD), which determines whether each new record can be assigned to an existing cluster, or to a new cluster not yet included in the previous data. Based on the new data, IncAD also tries to correct previous AD results. Our main contributions are: (1) We demonstrate with real data that a small number of new papers often have overlapping author names with a large portion of existing papers, so it is challenging for IncAD to effectively leverage previous AD results. (2) We propose a novel IncAD model which aggregates metadata from a cluster of records to estimate the author’s profile such as her coauthor distributions and keyword distributions, in order to predict how likely it is that a new record is “produced” by the author. (3) Using two labeled datasets and one large-scale raw dataset, we show that the proposed method is much more efficient than state-of-the-art methods while ensuring high accuracy.",
"title": ""
},
{
"docid": "63dcb42d456ab4b6512c47437e354f7b",
"text": "The deep learning revolution brought us an extensive array of neural network architectures that achieve state-of-the-art performance in a wide variety of Computer Vision tasks including among others classification, detection and segmentation. In parallel, we have also been observing an unprecedented demand in computational and memory requirements, rendering the efficient use of neural networks in low-powered devices virtually unattainable. Towards this end, we propose a threestage compression and acceleration pipeline that sparsifies, quantizes and entropy encodes activation maps of Convolutional Neural Networks. Sparsification increases the representational power of activation maps leading to both acceleration of inference and higher model accuracy. Inception-V3 and MobileNet-V1 can be accelerated by as much as 1.6× with an increase in accuracy of 0.38% and 0.54% on the ImageNet and CIFAR-10 datasets respectively. Quantizing and entropy coding the sparser activation maps lead to higher compression over the baseline, reducing the memory cost of the network execution. Inception-V3 and MobileNet-V1 activation maps, quantized to 16 bits, are compressed by as much as 6× with an increase in accuracy of 0.36% and 0.55% respectively.",
"title": ""
},
{
"docid": "e659f976983c28631062bb5c8b1c35ab",
"text": "This paper presents the outcomes of research into using lingual parts of music in an automatic mood classification system. Using a collection of lyrics and corresponding user-tagged moods, we build classifiers that classify lyrics of songs into moods. By comparing the performance of different mood frameworks (or dimensions), we examine to what extent the linguistic part of music reveals adequate information for assigning a mood category and which aspects of mood can be classified best. Our results show that word oriented metrics provide a valuable source of information for automatic mood classification of music, based on lyrics only. Metrics such as term frequencies and tf*idf values are used to measure relevance of words to the different mood classes. These metrics are incorporated in a machine learning classifier setup. Different partitions of the mood plane are investigated and we show that there is no large difference in mood prediction based on the mood division. Predictions on the valence, tension and combinations of aspects lead to similar performance.",
"title": ""
},
{
"docid": "114e6cde6a38bcbb809f19b80110c16f",
"text": "This paper proposes a neural semantic parsing approach – Sequence-to-Action, which models semantic parsing as an endto-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.",
"title": ""
},
{
"docid": "b69e3e8eda027300a66813a9a7afba5c",
"text": "Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.",
"title": ""
},
{
"docid": "019e48981d451eed66ffcfbee8edddb0",
"text": "We consider open government (OG) within the context of e-government and its broader implications for the future of public administration. We argue that the current US Administration's Open Government Initiative blurs traditional distinctions between e-democracy and e-government by incorporating historically democratic practices, now enabled by emerging technology, within administrative agencies. We consider how transparency, participation, and collaboration function as democratic practices in administrative agencies, suggesting that these processes are instrumental attributes of administrative action and decision making, rather than the objective of administrative action, as they appear to be currently treated. We propose alternatively that planning and assessing OG be addressed within a \"public value\" framework. The creation of public value is the goal of public organizations; through public value, public organizations meet the needs and wishes of the public with respect to substantive benefits as well as the intrinsic value of better government. We extend this view to OG by using the framework as a way to describe the value produced when interaction between government and citizens becomes more transparent, participative, and collaborative, i.e., more democratic.",
"title": ""
},
{
"docid": "4646770e02f6c71f749e92b3b372ee00",
"text": "Cochannel speech separation aims to separate two speech signals from a single mixture. In a supervised scenario, the identities of two speakers are given, and current methods use pre-trained speaker models for separation. One issue in model-based methods is the mismatch between training and test signal levels. We propose an iterative algorithm to adapt speaker models to match the signal levels in testing. Our algorithm first obtains initial estimates of source signals using unadapted speaker models and then detects the input signal-to-noise ratio (SNR) of the mixture. The input SNR is then used to adapt the speaker models for more accurate estimation. The two steps iterate until convergence. Compared to search-based SNR detection methods, our method is not limited to given SNR levels. Evaluations demonstrate that the iterative procedure converges quickly in a considerable range of SNRs and improves separation results significantly. Comparisons show that the proposed system performs significantly better than related model-based systems.",
"title": ""
},
{
"docid": "bba6fad7d1d32683e95e475632c9a9e5",
"text": "A great variety of text tasks such as topic or spam identification, user profiling, and sentiment analysis can be posed as a supervised learning problem and tackle using a text classifier. A text classifier consists of several subprocesses, some of them are general enough to be applied to any supervised learning problem, whereas others are specifically designed to tackle a particular task, using complex and computational expensive processes such as lemmatization, syntactic analysis, etc. Contrary to traditional approaches, we propose a minimalistic and wide system able to tackle text classification tasks independent of domain and language, namely μTC. It is composed by some easy to implement text transformations, text representations, and a supervised learning algorithm. These pieces produce a competitive classifier even in the domain of informally written text. We provide a detailed description of μTC along with an extensive experimental comparison with relevant state-of-the-art methods. μTC was compared on 30 different datasets. Regarding accuracy, μTC obtained the best performance in 20 datasets while achieves competitive results in the remaining 10. The compared datasets include several problems like topic and polarity classification, spam detection, user profiling and authorship attribution. Furthermore, it is important to state that our approach allows the usage of the technology even without knowledge of machine learning and natural language processing. ∗CONACyT Consejo Nacional de Ciencia y Tecnoloǵıa, Dirección de Cátedras, Insurgentes Sur 1582, Crédito Constructor 03940, Ciudad de México, México. †INFOTEC Centro de Investigación e Innovación en Tecnoloǵıas de la Información y Comunicación, Circuito Tecnopolo Sur No 112, Fracc. Tecnopolo Pocitos II, Aguascalientes 20313, México. ‡Centro de Investigación en Geograf́ıa y Geomática “Ing. Jorge L. Tamayo”, A.C. Circuito Tecnopolo Norte No. 117, Col. Tecnopolo Pocitos II, C.P. 20313,. Aguascalientes, Ags, México. 1 ar X iv :1 70 4. 01 97 5v 2 [ cs .C L ] 1 4 Se p 20 17",
"title": ""
},
{
"docid": "9efcb4550dc9b703259f49dd0958f8ce",
"text": "A hybrid transformer-based integrated tunable duplexer is demonstrated. High isolation between the transmit and receive ports is achieved through electrical balance between the antenna and balance network impedances. A novel high-power-tolerant balance network, which can be tuned at both the transmit and receive frequencies, allows high isolation in both the transmit and receive bands even under realistic antenna impedance frequency dependence. To maintain high isolation despite antenna impedance variation, a feedback loop is employed to measure the transmitter leakage and correct the impedance of the balance network. An isolation > 50 dB in the transmit and receive bands with an antenna voltage standing-wave ratio within 2:1 was achieved. The duplexer, along with a cascaded direct-conversion receiver, achieves a noise figure of 5.3 dB, a conversion gain of 45 dB, and consumes 51 mW of power. The insertion loss in the transmit path was less than 3.8 dB. Implemented in a 65-nm CMOS process, the chip occupies an active area of 2.2 mm2.",
"title": ""
},
{
"docid": "6f1669cf7fe464c42b5cb0d68efb042e",
"text": "BACKGROUND\nLevine and Drennan described the tibial metaphyseal-diaphyseal angle (MDA) in an attempt to identify patients with infantile Blount's disease. Pediatric orthopaedic surgeons have debated not only the use, but also the reliability of this measure. Two techniques have been described to measure the MDA. These techniques involved using both the lateral border of the tibial cortex and the center of the tibial shaft as the longitudinal axis for radiographic measurements. The use of digital images poses another variable in the reliability of the MDA as digital images are used more commonly.\n\n\nMETHODS\nThe radiographs of 21 children (42 limbs) were retrospectively reviewed by 27 staff pediatric orthopaedic surgeons. Interobserver reliability was determined using the intraclass correlation coefficients (ICCs). Nine duplicate radiographs (18 duplicate limbs) that appeared in the data set were used to calculate ICCs representing the intraobserver reliability. A scatter plot was created comparing the mean MDA determined by the 2 methods. The strength of a linear relationship between the 2 methods was measured with the Pearson correlation coefficient. Finally, we tested for a difference in variability between the 2 measures at angles of 11 degrees or less and greater than 11 degrees by comparing the variance ratios using the F test.\n\n\nRESULTS\nThe interobserver reliability was calculated using the ICC as 0.821 for the single-measure method and 0.992 for the average-measure method. The intraobserver reliability was similarly calculated using the ICC as 0.886 for the single-measure method and 0.940 for the average-measure method. Pearson correlation coefficient (0.9848) revealed a highly linear relationship between the 2 methods (P = 0.00001). We also found that there was no statistically significant variability between the 2 methods of calculating the MDA at angles of 11 degrees or less compared with angles greater than 11 degrees (P = 0.596688).\n\n\nCONCLUSIONS\nThere was excellent interobserver reliability and intraobserver reliability among reviewers. Using either the lateral diaphyseal line or center diaphyseal line produces reasonable reliability with no significant variability at angles of 11 degrees or less or greater than 11 degrees.\n\n\nLEVEL OF EVIDENCE\nLevel IV.",
"title": ""
},
{
"docid": "96430b17e70aa79a1bef9237b77994b5",
"text": "This paper proposes an autopilot system for a small and light unmanned air vehicle called Kiteplane. The Kiteplane has a large delta-shaped main wing that is easily disturbed by the wind, which was minimized by utilizing trim flight with drift. The proposed control system for autonomous trajectory following with a wind disturbance included fuzzy logic controllers, a speed controller, a wind disturbance attenuation block, and low-level feedback controllers. The system was implemented onboard the aircraft. Experiments were performed to test the performance of the proposed system and the Kiteplane nearly succeeded in following the desired trajectory, under the wind disturbance. Although the path was not followed perfectly, the airplane was able to traverse the waypoints by utilizing a failsafe waypoint updating rule",
"title": ""
},
{
"docid": "242a79e9e0d38c5dbd2e87d109566b6e",
"text": "Δ9-Tetrahydrocannabinol (THC) is the main active constituent of cannabis. In recent years, the average THC content of some cannabis cigarettes has increased up to approximately 60 mg per cigarette (20% THC cigarettes). Acute cognitive and psychomotor effects of THC among recreational users after smoking cannabis cigarettes containing such high doses are unknown. The objective of this study was to study the dose–effect relationship between the THC dose contained in cannabis cigarettes and cognitive and psychomotor effects for THC doses up to 69.4 mg (23%). This double-blind, placebo-controlled, randomised, four-way cross-over study included 24 non-daily male cannabis users (two to nine cannabis cigarettes per month). Participants smoked four cannabis cigarettes containing 0, 29.3, 49.1 and 69.4 mg THC on four exposure days. The THC dose in smoked cannabis was linearly associated with a slower response time in all tasks (simple reaction time, visuo-spatial selective attention, sustained attention, divided attention and short-term memory tasks) and motor control impairment in the motor control task. The number of errors increased significantly with increasing doses in the short-term memory and the sustained attention tasks. Some participants showed no impairment in motor control even at THC serum concentrations higher than 40 ng/mL. High feeling and drowsiness differed significantly between treatments. Response time slowed down and motor control worsened, both linearly, with increasing THC doses. Consequently, cannabis with high THC concentrations may be a concern for public health and safety if cannabis smokers are unable to titrate to a high feeling corresponding to a desired plasma THC level.",
"title": ""
},
{
"docid": "d082f98c606c927286d991f8e534462c",
"text": "Distributed online data analytics has attracted significant research interest in recent years with the advent of Fog and Cloud computing. The popularity of novel distributed applications such as crowdsourcing and crowdsensing have fostered the need for scalable energy-efficient platforms that can enable distributed data analytics. In this paper, we propose CARDAP, a (C)ontext (A)ware (R)eal-time (D)ata (A)nalytics (P)latform. CARDAP is a generic, flexible and extensible, component-based platform that can be deployed in complex distributed mobile analytics applications e.g. sensing activity of citizens in smart cities. CARDAP incorporates a number of energy efficient data delivery strategies using real-time mobile data stream mining for data reduction and thus less data transmission. Extensive experimental evaluations indicate the CARDAP platform can deliver significant benefits in energy efficiency over naive approaches. Lessons learnt and future work",
"title": ""
},
{
"docid": "3b31d07c6a5f7522e2060d5032ca5177",
"text": "In the past few years detection of repeatable and distinctive keypoints on 3D surfaces has been the focus of intense research activity, due on the one hand to the increasing diffusion of low-cost 3D sensors, on the other to the growing importance of applications such as 3D shape retrieval and 3D object recognition. This work aims at contributing to the maturity of this field by a thorough evaluation of several recent 3D keypoint detectors. A categorization of existing methods in two classes, that allows for highlighting their common traits, is proposed, so as to abstract all algorithms to two general structures. Moreover, a comprehensive experimental evaluation is carried out in terms of repeatability, distinctiveness and computational efficiency, based on a vast data corpus characterized by nuisances such as noise, clutter, occlusions and viewpoint changes.",
"title": ""
},
{
"docid": "bff3126818b6fd9a91eba7aa6683ca72",
"text": "Several fundamental security mechanisms for restricting access to network resources rely on the ability of a reference monitor to inspect the contents of traffic as it traverses the network. However, with the increasing popularity of cryptographic protocols, the traditional means of inspecting packet contents to enforce security policies is no longer a viable approach as message contents are concealed by encryption. In this paper, we investigate the extent to which common application protocols can be identified using only the features that remain intact after encryption—namely packet size, timing, and direction. We first present what we believe to be the first exploratory look at protocol identification in encrypted tunnels which carry traffic from many TCP connections simultaneously, using only post-encryption observable features. We then explore the problem of protocol identification in individual encrypted TCP connections, using much less data than in other recent approaches. The results of our evaluation show that our classifiers achieve accuracy greater than 90% for several protocols in aggregate traffic, and, for most protocols, greater than 80% when making fine-grained classifications on single connections. Moreover, perhaps most surprisingly, we show that one can even estimate the number of live connections in certain classes of encrypted tunnels to within, on average, better than 20%.",
"title": ""
}
] |
scidocsrr
|
0c660a2146c42deb69c195dab4288156
|
Trust calibration within a human-robot team: Comparing automatically generated explanations
|
[
{
"docid": "fbd05f764470b94af30c7799e94ff0f0",
"text": "Agent-based modeling of human social behavior is an increasingly important research area. A key factor in human social interaction is our beliefs about others, a theory of mind. Whether we believe a message depends not only on its content but also on our model of the communicator. How we act depends not only on the immediate effect but also on how we believe others will react. In this paper, we discuss PsychSim, an implemented multiagent-based simulation tool for modeling interactions and influence. While typical approaches to such modeling have used first-order logic, PsychSim agents have their own decision-theoretic model of the world, including beliefs about its environment and recursive models of other agents. Using these quantitative models of uncertainty and preferences, we have translated existing psychological theories into a decision-theoretic semantics that allow the agents to reason about degrees of believability in a novel way. We discuss PsychSim’s underlying architecture and describe its application to a school violence scenario for illustration.",
"title": ""
}
] |
[
{
"docid": "543dc9543221b507746ebf1fe8d14928",
"text": "Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models’ usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study population. This article presents the results of a simulation study that examines the performance of likelihood-based tests and the traditionally used Information Criterion (ICs) used for determining the number of classes in mixture modeling. We look at the performance of these tests and indexes for 3 types of mixture models: latent class analysis (LCA), a factor mixture model (FMA), and a growth mixture models (GMM). We evaluate the ability of the tests and indexes to correctly identify the number of classes at three different sample sizes (n D 200, 500, 1,000). Whereas the Bayesian Information Criterion performed the best of the ICs, the bootstrap likelihood ratio test proved to be a very consistent indicator of classes across all of the models considered.",
"title": ""
},
{
"docid": "4b6aa06586cd2fd6efe8fb5246e7830b",
"text": "We tackle the problem of reconstructing moving vehicles in autonomous driving scenarios using only a monocular camera. Though the problem appears to be ill-posed, we demonstrate that prior knowledge about how 3D shapes of vehicles project to an image can be used to reason about the reverse process, i.e., how shapes (back-)project from 2D to 3D. We encode this knowledge in shape priors, which are learnt over a small dataset comprising of annotated RGB images of vehicles. Each shape prior comprises of a deformable wireframe model whose vertices are semantically unique keypoints of that vehicle. The first contribution is an approach for reconstructing vehicles from just a single (RGB) image. To obtain a 3D wireframe representing the shape, we first localize the vertices of the wireframe (keypoints) in 2D using a Convolutional Neural Network (CNN). We then formulate a shape-aware optimization problem that uses the learnt shape priors to lift the detected 2D keypoints to 3D, thereby recovering the 3D pose and shape of a query object from an image. The shape-aware adjustment robustly recovers shape (3D locations of the detected keypoints) while simultaneously filling in occluded keypoints. To tackle estimation errors incurred due to erroneously detected keypoints, we use an Iteratively Reweighted Least Squares (IRLS) scheme for robust optimization, and as a by-product characterize noise models for each predicted keypoint. We evaluate our approach on autonomous driving benchmarks, and present superior results to existing monocular, as well as stereo approaches. The second contribution is a real-time monocular object localization system that estimates the shape and pose of dynamic objects in real-time, using video frames captured from a moving monocular camera. Here again, by incorporating prior knowledge of the object category, we can obtain more detailed instance-level reconstructions. As opposed to earlier object model specifications, the proposed shapeprior model leads to the formulation of a Bundle Adjustment-like optimization problem for simultaneous shape and pose estimation. We demonstrate how these keypoints can be used to recover 3D object properties, while accounting for any 2D localization errors and self-occlusion. We show significant performance improvements compared to state-of-the-art monocular competitors for 2D keypoint detection, as well as 3D localization and reconstruction of dynamic objects.",
"title": ""
},
{
"docid": "919ee3a62e28c1915d0be556a2723688",
"text": "Bayesian data analysis includes but is not limited to Bayesian inference (Gelman et al., 2003; Kerman, 2006a). Here, we take Bayesian inference to refer to posterior inference (typically, the simulation of random draws from the posterior distribution) given a fixed model and data. Bayesian data analysis takes Bayesian inference as a starting point but also includes fitting a model to different datasets, altering a model, performing inferential and predictive summaries (including prior or posterior predictive checks), and validation of the software used to fit the model. The most general programs currently available for Bayesian inference are WinBUGS (BUGS Project, 2004) and OpenBugs, which can be accessed from R using the packages R2WinBUGS (Sturtz et al., 2005) and BRugs. In addition, various R packages exist that directly fit particular Bayesian models (e.g. MCMCPack, Martin and Quinn (2005)). In this note, we describe our own entry in the “inference engine” sweepstakes but, perhaps more importantly, describe the ongoing development of some R packages that perform other aspects of Bayesian data analysis.",
"title": ""
},
{
"docid": "c4365eb260c4b45b1e11d28ff35d146d",
"text": "This paper presents concepts and methods to develop digital 3D city model from photography. The aim of current research is to create an effective approach of modelling city with huge amount of complex architectures. Taking advantage of advanced photogrammetry and High Resolution Stereo Camera (HRSC) technology, high quality ground details can be mapped to GIS datasets and the accurate 3D information can be explored. We describe a technique to triangulate (mesh) polygonal surface for creating 3D visual objects. The technique provides high quality and accuracy for the creation of geographic objects. Also, we introduce the method to improve visualization performance by means of rectification and optimization. Finally, we discuss the data management of 3D city model.",
"title": ""
},
{
"docid": "f43a3aa8d3390d110ed7c6eb41342ed2",
"text": "BACKGROUND\nDoppler flow velocity waveform analysis of fetal vessels is one of the main methods for evaluating fetus health before labor. Doppler waves of middle cerebral artery (MCA) can predict most of the at risk fetuses in high risk pregnancies. In this study, we tried to obtain normal values and their nomograms during pregnancy for Doppler flow velocity indices of MCA in 20-40 weeks of normal pregnancies in Iranian population and compare their pattern with other countries' nomograms.\n\n\nMETHODS\nDuring present descriptive cross-sectional study, 1037 normal pregnant women with 20th-40th week gestational age were underwent MCA Doppler study. All cases were studied by gray scale ultrasonography initially and Doppler of MCA afterward. Resistive Index (RI), Pulsative Index (PI), Systolic/Diastolic ratio (S/D ratio), and Peak Systolic Velocity (PSV) values of MCA were determined for all of the subjects.\n\n\nRESULTS\nResults of present study showed that RI, PI, S/D ratio values of MCA decreased with parabolic pattern and PSV value increased with simple pattern, as gestational age progressed. These changes were statistically significant (P=0.000 for all of indices) and more characteristic during late weeks of pregnancy.\n\n\nCONCLUSION\nValues of RI, PI and S/D ratio indices reduced toward the end of pregnancy, but PSV increased. Despite the trivial difference, nomograms of various Doppler indices in present study have similar pattern with other studies.",
"title": ""
},
{
"docid": "dcad574bece3ee5363eb9674cce995c4",
"text": "With advances in robotics, robots can give advice and help using natural language. The field of HRI, however, has not yet developed a communication strategy for giving advice effectively. Drawing on literature in politeness and informal speech, we propose options for a robot's help-giving speech-using hedges or discourse markers, both of which can mitigate the commanding tone implied in direct statements of advice. To test these options, we experimentally compared two help-giving strategies depicted in videos of human and robot helpers. We found that when robot and human helpers used a hedge or discourse markers, they seemed more considerate and likeable, and less controlling. The robot that used discourse markers had even more impact than the human helper. The findings suggest that communication strategies derived from speech used when people help each other in natural settings can be effective for planning the help dialogues of robotic assistants.",
"title": ""
},
{
"docid": "8c0e5e48c8827a943f4586b8e75f4f9d",
"text": "Predicting the results of football matches poses an interesting challenge due to the fact that the sport is so popular and widespread. However, predicting the outcomes is also a difficult problem because of the number of factors which must be taken into account that cannot be quantitatively valued or modeled. As part of this work, a software solution has been developed in order to try and solve this problem. During the development of the system, a number of tests have been carried out in order to determine the optimal combination of features and classifiers. The results of the presented system show a satisfactory capability of prediction which is superior to the one of the reference method (most likely a priori outcome).",
"title": ""
},
{
"docid": "ce59491b083fac5d3eff9393ca1728b4",
"text": "We present a new framework for interpreting face images and i mage sequences using an Active Appearance Model (AAM). The AA M contains a statistical, photo-realistic model of the shape and grey-l evel appearance of faces. This paper demonstrates the use of the AAM’s efficient iterat ive matching scheme for image interpretation. We use the AAM as a basis for face re cognition, obtain good results for difficult images. We show how the AAM fra mework allows identity information to be decoupled from other variation, allowing evidence of identity to be integrated over a sequence. The AAM approach m akes optimal use of the evidence from either a single image or image sequence. Since we derive a complete description of a given image our method can be used a s the basis for a range of face image interpretation tasks.",
"title": ""
},
{
"docid": "32b04b91bc796a082fb9c0d4c47efbf9",
"text": "Intell Sys Acc Fin Mgmt. 2017;24:49–55. Summary A two‐step system is presented to improve prediction of telemarketing outcomes and to help the marketing management team effectively manage customer relationships in the banking industry. In the first step, several neural networks are trained with different categories of information to make initial predictions. In the second step, all initial predictions are combined by a single neural network to make a final prediction. Particle swarm optimization is employed to optimize the initial weights of each neural network in the ensemble system. Empirical results indicate that the two‐ step system presented performs better than all its individual components. In addition, the two‐ step system outperforms a baseline one where all categories of marketing information are used to train a single neural network. As a neural networks ensemble model, the proposed two‐step system is robust to noisy and nonlinear data, easy to interpret, suitable for large and heterogeneous marketing databases, fast and easy to implement.",
"title": ""
},
{
"docid": "0264a3c21559a1b9c78c42d7c9848783",
"text": "This paper presents the first linear bulk CMOS power amplifier (PA) targeting low-power fifth-generation (5G) mobile user equipment integrated phased array transceivers. The output stage of the PA is first optimized for power-added efficiency (PAE) at a desired error vector magnitude (EVM) and range given a challenging 5G uplink use case scenario. Then, inductive source degeneration in the optimized output stage is shown to enable its embedding into a two-stage transformer-coupled PA; by broadening interstage impedance matching bandwidth and helping to reduce distortion. Designed and fabricated in 1P7M 28 nm bulk CMOS and using a 1 V supply, the PA achieves +4.2 dBm/9% measured Pout/PAE at -25 dBc EVM for a 250 MHz-wide 64-quadrature amplitude modulation orthogonal frequency division multiplexing signal with 9.6 dB peak-to-average power ratio. The PA also achieves 35.5%/10% PAE for continuous wave signals at saturation/9.6 dB back-off from saturation. To the best of the authors' knowledge, these are the highest measured PAE values among published K-and Ka-band CMOS PAs.",
"title": ""
},
{
"docid": "16bf05d14d0f4bed68ecbf2fb60b2cc7",
"text": "Amaç: Akıllı telefonlar iletişim amaçlı kullanımları yanında internet, fotoğraf makinesi, video-ses kayıt cihazı, navigasyon, müzik çalar gibi birçok özelliğin bir arada toplandığı günümüzün popüler teknolojik cihazlarıdır. Akıllı telefonların kullanımı hızla artmaktadır. Bu hızlı artış akıllı telefonlara bağımlılığı ve problemli kullanımı beraberinde getirmektedir. Bizim bildiğimiz kadarıyla Türkiye’de akıllı telefonlara bağımlılığı değerlendiren ölçek yoktur. Bu çalışmanın amacı Akıllı Telefon Bağımlılığı Ölçeği’nin Türkçe’ye uyarlanması, geçerlik ve güvenilirliğinin incelenmesidir. Yöntem: Çalışmanın örneklemini Süleyman Demirel Üniversitesi Tıp Fakültesi’nde eğitim gören ve akıllı telefon kullanıcısı olan 301 üniversite öğrencisi oluşturmuştur. Çalışmada veri toplama araçları olarak Akıllı Telefon Bağımlılığı Ölçeği, Bilgi Formu, İnternet Bağımlılığı Ölçeği ve Problemli Cep Telefonu Kullanımı Ölçeği kullanılmıştır. Ölçekler, tüm katılımcılara Bilgi Formu hep ilk sırada olacak şekilde karışık sırayla verilmiştir. Ölçeklerin doldurulması yaklaşık 20 dakika sürmüştür. Test-tekrar-test uygulaması rastgele belirlenmiş 30 öğrenci ile (rumuz yardımıyla) üç hafta sonra yapılmıştır. Ölçeğin faktör yapısı açıklayıcı faktör analizi ve varimaks rotasyonu ile incelenmiştir. Güvenilirlik analizi için iç tutarlılık, iki-yarım güvenilirlik ve test-tekrar test güvenilirlik analizleri uygulanmıştır. Ölçüt bağıntılı geçerlilik analizinde Pearson korelasyon analizi kullanılmıştır. Bulgular: Faktör Analizi yedi faktörlü bir yapı ortaya koymuş, maddelerin faktör yüklerinin 0,349-0,824 aralığında değiştiği belirlenmiştir. Ölçeğin Cronbach alfa iç tutarlılık katsayısı 0,947 bulunmuştur. Ölçeğin diğer ölçeklerle arasındaki korelasyonlar istatistiksel olarak anlamlı bulunmuştur. Test-tekrar test güvenilirliğinin yüksek olduğu (r=0,814) bulunmuştur. İki yarım güvenilirlik analizinde Guttman Splithalf katsayısı 0,893 olarak saptanmıştır. Kız öğrencilerde ölçek toplam puan ortalamasının erkeklerden istatistiksel olarak önemli düzeyde yüksek olduğu bulunmuştur (p=0,03). Yaş ile ölçek toplam puanı arasında anlamlı olmayan negatif ilişki saptanmıştır (r=-0.086, p=0,13). En yüksek ölçek puan ortalaması 16 saat üzeri kullananlarda gözlenmiş olup 4 saatten az kullananlardan istatistiksel olarak önemli derecede fazla bulunmuştur (p=0,01). Ölçek toplam puanı akıllı telefonu en çok kullanım amacına göre karşılaştırıldığında en yüksek ortalamanın oyun kategorisinde olduğu ancak internet (p=0,44) ve sosyal ağ (p=0,98) kategorilerinden farklı olmadığı, ayrıca telefon (p=0,02), SMS (p=0,02) ve diğer kullanım amacı (p=0,04) kategori ortalamalarından istatistiksel olarak önemli derecede fazla olduğu bulunmuştur. Akıllı telefon bağımlısı olduğunu düşünenlerin ve bu konuda emin olmayanların toplam ölçek puanları akıllı telefon bağımlısı olduğunu düşünmeyenlerin toplam ölçek puanlarından anlamlı şekilde yüksek bulunmuştur (p=0,01). Sonuç: Bu çalışmada, Akıllı telefon Bağımlılığı Ölçeği’nin Türkçe formunun akıllı telefon bağımlılığının değerlendirilmesinde geçerli ve güvenilir bir ölçüm aracı olduğu bulunmuştur.",
"title": ""
},
{
"docid": "c896c4c81a3b8d18ad9f8073562f5514",
"text": "A fully integrated passive UHF RFID tag with embedded temperature sensor, compatible with the ISO/IEC 18000 type 6C protocol, is developed in a standard 0.18µm CMOS process, which is designed to measure the axle temperature of a running train. The consumption of RF/analog front-end circuits is 1.556µA@1.0V, and power dissipation of digital part is 5µA@1.0V. The CMOS temperature sensor exhibits a conversion time under 2 ms, less than 7 µW power dissipation, resolution of 0.31°C/LSB and error of +2.3/−1.1°C with a 1.8 V power supply for range from −35°C to 105°C. Measured sensitivity of tag is −5dBm at room temperature.",
"title": ""
},
{
"docid": "77c922c3d2867fa7081a9f18ae0b1151",
"text": "The failure of critical components in industrial systems may have negative consequences on the availability, the productivity, the security and the environment. To avoid such situations, the health condition of the physical system, and particularly of its critical components, can be constantly assessed by using the monitoring data to perform on-line system diagnostics and prognostics. The present paper is a contribution on the assessment of the health condition of a Computer Numerical Control (CNC) tool machine and the estimation of its Remaining Useful Life (RUL). The proposed method relies on two main phases: an off-line phase and an on-line phase. During the first phase, the raw data provided by the sensors are processed to extract reliable features. These latter are used as inputs of learning algorithms in order to generate the models that represent the wear’s behavior of the cutting tool. Then, in the second phase, which is an assessment one, the constructed models are exploited to identify the tool’s current health state, predict its RUL and the associated confidence bounds. The proposed method is applied on a benchmark of condition monitoring data gathered during several cuts of a CNC tool. Simulation results are obtained and discussed at the end of the paper.",
"title": ""
},
{
"docid": "6537921976c2779d1e7d921c939ec64d",
"text": "Stencil computation sweeps over a spatial grid over multiple time steps to perform nearest-neighbor computations. The bandwidth-to-compute requirement for a large class of stencil kernels is very high, and their performance is bound by the available memory bandwidth. Since memory bandwidth grows slower than compute, the performance of stencil kernels will not scale with increasing compute density. We present a novel 3.5D-blocking algorithm that performs 2.5D-spatial and temporal blocking of the input grid into on-chip memory for both CPUs and GPUs. The resultant algorithm is amenable to both thread- level and data-level parallelism, and scales near-linearly with the SIMD width and multiple-cores. Our performance numbers are faster or comparable to state-of-the-art-stencil implementations on CPUs and GPUs. Our implementation of 7-point-stencil is 1.5X-faster on CPUs, and 1.8X faster on GPUs for single- precision floating point inputs than previously reported numbers. For Lattice Boltzmann methods, the corresponding speedup number on CPUs is 2.1X.",
"title": ""
},
{
"docid": "262f1e965b311bf866ef5b924b6085a7",
"text": "By considering the amount of uncertainty perceived and the willingness to bear uncertainty concomitantly, we provide a more complete conceptual model of entrepreneurial action that allows for examination of entrepreneurial action at the individual level of analysis while remaining consistent with a rich legacy of system-level theories of the entrepreneur. Our model not only exposes limitations of existing theories of entrepreneurial action but also contributes to a deeper understanding of important conceptual issues, such as the nature of opportunity and the potential for philosophical reconciliation among entrepreneurship scholars.",
"title": ""
},
{
"docid": "3043eb8fbe54b5ce5f2767934a6e689e",
"text": "A 21-year-old man presented with an enlarged giant hemangioma on glans penis which also causes an erectile dysfunction (ED) that partially responded to the intracavernous injection stimulation test. Although the findings in magnetic resonance imaging (MRI) indicated a glandular hemangioma, penile colored Doppler ultrasound revealed an invaded cavernausal hemangioma to the glans. Surgical excision was avoided according to the broad extension of the gland lesion. Holmium laser coagulation was applied to the lesion due to the cosmetically concerns. However, the cosmetic results after holmium laser application was not impressive as expected without an improvement in intracavernous injection stimulation test. In conclusion, holmium laser application should not be used to the hemangiomas of glans penis related to the corpus cavernosum, but further studies are needed to reveal the effects of holmium laser application in small hemangiomas restricted to the glans penis.",
"title": ""
},
{
"docid": "d23b1cdbf4e8984eb5ae373318d94431",
"text": "Search engines have greatly influenced the way people access information on the Internet, as such engines provide the preferred entry point to billions of pages on the Web. Therefore, highly ranked Web pages generally have higher visibility to people and pushing the ranking higher has become the top priority for Web masters. As a matter of fact, Search Engine Optimization (SEO) has became a sizeable business that attempts to improve their clients’ ranking. Still, the lack of ways to validate SEO’s methods has created numerous myths and fallacies associated with ranking algorithms.\n In this article, we focus on two ranking algorithms, Google’s and Bing’s, and design, implement, and evaluate a ranking system to systematically validate assumptions others have made about these popular ranking algorithms. We demonstrate that linear learning models, coupled with a recursive partitioning ranking scheme, are capable of predicting ranking results with high accuracy. As an example, we manage to correctly predict 7 out of the top 10 pages for 78% of evaluated keywords. Moreover, for content-only ranking, our system can correctly predict 9 or more pages out of the top 10 ones for 77% of search terms. We show how our ranking system can be used to reveal the relative importance of ranking features in a search engine’s ranking function, provide guidelines for SEOs and Web masters to optimize their Web pages, validate or disprove new ranking features, and evaluate search engine ranking results for possible ranking bias.",
"title": ""
},
{
"docid": "f27d9dc7222674851bed60e5c7ebeffe",
"text": "The paper contains information about the function and basic properties of the actuator based on pneumatic artificial muscles. It describes the design method of control structure of such actuator and shows the nonlinear static and dynamic characteristics of this actuator. The step responses and the non-linear static and dynamic characteristics were measured by authors on the real pneumatic actuator with artificial muscles Festo MAS 20-250.",
"title": ""
},
{
"docid": "ab168b9599975ee3fe41aa72df6cda0a",
"text": "BACKGROUND\nThe United Kingdom has had a significant increase in addiction to and use of cocaine among 16-29-year olds from 6% in 1998 to 10% in 2000. In 2000, the United Kingdom had the highest recorded consumption of \"recent use\" cocaine in Europe, with 3.3% of young adults. Acupuncture is quick, inexpensive, and relatively safe, and may establish itself as an important addiction service in the future.\n\n\nAIM\nTo select investigations that meet the inclusion criteria and critically appraise them in order to answer the question: \"Is acupuncture effective in the treatment of cocaine addiction?\" The focus shall then be directed toward the use of the National Acupuncture Detoxification Association (NADA) protocol as the intervention and the selection of sham points for the control group.\n\n\nDATA SOURCES\nThe ARRC database was accessed from Trina Ward (M. Phil. student) at Thames Valley University. AMED, MEDLINE and Embase were also accessed along with \"hand\" searching methods at the British Library.\n\n\nINCLUSION AND EXCLUSION CRITERIA\nPeople addicted to either cocaine or crack cocaine as their main addiction, needle-acupuncture, single-double-blinded process, randomized subjects, a reference group incorporating a form of sham points.\n\n\nEXCLUSION CRITERIA\nuse of moxibustion, laser acupuncture, transcutaneous electrical nerve stimulation (TENS) electroacupuncture or conditions that did not meet the inclusion criteria.\n\n\nQUALITY ASSESSMENT\nThe criteria set by ter Riet, Kleijnen and Knipschild (in 1990); Hammerschlag and Morris (in 1990); Koes, Bouter and van der Heijden (in 1995), were modified into one set of criteria consisting of 27 different values.\n\n\nRESULTS\nSix randomized controlled trials (RCTs) met the inclusion criteria and were included in this review. All studies scored over 60 points indicating a relatively adequate methodology quality. The mean was 75 and the standard deviation was 6.80. A linear regression analysis did not yield a statistically significant association (n = 6, p = 0.11).\n\n\nCONCLUSIONS\nThis review could not confirm that acupuncture was an effective treatment for cocaine abuse. The NADA protocol of five treatment points still offers the acupuncturist the best possible combination of acupuncture points based upon Traditional Chinese Medicine. Throughout all the clinical trials reviewed, no side-effects of acupuncture were noted. This paper calls for the full set of 5 treatment points as laid out by the NADA to be included as the treatment intervention. Points on the helix, other than the liver yang points, should be selected as sham points for the control group.",
"title": ""
},
{
"docid": "3f79f0eee8878fd43187e9d48531a221",
"text": "In this paper, the design and development of a portable classroom attendance system based on fingerprint biometric is presented. Among the salient aims of implementing a biometric feature into a portable attendance system is security and portability. The circuit of this device is strategically constructed to have an independent source of energy to be operated, as well as its miniature design which made it more efficient in term of its portable capability. Rather than recording the attendance in writing or queuing in front of class equipped with fixed fingerprint or smart card reader. This paper introduces a portable fingerprint based biometric attendance system which addresses the weaknesses of the existing paper based attendance method or long time queuing. In addition, our biometric fingerprint based system is encrypted which preserves data integrity.",
"title": ""
}
] |
scidocsrr
|
2e3abfffd4e3ba10e1dc45f421dfe79a
|
Deep Multi-Output Forecasting: Learning to Accurately Predict Blood Glucose Trajectories
|
[
{
"docid": "b0bd9a0b3e1af93a9ede23674dd74847",
"text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.",
"title": ""
},
{
"docid": "97006c15d2158da060d8aa6caf64a14d",
"text": "A nonlinear model predictive controller has been developed to maintain normoglycemia in subjects with type 1 diabetes during fasting conditions such as during overnight fast. The controller employs a compartment model, which represents the glucoregulatory system and includes submodels representing absorption of subcutaneously administered short-acting insulin Lispro and gut absorption. The controller uses Bayesian parameter estimation to determine time-varying model parameters. Moving target trajectory facilitates slow, controlled normalization of elevated glucose levels and faster normalization of low glucose values. The predictive capabilities of the model have been evaluated using data from 15 clinical experiments in subjects with type 1 diabetes. The experiments employed intravenous glucose sampling (every 15 min) and subcutaneous infusion of insulin Lispro by insulin pump (modified also every 15 min). The model gave glucose predictions with a mean square error proportionally related to the prediction horizon with the value of 0.2 mmol L(-1) per 15 min. The assessment of clinical utility of model-based glucose predictions using Clarke error grid analysis gave 95% of values in zone A and the remaining 5% of values in zone B for glucose predictions up to 60 min (n = 1674). In conclusion, adaptive nonlinear model predictive control is promising for the control of glucose concentration during fasting conditions in subjects with type 1 diabetes.",
"title": ""
}
] |
[
{
"docid": "d763198d3bfb1d30b153e13245c90c08",
"text": "Inspired by the aerial maneuvering ability of lizards, we present the design and control of MSU (Michigan State University) tailbot - a miniature-tailed jumping robot. The robot can not only wheel on the ground, but also jump up to overcome obstacles. Moreover, once leaping into the air, it can control its body angle using an active tail to dynamically maneuver in midair for safe landings. We derive the midair dynamics equation and design controllers, such as a sliding mode controller, to stabilize the body at desired angles. To the best of our knowledge, this is the first miniature (maximum size 7.5 cm) and lightweight (26.5 g) robot that can wheel on the ground, jump to overcome obstacles, and maneuver in midair. Furthermore, tailbot is equipped with on-board energy, sensing, control, and wireless communication capabilities, enabling tetherless or autonomous operations. The robot in this paper exemplifies the integration of mechanical design, embedded system, and advanced control methods that will inspire the next-generation agile robots mimicking their biological counterparts. Moreover, it can serve as mobile sensor platforms for wireless sensor networks with many field applications.",
"title": ""
},
{
"docid": "e668a6b42058bc44925d073fd9ee0cdd",
"text": "Reducing the in-order delivery, or playback, delay of reliable transport layer protocols over error prone networks can significantly improve application layer performance. This is especially true for applications that have time sensitive constraints such as streaming services. We explore the benefits of a coded generalization of selective repeat ARQ for minimizing the in-order delivery delay. An analysis of the delay's first two moments is provided so that we can determine when and how much redundancy should be added to meet a user's requirements. Numerical results help show the gains over selective repeat ARQ, as well as the trade-offs between meeting the user's delay constraints and the costs inflicted on the achievable rate. Finally, the analysis is compared with experimental results to help illustrate how our work can be used to help inform system decisions.",
"title": ""
},
{
"docid": "3155b09ca1e44aa4fee2bb58ebb1fa35",
"text": "In this paper, we present a novel approach for identifying argumentative discourse structures in persuasive essays. The structure of argumentation consists of several components (i.e. claims and premises) that are connected with argumentative relations. We consider this task in two consecutive steps. First, we identify the components of arguments using multiclass classification. Second, we classify a pair of argument components as either support or non-support for identifying the structure of argumentative discourse. For both tasks, we evaluate several classifiers and propose novel feature sets including structural, lexical, syntactic and contextual features. In our experiments, we obtain a macro F1-score of 0.726 for identifying argument components and 0.722 for argumentative relations.",
"title": ""
},
{
"docid": "9dbb1b0b6a35bd78b35982a4957cdec4",
"text": "Many modern Web-services ignore existing Web-standards and develop their own interfaces to publish their services. This reduces interoperability and increases network latency, which in turn reduces scalability of the service. The Web grew from a few thousand requests per day to million requests per hour without significant loss of performance. Applying the same architecture underlying the modern Web to Web-services could improve existing and forthcoming applications. REST is the idealized model of the interactions within an Web-application and became the foundation of the modern Web-architecture, it has been designed to meet the needs of Internet-scale distributed hypermedia systems by emphasizing scalability, generality of interfaces, independent deployment and allowing intermediary components to reduce network latency.",
"title": ""
},
{
"docid": "1ca5d4ba5591dbc2c6c2044c19be2ffb",
"text": "Distractor generation is a crucial step for fill-in-the-blank question generation. We propose a generative model learned from training generative adversarial nets (GANs) to create useful distractors. Our method utilizes only context information and does not use the correct answer, which is completely different from previous Ontology-based or similarity-based approaches. Trained on the Wikipedia corpus, the proposed model is able to predict Wiki entities as distractors. Our method is evaluated on two biology question datasets collected from Wikipedia and actual college-level exams. Experimental results show that our context-based method achieves comparable performance to a frequently used word2vec-based method for the Wiki dataset. In addition, we propose a second-stage learner to combine the strengths of the two methods, which further improves the performance on both datasets, with 51.7% and 48.4% of generated distractors being acceptable.",
"title": ""
},
{
"docid": "57f8b44836e1f20528f6f7874369447f",
"text": "In semiarid regions, thousands of small reservoirs provide the rural population with water, but their storage volumes and hydrological impact are largely unknown. This paper analyzes the suitability of weather-independent radar satellite images for monitoring small reservoir surfaces. The surface areas of three reservoirs were extracted from 21 of 22 ENVISAT Advanced Synthetic Aperture Radar scenes, acquired bimonthly from June 2005 to August 2006. The reservoir surface areas were determined with a quasi-manual classification approach, as stringent classification rules often failed due to the spatial and temporal variability of the backscatter from the water. The land-water contrast is critical for the detection of water bodies. Additionally, wind has a significant impact on the classification results and affects the water surface and the backscattered radar signal (Bragg scattering) above a wind speed threshold of 2.6 mmiddots-1. The analysis of 15 months of wind speed data shows that, on 96% of the days, wind speeds were below the Bragg scattering criterion at the time of night time acquisitions, as opposed to 50% during the morning acquisition time. Night time acquisitions are strongly advisable over day time acquisitions due to lower wind interference. Over the year, radar images are most affected by wind during the onset of the rainy season (May and June). We conclude that radar and optical systems are complimentary. Radar is suitable during the rainy season but is affected by wind and lack of vegetation context during the dry season.",
"title": ""
},
{
"docid": "45e161a82768091ce2c0a641f277297b",
"text": "Industrial Control System (ICS) is used to monitor and control critical infrastructures. Programmable logic controllers (PLCs) are major components of ICS, which are used to form automation system. It is important to protect PLCs from any attacks and undesired incidents. However, it is not easy to apply traditional tools and techniques to PLCs for security protection and forensics because of its unique architectures. Semi-supervised machine learning algorithm, One-class Support Vector Machine (OCSVM), has been applied successfully to many anomaly detection problems. This paper proposes a novel methodology to detect anomalous events of PLC by using OCSVM. The methodology was applied to a simulated traffic light control system to illustrate its effectiveness and accuracy. Our results show that high accuracy of identification of anomalous PLC operations is obtained which can help investigators to perform PLC forensics efficiently and effectively.",
"title": ""
},
{
"docid": "7165a1158efb3d6c9298ffef13c6f0e8",
"text": "Virtualization of operating systems provides a common way to run different services in the cloud. Recently, the lightweight virtualization technologies claim to offer superior performance. In this paper, we present a detailed performance comparison of traditional hypervisor based virtualization and new lightweight solutions. In our measurements, we use several benchmarks tools in order to understand the strengths, weaknesses, and anomalies introduced by these different platforms in terms of processing, storage, memory and network. Our results show that containers achieve generally better performance when compared with traditional virtual machines and other recent solutions. Albeit containers offer clearly more dense deployment of virtual machines, the performance difference with other technologies is in many cases relatively small.",
"title": ""
},
{
"docid": "54762ce485c2db5398934621ba62c33d",
"text": "There are many tools that help programmers find code fragments, but most are inexpressive and rely on static information. We present a new technique for synthesizing code that is dynamic (giving accurate results and allowing programmers to reason about concrete executions), easy-to-use (supporting a wide range of correctness specifications), and interactive (allowing users to refine the candidate code snippets). Our implementation, which we call CodeHint, generates and evaluates code at runtime and hence can synthesize real-world Java code that involves I/O, reflection, native calls, and other advanced language features. We have evaluated CodeHint in two user studies and show that its algorithms are efficient and that it improves programmer productivity by more than a factor of two.",
"title": ""
},
{
"docid": "ce48548c0004b074b18f95792f3e6ce8",
"text": "In this paper, we study domain adaptation with a state-of-the-art hierarchical neural network for document-level sentiment classification. We first design a new auxiliary task based on sentiment scores of domain-independent words. We then propose two neural network architectures to respectively induce document embeddings and sentence embeddings that work well for different domains. When these document and sentence embeddings are used for sentiment classification, we find that with both pseudo and external sentiment lexicons, our proposed methods can perform similarly to or better than several highly competitive domain adaptation methods on a benchmark dataset of product reviews.",
"title": ""
},
{
"docid": "19acb49d484c0a5d949e2f7813253759",
"text": "In this paper we present a PDR (Pedestrian Dead Reckoning) system with a phone location awareness algorithm. PDR is a device which provides position information of the pedestrian. In general, the step length is estimated using a linear combination of the walking frequency and the acceleration variance for the mobile phone. It means that the step length estimation accuracy is affected by coefficients of the walking frequency and the acceleration variance which are called step length estimation parameters. Developed PDR is assumed that it is embedded in the mobile phone. Thus, parameters can be different from each phone location such as hand with swing motion, hand without any motion and pants pocket. It means that different parameters can degrade the accuracy of the step length estimation. Step length estimation result can be improved when appropriate parameters which are determined by phone location awareness algorithm are used. In this paper, the phone location awareness algorithm for PDR is proposed.",
"title": ""
},
{
"docid": "a8699e1ed8391e5a55fbd79ae3ac0972",
"text": "The benefits of an e-learning system will not be maximized unless learners use the system. This study proposed and tested alternative models that seek to explain student intention to use an e-learning system when the system is used as a supplementary learning tool within a traditional class or a stand-alone distance education method. The models integrated determinants from the well-established technology acceptance model as well as system and participant characteristics cited in the research literature. Following a demonstration and use phase of the e-learning system, data were collected from 259 college students. Structural equation modeling provided better support for a model that hypothesized stronger effects of system characteristics on e-learning system use. Implications for both researchers and practitioners are discussed. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8addf385803074288c1a07df92ed1b9f",
"text": "In a permanent magnet synchronous motor where inductances vary as a function of rotor angle, the 2 phase (d-q) equivalent circuit model is commonly used for simplicity and intuition. In this article, a two phase model for a PM synchronous motor is derived and the properties of the circuits and variables are discussed in relation to the physical 3 phase entities. Moreover, the paper suggests methods of obtaining complete model parameters from simple laboratory tests. Due to the lack of developed procedures in the past, obtaining model parameters were very difficult and uncertain, because some model parameters are not directly measurable and vary depending on the operating conditions. Formulation is mainly for interior permanent magnet synchronous motors but can also be applied to surface permanent magnet motors.",
"title": ""
},
{
"docid": "bdfb0ec2182434dad32049fa04f8c795",
"text": "This paper introduces a vision-based gesture mouse system, which is roughly independent from the lighting conditions, because it only uses the depth data for hand sign recognition. A Kinect sensor was used to develop the system, but other depth sensing cameras are adequate as well, if their resolutions are similar or better than the resolution of Kinect sensor. Our aim was to find a comfortable, user-friendly solution, which can be used for a long time without getting tired. The implementation of the system was developed in C++, and two types of test were performed too. We investigated how fast the user can position with the cursor and click on objects and we also examined which controls of the graphical user interfaces (GUI) are easy to use and which ones are difficult to use with our gesture mouse. Our system is precise enough to use efficiently most of the elements of traditional GUI such as buttons, icons, scrollbars, etc. The accuracy achieved by advanced users is only slightly below as if they used the traditional mouse.",
"title": ""
},
{
"docid": "3ced47ece49eeec3edc5d720df9bb864",
"text": "Complex space systems typically provide the operator a means to understand the current state of system components. The operator often has to manually determine whether the system is able to perform a given set of high level objectives based on this information. The operations team needs a way for the system to quantify its capability to successfully complete a mission objective and convey that information in a clear, concise way. A mission-level space cyber situational awareness tool suite integrates the data into a complete picture to display the current state of the mission. The Johns Hopkins University Applied Physics Laboratory developed the Spyder tool suite for such a purpose. The Spyder space cyber situation awareness tool suite allows operators to understand the current state of their systems, allows them to determine whether their mission objectives can be completed given the current state, and provides insight into any anomalies in the system. Spacecraft telemetry, spacecraft position, ground system data, ground computer hardware, ground computer software processes, network connections, and network data flows are all combined into a system model service that serves the data to various display tools. Spyder monitors network connections, port scanning, and data exfiltration to determine if there is a cyber attack. The Spyder Tool Suite provides multiple ways of understanding what is going on in a system. Operators can see the logical and physical relationships between system components to better understand interdependencies and drill down to see exactly where problems are occurring. They can quickly determine the state of mission-level capabilities. The space system network can be analyzed to find unexpected traffic. Spyder bridges the gap between infrastructure and mission and provides situational awareness at the mission level.",
"title": ""
},
{
"docid": "23bc28928a00ba437660efcb1d93c1a8",
"text": "Mental disorders occur in people in all countries, societies and in all ethnic groups, regardless socio-economic order with more frequent anxiety disorders. Through the process of time many treatment have been applied in order to address this complex mental issue. People with anxiety disorders can benefit from a variety of treatments and services. Following an accurate diagnosis, possible treatments include psychological treatments and mediation. Complementary and alternative medicine (CAM) plays a significant role in health care systems. Patients with chronic pain conditions, including arthritis, chronic neck and backache, headache, digestive problems and mental health conditions (including insomnia, depression, and anxiety) were high users of CAM therapies. Aromatherapy is a holistic method of treatment, using essential oils. There are several essential oils that can help in reducing anxiety disorders and as a result the embodied events that they may cause.",
"title": ""
},
{
"docid": "ccab9e95d4a0ad133c7c0f7e28b2c6f4",
"text": "Endoscopic abdominoplasty is feasible, safe, and effective in the proper surgical candidate. Excellent results can be expected when proper patient selection criteria are followed. With future refinements in technique and equipment, this procedure may be extended safely to those patients with more severe deformities.",
"title": ""
},
{
"docid": "dd170ec01ee5b969605dace70e283664",
"text": "This work discusses the regulation of the ball and plate system, the problemis to design a control laws which generates a voltage u for the servomotors to move the ball from the actual position to a desired one. The controllers are constructed by introducing nonlinear compensation terms into the traditional PD controller. In this paper, a complete physical system and controller design is explored from conception to modeling to testing and implementation. The stability of the control is presented. Experiment results are obtained via our prototype of the ball and plate system.",
"title": ""
},
{
"docid": "d71f2693331ecef85af77c122ee47496",
"text": "Deep Learning is a new area of Machine Learning research, which mainly addresses the problem of time consuming, often incomplete feature engineering in machine learning. Recursive Neural Network (RNN) is a new deep learning architecture that has been highly successful in several Natural Language Processing tasks. We propose a new approach for relation classification, using an RNN, based on the shortest path between two entities in the dependency graph. Most previous works on RNN are based on constituency-based parsing because phrasal nodes in a parse tree can capture compositionality in a sentence. Compared with constituency-based parse trees, dependency graphs can represent the relation more compactly. This is particularly important in sentences with distant entities, where the parse tree spans words that are not relevant to the relation. In such cases RNN cannot be trained effectively in a timely manner. On the other hand, dependency graphs lack phrasal nodes that complicates the application of RNN. In order to tackle this problem, we employ dependency constituent units called chains. Further, we devise two methods to incorporate chains into an RNN. The first model uses a fixed tree structure based on a heuristic, while the second one predicts the structure by means of a recursive autoencoder. Chain based RNN provides a smaller network which performs considerably faster, and achieves better classification results. Experiments on SemEval 2010 relation classification task and SemEval 2013 drug drug interaction task demonstrate the effectiveness of our approach compared with the state-of-the-art models.",
"title": ""
},
{
"docid": "e8a36f2eeae3cdd1bf2d83680aa9f82f",
"text": "We conducted a study to track the emotions, their behavioral correlates, and relationship with performance when novice programmers learned the basics of computer programming in the Python language. Twenty-nine participants without prior programming experience completed the study, which consisted of a 25 minute scaffolding phase (with explanations and hints) and a 15 minute fadeout phase (no explanations or hints) with a computerized learning environment. Emotional states were tracked via retrospective self-reports in which learners viewed videos of their faces and computer screens recorded during the learning session and made judgments about their emotions at approximately 100 points. The results indicated that flow/engaged (23%), confusion (22%), frustration (14%), and boredom (12%) were the major emotions students experienced, while curiosity, happiness, anxiety, surprise, anger, disgust, fear, and sadness were comparatively rare. The emotions varied as a function of instructional scaffolds and were systematically linked to different student behaviors (idling, constructing code, running code). Boredom, flow/engaged, and confusion were also correlated with performance outcomes. Implications of our findings for affect-sensitive learning interventions are discussed.",
"title": ""
}
] |
scidocsrr
|
dace90df6569b766fa23715ba20a2977
|
Sudden death and gradual decay in visual working memory.
|
[
{
"docid": "64d9f6973697749b6e2fa330101cbc77",
"text": "Evidence is presented that recognition judgments are based on an assessment of familiarity, as is described by signal detection theory, but that a separate recollection process also contributes to performance. In 3 receiver-operating characteristics (ROC) experiments, the process dissociation procedure was used to examine the contribution of these processes to recognition memory. In Experiments 1 and 2, reducing the length of the study list increased the intercept (d') but decreased the slope of the ROC and increased the probability of recollection but left familiarity relatively unaffected. In Experiment 3, increasing study time increased the intercept but left the slope of the ROC unaffected and increased both recollection and familiarity. In all 3 experiments, judgments based on familiarity produced a symmetrical ROC (slope = 1), but recollection introduced a skew such that the slope of the ROC decreased.",
"title": ""
}
] |
[
{
"docid": "3c4219212dfeb01d2092d165be0cfb44",
"text": "Classical substrate noise analysis considers the silicon resistivity of an integrated circuit only as doping dependent besides neglecting diffusion currents as well. In power circuits minority carriers are injected into the substrate and propagate by drift–diffusion. In this case the conductivity of the substrate is spatially modulated and this effect is particularly important in high injection regime. In this work a description of the coupling between majority and minority drift–diffusion currents is presented. A distributed model of the substrate is then proposed to take into account the conductivity modulation and its feedback on diffusion processes. The model is expressed in terms of equivalent circuits in order to be fully compatible with circuit simulators. The simulation results are then discussed for diodes and bipolar transistors and compared to the ones obtained from physical device simulations and measurements. 2014 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "fc62e84fc995deb1932b12821dfc0ada",
"text": "As these paired Commentaries discuss, neuroscientists and architects are just beginning to collaborate, each bringing what they know about their respective fields to the task of improving the environment of research buildings and laboratories.",
"title": ""
},
{
"docid": "82a40130bc83a2456c8368fa9275c708",
"text": "This paper presents a novel strategy for using ant colony optimization (ACO) to evolve the structure of deep recurrent neural networks. While versions of ACO for continuous parameter optimization have been previously used to train the weights of neural networks, to the authors’ knowledge they have not been used to actually design neural networks. The strategy presented is used to evolve deep neural networks with up to 5 hidden and 5 recurrent layers for the challenging task of predicting general aviation flight data, and is shown to provide improvements of 63 % for airspeed, a 97 % for altitude and 120 % for pitch over previously best published results, while at the same time not requiring additional input neurons for residual values. The strategy presented also has many benefits for neuro evolution, including the fact that it is easily parallizable and scalable, and can operate using any method for training neural networks. Further, the networks it evolves can typically be trained in fewer iterations than fully connected networks.",
"title": ""
},
{
"docid": "9cc299e5b86ba95372351ef31e567b31",
"text": "449 www.erpublication.org Abstract—Brain tumor is an uncontrolled growth of tissues in human brain. This tumor, when turns in to cancer becomes life-threatening. For images of human brain different techniques are used to capture image. These techniques involve X-Ray, Computer Tomography (CT) and Magnetic Resonance imaging MRI. For diagnosis, MRI is used to distinguish pathologic tissue from normal tissue, especially for brain related disorders and has more advantages over other techniques. The fundamental aspect that makes segmentation of medical images difficult is the complexity and variability of the anatomy that is being imaged. It may not be possible to locate certain structures without detailed anatomical knowledge. In this paper, a method to extract the brain tumor from the MRI image using clustering and watershed segmentation is proposed. The proposed method combines K-means clustering and watershed segmentation after applying some morphological operations for better results. The major advantage of watershed segmentation is that it is able to construct a complete division of the image but the disadvantages are over segmentation and sensitivity which was overcome by using K-means clustering to produce a primary segmentation of the image.",
"title": ""
},
{
"docid": "efd566ac16ce096fe44fb89147d6976c",
"text": "Advances of sensor and RFID technology provide significant new power for humans to sense, understand and manage the world. RFID provides fast data collection with precise identification of objects with unique IDs without line of sight, thus it can be used for identifying, locating, tracking and monitoring physical objects. Despite these benefits, RFID poses many challenges for data processing and management: i) RFID observations contain duplicates, which have to be filtered; ii) RFID observations have implicit meanings, which have to be transformed and aggregated into semantic data represented in their data models; and iii) RFID data are temporal, streaming, and in high volume, and have to be processed on the fly. Thus, a general RFID data processing framework is needed to automate the transformation of physical RFID observations into the virtual counterparts in the virtual world linked to business applications. In this paper, we take an event-oriented approach to process RFID data, by devising RFID application logic into complex events. We then formalize the specification and semantics of RFID events and rules. We demonstrate that traditional ECA event engine cannot be used to support highly temporally constrained RFID events, and develop an RFID event detection engine that can effectively process complex RFID events. The declarative event-based approach greatly simplifies the work of RFID data processing, and significantly reduces the cost of RFID data integration.",
"title": ""
},
{
"docid": "2f310c62ada7e2f7696b61a8ee0f74a3",
"text": "[This paper is the third revised version (2013). It was originally presented in a philosophical conference in Athens, Greece on 6 June 2006, Athens Institute of Education and Research. It was first published as Chapter 28 in The philosophical landscape. Third edition. Edited by Rolando M. Gripaldo. Manila: Philippine National Philosophical Research Society, 2007. Other editions appeared in Philosophia: International Journal of Philosophy 36/8 (1): January 2007 and in The making of a Filipino philosopher and other essays. [A collection of Gripaldo’s essays.] Chapter 2. Mandaluyong City: National Book Store, 2009.]",
"title": ""
},
{
"docid": "5510f5e1bcf352e3219097143200531f",
"text": "Research aimed at correcting words in text has focused on three progressively more difficult problems:(1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent work correction. In response to the first problem, efficient pattern-matching and n-gram analysis techniques have been developed for detecting strings that do not appear in a given word list. In response to the second problem, a variety of general and application-specific spelling correction techniques have been developed. Some of them were based on detailed studies of spelling error patterns. In response to the third problem, a few experiments using natural-language-processing tools or statistical-language models have been carried out. This article surveys documented findings on spelling error patterns, provides descriptions of various nonword detection and isolated-word error correction techniques, reviews the state of the art of context-dependent word correction techniques, and discusses research issues related to all three areas of automatic error correction in text.",
"title": ""
},
{
"docid": "cc9e677fb714cfbab793a38b6c7c174e",
"text": "Singapore has embarked a project to build a remote monitoring system for an existing micro-grid located at Ubin Island. This paper introduces the design of the monitoring system. In this monitoring system, the module level of PV and cell level of battery conditions will be monitored. All the measurements within the micro-grid will be synchronized by using GPS modules for better understanding the micro-grid. And the resolution of the measurements can be as high as 10 samples per cycle. Because of the micro-grid is an isolated system, a remote monitoring operation center will be built for the users to monitor the micro-grid.",
"title": ""
},
{
"docid": "48485e967c5aa345a53b91b47cc0e6d0",
"text": "The buccinator musculomucosal flaps are actually considered the main reconstructive option for small-moderate defects of the oral mucosa. In this paper we present our experience with the posteriorly based buccinator musculomucosal flap. A retrospective review was performed of all patients who had had a Bozola flap reconstruction at the Operative Unit of Maxillo-Facial Surgery of Parma, Italy, between 2003 and 2010. The Bozola flap was used in 19 patients. In most cases they had defects of the palate (n=12). All flaps were harvested successfully and no major complications occurred. Minor complications were observed in two cases. At the end of the follow up all patients returned to a normal diet without alterations of speech and swallowing. We consider the Bozola flap the first choice for the reconstruction of defects involving the palate, the cheek and the postero-lateral tongue and floor of the mouth.",
"title": ""
},
{
"docid": "280385abc0aa67490ebbc7b5634c63fa",
"text": "We use words to communicate about things and kinds of things, their properties, relations and actions. Researchers are now creating robotic and simulated systems that ground language in machine perception and action, mirroring human abilities. A new kind of computational model is emerging from this work that bridges the symbolic realm of language with the physical realm of real-world referents. It explains aspects of context-dependent shifts of word meaning that cannot easily be explained by purely symbolic models. An exciting implication for cognitive modeling is the use of grounded systems to 'step into the shoes' of humans by directly processing first-person-perspective sensory data, providing a new methodology for testing various hypotheses of situated communication and learning.",
"title": ""
},
{
"docid": "62766b08b1666085543b732cf839dec0",
"text": "The research area of evolutionary multiobjective optimization (EMO) is reaching better understandings of the properties and capabilities of EMO algorithms, and accumulating much evidence of their worth in practical scenarios. An urgent emerging issue is that the favoured EMO algorithms scale poorly when problems have \"many\" (e.g. five or more) objectives. One of the chief reasons for this is believed to be that, in many-objective EMO search, populations are likely to be largely composed of nondominated solutions. In turn, this means that the commonly-used algorithms cannot distinguish between these for selective purposes. However, there are methods that can be used validly to rank points in a nondominated set, and may therefore usefully underpin selection in EMO search. Here we discuss and compare several such methods. Our main finding is that simple variants of the often-overlooked \"Average Ranking\" strategy usually outperform other methods tested, covering problems with 5-20 objectives and differing amounts of inter-objective correlation.",
"title": ""
},
{
"docid": "171d9acd0e2cb86a02d5ff56d4515f0d",
"text": "We explore two solutions to the problem of mistranslating rare words in neural machine translation. First, we argue that the standard output layer, which computes the inner product of a vector representing the context with all possible output word embeddings, rewards frequent words disproportionately, and we propose to fix the norms of both vectors to a constant value. Second, we integrate a simple lexical module which is jointly trained with the rest of the model. We evaluate our approaches on eight language pairs with data sizes ranging from 100k to 8M words, and achieve improvements of up to +4.3 BLEU, surpassing phrasebased translation in nearly all settings.1",
"title": ""
},
{
"docid": "c6c65ca2af3b92ad162a5579dbe6dcdc",
"text": "This paper reviews research on automatic summarising over the last decade. This period has seen a rapid growth of work in the area stimulated by technology and by several system evaluation programmes. The review makes use of several frameworks to organise the review, for summarising, for systems, for the task factors affecting summarising, and for evaluation design and practice. The review considers the evaluation strategies that have been applied to summarising and the issues they raise, and the major summary evaluation programmes. It examines the input, purpose and output factors that have been investigated in summarising research in the last decade, and discusses the classes of strategy, both extractive and non-extractive, that have been explored, illustrating the range of systems that have been built. This analysis of strategies is amplified by accounts of specific exemplar systems. The conclusions drawn from the review are that automatic summarisation research has made valuable progress in the last decade, with some practically useful approaches, better evaluation, and more understanding of the task. However as the review also makes clear, summarising systems are often poorly motivated in relation to the factors affecting summaries, and evaluation needs to be taken significantly further so as to engage with the purposes for which summaries are intended and the contexts in which they are used. A reduced version of this report, entitled ‘Automatic summarising: the state of the art’ will appear in Information Processing and Management, 2007.",
"title": ""
},
{
"docid": "bfef5aaa8bbe366bc2a680675e9b2e82",
"text": "Traditional approaches to the study of cognition emphasize an information-processing view that has generally excluded emotion. In contrast, the recent emergence of cognitive neuroscience as an inspiration for understanding human cognition has highlighted its interaction with emotion. This review explores insights into the relations between emotion and cognition that have resulted from studies of the human amygdala. Five topics are explored: emotional learning, emotion and memory, emotion's influence on attention and perception, processing emotion in social stimuli, and changing emotional responses. Investigations into the neural systems underlying human behavior demonstrate that the mechanisms of emotion and cognition are intertwined from early perception to reasoning. These findings suggest that the classic division between the study of emotion and cognition may be unrealistic and that an understanding of human cognition requires the consideration of emotion.",
"title": ""
},
{
"docid": "cd5a7ee450dbf6ec8f99ee7e5efc8c04",
"text": "This paper addresses the problem of coordinating multiple spacecraft to fly in tightly controlled formations. The main contribution of the paper is to introduce a coordination architecture that subsumes leader-following, behavioral, and virtual-structure approaches to the multiagent coordination problem. The architecture is illustrated through a detailed application of the ideas to the problem of synthesizing a multiple spacecraft interferometer in deep space.",
"title": ""
},
{
"docid": "aca770fa21637483c3ef0d028f8d3b64",
"text": "In the analysis of bibliometric networks, researchers often use mapping and clustering techniques in a combined fashion. Typically, however, mapping and clustering techniques that are used together rely on very different ideas and assumptions. We propose a unified approach to mapping and clustering of bibliometric networks. We show that the VOS mapping technique and a weighted and parameterized variant of modularity-based clustering can both be derived from the same underlying principle. We illustrate our proposed approach by producing a combined mapping and clustering of the most frequently cited publications that appeared in the field of information science in the period 1999–2008.",
"title": ""
},
{
"docid": "27fff4fe7d8c40eb0518639eb176dba9",
"text": "This paper presents a hybrid AC/DC micro grid concept to directly integrate DC/AC renewable sources and loads to DC/AC links respectively. The hybrid grid eliminates multiple DC-AC-DC&AC-DC-AC conversions in an individual AC&DC grid. The hybrid grid increases system efficiency, eliminates the embedded AC/DC and DC/DC converters in various home, office and industry facilities which can reduce size and cost of those facilities. The basic architecture of the hybrid grid is introduced in this paper. Different operation modes of the hybrid grid are discussed. The various control algorithms are investigated and proposed to harness the maximum power from various renewable sources, to store energy surplus during low peak loads, to eliminate unbalance problem in AC link, to maintain voltage stability and smooth power transfer between AC and DC links under various generation and load conditions. A prototype of the hybrid grid under construction is presented. Some simulation and test results are presented.",
"title": ""
},
{
"docid": "9b1edc2fbbf8c6ec584708be0dd25327",
"text": "To date, a large number of algorithms to solve the problem of autonomous exploration and mapping has been presented. However, few efforts have been made to compare these techniques. In this paper, an extensive study of the most important methods for autonomous exploration and mapping of unknown environments is presented. Furthermore, a representative subset of these techniques has been chosen to be analysed. This subset contains methods that differ in the level of multi-robot coordination and in the grade of integration with the simultaneous localization and mapping (SLAM) algorithm. These exploration techniques were tested in simulation and compared using different criteria as exploration time or map quality. The results of this analysis are shown in this paper. The weaknesses and strengths of each strategy have been stated and the most appropriate algorithm for each application has been determined.",
"title": ""
},
{
"docid": "88d1e600c4bdf1aa3ee19eecea885536",
"text": "The impact of victim resistance on rape completion and injury was examined utilizing a large probability sample of sexual assault incidents, derived from the National Crime Victimization Survey (1992-2002), and taking into account whether harm to the victim followed or preceded self-protection (SP) actions. Additional injuries besides rape, particularly serious injuries, following victim resistance are rare. Results indicate that most SP actions, both forceful and nonforceful, reduce the risk of rape completion, and do not significantly affect the risk of additional injury.",
"title": ""
},
{
"docid": "4fd421bbe92b40e85ffd66cf0084b1b8",
"text": "Real-time performance of adaptive digital signal processing algorithms is required in many applications but it often means a high computational load for many conventional processors. In this paper, we present a configurable hardware architecture for adaptive processing of noisy signals for target detection based on Constant False Alarm Rate (CFAR) algorithms. The architecture has been designed to deal with parallel/pipeline processing and to be configured for three version of CFAR algorithms, the Cell-Average, the Max and the Min CFAR. The proposed architecture has been implemented on a Field Programmable Gate Array (FPGA) device providing good performance improvements over software implementations. FPGA implementation results are presented and discussed.",
"title": ""
}
] |
scidocsrr
|
1d30f7381f8928527f017b85057db2bf
|
Feature Detector Using Adaptive Accelerated Segment Test
|
[
{
"docid": "e32f77e31a452ae6866652ce69c5faaa",
"text": "The efficient detection of interesting features is a crucial step for various tasks in Computer Vision. Corners are favored cues due to their two dimensional constraint and fast algorithms to detect them. Recently, a novel corner detection approach, FAST, has been presented which outperforms previous algorithms in both computational performance and repeatability. We will show how the accelerated segment test, which underlies FAST, can be significantly improved by making it more generic while increasing its performance. We do so by finding the optimal decision tree in an extended configuration space, and demonstrating how specialized trees can be combined to yield an adaptive and generic accelerated segment test. The resulting method provides high performance for arbitrary environments and so unlike FAST does not have to be adapted to a specific scene structure. We will also discuss how different test patterns affect the corner response of the accelerated segment test.",
"title": ""
},
{
"docid": "83ad3f9cce21b2f4c4f8993a3d418a44",
"text": "Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms which exhibit great performance under a variety of image transformations, with SURF in particular considered as the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK1, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.",
"title": ""
}
] |
[
{
"docid": "6fab2f7c340b6edbffe30b061bcd991e",
"text": "A Majority-Inverter Graph (MIG) is a recently introduced logic representation form whose algebraic and Boolean properties allow for efficient logic optimization. In particular, when considering logic depth reduction, MIG algorithms obtained significantly superior synthesis results as compared to the state-of-the-art approaches based on AND-inverter graphs and commercial tools. In this paper, we present a new MIG optimization algorithm targeting size minimization based on functional hashing. The proposed algorithm makes use of minimum MIG representations which are precomputed for functions up to 4 variables using an approach based on Satisfiability Modulo Theories (SMT). Experimental results show that heavily-optimized MIGs can be further minimized also in size, thanks to our proposed methodology. When using the optimized MIGs as starting point for technology mapping, we were able to improve both depth and area for the arithmetic instances of the EPFL benchmarks beyond the current results achievable by state-of-the-art logic synthesis algorithms.",
"title": ""
},
{
"docid": "e9189d7d310a8c0a45cc1c59be6fbb2d",
"text": "The technological evolution emerges a unified (Industrial) Internet of Things network, where loosely coupled smart manufacturing devices build smart manufacturing systems and enable comprehensive collaboration possibilities that increase the dynamic and volatility of their ecosystems. On the one hand, this evolution generates a huge field for exploitation, but on the other hand also increases complexity including new challenges and requirements demanding for new approaches in several issues. One challenge is the analysis of such systems that generate huge amounts of (continuously generated) data, potentially containing valuable information useful for several use cases, such as knowledge generation, key performance indicator (KPI) optimization, diagnosis, predication, feedback to design or decision support. This work presents a review of Big Data analysis in smart manufacturing systems. It includes the status quo in research, innovation and development, next challenges, and a comprehensive list of potential use cases and exploitation possibilities.",
"title": ""
},
{
"docid": "5d6cb50477423bf9fc1ea6c27ad0f1b9",
"text": "We propose a framework for general probabilistic multi-step time series regression. Specifically, we exploit the expressiveness and temporal nature of Sequence-to-Sequence Neural Networks (e.g. recurrent and convolutional structures), the nonparametric nature of Quantile Regression and the efficiency of Direct Multi-Horizon Forecasting. A new training scheme, forking-sequences, is designed for sequential nets to boost stability and performance. We show that the approach accommodates both temporal and static covariates, learning across multiple related series, shifting seasonality, future planned event spikes and coldstarts in real life large-scale forecasting. The performance of the framework is demonstrated in an application to predict the future demand of items sold on Amazon.com, and in a public probabilistic forecasting competition to predict electricity price and load.",
"title": ""
},
{
"docid": "c55c339eb53de3a385df7d831cb4f24b",
"text": "Massive Open Online Courses (MOOCs) have gained tremendous popularity in the last few years. Thanks to MOOCs, millions of learners from all over the world have taken thousands of high-quality courses for free. Putting together an excellent MOOC ecosystem is a multidisciplinary endeavour that requires contributions from many different fields. Artificial intelligence (AI) and data mining (DM) are two such fields that have played a significant role in making MOOCs what they are today. By exploiting the vast amount of data generated by learners engaging in MOOCs, DM improves our understanding of the MOOC ecosystem and enables MOOC practitioners to deliver better courses. Similarly, AI, supported by DM, can greatly improve student experience and learning outcomes. In this survey paper, we first review the state-of-the-art artificial intelligence and data mining research applied to MOOCs, emphasising the use of AI and DM tools and techniques to improve student engagement, learning outcomes, and our understanding of the MOOC ecosystem. We then offer an overview of key trends and important research to carry out in the fields of AI and DM so that MOOCs can reach their full potential.",
"title": ""
},
{
"docid": "f52dca1ec4b77059639f6faf7c79746a",
"text": "We present an automatic approach to tree annotation in which basic nonterminal symbols are alternately split and merged to maximize the likelihood of a training treebank. Starting with a simple Xbar grammar, we learn a new grammar whose nonterminals are subsymbols of the original nonterminals. In contrast with previous work, we are able to split various terminals to different degrees, as appropriate to the actual complexity in the data. Our grammars automatically learn the kinds of linguistic distinctions exhibited in previous work on manual tree annotation. On the other hand, our grammars are much more compact and substantially more accurate than previous work on automatic annotation. Despite its simplicity, our best grammar achieves an F1 of 90.2% on the Penn Treebank, higher than fully lexicalized systems.",
"title": ""
},
{
"docid": "dcfc6f3c1eba7238bd6c6aa18dcff6df",
"text": "With the evaluation and simulation of long-term evolution/4G cellular network and hot discussion about new technologies or network architecture for 5G, the appearance of simulation and evaluation guidelines for 5G is in urgent need. This paper analyzes the challenges of building a simulation platform for 5G considering the emerging new technologies and network architectures. Based on the overview of evaluation methodologies issued for 4G candidates, challenges in 5G evaluation are formulated. Additionally, a cloud-based two-level framework of system-level simulator is proposed to validate the candidate technologies and fulfill the promising technology performance identified for 5G.",
"title": ""
},
{
"docid": "026408a6ad888ea0bcf298a23ef77177",
"text": "The microwave power transmission is an approach for wireless power transmission. As an important component of a microwave wireless power transmission systems, microwave rectennas are widely studied. A rectenna based on a microstrip dipole antenna and a microwave rectifier with high conversion efficiency were designed at 2.45 GHz. The dipole antenna achieved a gain of 5.2 dBi, a return loss greater than 10 dB, and a bandwidth of 20%. The microwave to DC (MW-DC) conversion efficiency of the rectifier was measured as 83% with 20 dBm input power and 600 Ω load. There are 72 rectennas to form an array with an area of 50 cm by 50 cm. The measured results show that the arrangement of the rectenna connection is an effective way to improve the total conversion efficiency, when the microwave power distribution is not uniform on rectenna array. The experimental results show that the highest microwave power transmission efficiency reaches 67.6%.",
"title": ""
},
{
"docid": "1a65b9d35bce45abeefe66882dcf4448",
"text": "Data is nowadays an invaluable resource, indeed it guides all business decisions in most of the computer-aided human activities. Threats to data integrity are thus of paramount relevance, as tampering with data may maliciously affect crucial business decisions. This issue is especially true in cloud computing environments, where data owners cannot control fundamental data aspects, like the physical storage of data and the control of its accesses. Blockchain has recently emerged as a fascinating technology which, among others, provides compelling properties about data integrity. Using the blockchain to face data integrity threats seems to be a natural choice, but its current limitations of low throughput, high latency, and weak stability hinder the practical feasibility of any blockchain-based solutions. In this paper, by focusing on a case study from the European SUNFISH project, which concerns the design of a secure by-design cloud federation platform for the public sector, we precisely delineate the actual data integrity needs of cloud computing environments and the research questions to be tackled to adopt blockchain-based databases. First, we detail the open research questions and the difficulties inherent in addressing them. Then, we outline a preliminary design of an effective blockchain-based database for cloud computing environments.",
"title": ""
},
{
"docid": "5d15ba47aaa29f388328824fa592addc",
"text": "Breast cancer continues to be a significant public health problem in the world. The diagnosing mammography method is the most effective technology for early detection of the breast cancer. However, in some cases, it is difficult for radiologists to detect the typical diagnostic signs, such as masses and microcalcifications on the mammograms. This paper describes a new method for mammographic image enhancement and denoising based on wavelet transform and homomorphic filtering. The mammograms are acquired from the Faculty of Medicine of the University of Akdeniz and the University of Istanbul in Turkey. Firstly wavelet transform of the mammograms is obtained and the approximation coefficients are filtered by homomorphic filter. Then the detail coefficients of the wavelet associated with noise and edges are modeled by Gaussian and Laplacian variables, respectively. The considered coefficients are compressed and enhanced using these variables with a shrinkage function. Finally using a proposed adaptive thresholding the fine details of the mammograms are retained and the noise is suppressed. The preliminary results of our work indicate that this method provides much more visibility for the suspicious regions.",
"title": ""
},
{
"docid": "102e1718e03b3a4e96ee8c2212738792",
"text": "This paper introduces a new method for the rapid development of complex rule bases involving cue phrases for the purpose of classifying text segments. The method is based on Ripple-Down Rules, a knowledge acquisition method that proved very successful in practice for building medical expert systems and does not require a knowledge engineer. We implemented our system KAFTAN and demonstrate the applicability of our method to the task of classifying scientific citations. Building cue phrase rules in KAFTAN is easy and efficient. We demonstrate the effectiveness of our approach by presenting experimental results where our resulting classifier clearly outperforms previously built classifiers in the recent literature.",
"title": ""
},
{
"docid": "423f246065662358b1590e8f59a2cc55",
"text": "Caused by the rising interest in traffic surveillance for simulations and decision management many publications concentrate on automatic vehicle detection or tracking. Quantities and velocities of different car classes form the data basis for almost every traffic model. Especially during mass events or disasters a wide-area traffic monitoring on demand is needed which can only be provided by airborne systems. This means a massive amount of image information to be handled. In this paper we present a combination of vehicle detection and tracking which is adapted to the special restrictions given on image size and flow but nevertheless yields reliable information about the traffic situation. Combining a set of modified edge filters it is possible to detect cars of different sizes and orientations with minimum computing effort, if some a priori information about the street network is used. The found vehicles are tracked between two consecutive images by an algorithm using Singular Value Decomposition. Concerning their distance and correlation the features are assigned pairwise with respect to their global positioning among each other. Choosing only the best correlating assignments it is possible to compute reliable values for the average velocities.",
"title": ""
},
{
"docid": "84b9601738c4df376b42d6f0f6190f53",
"text": "Cloud Computing is one of the most important trend and newest area in the field of information technology in which resources (e.g. CPU and storage) can be leased and released by customers through the Internet in an on-demand basis. The adoption of Cloud Computing in Education and developing countries is real an opportunity. Although Cloud computing has gained popularity in Pakistan especially in education and industry, but its impact in Pakistan is still unexplored especially in Higher Education Department. Already published work investigated in respect of factors influencing on adoption of cloud computing but very few investigated said analysis in developing countries. The Higher Education Institutions (HEIs) of Punjab, Pakistan are still not focused to discover cloud adoption factors. In this study, we prepared cloud adoption model for Higher Education Institutions (HEIs) of Punjab, a survey was carried out from 900 students all over Punjab. The survey was designed based upon literature and after discussion and opinions of academicians. In this paper, 34 hypothesis were developed that affect the cloud computing adoption in HEIs and tested by using powerful statistical analysis tools i.e. SPSS and SmartPLS. Statistical findings shows that 84.44% of students voted in the favor of cloud computing adoption in their colleges, while 99% supported Reduce Cost as most important factor in cloud adoption.",
"title": ""
},
{
"docid": "f24c9f07945572ed467f397e4274060e",
"text": "Scholarly digital libraries have become an important source of bibliographic records for scientific communities. Author name search is one of the most common query exercised in digital libraries. The name ambiguity problem in the context of author search in digital libraries, arising from multiple authors sharing the same name, poses many challenges. A number of name disambiguation methods have been proposed in the literature so far. A variety of bibliographic attributes have been considered in these methods. However, hardly any effort has been made to assess the potential contribution of these attributes. We, for the first time, evaluate the potential strength and/or weaknesses of these attributes by a rigorous course of experiments on a large data set. We also explore the potential utility of some attributes from different perspective. A close look reveals that most of the earlier work require one or more attributes which are difficult to obtain in practical applications. Based on this empirical study, we identify three very common and easy to access attributes and propose a two-step hierarchical clustering technique to solve name ambiguity using these attributes only. Experimental results on data set extracted from a popular digital library show that the proposed method achieves significantly high level of accuracy (> 90%) for most of the instances.",
"title": ""
},
{
"docid": "279302300cbdca5f8d7470532928f9bd",
"text": "The problem of feature selection is a difficult combinatorial task in Machine Learning and of high practical relevance, e.g. in bioinformatics. Genetic Algorithms (GAs) of fer a natural way to solve this problem. In this paper we present a special Genetic Algorithm, which especially take s into account the existing bounds on the generalization erro r for Support Vector Machines (SVMs). This new approach is compared to the traditional method of performing crossvalidation and to other existing algorithms for feature selection.",
"title": ""
},
{
"docid": "0ce556418f6557d86c59f178a206cd11",
"text": "The efficiency of decision processes which can be divided into two stages has been measured for the whole process as well as for each stage independently by using the conventional data envelopment analysis (DEA) methodology in order to identify the causes of inefficiency. This paper modifies the conventional DEA model by taking into account the series relationship of the two sub-processes within the whole process. Under this framework, the efficiency of the whole process can be decomposed into the product of the efficiencies of the two sub-processes. In addition to this sound mathematical property, the case of Taiwanese non-life insurance companies shows that some unusual results which have appeared in the independent model do not exist in the relational model. In other words, the relational model developed in this paper is more reliable in measuring the efficiencies and consequently is capable of identifying the causes of inefficiency more accurately. Based on the structure of the model, the idea of efficiency decomposition can be extended to systems composed of multiple stages connected in series. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "67ba6914f8d1a50b7da5024567bc5936",
"text": "Abstract—Braille alphabet is an important tool that enables visually impaired individuals to have a comfortable life like those who have normal vision. For this reason, new applications related to the Braille alphabet are being developed. In this study, a new Refreshable Braille Display was developed to help visually impaired individuals learn the Braille alphabet easier. By means of this system, any text downloaded on a computer can be read by the visually impaired individual at that moment by feeling it by his/her hands. Through this electronic device, it was aimed to make learning the Braille alphabet easier for visually impaired individuals with whom the necessary tests were conducted.",
"title": ""
},
{
"docid": "55370f9487be43f2fbd320c903005185",
"text": "Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size and which are perceptually equivalent to the sample. The two main approaches are statisticsbased methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; then, a random sampling conditioned to this signature produces genuinely different texture images. The second class boils down to a clever “copy-paste” procedure, which stitches together large regions of the sample. Hybrid methods try to combines ideas from both approaches to avoid their hurdles. Current methods, including the recent CNN approaches, are able to produce impressive synthesis on various kinds of textures. Nevertheless, most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures the results of state-of-the-art methods degrade rapidly.",
"title": ""
},
{
"docid": "5e5ffa7890dd2e16cff9dbc9592f162e",
"text": "Spin-transfer torque magnetic memory (STT-MRAM) is currently under intense academic and industrial development, since it features non-volatility, high write and read speed and high endurance. In this work, we show that when used in a non-conventional regime, it can additionally act as a stochastic memristive device, appropriate to implement a “synaptic” function. We introduce basic concepts relating to spin-transfer torque magnetic tunnel junction (STT-MTJ, the STT-MRAM cell) behavior and its possible use to implement learning-capable synapses. Three programming regimes (low, intermediate and high current) are identified and compared. System-level simulations on a task of vehicle counting highlight the potential of the technology for learning systems. Monte Carlo simulations show its robustness to device variations. The simulations also allow comparing system operation when the different programming regimes of STT-MTJs are used. In comparison to the high and low current regimes, the intermediate current regime allows minimization of energy consumption, while retaining a high robustness to device variations. These results open the way for unexplored applications of STT-MTJs in robust, low power, cognitive-type systems.",
"title": ""
},
{
"docid": "134d2671fa44793c8969acb50c71c5c0",
"text": "OBJECTIVES\nTransferrin is a glycosylated protein responsible for transporting iron, an essential metal responsible for proper fetal development. Tobacco is a heavily used xenobiotic having a negative impact on the human body and pregnancy outcomes. Aims of this study was to examine the influence of tobacco smoking on transferrin sialic acid residues and their connection with fetal biometric parameters in women with iron-deficiency.\n\n\nMETHODS\nThe study involved 173 samples from pregnant women, smokers and non-smokers, iron deficient and not. Transferrin sialylation was determined by capillary electrophoresis. The cadmium (Cd) level was measured by atomic absorption and the sialic acid concentration by the resorcinol method.\n\n\nRESULTS\nWomen with iron deficiencies who smoked gave birth earlier than non-smoking, non-iron-deficient women. The Cd level, but not the cotinine level, was positively correlated with transferrin sialylation in the blood of iron-deficient women who smoked; 3-, 4-, 5- and 6-sialoTf correlated negatively with fetal biometric parameters in the same group.\n\n\nCONCLUSION\nIt has been shown the relationship between Cd from tobacco smoking and fetal biometric parameters observed only in the iron deficient group suggests an additive effect of these two factors, and indicate that mothers with anemia may be more susceptible to Cd toxicity and disturbed fetal development.",
"title": ""
},
{
"docid": "ab0c80a10d26607134828c6b350089aa",
"text": "Parkinson's disease (PD) is a neurodegenerative disorder with symptoms that progressively worsen with age. Pathologically, PD is characterized by the aggregation of α-synuclein in cells of the substantia nigra in the brain and loss of dopaminergic neurons. This pathology is associated with impaired movement and reduced cognitive function. The etiology of PD can be attributed to a combination of environmental and genetic factors. A popular animal model, the nematode roundworm Caenorhabditis elegans, has been frequently used to study the role of genetic and environmental factors in the molecular pathology and behavioral phenotypes associated with PD. The current review summarizes cellular markers and behavioral phenotypes in transgenic and toxin-induced PD models of C. elegans.",
"title": ""
}
] |
scidocsrr
|
0cd1331ebe2619f15f5d0e7f724f18f5
|
Skin segmentation using color pixel classification: analysis and comparison
|
[
{
"docid": "8fc0d896dfb5411079068f11800aac93",
"text": "This paper is concerned with estimating a probability density function of human skin color using a nite Gaussian mixture model whose parameters are estimated through the EM algorithm Hawkins statistical test on the normality and homoscedasticity common covariance matrix of the estimated Gaussian mixture models is performed and McLachlan s bootstrap method is used to test the number of components in a mixture Experimental results show that the estimated Gaussian mixture model ts skin images from a large database Applications of the estimated density function in image and video databases are presented",
"title": ""
},
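Editor's illustration (not part of the dataset record above): the positive passage describes fitting a finite Gaussian mixture to skin colors with EM and then using the estimated density to classify pixels. The minimal Python sketch below shows one way such a model could be fit and applied; the library (scikit-learn), the RGB feature space, the component count, and the log-likelihood threshold are all assumptions for illustration, not details taken from the cited paper.

```python
# Hedged sketch: EM-fitted Gaussian mixture as a skin-color density, then a
# likelihood threshold for pixel classification. All hyperparameters assumed.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_skin_model(skin_pixels, n_components=8, seed=0):
    """skin_pixels: (N, 3) array of RGB values sampled from labelled skin regions."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed)
    gmm.fit(skin_pixels)
    return gmm

def classify_pixels(gmm, image, log_likelihood_threshold=-12.0):
    """Label each pixel of an (H, W, 3) image as skin if its log-likelihood
    under the skin mixture exceeds the (assumed) threshold."""
    h, w, _ = image.shape
    scores = gmm.score_samples(image.reshape(-1, 3).astype(float))
    return (scores >= log_likelihood_threshold).reshape(h, w)

# Toy usage with random stand-in data (real code would use labelled skin pixels):
skin_model = fit_skin_model(np.random.rand(5000, 3))
mask = classify_pixels(skin_model, np.random.rand(64, 64, 3))
```

In practice the threshold would be chosen on a validation set; the bootstrap test for the number of mixture components mentioned in the passage is not reproduced here.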
{
"docid": "55fd332aa38c3240813e5947c65c867d",
"text": "Skin detection is an important process in many of computer vision algorithms. It usually is a process that starts at a pixel-level, and that involves a pre-process of colorspace transformation followed by a classification process. A colorspace transformation is assumed to increase separability between skin and non-skin classes, to increase similarity among different skin tones, and to bring a robust performance under varying illumination conditions, without any sound reasonings. In this work, we examine if the colorspace transformation does bring those benefits by measuring four separability measurements on a large dataset of 805 images with different skin tones and illumination. Surprising results indicate that most of the colorspace transformations do not bring the benefits which have been assumed.",
"title": ""
}
] |
[
{
"docid": "f6d3157155868f5fafe2533dfd8768b8",
"text": "Over the past few years, the task of conceiving effective attacks to complex networks has arisen as an optimization problem. Attacks are modelled as the process of removing a number k of vertices, from the graph that represents the network, and the goal is to maximise or minimise the value of a predefined metric over the graph. In this work, we present an optimization problem that concerns the selection of nodes to be removed to minimise the maximum betweenness centrality value of the residual graph. This metric evaluates the participation of the nodes in the communications through the shortest paths of the network. To address the problem we propose an artificial bee colony algorithm, which is a swarm intelligence approach inspired in the foraging behaviour of honeybees. In this framework, bees produce new candidate solutions for the problem by exploring the vicinity of previous ones, called food sources. The proposed method exploits useful problem knowledge in this neighbourhood exploration by considering the partial destruction and heuristic reconstruction of selected solutions. The performance of the method, with respect to other models from the literature that can be adapted to face this problem, such as sequential centrality-based attacks, module-based attacks, a genetic algorithm, a simulated annealing approach, and a variable neighbourhood search, is empirically shown. E-mail addresses: lozano@decsai.ugr.es (M. Lozano), cgarcia@uco.es (C. GarćıaMart́ınez), fjrodriguez@unex.es (F.J. Rodŕıguez), humberto@ugr.es (H.M. Trujillo). Preprint submitted to Information Sciences August 17, 2016 *Manuscript (including abstract) Click here to view linked References",
"title": ""
},
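Editor's illustration (not part of the record above): the passage frames the attack as choosing k vertices whose removal minimises the maximum betweenness centrality of the residual graph. The sketch below encodes that objective with a plain greedy baseline using networkx; it only makes the objective concrete and is not the paper's artificial bee colony algorithm, whose operators are not reproduced here.

```python
# Hedged sketch: brute-force greedy removal against the max-betweenness
# objective. Suitable only for small graphs; intended as an illustration.
import networkx as nx

def greedy_max_betweenness_attack(graph, k):
    g = graph.copy()
    removed = []
    for _ in range(k):
        best_node, best_value = None, float("inf")
        for node in list(g.nodes):
            h = g.copy()
            h.remove_node(node)
            value = max(nx.betweenness_centrality(h).values(), default=0.0)
            if value < best_value:
                best_node, best_value = node, value
        g.remove_node(best_node)
        removed.append(best_node)
    return removed, max(nx.betweenness_centrality(g).values(), default=0.0)

# Example on a small built-in graph:
# removed, residual_max_bc = greedy_max_betweenness_attack(nx.karate_club_graph(), k=3)
```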
{
"docid": "0f7420282b9e16ef6fd26b87fe40eae2",
"text": "This paper presents a robot localization system for indoor environments using WiFi signal strength measure. We analyse the main causes of the WiFi signal strength variation and we experimentally demonstrate that a localization technique based on a propagation model doesn’t work properly in our test-bed. We have carried out a localization system based on a priori radio-map obtained automatically from a robot navigation in the environment in a semi-autonomous way. We analyse the effect of reducing calibration effort in order to diminish practical barriers to wider adoption of this type of location measurement technique. Experimental results using a real robot moving are shown. Finally, the conclusions and future works are",
"title": ""
},
{
"docid": "c6afc173351fe404f7c5b68d2a0bc0a8",
"text": "BACKGROUND\nCombined traumatic brain injury (TBI) and hemorrhagic shock (HS) is highly lethal. In a nonsurvival model of TBI + HS, addition of high-dose valproic acid (VPA) (300 mg/kg) to hetastarch reduced brain lesion size and associated swelling 6 hours after injury; whether this would have translated into better neurologic outcomes remains unknown. It is also unclear whether lower doses of VPA would be neuroprotective. We hypothesized that addition of low-dose VPA to normal saline (NS) resuscitation would result in improved long-term neurologic recovery and decreased brain lesion size.\n\n\nMETHODS\nTBI was created in anesthetized swine (40-43 kg) by controlled cortical impact, and volume-controlled hemorrhage (40% volume) was induced concurrently. After 2 hours of shock, animals were randomized (n = 5 per group) to NS (3× shed blood) or NS + VPA (150 mg/kg). Six hours after resuscitation, packed red blood cells were transfused, and animals were recovered. Peripheral blood mononuclear cells were analyzed for acetylated histone-H3 at lysine-9. A Neurological Severity Score (NSS) was assessed daily for 30 days. Brain magnetic resonance imaging was performed on Days 3 and 10. Cognitive performance was assessed by training animals to retrieve food from color-coded boxes.\n\n\nRESULTS\nThere was a significant increase in histone acetylation in the NS + VPA-treated animals compared with NS treatment. The NS + VPA group demonstrated significantly decreased neurologic impairment and faster speed of recovery as well as smaller brain lesion size compared with the NS group. Although the final cognitive function scores were similar between the groups, the VPA-treated animals reached the goal significantly faster than the NS controls.\n\n\nCONCLUSION\nIn this long-term survival model of TBI + HS, addition of low-dose VPA to saline resuscitation resulted in attenuated neurologic impairment, faster neurologic recovery, smaller brain lesion size, and a quicker normalization of cognitive functions.",
"title": ""
},
{
"docid": "c41c56eeb56975c4d65e3847aa6b8b01",
"text": "We address the problem of comparing sets of images for object recognition, where the sets may represent variations in an object's appearance due to changing camera pose and lighting conditions. canonical correlations (also known as principal or canonical angles), which can be thought of as the angles between two d-dimensional subspaces, have recently attracted attention for image set matching. Canonical correlations offer many benefits in accuracy, efficiency, and robustness compared to the two main classical methods: parametric distribution-based and nonparametric sample-based matching of sets. Here, this is first demonstrated experimentally for reasonably sized data sets using existing methods exploiting canonical correlations. Motivated by their proven effectiveness, a novel discriminative learning method over sets is proposed for set classification. Specifically, inspired by classical linear discriminant analysis (LDA), we develop a linear discriminant function that maximizes the canonical correlations of within-class sets and minimizes the canonical correlations of between-class sets. Image sets transformed by the discriminant function are then compared by the canonical correlations. Classical orthogonal subspace method (OSM) is also investigated for the similar purpose and compared with the proposed method. The proposed method is evaluated on various object recognition problems using face image sets with arbitrary motion captured under different illuminations and image sets of 500 general objects taken at different views. The method is also applied to object category recognition using ETH-80 database. The proposed method is shown to outperform the state-of-the-art methods in terms of accuracy and efficiency",
"title": ""
},
{
"docid": "7db2f661465cb18abf68e9148f50ce66",
"text": "When training the parameters for a natural language system, one would prefer to minimize 1-best loss (error) on an evaluation set. Since the error surface for many natural language problems is piecewise constant and riddled with local minima, many systems instead optimize log-likelihood, which is conveniently differentiable and convex. We propose training instead to minimize the expected loss, or risk. We define this expectation using a probability distribution over hypotheses that we gradually sharpen (anneal) to focus on the 1-best hypothesis. Besides the linear loss functions used in previous work, we also describe techniques for optimizing nonlinear functions such as precision or the BLEU metric. We present experiments training log-linear combinations of models for dependency parsing and for machine translation. In machine translation, annealed minimum risk training achieves significant improvements in BLEU over standard minimum error training. We also show improvements in labeled dependency parsing. 1 Direct Minimization of Error Researchers in empirical natural language processing have expended substantial ink and effort in developing metrics to evaluate systems automatically against gold-standard corpora. The ongoing evaluation literature is perhaps most obvious in the machine translation community’s efforts to better BLEU (Papineni et al., 2002). Despite this research, parsing or machine translation systems are often trained using the much simpler and harsher metric of maximum likelihood. One reason is that in supervised training, the log-likelihood objective function is generally convex, meaning that it has a single global maximum that can be easily found (indeed, for supervised generative models, the parameters at this maximum may even have a closed-form solution). In contrast to the likelihood surface, the error surface for discrete structured prediction is not only riddled with local minima, but piecewise constant ∗This work was supported by an NSF graduate research fellowship for the first author and by NSF ITR grant IIS0313193 and ONR grant N00014-01-1-0685. The views expressed are not necessarily endorsed by the sponsors. We thank Sanjeev Khudanpur, Noah Smith, Markus Dreyer, and the reviewers for helpful discussions and comments. and not everywhere differentiable with respect to the model parameters (Figure 1). Despite these difficulties, some work has shown it worthwhile to minimize error directly (Och, 2003; Bahl et al., 1988). We show improvements over previous work on error minimization by minimizing the risk or expected error—a continuous function that can be derived by combining the likelihood with any evaluation metric (§2). Seeking to avoid local minima, deterministic annealing (Rose, 1998) gradually changes the objective function from a convex entropy surface to the more complex risk surface (§3). We also discuss regularizing the objective function to prevent overfitting (§4). We explain how to compute expected loss under some evaluation metrics common in natural language tasks (§5). We then apply this machinery to training log-linear combinations of models for dependency parsing and for machine translation (§6). Finally, we note the connections of minimum risk training to max-margin training and minimum Bayes risk decoding (§7), and recapitulate our results (§8). 2 Training Log-Linear Models In this work, we focus on rescoring with loglinear models. 
In particular, our experiments consider log-linear combinations of a relatively small number of features over entire complex structures, such as trees or translations, known in some previous work as products of experts (Hinton, 1999) or logarithmic opinion pools (Smith et al., 2005). A feature in the combined model might thus be a log probability from an entire submodel. Giving this feature a small or negative weight can discount a submodel that is foolishly structured, badly trained, or redundant with the other features. [Figure 1: The loss surface for a machine translation system: while other parameters are held constant, we vary the weights on the distortion and word penalty features. Note the piecewise constant regions with several local maxima.] For each sentence x_i in our training corpus S, we are given K_i possible analyses y_{i,1}, \ldots, y_{i,K_i}. (These may be all of the possible translations or parse trees; or only the K_i most probable under some other model; or only a random sample of size K_i.) Each analysis has a vector of real-valued features (i.e., factors, or experts) denoted f_{i,k}. The score of the analysis y_{i,k} is \theta \cdot f_{i,k}, the dot product of its features with a parameter vector \theta. For each sentence, we obtain a normalized probability distribution over the K_i analyses as p_\theta(y_{i,k} \mid x_i) = \frac{\exp(\theta \cdot f_{i,k})}{\sum_{k'=1}^{K_i} \exp(\theta \cdot f_{i,k'})} (1). We wish to adjust this model's parameters \theta to minimize the severity of the errors we make when using it to choose among analyses. A loss function L_{y^*}(y) assesses a penalty for choosing y when y^* is correct. We will usually write this simply as L(y) since y^* is fixed and clear from context. For clearer exposition, we assume below that the total loss over some test corpus is the sum of the losses on individual sentences, although we will revisit that assumption in §5. 2.1 Minimizing Loss or Expected Loss: One training criterion directly mimics test conditions. It looks at the loss incurred if we choose the best analysis of each x_i according to the model:",
"title": ""
},
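Editor's illustration (not part of the record above): the passage defines a log-linear distribution over the K_i candidate analyses of a sentence (Eq. 1) and a per-candidate loss L(y). The compact numpy sketch below computes those two quantities and the resulting expected loss (risk) for one sentence; the feature values, weights, and losses are invented for illustration and are not data from the paper.

```python
# Hedged sketch of the quantities in the passage: softmax over candidate
# scores theta · f_{i,k}, then the risk sum_k p_theta(y_k) * L(y_k).
import numpy as np

def candidate_distribution(theta, features):
    """features: (K, D) matrix of per-candidate feature vectors f_{i,k};
    returns p_theta(y_{i,k} | x_i) as in Eq. (1)."""
    scores = features @ theta          # theta · f_{i,k} for each candidate
    scores -= scores.max()             # stabilise the softmax numerically
    probs = np.exp(scores)
    return probs / probs.sum()

def expected_loss(theta, features, losses):
    """Risk for one sentence: sum_k p_theta(y_k) * L(y_k)."""
    return float(candidate_distribution(theta, features) @ losses)

# Invented toy values: three candidates, three features, losses like 1 - BLEU.
theta = np.array([0.5, -1.0, 2.0])
features = np.array([[1.0, 0.2, 0.1],
                     [0.3, 1.0, 0.0],
                     [0.0, 0.1, 1.2]])
losses = np.array([0.0, 0.4, 0.9])
print(expected_loss(theta, features, losses))
```

Because the risk is a smooth function of theta, it can be minimised with gradient methods, which is what makes it a convenient surrogate for the piecewise-constant 1-best error discussed in the passage.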
{
"docid": "50d42d832a0cd04becdaa26cc33a9782",
"text": "The performance of Fingerprint recognition system depends on minutiae which are extracted from raw fingerprint image. Often the raw fingerprint image captured from a scanner may not be of good quality, which leads to inaccurate extraction of minutiae. Hence it is essential to preprocess the fingerprint image before extracting the reliable minutiae for matching of two fingerprint images. Image enhancement technique followed by minutiae extraction completes the fingerprint recognition process. Fingerprint recognition process with a matcher constitutes Fingerprint recognition system ASIC implementation of image enhancement technique for fingerprint recognition process using Cadence tool is proposed. Further, the result obtained from hardware design is compared with that of software using MatLab tool.",
"title": ""
},
{
"docid": "a9808fef734b205146a2d8edf1171d6a",
"text": "The Buss-Perry Aggression Questionnaire (AQ) is a self-report measure of aggressiveness commonly employed in nonforensic and forensic settings and is included in violent offender pre- and posttreatment assessment batteries. The aim of the current study was to assess the fit of the four-factor model of the AQ with violent offenders ( N = 271), a population for which the factor structure of the English version of the AQ has not previously been examined. Confirmatory factor analyses did not yield support for the four-factor model of the original 29-item AQ. Acceptable fit was obtained with the 12-item short form, but careful examination of the relationships between the latent factors revealed that the four subscales of the AQ may not represent distinct aspects of aggressiveness. Our findings call into question whether the AQ optimally measures trait aggressiveness among violent offenders.",
"title": ""
},
{
"docid": "25cbc3f8f9ecbeb89c2c49c044e61c2a",
"text": "This study investigated lying behavior and the behavior of people who are deceived by using a deception game (Gneezy, 2005) in both anonymity and face-to-face treatments. Subjects consist of students and non-students (citizens) to investigate whether lying behavior is depended on socioeconomic backgrounds. To explore how liars feel about lying, we give senders a chance to confess their behaviors to their counter partner for the guilty aversion of lying. The following results are obtained: i) a frequency of lying behavior for students is significantly higher than that for non-students at a payoff in the anonymity treatment, but that is not significantly difference between the anonymity and face-to-face treatments; ii) lying behavior is not influenced by gender; iii) a frequency of confession is higher in the face-to-face treatment than in the anonymity treatment; and iv) the receivers who are deceived are more likely to believe a sender’s message to be true in the anonymity treatment. This study implies that the existence of the partner prompts liars to confess their behavior because they may feel remorse or guilt.",
"title": ""
},
{
"docid": "dccec6a01de3b68d1e2a7ff8b0da7b9a",
"text": "Using social media for political analysis is becoming a common practice, especially during election time. Many researchers and media are trying to use social media to understand the public opinion and trend. In this paper, we investigate how we could use Twitter to predict public opinion and thus predict American republican presidential election results. We analyzed millions of tweets from September 2011 leading up to the republican primary elections. First we examine the previous methods regarding predicting election results with social media and then we integrate our understanding of social media and propose a prediction model to predict the public opinions towards Republican Presidential Elections. Our results highlight the feasibility of using social media to predict public opinions and thus replace traditional polling.",
"title": ""
},
{
"docid": "4a4789547dcbe5b23190f2ab7cda01d7",
"text": "Model predictive control (MPC) has been one of the most promising control strategies in industrial processes for decades. Due to its remarkable advantages, it has been extended to many areas of robotic research, especially motion control. Therefore, the goal of this paper is to review motion control of wheeled mobile robots (WMRs) using MPC. Principles as well as key issues in real-time implementations are first addressed. We then update the current literature of MPC for motion control. We also classify publications by using three criteria, i.e., MPC models, robot kinematic models, and basic motion tasks. MPC models categorized here include nonlinear MPC, linear MPC, neural network MPC, and generalized predictive control (GPC), while robot kinematic models we focus on consist of unicycle-type vehicles, car-like vehicles, and omnidirectional vehicles. Basic motion tasks, in general, are classified into three groups, i.e., trajectory tracking, path following, and point stabilization. To show that MPC strategies are capable of real-time implementations, some experimental scenarios from our previous work are given. We also conclude by identifying some future research directions.",
"title": ""
},
{
"docid": "e5f48bc2f36682acf3048c922263786f",
"text": "Missing values in datasets and databases can be estimated via statistics, machine learning and artificial intelligence methods. This paper uses a novel hybrid neural network and weighted nearest neighbors to estimate missing values and provides good results with high performance. In this work, four different characteristic datasets were used and missing values were estimated. Error ratio, correlation coefficient, prediction accuracy were calculated between actual and estimated values and the results were compared with basic neural network-genetic algorithm estimation method.",
"title": ""
},
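Editor's illustration (not part of the record above): the passage combines a neural network with weighted nearest neighbours for missing-value estimation. The sketch below shows only a plain distance-weighted k-nearest-neighbour imputation step, under assumed choices of distance metric and k; the hybrid neural-network component described in the passage is not reproduced.

```python
# Hedged sketch: estimate a missing entry as the inverse-distance-weighted
# average of the same feature over the k most similar complete rows.
import numpy as np

def weighted_knn_impute(data, row, col, k=3, eps=1e-9):
    """data: (N, D) array with np.nan marking missing values; returns an
    estimate for data[row, col] using rows whose value in `col` is observed."""
    mask_cols = ~np.isnan(data[row])   # features known for the target row
    candidates = [i for i in range(len(data))
                  if i != row
                  and not np.isnan(data[i, col])
                  and not np.any(np.isnan(data[i, mask_cols]))]
    dists = np.array([np.linalg.norm(data[i, mask_cols] - data[row, mask_cols])
                      for i in candidates])
    order = np.argsort(dists)[:k]
    weights = 1.0 / (dists[order] + eps)
    values = data[np.array(candidates)[order], col]
    return float(np.sum(weights * values) / np.sum(weights))

# Toy usage: impute the missing third feature of the second row.
toy = np.array([[1.0, 2.0, 3.0],
                [1.1, 2.1, np.nan],
                [0.9, 1.9, 2.8],
                [5.0, 5.0, 9.0]])
print(weighted_knn_impute(toy, row=1, col=2))
```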
{
"docid": "f22c14a8fa1f5cb28604bbb7012a41e4",
"text": "The authors support the hypothesis that a causative agent in Parkinson's disease (PD) might be either fungus or bacteria with fungus-like properties - Actinobacteria, and that their spores may serve as 'infectious agents'. Updated research and the epidemiology of PD suggest that the disease might be induced by environmental factor(s), possibly with genetic susceptibility, and that α-synuclein probably should be regarded as part of the body's own defense mechanism. To explain the dual-hit theory with stage 1 involvement of the olfactory structures and the 'gut-brain'-axis, the environmental factor is probably airborne and quite 'robust' entering the body via the nose/mouth, then to be swallowed reaching the enteric nervous system with retained pathogenicity. Similar to the essence of smoking food, which is to eradicate microorganisms, a viable agent may be defused by tobacco smoke. Hence, the agent is likely to be a 'living' and not an inert agent. Furthermore, and accordant with the age-dependent incidence of LPD, this implies that a dormant viable agent have been escorted by α-synuclein via retrograde axonal transport from the nose and/or GI tract to hibernate in the associated cerebral nuclei. In the brain, PD spreads like a low-grade infection, and that patients develop symptoms in later life, indicate a relatively long incubation time. Importantly, Actinomyces species may form endospores, the hardiest known form of life on Earth. The authors hypothesize that certain spores may not be subject to degradation by macroautophagy, and that these spores become reactivated due to the age-dependent or genetic reduced macroautophagic function. Hence, the hibernating spore hypothesis explains both early-onset and late-onset PD. Evaluation of updated available information are all consistent with the hypothesis that PD may be induced by spores from fungi or Actinobacteria and thus supports Broxmeyer's hypothesis put forward 15years ago.",
"title": ""
},
{
"docid": "66223df5d0ef9776bfebb3125a3ac55a",
"text": "OBJECTIVES\nThe aims of this paper are to review and compare existing techniques for creation of interdental/interimplant papillae, to address factors that may influence its appearance and to present an approach that authors developed that could help clinicians to manage and recreate the interproximal papillae.\n\n\nMETHODS\nPapers related to interdental and interimplant papillae published over the last 30 years were selected and analyzed.\n\n\nRESULTS\nThorough treatment planning is essential for maintenance of the height of the interproximal papillae following tooth removal. The key for achieving an esthetically pleasing outcome is the clinicians' ability of properly managing/creating interdental/interimplant papillae. Bone support is the foundation for any soft tissue existence, techniques such as socket augmentation, orthodontic extrusion, guided bone regeneration, onlay graft and distraction osteogenesis are often used for this purpose. Soft tissue grafts as well as esthetic mimic restorations can also be used to enhance the esthetic outcomes.\n\n\nCONCLUSIONS\nAn esthetic triangle is developed to address the foundations that are essential for maintaining/creating papilla. These include adequate bone volume, proper soft tissue thickness as well as esthetic appearing restorations.",
"title": ""
},
{
"docid": "bc8429de57c0530438f5a8935b6227fd",
"text": "Malware is a computer program or a piece of software that is designed to penetrate and detriment computers without owner's permission. There are different malware types such as viruses, rootkits, keyloggers, worms, trojans, spywares, ransomware, backdoors, bots, logic bomb, etc. Volume, Variant and speed of propagation of malwares are increasing every year. Antivirus companies are receiving thousands of malwares on the daily basis, so detection of malwares is complex and time consuming task. There are many malwares detection techniques like signature based detection, behavior based detection and machine learning based techniques, etc. The signatures based detection system fails for new unknown malware. In case of behavior based detection, if the antivirus program identify attempt to change or alter a file or communication over internet then it will generate alarm signal, but still there is a chance of false positive rate. Also the obfuscation and polymorphism techniques are hinderers the malware detection process. In this paper we propose new method to detect malwares based on the frequency of opcodes in the portable executable file. This research applied machine learning algorithm to find false positives, false negatives, true positives and true negatives for malwares and got 96.67 per cent success rate.",
"title": ""
},
{
"docid": "cbad7caa1cc1362e8cd26034617c39f4",
"text": "Many state-machine Byzantine Fault Tolerant (BFT) protocols have been introduced so far. Each protocol addressed a different subset of conditions and use-cases. However, if the underlying conditions of a service span different subsets, choosing a single protocol will likely not be a best fit. This yields robustness and performance issues which may be even worse in services that exhibit fluctuating conditions and workloads. In this paper, we reconcile existing state-machine BFT protocols in a single adaptive BFT system, called ADAPT, aiming at covering a larger set of conditions and use-cases, probably the union of individual subsets of these protocols. At anytime, a launched protocol in ADAPT can be aborted and replaced by another protocol according to a potential change (an event) in the underlying system conditions. The launched protocol is chosen according to an \"evaluation process\" that takes into consideration both: protocol characteristics and its performance. This is achieved by applying some mathematical formulas that match the profiles of protocols to given user (e.g., service owner) preferences. ADAPT can assess the profiles of protocols (e.g., throughput) at run-time using Machine Learning prediction mechanisms to get accurate evaluations. We compare ADAPT with well known BFT protocols showing that it outperforms others as system conditions change and under dynamic workloads.",
"title": ""
},
{
"docid": "00f333b1875e28d6158b793a75fc13a3",
"text": "Over the last 20 years, cultural heritage has been a favored domain for personalization research. For years, researchers have experimented with the cutting edge technology of the day; now, with the convergence of internet and wireless technology, and the increasing adoption of the Web as a platform for the publication of information, the visitor is able to exploit cultural heritage material before, during and after the visit, having different goals and requirements in each phase. However, cultural heritage sites have a huge amount of information to present, which must be filtered and personalized in order to enable the individual user to easily access it. Personalization of cultural heritage information requires a system that is able to model the user (e.g., interest, knowledge and other personal characteristics), as well as contextual aspects, select the most appropriate content, and deliver it in the most suitable way. It should be noted that achieving this result is extremely challenging in the case of first-time users, such as tourists who visit a cultural heritage site for the first time (and maybe the only time in their life). In addition, as tourism is a social activity, adapting to the individual is not enough because groups and communities have to be modeled and supported as well, taking into account their mutual interests, previous mutual experience, and requirements. How to model and represent the user(s) and the context of the visit and how to reason with regard to the information that is available are the challenges faced by researchers in personalization of cultural heritage. Notwithstanding the effort invested so far, a definite solution is far from being reached, mainly because new technology and new aspects of personalization are constantly being introduced. This article surveys the research in this area. Starting from the earlier systems, which presented cultural heritage information in kiosks, it summarizes the evolution of personalization techniques in museum web sites, virtual collections and mobile guides, until recent extension of cultural heritage toward the semantic and social web. The paper concludes with current challenges and points out areas where future research is needed.",
"title": ""
},
{
"docid": "21b9b7995cabde4656c73e9e278b2bf5",
"text": "Topic modeling techniques have been recently applied to analyze and model source code. Such techniques exploit the textual content of source code to provide automated support for several basic software engineering activities. Despite these advances, applications of topic modeling in software engineering are frequently suboptimal. This can be attributed to the fact that current state-of-the-art topic modeling techniques tend to be data intensive. However, the textual content of source code, embedded in its identifiers, comments, and string literals, tends to be sparse in nature. This prevents classical topic modeling techniques, typically used to model natural language texts, to generate proper models when applied to source code. Furthermore, the operational complexity and multi-parameter calibration often associated with conventional topic modeling techniques raise important concerns about their feasibility as data analysis models in software engineering. Motivated by these observations, in this paper we propose a novel approach for topic modeling designed for source code. The proposed approach exploits the basic assumptions of the cluster hypothesis and information theory to discover semantically coherent topics in software systems. Ten software systems from different application domains are used to empirically calibrate and configure the proposed approach. The usefulness of generated topics is empirically validated using human judgment. Furthermore, a case study that demonstrates thet operation of the proposed approach in analyzing code evolution is reported. The results show that our approach produces stable, more interpretable, and more expressive topics than classical topic modeling techniques without the necessity for extensive parameter calibration.",
"title": ""
},
{
"docid": "e4b02298a2ff6361c0a914250f956911",
"text": "This paper studies efficient means in dealing with intracategory diversity in object detection. Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.",
"title": ""
},
{
"docid": "3512d0a45a764330c8a66afab325d03d",
"text": "Self-concept clarity (SCC) references a structural aspect oftbe self-concept: the extent to which selfbeliefs are clearly and confidently defined, internally consistent, and stable. This article reports the SCC Scale and examines (a) its correlations with self-esteem (SE), the Big Five dimensions, and self-focused attention (Study l ); (b) its criterion validity (Study 2); and (c) its cultural boundaries (Study 3 ). Low SCC was independently associated with high Neuroticism, low SE, low Conscientiousness, low Agreeableness, chronic self-analysis, low internal state awareness, and a ruminative form of self-focused attention. The SCC Scale predicted unique variance in 2 external criteria: the stability and consistency of self-descriptions. Consistent with theory on Eastern and Western selfconstruals, Japanese participants exhibited lower levels of SCC and lower correlations between SCC and SE than did Canadian participants.",
"title": ""
},
{
"docid": "53bd1baec1e740c99a2fd22c858e8e60",
"text": "Garbage collection yields numerous software engineering benefits, but its quantitative impact on performance remains elusive. One can compare the cost of conservative garbage collection to explicit memory management in C/C++ programs by linking in an appropriate collector. This kind of direct comparison is not possible for languages designed for garbage collection (e.g., Java), because programs in these languages naturally do not contain calls to free. Thus, the actual gap between the time and space performance of explicit memory management and precise, copying garbage collection remains unknown.We introduce a novel experimental methodology that lets us quantify the performance of precise garbage collection versus explicit memory management. Our system allows us to treat unaltered Java programs as if they used explicit memory management by relying on oracles to insert calls to free. These oracles are generated from profile information gathered in earlier application runs. By executing inside an architecturally-detailed simulator, this \"oracular\" memory manager eliminates the effects of consulting an oracle while measuring the costs of calling malloc and free. We evaluate two different oracles: a liveness-based oracle that aggressively frees objects immediately after their last use, and a reachability-based oracle that conservatively frees objects just after they are last reachable. These oracles span the range of possible placement of explicit deallocation calls.We compare explicit memory management to both copying and non-copying garbage collectors across a range of benchmarks using the oracular memory manager, and present real (non-simulated) runs that lend further validity to our results. These results quantify the time-space tradeoff of garbage collection: with five times as much memory, an Appel-style generational collector with a non-copying mature space matches the performance of reachability-based explicit memory management. With only three times as much memory, the collector runs on average 17% slower than explicit memory management. However, with only twice as much memory, garbage collection degrades performance by nearly 70%. When physical memory is scarce, paging causes garbage collection to run an order of magnitude slower than explicit memory management.",
"title": ""
}
] |
scidocsrr
|
8a7f684df7fb4ecdbdc8d872948e98c0
|
Painting with Polygons: A Procedural Watercolor Engine
|
[
{
"docid": "9db49ae61207f4fc534170ab5b8eda60",
"text": "Existing natural media painting simulations have produced high quality results, but have required powerful compute hardware and have been limited to screen resolutions. Digital artists would like to be able to use watercolor-like painting tools, but at print resolutions and on lower end hardware such as laptops or even slates. We present a procedural algorithm for generating watercolor-like dynamic paint behaviors in a lightweight manner. Our goal is not to exactly duplicate watercolor painting, but to create a range of dynamic behaviors that allow users to achieve a similar style of process and result, while at the same time having a unique character of its own. Our stroke representation is vector-based, allowing for rendering at arbitrary resolutions, and our procedural pigment advection algorithm is fast enough to support painting on slate devices. We demonstrate our technique in a commercially available slate application used by professional artists.",
"title": ""
}
] |
[
{
"docid": "79e40d9afd262f4890463c2bdaa034fa",
"text": "This paper describes a system to create animated 3D scenes of car accidents from reports written in Swedish. The system has been developed using news reports of varying size and complexity. The text-to-scene conversion process consists of two stages. An information extraction module creates a structured representation of the accident and a visual simulator generates and animates the scene. We first describe the overall structure of the textto-scene conversion and the structure of the representation. We then explain the information extraction and visualization modules. We show snapshots of the car animation output and we conclude with the results we obtained. 1 Text-to-Scene Conversion As noted by Michel Denis, language and images are two different representation modes whose cooperation is needed in many forms of cognitive operations. The description of physical events, mathematical theorems, or structures of any kind using language is sometimes difficult to understand. Images and graphics can then help understand ideas or situations and realize their complexity. They have an indisputable capacity to represent and to communicate knowledge and are an effective means to represent and explain things, see (Kosslyn, 1983; Tufte, 1997; Denis, 1991). Narratives of a car accidents, for instance, often make use of space descriptions, movements, and directions that are sometimes difficult to grasp for most readers. We believe that forming consistent mental images are necessary to understand them properly. However, some people have difficulties in imagining situations and may need visual aids predesigned by professional analysts. In this paper, we will describe Carsim, a text-toscene converter that automates the generation of images from texts. 2 Related Work The conversion of natural language texts into graphics has been investigated in a few projects. NALIG (Adorni et al., 1984; Manzo et al., 1986) is an early example of them that was aimed at recreating static 2D scenes. One of its major goals was to study relationships between space and prepositions. NALIG considered simple phrases in Italian of the type subject, preposition, object that in spite of their simplicity can have ambiguous interpretations. From what is described in the papers, NALIG has not been extended to process sentences and even less to texts. WordsEye (Coyne and Sproat, 2001) is an impressive system that recreates 3D animated scenes from short descriptions. The number of 3D objects WordsEye uses – 12,000 – gives an idea of its ambition. WordsEye integrates resources such as the Collins’ dependency parser and the WordNet lexical database. The narratives cited as examples resemble imaginary fairy tales and WordsEye does not seem to address real world stories. CogViSys is a last example that started with the idea of generating texts from a sequence of video images. The authors found that it could also be useful to reverse the process and generate synthetic video sequences from texts. The logic engine behind the text-to-scene converter (Arens et al., 2002) is based on the Discourse Representation Theory. The system is limited to the visualization of single vehicle maneuvers at an intersection as the one described in this two-sentence narrative: A car came from Kriegstrasse. It turned left at the intersection. The authors give no further details on the text corpus and no precise description of the results.",
"title": ""
},
{
"docid": "2bd15d743690c8bcacb0d01650759d62",
"text": "With the large amount of available data and the variety of features they offer, electronic health records (EHR) have gotten a lot of interest over recent years, and start to be widely used by the machine learning and bioinformatics communities. While typical numerical fields such as demographics, vitals, lab measurements, diagnoses and procedures, are natural to use in machine learning models, there is no consensus yet on how to use the free-text clinical notes. We show how embeddings can be learned from patients’ history of notes, at the word, note and patient level, using simple neural and sequence models. We show on various relevant evaluation tasks that these embeddings are easily transferable to smaller problems, where they enable accurate predictions using only clinical notes.",
"title": ""
},
{
"docid": "3a74928dc955504a12dbfe7cd2deeb16",
"text": "Very few large-scale music research datasets are publicly available. There is an increasing need for such datasets, because the shift from physical to digital distribution in the music industry has given the listener access to a large body of music, which needs to be cataloged efficiently and be easily browsable. Additionally, deep learning and feature learning techniques are becoming increasingly popular for music information retrieval applications, and they typically require large amounts of training data to work well. In this paper, we propose to exploit an available large-scale music dataset, the Million Song Dataset (MSD), for classification tasks on other datasets, by reusing models trained on the MSD for feature extraction. This transfer learning approach, which we refer to as supervised pre-training, was previously shown to be very effective for computer vision problems. We show that features learned from MSD audio fragments in a supervised manner, using tag labels and user listening data, consistently outperform features learned in an unsupervised manner in this setting, provided that the learned feature extractor is of limited complexity. We evaluate our approach on the GTZAN, 1517-Artists, Unique and Magnatagatune datasets.",
"title": ""
},
{
"docid": "300bff5036b5b4e83a4bc605020b49e3",
"text": "Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.",
"title": ""
},
{
"docid": "a862ccdb188c7b559a4f27793c7873d8",
"text": "Several behavioral assays are currently used for high-throughput neurophenotyping and screening of genetic mutations and psychotropic drugs in zebrafish (Danio rerio). In this protocol, we describe a battery of two assays to characterize anxiety-related behavioral and endocrine phenotypes in adult zebrafish. Here, we detail how to use the 'novel tank' test to assess behavioral indices of anxiety (including reduced exploration, increased freezing behavior and erratic movement), which are quantifiable using manual registration and computer-aided video-tracking analyses. In addition, we describe how to analyze whole-body zebrafish cortisol concentrations that correspond to their behavior in the novel tank test. This protocol is an easy, inexpensive and effective alternative to other methods of measuring stress responses in zebrafish, thus enabling the rapid acquisition and analysis of large amounts of data. As will be shown here, fish anxiety-like behavior can be either attenuated or exaggerated depending on stress or drug exposure, with cortisol levels generally expected to parallel anxiety behaviors. This protocol can be completed over the course of 2 d, with a variable testing duration depending on the number of fish used.",
"title": ""
},
{
"docid": "c03e116de528bf16ecbec7f9bf65e87b",
"text": "Kelley's attribution theory is investigated. Subjects filled out a questionnaire that reported 16 different responses ostensibly made by other people. These responses represented four verb categories—emotions, accomplishments, opinions, and actions—and, for experimental subjects, each was accompanied by high or low consensus information, high or low distinctiveness information, and high or low consistency information. Control subjects were not given any information regarding the response. All subjects were asked to attribute each response to characteristics of the person (i.e., the actor), the stimulus, the circumstances, or to some combination of these three factors. In addition, the subjects' expectancies for future response and stimulus generalization on the part of the actor were measured. The three information variables and verb category each had a significant effect on causal attribution and on expectancy for behavioral generalization.",
"title": ""
},
{
"docid": "609bc0aa7dcd9ffc97e753642bec8c82",
"text": "Current trends in energy power generation are leading efforts related to the development of more reliable, sustainable sources and technologies for energy harvesting. Solar energy is one of these renewable energy resources, widely available in nature. Most of the solar panels used today to convert solar energy into chemical energy, and then to electrical energy, are stationary. Energy efficiency studies have shown that more electrical energy can be retrieved from solar panels if they are organized in arrays and then placed on a solar tracker that can then follow the sun as it moves during the day from east to west, and as it moves from north to south during the year, as seasons change. Adding more solar panels to solar tracker structures will improve its yield. It would also add more challenges when it comes to managing the overall weight of such structures, and their strength and reliability under different weather conditions, such as wind, changes in temperature, and atmospheric conditions. Hence, careful structural design and simulation is needed to establish the most optimal parameters in order for solar trackers to withstand all environmental conditions and to function with a high reliability for long periods of time.",
"title": ""
},
{
"docid": "f7e45feaa48b8d7741ac4cdb3ef4749b",
"text": "Classification problems refer to the assignment of some alt ern tives into predefined classes (groups, categories). Such problems often arise in several application fields. For instance, in assessing credit card applications the loan officer must evaluate the charact eristics of each applicant and decide whether an application should be accepted or rejected. Simil ar situations are very common in fields such as finance and economics, production management (fault diagnosis) , medicine, customer satisfaction measurement, data base management and retrieval, etc.",
"title": ""
},
{
"docid": "03ff1bdb156c630add72357005a142f5",
"text": "Recent advances in media generation techniques have made it easier for attackers to create forged images and videos. Stateof-the-art methods enable the real-time creation of a forged version of a single video obtained from a social network. Although numerous methods have been developed for detecting forged images and videos, they are generally targeted at certain domains and quickly become obsolete as new kinds of attacks appear. The method introduced in this paper uses a capsule network to detect various kinds of spoofs, from replay attacks using printed images or recorded videos to computergenerated videos using deep convolutional neural networks. It extends the application of capsule networks beyond their original intention to the solving of inverse graphics problems.",
"title": ""
},
{
"docid": "7da7efc4810dd79918f1d6608f4410ba",
"text": "Conventional hardware platforms consume huge amount of energy for cognitive learning due to the data movement between the processor and the off-chip memory. Brain-inspired device technologies using analogue weight storage allow to complete cognitive tasks more efficiently. Here we present an analogue non-volatile resistive memory (an electronic synapse) with foundry friendly materials. The device shows bidirectional continuous weight modulation behaviour. Grey-scale face classification is experimentally demonstrated using an integrated 1024-cell array with parallel online training. The energy consumption within the analogue synapses for each iteration is 1,000 × (20 ×) lower compared to an implementation using Intel Xeon Phi processor with off-chip memory (with hypothetical on-chip digital resistive random access memory). The accuracy on test sets is close to the result using a central processing unit. These experimental results consolidate the feasibility of analogue synaptic array and pave the way toward building an energy efficient and large-scale neuromorphic system.",
"title": ""
},
{
"docid": "d2566eb088c499b3454a6c73e5fb5034",
"text": "Neuroimaging studies of professional athletic or musical training have demonstrated considerable practice-dependent plasticity in various brain structures, which may reflect distinct training demands. In the present study, structural and functional brain alterations were examined in professional badminton players and compared with healthy controls using magnetic resonance imaging (MRI) and resting-state functional MRI. Gray matter concentration (GMC) was assessed using voxel-based morphometry (VBM), and resting-brain functions were measured by amplitude of low-frequency fluctuation (ALFF) and seed-based functional connectivity. Results showed that the athlete group had greater GMC and ALFF in the right and medial cerebellar regions, respectively. The athlete group also demonstrated smaller ALFF in the left superior parietal lobule and altered functional connectivity between the left superior parietal and frontal regions. These findings indicate that badminton expertise is associated with not only plastic structural changes in terms of enlarged gray matter density in the cerebellum, but also functional alterations in fronto-parietal connectivity. Such structural and functional alterations may reflect specific experiences of badminton training and practice, including high-capacity visuo-spatial processing and hand-eye coordination in addition to refined motor skills.",
"title": ""
},
{
"docid": "a8d36a671bb88e7bfe414d1ebe3a1959",
"text": "Phase Change Memory (PCM) has drawn great attention as a main memory due to its attractive characteristics such as non-volatility, byte-addressability, and in-place update. However, since the capacity of PCM is not fully mature yet, hybrid memory architecture that consists of DRAM and PCM has been suggested. In addition, page replacement algorithm based on hybrid memory architecture is actively being studied because existing page replacement algorithms cannot be used on hybrid memory architecture in that they do not consider the two weaknesses of PCM: high write latency and low endurance. In this paper, to mitigate the above hardware limitations of PCM, we revisit the page cache layer for the hybrid memory architecture. We also propose a novel page replacement algorithm, called M-CLOCK, to improve the performance of hybrid memory architecture and the lifespan of PCM. In particular, M-CLOCK aims to reduce the number of PCM writes that negatively affect the performance of hybrid memory architecture. Experimental results clearly show that M-CLOCK outperforms the state-of-the-art page replacement algorithms in terms of the number of PCM writes and effective memory access time by up to 98% and 34%, respectively.",
"title": ""
},
{
"docid": "eeda67ba0bc36bd1984789be93d8ce9c",
"text": "Using modified constructivist grounded theory, the purpose of the present study was to explore positive body image experiences in people with spinal cord injury. Nine participants (five women, four men) varying in age (21-63 years), type of injury (C3-T7; complete and incomplete), and years post-injury (4-36 years) were recruited. The following main categories were found: body acceptance, body appreciation and gratitude, social support, functional gains, independence, media literacy, broadly conceptualizing beauty, inner positivity influencing outer demeanour, finding others who have a positive body image, unconditional acceptance from others, religion/spirituality, listening to and taking care of the body, managing secondary complications, minimizing pain, and respect. Interestingly, there was consistency in positive body image characteristics reported in this study with those found in previous research, demonstrating universality of positive body image. However, unique characteristics (e.g., resilience, functional gains, independence) were also reported demonstrating the importance of exploring positive body image in diverse groups.",
"title": ""
},
{
"docid": "3227e141d4572b58214585c5047a9b8b",
"text": "Post-natal ontogenetic variation of the marmot mandible and ventral cranium is investigated in two species of the subgenus Petromarmota (M. caligata, M. flaviventris) and four species of the subgenus Marmota (M. caudata, M. himalayana, M. marmota, M. monax). Relationships between size and shape are analysed using geometric morphometric techniques. Sexual dimorphism is negligible, allometry explains the main changes in shape during growth, and males and females manifest similar allometric trajectories. Anatomical regions affected by size-related shape variation are similar in different species, but allometric trajectories are divergent. The largest modifications of the mandible and ventral cranium occur in regions directly involved in the mechanics of mastication. Relative to other anatomical regions, the size of areas of muscle insertion increases, while the size of sense organs, nerves and teeth generally decreases. Epigenetic factors, developmental constraints and size variation were found to be the major contributors in producing the observed allometric patterns. A phylogenetic signal was not evident in the comparison of allometric trajectories, but traits that allow discrimination of the Palaearctic marmots from the Nearctic species of Petromarmota are present early in development and are conserved during post-natal ontogeny.",
"title": ""
},
{
"docid": "24f68da70b879cc74b00e2bc9cae6f96",
"text": "This paper presents the power management scheme for a power electronics based low voltage microgrid in islanding operation. The proposed real and reactive power control is based on the virtual frequency and voltage frame, which can effectively decouple the real and reactive power flows and improve the system transient and stability performance. Detailed analysis of the virtual frame operation range is presented, and a control strategy to guarantee that the microgrid can be operated within the predetermined voltage and frequency variation limits is also proposed. Moreover, a reactive power control with adaptive voltage droop method is proposed, which automatically updates the maximum reactive power limit of a DG unit based on its current rating and actual real power output and features enlarged power output range and further improved system stability. Both simulation and experimental results are provided in this paper.",
"title": ""
},
{
"docid": "94b86e9d3f82fa070f24958590f3fefc",
"text": "In this paper, we utilize results from convex analysis and monotone operator theory to derive additional properties of the softmax function that have not yet been covered in the existing literature. In particular, we show that the softmax function is the monotone gradient map of the log-sum-exp function. By exploiting this connection, we show that the inverse temperature parameter determines the Lipschitz and co-coercivity properties of the softmax function. We then demonstrate the usefulness of these properties through an application in game-theoretic reinforcement learning.",
"title": ""
},
{
"docid": "9131f56c00023a3402b602940be621bb",
"text": "Location estimation of a wireless capsule endoscope at 400 MHz MICS band is implemented here using both RSSI and TOA-based techniques and their performance investigated. To improve the RSSI-based location estimation, a maximum likelihood (ML) estimation method is employed. For the TOA-based localization, FDTD coupled with continuous wavelet transform (CWT) is used to estimate the time of arrival and localization is performed using multilateration. The performances of the proposed localization algorithms are evaluated using a computational heterogeneous biological tissue phantom in the 402MHz-405MHz MICS band. Our investigations reveal that the accuracy obtained by TOA based method is superior to RSSI based estimates. It has been observed that the ML method substantially improves the accuracy of the RSSI-based location estimation.",
"title": ""
},
{
"docid": "2e8e41dd1bfdf4b7fd7beb946def43dc",
"text": "Body image disturbance in anorexia nervosa (AN) has been widely studied with regard to the patient’s own body, but little is known about perception of or attitude towards other women’s bodies in AN. The aim of the present study was to investigate how 20 girls aged 12–18 years and 19 adult women suffering from AN compared to 37 healthy adolescent girls and women estimate weight and attractiveness of women’s bodies belonging to different BMI categories (BMI 13.8–61.3 kg/m²). Weight and attractiveness ratings of the participant’s own body and information on physical comparisons were obtained, and effects on others’ weight and attractiveness ratings investigated. Differential evaluation processes were found: AN patients estimated other women’s weight higher than control participants. Patients showed a bias towards assessing extremely underweight women as more attractive and normal weight and overweight women as less attractive than healthy girls and women. These effects were more pronounced in adult than in adolescent AN patients. The tendency to engage in physical comparison with others significantly correlated with weight as well as attractiveness ratings in patients. A logistic regression model encompassing own attractiveness ratings, attractiveness bias towards strongly underweight others’ bodies and the interaction of this bias with age as predictors differentiated best between AN patients and controls. Our results indicate that females suffering from AN and healthy girls and women perceive other women’s bodies differently. Assessment of others’ weight and attractiveness may contribute to the maintenance of dysfunctional physical comparison processes.",
"title": ""
},
{
"docid": "55158927c639ed62b53904b97a0f7a97",
"text": "Speech comprehension and production are governed by control processes. We explore their nature and dynamics in bilingual speakers with a focus on speech production. Prior research indicates that individuals increase cognitive control in order to achieve a desired goal. In the adaptive control hypothesis we propose a stronger hypothesis: Language control processes themselves adapt to the recurrent demands placed on them by the interactional context. Adapting a control process means changing a parameter or parameters about the way it works (its neural capacity or efficiency) or the way it works in concert, or in cascade, with other control processes (e.g., its connectedness). We distinguish eight control processes (goal maintenance, conflict monitoring, interference suppression, salient cue detection, selective response inhibition, task disengagement, task engagement, opportunistic planning). We consider the demands on these processes imposed by three interactional contexts (single language, dual language, and dense code-switching). We predict adaptive changes in the neural regions and circuits associated with specific control processes. A dual-language context, for example, is predicted to lead to the adaptation of a circuit mediating a cascade of control processes that circumvents a control dilemma. Effective test of the adaptive control hypothesis requires behavioural and neuroimaging work that assesses language control in a range of tasks within the same individual.",
"title": ""
},
{
"docid": "eef5a8800ccb046a6995c943ff97e25e",
"text": "In this poster paper we introduce the RASH Online Conversion Service, i.e., a Web application that allows the conversion of ODT documents into RASH, a HTML-based markup language for writing scholarly articles, and from RASH into LaTeX according to Springer LNCS and ACM ICPS.",
"title": ""
}
] |
scidocsrr
|
1a651f6fede7922bac8ae6541df89a27
|
Schottky diode rectifier for power harvesting application
|
[
{
"docid": "aa9450cdbdb1162015b4d931c32010fb",
"text": "The design of a low-cost rectenna for low-power applications is presented. The rectenna is designed with the use of analytical models and closed-form analytical expressions. This allows for a fast design of the rectenna system. To acquire a small-area rectenna, a layered design is proposed. Measurements indicate the validity range of the analytical models.",
"title": ""
}
] |
[
{
"docid": "5d2a9544a5a3fd2ae37bf82b3fb362c0",
"text": "Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology.",
"title": ""
},
{
"docid": "54537c242bc89fbf15d9191be80c5073",
"text": "In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals. We study this problem in a relational setting and make the following contributions. First, we compare two different notions of relational marginals. Second, we show a duality between the resulting relational marginal problems and the maximum likelihood estimation of the parameters of relational models, which generalizes a well-known duality from the propositional setting. Third, by exploiting the relational marginal formulation, we present a statistically sound method to learn the parameters of relational models that will be applied in settings where the number of constants differs between the training and test data. Furthermore, based on a relational generalization of marginal polytopes, we characterize cases where the standard estimators based on feature’s number of true groundings needs to be adjusted and we quantitatively characterize the consequences of these adjustments. Fourth, we prove bounds on expected errors of the estimated parameters, which allows us to lower-bound, among other things, the effective sample size of relational training data.",
"title": ""
},
{
"docid": "b5faf8e1b68a07b5f41db7bc751bbcbe",
"text": "Exploration of structurally novel natural products greatly facilitates the discovery of biologically active pharmacophores that are biologically validated starting points for the development of new drugs. Endophytes that colonize the internal tissues of plant species, have been proven to produce a large number of structurally diverse secondary metabolites. These molecules exhibit remarkable biological activities, including antimicrobial, anticancer, anti-inflammatory and antiviral properties, to name but a few. This review surveys the structurally diverse natural products with new carbon skeletons, unusual ring systems, or rare structural moieties that have been isolated from endophytes between 1996 and 2016. It covers their structures and bioactivities. Biosynthesis and/or total syntheses of some important compounds are also highlighted. Some novel secondary metabolites with marked biological activities might deserve more attention from chemists and biologists in further studies.",
"title": ""
},
{
"docid": "571e2d2fcb55f16513a425b874102f69",
"text": "Distributed word representations have a rising interest in NLP community. Most of existing models assume only one vector for each individual word, which ignores polysemy and thus degrades their effectiveness for downstream tasks. To address this problem, some recent work adopts multiprototype models to learn multiple embeddings per word type. In this paper, we distinguish the different senses of each word by their latent topics. We present a general architecture to learn the word and topic embeddings efficiently, which is an extension to the Skip-Gram model and can model the interaction between words and topics simultaneously. The experiments on the word similarity and text classification tasks show our model outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "cf52c02c9aa4ca9274911f0098d6cb89",
"text": "The individuation of areas that are more likely to be impacted by new events in volcanic regions is of fundamental relevance for mitigating possible consequences, both in terms of loss of human lives and material properties. For this purpose, the lava flow hazard maps are increasingly used to evaluate, for each point of a map, the probability of being impacted by a future lava event. Typically, these maps are computed by relying on an adequate knowledge about the volcano, assessed by an accurate analysis of its past behavior, together with the explicit simulation of thousands of hypothetical events, performed by a reliable computational model. In this paper, General-Purpose Computation with Graphics Processing Units (GPGPU) is applied, in conjunction with the SCIARA lava flow Cellular Automata model, to the process of building the lava invasion maps. Using different GPGPU devices, the paper illustrates some different implementation strategies and discusses numerical results obtained for a case study at Mt. Etna (Italy), Europe’s most active volcano.",
"title": ""
},
{
"docid": "64306a76b61bbc754e124da7f61a4fbe",
"text": "For over 50 years, electron beams have been an important modality for providing an accurate dose of radiation to superficial cancers and disease and for limiting the dose to underlying normal tissues and structures. This review looks at many of the important contributions of physics and dosimetry to the development and utilization of electron beam therapy, including electron treatment machines, dose specification and calibration, dose measurement, electron transport calculations, treatment and treatment-planning tools, and clinical utilization, including special procedures. Also, future changes in the practice of electron therapy resulting from challenges to its utilization and from potential future technology are discussed.",
"title": ""
},
{
"docid": "999eda741a3c132ac8640e55721b53bb",
"text": "This paper presents an overview of color and texture descriptors that have been approved for the Final Committee Draft of the MPEG-7 standard. The color and texture descriptors that are described in this paper have undergone extensive evaluation and development during the past two years. Evaluation criteria include effectiveness of the descriptors in similarity retrieval, as well as extraction, storage, and representation complexities. The color descriptors in the standard include a histogram descriptor that is coded using the Haar transform, a color structure histogram, a dominant color descriptor, and a color layout descriptor. The three texture descriptors include one that characterizes homogeneous texture regions and another that represents the local edge distribution. A compact descriptor that facilitates texture browsing is also defined. Each of the descriptors is explained in detail by their semantics, extraction and usage. Effectiveness is documented by experimental results.",
"title": ""
},
{
"docid": "ee8a54ee9cd0b3c9a57d8c5ae2b237c2",
"text": "Relatively little is known about how commodity consumption amongst African-Americans affirms issues of social organization within society. Moreover, the lack of primary documentation on the attitudes of African-American (A-A) commodity consumers contributes to the distorting image of A-A adolescents who actively engage in name-brand sneaker consumption; consequently maintaining the stigma of A-A adolescents being ‘addicted to brands’ (Chin, 2001). This qualitative study sought to employ the attitudes of African-Americans from an urban/metropolitan high school in dialogue on the subject of commodity consumption; while addressing the concepts of structure and agency with respect to name-brand sneaker consumption. Additionally, this study integrated three theoretical frameworks that were used to assess the participants’ engagement as consumers of name-brand sneakers. Through a focus group and analysis of surveys, it was discovered that amongst the African-American adolescent population, sneaker consumption imparted a means of attaining a higher socio-economic status, while concurrently providing an outlet for ‘acting’ as agents within the constraints of a constructed social structure. This study develops a practical method of analyzing several issues within commodity consumption, specifically among African-American adolescents. Prior to an empirical application of several theoretical frameworks, the researcher assessed the role of sneaker production as it predates sneaker consumption. Labor-intensive production of name-brand footwear is almost exclusively located in Asia (Vanderbilt, 1998), and has become the formula for efficient, profitable production in name-brand sneaker factories. Moreover, the production of such footwear is controlled by the demand for commodified products in the global economy. Southeast Asian manufacturing facilities owned by popular athletic footwear companies generate between $830 million and $5 billion a year from sneaker consumption (Vanderbilt, 1998). The researcher asks, What are the characteristics that determine the role of African-American consumers within the name-brand sneaker industry? The manner in which athletic name-brand footwear is consumed is a process that is directly associated with the social satisfaction of the consumer (Stabile, 2000). In this study, the researcher investigated the attitudes of adolescents towards name-brand sneaker consumption and production in order to determine how their perceived socioeconomic status affected by their consumption. Miller (2002) suggests that the consumption practices of young African-Americans present a central understanding of the act of consumption itself. While an analysis of consumption is vital in determining how and to whom a product is marketed Chin (2001), whose argument will be discussed further into this study, McNair ScholarS JourNal • VoluMe 8 111 explicates that (commodity) consumption is significant because it provides an understanding of the socially constructed society in which economically disadvantaged children are a part of.",
"title": ""
},
{
"docid": "0801dc8a870053ba36c0db9d25314cfb",
"text": "Crowdsourcing is a new emerging distributed computing and business model on the backdrop of Internet blossoming. With the development of crowdsourcing systems, the data size of crowdsourcers, contractors and tasks grows rapidly. The worker quality evaluation based on big data analysis technology has become a critical challenge. This paper first proposes a general worker quality evaluation algorithm that is applied to any critical tasks such as tagging, matching, filtering, categorization and many other emerging applications, without wasting resources. Second, we realize the evaluation algorithm in the Hadoop platform using the MapReduce parallel programming model. Finally, to effectively verify the accuracy and the effectiveness of the algorithm in a wide variety of big data scenarios, we conduct a series of experiments. The experimental results demonstrate that the proposed algorithm is accurate and effective. It has high computing performance and horizontal scalability. And it is suitable for large-scale worker quality evaluations in a big data environment.",
"title": ""
},
{
"docid": "0307e707eed7ba6d84683ec16ee6773d",
"text": "We prove an unconditional lower bound that any linear program that achieves an O(n1-ε) approximation for clique has size 2Ω(nε). There has been considerable recent interest in proving unconditional lower bounds against any linear program. Fiorini et al. proved that there is no polynomial sized linear program for traveling salesman. Braun et al. proved that there is no polynomial sized O(n1/2 - ε)-approximate linear program for clique. Here we prove an optimal and unconditional lower bound against linear programs for clique that matches Hastad's celebrated hardness result. Interestingly, the techniques used to prove such lower bounds have closely followed the progression of techniques used in communication complexity. Here we develop an information theoretic framework to approach these questions, and we use it to prove our main result. Also we resolve a related question: How many bits of communication are needed to get ε-advantage over random guessing for disjointness? Kalyanasundaram and Schnitger proved that a protocol that gets constant advantage requires Ω(n) bits of communication. This result in conjunction with amplification implies that any protocol that gets ε-advantage requires Ω(ε2 n) bits of communication. Here we improve this bound to Ω(ε n), which is optimal for any ε > 0.",
"title": ""
},
{
"docid": "e59b203f3b104553a84603240ea467eb",
"text": "Experimental art deployed in the Augmented Reality (AR) medium is contributing to a reconfiguration of traditional perceptions of interface, audience participation, and perceptual experience. Artists, critical engineers, and programmers, have developed AR in an experimental topology that diverges from both industrial and commercial uses of the medium. In a general technical sense, AR is considered as primarily an information overlay, a datafied window that situates virtual information in the physical world. In contradistinction, AR as experimental art practice activates critical inquiry, collective participation, and multimodal perception. As an emergent hybrid form that challenges and extends already established 'fine art' categories, augmented reality art deployed on Portable Media Devices (PMD’s) such as tablets & smartphones fundamentally eschews models found in the conventional 'art world.' It should not, however, be considered as inscribing a new 'model:' rather, this paper posits that the unique hybrids advanced by mobile augmented reality art–– also known as AR(t)–– are closely related to the notion of the 'machinic assemblage' ( Deleuze & Guattari 1987), where a deep capacity to re-assemble marks each new artevent. This paper develops a new formulation, the 'software assemblage,’ to explore some of the unique mixed reality situations that AR(t) has set in motion.",
"title": ""
},
{
"docid": "6089388e6baf7177db7f51e3c8f94be4",
"text": "Lean approaches to product development (LPD) have had a strong influence on many industries and in recent years there have been many proponents for lean in software development as it can support the increasing industry need of scaling agile software development. With it's roots in industrial manufacturing and, later, industrial product development, it would seem natural that LPD would adapt well to large-scale development projects of increasingly software-intensive products, such as in the automotive industry. However, it is not clear what kind of experience and results have been reported on the actual use of lean principles and practices in software development for such large-scale industrial contexts. This was the motivation for this study as the context was an ongoing industry process improvement project at Volvo Car Corporation and Volvo Truck Corporation. The objectives of this study are to identify and classify state of the art in large-scale software development influenced by LPD approaches and use this established knowledge to support industrial partners in decisions on a software process improvement (SPI) project, and to reveal research gaps and proposed extensions to LPD in relation to its well-known principles and practices. For locating relevant state of the art we conducted a systematic mapping study, and the industrial applicability and relevance of results and said extensions to LPD were further analyzed in the context of an actual, industrial case. A total of 10,230 papers were found in database searches, of which 38 papers were found relevant. Of these, only 42 percent clearly addressed large-scale development. Furthermore, a majority of papers (76 percent) were non-empirical and many lacked information about study design, context and/or limitations. Most of the identified results focused on eliminating waste and creating flow in the software development process, but there was a lack of results for other LPD principles and practices. Overall, it can be concluded that research in the much hyped field of lean software development is in its nascent state when it comes to large scale development. There is very little support available for practitioners who want to apply lean approaches for improving large-scale software development, especially when it comes to inter-departmental interactions during development. This paper explicitly maps the area, qualifies available research, and identifies gaps, as well as suggests extensions to lean principles relevant for large scale development of software intensive systems.",
"title": ""
},
{
"docid": "ab6238c3fc84540f124ebdb7390882b7",
"text": "ImageCLEF is the image retrieval task of the Conference and Labs of the Evaluation Forum (CLEF). ImageCLEF has historically focused on the multimodal and language-independent retrieval of images. Many tasks are related to image classification and the annotation of image data as well as the retrieval of images. The tuberculosis task was held for the first time in 2017 and had a very encouraging participation with 9 groups submitting results to these very challenging tasks. Two tasks were proposed around tuberculosis: (1) the classification of the cases into five types of tuberculosis and (2) the detection of drug resistances among tuberculosis cases. Many different techniques were used by the participants ranging from Deep Learning to graph-based approaches and best results were obtained by a large variety of approaches. The prediction of tuberculosis types had relatively good performance but the detection of drug resistances remained a very difficult task. More research into this seems necessary.",
"title": ""
},
{
"docid": "7dd15be3097961436e7130a74037b689",
"text": "We present an approach to effectively use millions of images with noisy annotations in conjunction with a small subset of cleanly-annotated images to learn powerful image representations. One common approach to combine clean and noisy data is to first pre-train a network using the large noisy dataset and then fine-tune with the clean dataset. We show this approach does not fully leverage the information contained in the clean set. Thus, we demonstrate how to use the clean annotations to reduce the noise in the large dataset before fine-tuning the network using both the clean set and the full set with reduced noise. The approach comprises a multi-task network that jointly learns to clean noisy annotations and to accurately classify images. We evaluate our approach on the recently released Open Images dataset, containing ~9 million images, multiple annotations per image and over 6000 unique classes. For the small clean set of annotations we use a quarter of the validation set with ~40k images. Our results demonstrate that the proposed approach clearly outperforms direct fine-tuning across all major categories of classes in the Open Image dataset. Further, our approach is particularly effective for a large number of classes with wide range of noise in annotations (20-80% false positive annotations).",
"title": ""
},
{
"docid": "a43e646ee162a23806c3b8f0a9d69b23",
"text": "This paper describes the results of the ICDAR 2005 competition for locating text in camera captured scenes. For this we used the same data as the ICDAR 2003 competition, which has been kept private until now. This allows a direct comparison with the 2003 entries. The main result is that the leading 2005 entry has improved significantly on the leading 2003 entry, with an increase in average f-score from 0.5 to 0.62, where the f-score is the same adapted information retrieval measure used for the 2003 competition. The paper also discusses the Web-based deployment and evaluation of text locating systems, and one of the leading entries has now been deployed in this way. This mode of usage could lead to more complete and more immediate knowledge of the strengths and weaknesses of each newly developed system.",
"title": ""
},
{
"docid": "77a92d896da31390bb0bd0c593361c6b",
"text": "Non-inflammatory cystic lesions of the pancreas are increasingly recognized. Two distinct entities have been defined, i.e., intraductal papillary mucinous neoplasm (IPMN) and mucinous cystic neoplasm (MCN). Ovarian-type stroma has been proposed as a requisite to distinguish MCN from IPMN. Some other distinct features to characterize IPMN and MCN have been identified, but there remain ambiguities between the two diseases. In view of the increasing frequency with which these neoplasms are being diagnosed worldwide, it would be helpful for physicians managing patients with cystic neoplasms of the pancreas to have guidelines for the diagnosis and treatment of IPMN and MCN. The proposed guidelines represent a consensus of the working group of the International Association of Pancreatology.",
"title": ""
},
{
"docid": "28aa6f270e578881abb710ca2ddb904d",
"text": "An implantable real-time blood pressure monitoring microsystem for laboratory mice has been demonstrated. The system achieves a 10-bit blood pressure sensing resolution and can wirelessly transmit the pressure information to an external unit. The implantable device is operated in a batteryless manner, powered by an external RF power source. The received RF power level can be sensed and wirelessly transmitted along with blood pressure signal for feedback control of the external RF power. The microsystem employs an instrumented silicone cuff, wrapped around a blood vessel with a diameter of approximately 200 ¿m, for blood pressure monitoring. The cuff is filled by low-viscosity silicone oil with an immersed MEMS capacitive pressure sensor and integrated electronic system to detect a down-scaled vessel blood pressure waveform with a scaling factor of approximately 0.1. The integrated electronic system, consisting of a capacitance-to-voltage converter, an 11-bit ADC, an adaptive RF powering system, an oscillator-based 433 MHz FSK transmitter and digital control circuitry, is fabricated in a 1.5 ¿m CMOS process and dissipates a power of 300 ¿W. The packaged microsystem weighs 130 milligram and achieves a capacitive sensing resolution of 75 aF over 1 kHz bandwidth, equivalent to a pressure sensing resolution of 1 mmHg inside an animal vessel, with a dynamic range of 60 dB. Untethered laboratory animal in vivo evaluation demonstrates that the microsystem can capture real-time blood pressure information with a high fidelity under an adaptive RF powering and wireless data telemetry condition.",
"title": ""
},
{
"docid": "55f28d1e53668160a891db9c59cd13d0",
"text": "Interventions: Patients initially received nebulized albuterol treatment driven by 100% oxygen. Patients were randomized to the helium-oxygen or oxygen group and received nebulized racemic epinephrine via a face mask. After nebulization, humidified helium-oxygen or oxygen was delivered by HFNC. After 60 minutes of inhalation therapy, patients with an M-WCAS of 2 or higher received a second delivery of nebulized racemic epinephrine followed by helium-oxygen or oxygen delivered by HFNC. Main Outcome Measure: Degree of improvement of M-WCAS for 240 minutes or until emergency department discharge.",
"title": ""
},
{
"docid": "03a6656158a24606ee4ad6be0592e850",
"text": "It is well known that earthquakes are a regional event, strongly controlled by local geological structures and circumstances. Reducing the research area can reduce the influence of other irrelevant seismotectonics. A new sub regiondividing scheme, considering the seismotectonics influence, was applied for the artificial neural network (ANN) earthquake prediction model in the northeast seismic region of China (NSRC). The improved set of input parameters and prediction time duration are also discussed in this work. The new dividing scheme improved the prediction accuracy for different prediction time frames. Three different research regions were analyzed as an earthquake data source for the ANN model under different prediction time duration frames. The results show: (1) dividing the research region into smaller subregions can improve the prediction accuracies in NSRC, (2) larger research regions need shorter prediction durations to obtain better performance, (3) different areas have different sets of input parameters in NSRC, and (4) the dividing scheme, considering the seismotectonics frame of the region, yields better results.",
"title": ""
}
] |
scidocsrr
|
809feafa92616fb30750824291345673
|
Lens distortion correction using ideal image coordinates
|
[
{
"docid": "1b8d9c6a498821823321572a5055ecc3",
"text": "The objective of stereo camera calibration is to estimate the internal and external parameters of each camera. Using these parameters, the 3-D position of a point in the scene, which is identified and matched in two stereo images, can be determined by the method of triangulation. In this paper, we present a camera model that accounts for major sources of camera distortion, namely, radial, decentering, and thin prism distortions. The proposed calibration procedure consists of two steps. In the first step, the calibration parameters are estimated using a closed-form solution based on a distortion-free camera model. In the second step, the parameters estimated in the first step are improved iteratively through a nonlinear optimization, taking into account camera distortions. According to minimum variance estimation, the objective function to be minimized is the mean-square discrepancy between the observed image points and their inferred image projections computed with the estimated calibration parameters. We introduce a type of measure that can be used to directly evaluate the performance of calibration and compare calibrations among different systems. The validity and performance of our calibration procedure are tested with both synthetic data and real images taken by teleand wide-angle lenses. The results consistently show significant improvements over less complete camera models.",
"title": ""
},
{
"docid": "62f4c947cae38cc7071b87597b54324a",
"text": "A bugbear of uncalibrated stereo reconstruction is that cameras which deviate from the pinhole model have to be pre-calibrated in order to correct for nonlinear lens distortion. If they are not, and point correspondence is attempted using the uncorrected images, the matching constraints provided by the fundamental matrix must be set so loose that point matching is significantly hampered. This paper shows how linear estimation of the fundamental matrix from two-view point correspondences may be augmented to include one term of radial lens distortion. This is achieved by (1) changing from the standard radiallens model to another which (as we show) has equivalent power, but which takes a simpler form in homogeneous coordinates, and (2) expressing fundamental matrix estimation as a Quadratic Eigenvalue Problem (QEP), for which efficient algorithms are well known. I derive the new estimator, and compare its performance against bundle-adjusted calibration-grid data. The new estimator is fast enough to be included in a RANSAC-based matching loop, and we show cases of matching being rendered possible by its use. I show how the same lens can be calibrated in a natural scene where the lack of straight lines precludes most previous techniques. The modification when the multi-view relation is a planar homography or trifocal tensor is described.",
"title": ""
}
] |
[
{
"docid": "3a71dd4c8d9e1cf89134141cfd97023e",
"text": "We introduce a novel solid modeling framework taking advantage of the architecture of parallel computing onmodern graphics hardware. Solidmodels in this framework are represented by an extension of the ray representation — Layered Depth-Normal Images (LDNI), which inherits the good properties of Boolean simplicity, localization and domain decoupling. The defect of ray representation in computational intensity has been overcome by the newly developed parallel algorithms running on the graphics hardware equipped with Graphics Processing Unit (GPU). The LDNI for a solid model whose boundary is representedby a closedpolygonalmesh canbe generated efficientlywith thehelp of hardware accelerated sampling. The parallel algorithm for computing Boolean operations on two LDNI solids runs well on modern graphics hardware. A parallel algorithm is also introduced in this paper to convert LDNI solids to sharp-feature preserved polygonal mesh surfaces, which can be used in downstream applications (e.g., finite element analysis). Different from those GPU-based techniques for rendering CSG-tree of solid models Hable and Rossignac (2007, 2005) [1,2], we compute and store the shape of objects in solid modeling completely on graphics hardware. This greatly eliminates the communication bottleneck between the graphics memory and the main memory. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "24743c98daddd3bc733921c643e723b9",
"text": "In this work, inspired by two different approaches available in the literature, we present two ways of nonlinear control for the attitude of a quadrotor unmanned aerial vehicle (UAV) : the first one is based on backstepping and the second one is developed directly on the special orthogonal group, SO(3), using the Lyapunov stability theory. In order to prove the advantages of these nonlinear controllers, they will be compared with a proporcional derivative (PD) and a linear quadratic regulator (LQR) controllers, which are the typical solutions for controlling the quadrotor attitude. About the attitude estimation, a set of sensors composed by a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer will be used and several estimators based on the Kalman Filter will be studied. Once the full model is developed (made up of the quadrotor motion, actuators and sensors models) and a simulator is built, two levels of control will be implemented in a cascade control configuration: a low level control, for stabilizing or tracking attitude and altitude, and a high level control (by means of an horizontal guidance controller) for tracking a desired path in an horizontal plane. Our simulation shows that the PD controller is not very reliable working with estimators, and that the nonlinear controllers present the best performace, although the LQR controller has also a quite acceptable behaviour.",
"title": ""
},
{
"docid": "634b4d0d8dd5e7f49986c75cb07b1822",
"text": "Handwriting of Chinese has long been an important skill in East Asia. However, automatic generation of handwritten Chinese characters poses a great challenge due to the large number of characters. Various machine learning techniques have been used to recognize Chinese characters, but few works have studied the handwritten Chinese character generation problem, especially with unpaired training data. In this work, we formulate the Chinese handwritten character generation as a problem that learns a mapping from an existing printed font to a personalized handwritten style. We further propose DenseNet CycleGAN to generate Chinese handwritten characters. Our method is applied not only to commonly used Chinese characters but also to calligraphy work with aesthetic values. Furthermore, we propose content accuracy and style discrepancy as the evaluation metrics to assess the quality of the handwritten characters generated. We then use our proposed metrics to evaluate the generated characters from CASIA dataset as well as our newly introduced Lanting calligraphy dataset.",
"title": ""
},
{
"docid": "a910a28224ac10c8b4d2781a73849499",
"text": "The computing machine Z3, buHt by Konrad Zuse from 1938 to 1941, could only execute fixed sequences of floating-point arithmetical operations (addition, subtraction, multiplication, division and square root) coded in a punched tape. We show in this paper that a single program loop containing this type of instructions can simulate any Turing machine whose tape is of bounded size. This is achieved by simulating conditional branching and indirect addressing by purely arithmetical means. Zuse's Z3 is therefore, at least in principle, as universal as today's computers which have a bounded memory size. This result is achieved at the cost of blowing up the size of the program stored on punched tape. Universal Machines and Single Loops Nobody has ever built a universal computer. The reason is that a universal computer consists, in theory, of a fixed processor and a memory of unbounded size. This is the case of Turing machines with their unbounded tapes. In the theory of general recursive functions there is also a small set of rules and some predefined functions, but there is no upper bound on the size of intermediate reduction terms. Modern computers are only potentially universal: They can perform any computation that a Turing machine with a bounded tape can perform. If more storage is required, more can be added without having to modify the processor (provided that the extra memory is still addressable).",
"title": ""
},
{
"docid": "445487bf85f9731b94f79a8efc9d2830",
"text": "The realism of avatars in terms of behavior and form is critical to the development of collaborative virtual environments. In the study we utilized state of the art, real-time face tracking technology to track and render facial expressions unobtrusively in a desktop CVE. Participants in dyads interacted with each other via either a video-conference (high behavioral realism and high form realism), voice only (low behavioral realism and low form realism), or an emotibox that rendered the dimensions of facial expressions abstractly in terms of color, shape, and orientation on a rectangular polygon (high behavioral realism and low form realism). Verbal and non-verbal self-disclosure were lowest in the videoconference condition while self-reported copresence and success of transmission and identification of emotions were lowest in the emotibox condition. Previous work demonstrates that avatar realism increases copresence while decreasing self-disclosure. We discuss the possibility of a hybrid realism solution that maintains high copresence without lowering self-disclosure, and the benefits of such an avatar on applications such as distance learning and therapy.",
"title": ""
},
{
"docid": "4a29051479ac4b3ad7e7cd84540dbdb6",
"text": "A compact, shared-aperture antenna (SAA) configuration consisting of various planar antennas embedded into a single footprint is presented in this article. An L-probefed, suspended-plate, horizontally polarized antenna operating in an 900-MHz band; an aperture-coupled, vertically polarized, microstrip antenna operating at 4.2-GHz; a 2 × 2 microstrip patch array operating at the X band; a low-side-lobe level (SLL), corporate-fed, 8 × 4 microstrip planar array for synthetic aperture radar (SAR) in the X band; and a printed, single-arm, circularly polarized, tilted-beam spiral antenna operating at the C band are integrated into a single aperture for simultaneous operation. This antenna system could find potential application in many airborne and unmanned aircraft vehicle (UAV) technologies. While the design of these antennas is not that critical, their optimal placement in a compact configuration for simultaneous operation with minimal interference poses a significant challenge to the designer. The placement optimization was arrived at based on extensive numerical fullwave optimizations.",
"title": ""
},
{
"docid": "fd0a441610f5aef8aa29edd469dcf88a",
"text": "We treat with tools from convex analysis the general problem of cutting planes, separating a point from a (closed convex) set P . Crucial for this is the computation of extreme points in the so-called reverse polar set, introduced by E. Balas in 1979. In the polyhedral case, this enables the computation of cuts that define facets of P . We exhibit three (equivalent) optimization problems to compute such extreme points; one of them corresponds to selecting a specific normalization to generate cuts. We apply the above development to the case where P is (the closed convex hull of) a union, and more particularly a union of polyhedra (case of disjunctive cuts). We conclude with some considerations on the design of efficient cut generators. The paper also contains an appendix, reviewing some fundamental concepts of convex analysis.",
"title": ""
},
{
"docid": "dc817bc11276d76f8d97f67e4b1b2155",
"text": "Abstract A Security Operation Center (SOC) is made up of five distinct modules: event generators, event collectors, message database, analysis engines and reaction management software. The main problem encountered when building a SOC is the integration of all these modules, usually built as autonomous parts, while matching availability, integrity and security of data and their transmission channels. In this paper we will discuss the functional architecture needed to integrate those modules. Chapter one will introduce the concepts behind each module and briefly describe common problems encountered with each of them. In chapter two we will design the global architecture of the SOC. We will then focus on collection & analysis of data generated by sensors in chapters three and four. A short conclusion will describe further research & analysis to be performed in the field of SOC design.",
"title": ""
},
{
"docid": "012a194f9296a510f209e0cd33f2f3da",
"text": "Virtual reality is the use of interactive simulations to present users with opportunities to perform in virtual environments that appear, sound, and less frequently, feel similar to real-world objects and events. Interactive computer play refers to the use of a game where a child interacts and plays with virtual objects in a computer-generated environment. Because of their distinctive attributes that provide ecologically realistic and motivating opportunities for active learning, these technologies have been used in pediatric rehabilitation over the past 15 years. The ability of virtual reality to create opportunities for active repetitive motor/sensory practice adds to their potential for neuroplasticity and learning in individuals with neurologic disorders. The objectives of this article is to provide an overview of how virtual reality and gaming are used clinically, to present the results of several example studies that demonstrate their use in research, and to briefly remark on future developments.",
"title": ""
},
{
"docid": "f38530be19fc66121fbce56552ade0ea",
"text": "A fully integrated low-dropout-regulated step-down multiphase-switched-capacitor DC-DC converter (a.k.a. charge pump, CP) with a fast-response adaptive-phase (Fast-RAP) digital controller is designed using a 65-nm CMOS process. Different from conventional designs, a low-dropout regulator (LDO) with an NMOS power stage is used without the need for an additional stepup CP for driving. A clock tripler and a pulse divider are proposed to enable the Fast-RAP control. As the Fast-RAP digital controller is designed to be able to respond faster than the cascaded linear regulator, transient response will not be affected by the adaptive scheme. Thus, light-load efficiency is improved without sacrificing the response time. When the CP operates at 90 MHz with 80.3% CP efficiency, only small ripples would appear on the CP output with the 18-phase interleaving scheme, and be further attenuated at VOUT by the 50-mV dropout regulator with only 4.1% efficiency overhead and 6.5% area overhead. The output ripple is less than 2 mV for a load current of 20 mA.",
"title": ""
},
{
"docid": "0a72b41ded091d150b6e92f7cc0180ca",
"text": "Large scale nonlinear support vector machines (SVMs) can be approximated by linear ones using a suitable feature map. The linear SVMs are in general much faster to learn and evaluate (test) than the original nonlinear SVMs. This work introduces explicit feature maps for the additive class of kernels, such as the intersection, Hellinger's, and χ2 kernels, commonly used in computer vision, and enables their use in large scale problems. In particular, we: 1) provide explicit feature maps for all additive homogeneous kernels along with closed form expression for all common kernels; 2) derive corresponding approximate finite-dimensional feature maps based on a spectral analysis; and 3) quantify the error of the approximation, showing that the error is independent of the data dimension and decays exponentially fast with the approximation order for selected kernels such as χ2. We demonstrate that the approximations have indistinguishable performance from the full kernels yet greatly reduce the train/test times of SVMs. We also compare with two other approximation methods: Nystrom's approximation of Perronnin et al. [1], which is data dependent, and the explicit map of Maji and Berg [2] for the intersection kernel, which, as in the case of our approximations, is data independent. The approximations are evaluated on a number of standard data sets, including Caltech-101 [3], Daimler-Chrysler pedestrians [4], and INRIA pedestrians [5].",
"title": ""
},
{
"docid": "c8b1a1af92123f29874bf9a4d308b7ec",
"text": "We present a novel method to summarize unconstrained videos using salient montages (i.e., a “melange” of frames in the video as shown in <xref ref-type=\"fig\" rid=\"fig1\">Fig. 1</xref>, by finding “montageable moments” and identifying the salient people and actions to depict in each montage. Our method aims at addressing the increasing need for generating concise visualizations from the large number of videos being captured from portable devices. Our main contributions are (1) the process of finding salient people and moments to form a montage, and (2) the application of this method to videos taken “in the wild” where the camera moves freely. As such, we demonstrate results on head-mounted cameras, where the camera moves constantly, as well as on videos downloaded from YouTube. In our experiments, we show that our method can reliably detect and track humans under significant action and camera motion. Moreover, the predicted salient people are more accurate than results from state-of-the-art video salieny method <xref ref-type=\"bibr\" rid=\"ref1\">[1]</xref> . Finally, we demonstrate that a novel “montageability” score can be used to retrieve results with relatively high precision which allows us to present high quality montages to users.",
"title": ""
},
{
"docid": "122a27336317372a0d84ee353bb94a4b",
"text": "Recently, many advanced machine learning approaches have been proposed for coreference resolution; however, all of the discriminatively-trained models reason over mentions rather than entities. That is, they do not explicitly contain variables indicating the “canonical” values for each attribute of an entity (e.g., name, venue, title, etc.). This canonicalization step is typically implemented as a post-processing routine to coreference resolution prior to adding the extracted entity to a database. In this paper, we propose a discriminatively-trained model that jointly performs coreference resolution and canonicalization, enabling features over hypothesized entities. We validate our approach on two different coreference problems: newswire anaphora resolution and research paper citation matching, demonstrating improvements in both tasks and achieving an error reduction of up to 62% when compared to a method that reasons about mentions only.",
"title": ""
},
{
"docid": "bdfb0ec2182434dad32049fa04f8c795",
"text": "This paper introduces a vision-based gesture mouse system, which is roughly independent from the lighting conditions, because it only uses the depth data for hand sign recognition. A Kinect sensor was used to develop the system, but other depth sensing cameras are adequate as well, if their resolutions are similar or better than the resolution of Kinect sensor. Our aim was to find a comfortable, user-friendly solution, which can be used for a long time without getting tired. The implementation of the system was developed in C++, and two types of test were performed too. We investigated how fast the user can position with the cursor and click on objects and we also examined which controls of the graphical user interfaces (GUI) are easy to use and which ones are difficult to use with our gesture mouse. Our system is precise enough to use efficiently most of the elements of traditional GUI such as buttons, icons, scrollbars, etc. The accuracy achieved by advanced users is only slightly below as if they used the traditional mouse.",
"title": ""
},
{
"docid": "0ca84e5ed06b21cb3110251068ac7bd3",
"text": "We present a wavelet-based, high performance, hierarchical scheme for image matching which includes (1) dynamic detection of interesting points as feature points at different levels of subband images via the wavelet transform, (2) adaptive thresholding selection based on compactness measures of fuzzy sets in image feature space, and (3) a guided searching strategy for the best matching from coarse level to fine level. In contrast to the traditional parallel approaches which rely on specialized parallel machines, we explored the potential of distributed systems for parallelism. The proposed image matching algorithms were implemented on a network of workstation clusters using parallel virtual machine (PVM). The results show that our wavelet-based hierarchical image matching scheme is efficient and effective for object recognition.",
"title": ""
},
{
"docid": "949a5da7e1a8c0de43dbcb7dc589851c",
"text": "Silicon photonics devices offer promising solution to meet the growing bandwidth demands of next-generation interconnects. This paper presents a 5 × 25 Gb/s carrier-depletion microring-based wavelength-division multiplexing (WDM) transmitter in 65 nm CMOS. An AC-coupled differential driver is proposed to realize 4 × VDD output swing as well as tunable DC-biasing. The proposed transmitter incorporates 2-tap asymmetric pre-emphasis to effectively cancel the optical nonlinearity of the ring modulator. An average-power-based dynamic wavelength stabilization loop is also demonstrated to compensate for thermal induced resonant wavelength drift. At 25 Gb/s operation, each transmitter channel consumes 113.5 mW and maintains 7 dB extinction ratio with a 4.4 V pp-diff output swing in the presence of thermal fluctuations.",
"title": ""
},
{
"docid": "a3d01e41454c47b29acf2e0d7e05b037",
"text": "Printed circuit board (PCB) implementations provide high repeatability and ease of manufacturing as well as potential cost savings for many applications. An optimization procedure is proposed for designing PCB inductors for use in domestic induction cooktops. Cables for classical inductors are built of multistranded round litz wire for efficiency reasons. A planar litz structure in a PCB has been applied to achieve a similar performance to that of traditional constructions (in terms of maximum output power, inductive efficiency, and thermal behavior). The inductor performance has been optimized by a finite-element-analysis-tool-based method taking into account the power losses in rectangular cross-sectional tracks of the cable as well as the geometrical constraints of PCB implementations and the requirements of the associated electronics. Finally, the proposed method has been validated by means of experimental measurements under large-signal conditions on a PCB inductor prototype.",
"title": ""
},
{
"docid": "610187e72a028a6287342ce814a2cf74",
"text": "The major aim of the present study is to emphasize the potential offered by Virtual Reality (VR) to develop new tools for research in experimental psychology. Despite several works have addressed cognitive, clinical and methodological issues concerning the application of this technology in psychological and neuro-psychological assessment and rehabilitation, there is a lack of discussion focusing on the role played by Virtual Reality and 3D computer graphics in experimental behaviour research. This chapter provides an introduction to the basic concepts and the historical background of experimental psychology along with a rationale for the application of Virtual Reality in this scientific discipline. In particular, the historical framework aims at emphasizing that the application of VR in experimental psychology represents the leading edge of the revolution that informatics has operated into the traditional psychology laboratory. We point out that the use of VR and Virtual Environments (VEs) as research tool might discover new methodological horizons for experimental psychology and that it has the potential to raise important questions concerning the nature of many psychological phenomena. In order to put the discussion on a concrete basis, we review the relevant literature regarding the application of VR to the main areas of psychological research, such as perception, memory, problem solving, mental imagery and attention. Finally, fundamental issues having important implications for the feasibility of a VR approach applied to psychological research are discussed.",
"title": ""
},
{
"docid": "8a0c295e620b68c07005d6d96d4acbe9",
"text": "One method of viral marketing involves seeding certain consumers within a population to encourage faster adoption of the product throughout the entire population. However, determining how many and which consumers within a particular social network should be seeded to maximize adoption is challenging. We define a strategy space for consumer seeding by weighting a combination of network characteristics such as average path length, clustering coefficient, and degree. We measure strategy effectiveness by simulating adoption on a Bass-like agent-based model, with five different social network structures: four classic theoretical models (random, lattice, small-world, and preferential attachment) and one empirical (extracted from Twitter friendship data). To discover good seeding strategies, we have developed a new tool, called BehaviorSearch, which uses genetic algorithms to search through the parameter-space of agent-based models. This evolutionary search also provides insight into the interaction between strategies and network structure. Our results show that one simple strategy (ranking by node degree) is near-optimal for the four theoretical networks, but that a more nuanced strategy performs significantly better on the empirical Twitter-based network. We also find a correlation between the optimal seeding budget for a network, and the inequality of the degree distribution.",
"title": ""
},
{
"docid": "a49058990cd1a68a4d7ac79dbf43e475",
"text": "In this paper we introduce a concept of syntactic n-grams (sn-grams). Sn-grams differ from traditional n-grams in the manner of what elements are considered neighbors. In case of sn-grams, the neighbors are taken by following syntactic relations in syntactic trees, and not by taking the words as they appear in the text. Dependency trees fit directly into this idea, while in case of constituency trees some simple additional steps should be made. Sn-grams can be applied in any NLP task where traditional n-grams are used. We describe how sn-grams were applied to authorship attribution. SVM classifier for several profile sizes was used. We used as baseline traditional n-grams of words, POS tags and characters. Obtained results are better when applying sn-grams.",
"title": ""
}
] |
scidocsrr
|
aff759e3a1f8a8da5a62a0fe6012608a
|
Semi-Automatic System for Testing Dielectric Properties of Low-Voltage Busbar
|
[
{
"docid": "5025766e66589289ccc31e60ca363842",
"text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.",
"title": ""
}
] |
[
{
"docid": "2a4eb6d12a50034b5318d246064cb86e",
"text": "In this paper, we study the 3D volumetric modeling problem by adopting the Wasserstein introspective neural networks method (WINN) that was previously applied to 2D static images. We name our algorithm 3DWINN which enjoys the same properties as WINN in the 2D case: being simultaneously generative and discriminative. Compared to the existing 3D volumetric modeling approaches, 3DWINN demonstrates competitive results on several benchmarks in both the generation and the classification tasks. In addition to the standard inception score, the Fréchet Inception Distance (FID) metric is also adopted to measure the quality of 3D volumetric generations. In addition, we study adversarial attacks for volumetric data and demonstrate the robustness of 3DWINN against adversarial examples while achieving appealing results in both classification and generation within a single model. 3DWINN is a general framework and it can be applied to the emerging tasks for 3D object and scene modeling.",
"title": ""
},
{
"docid": "9dd6d9f5643c4884e981676230f3ee66",
"text": "A rank-r matrix X ∈ Rm×n can be written as a product UV >, where U ∈ Rm×r and V ∈ Rn×r. One could exploit this observation in optimization: e.g., consider the minimization of a convex function f(X) over rank-r matrices, where the scaffold of rank-r matrices is modeled via the factorization in U and V variables. Such heuristic has been widely used before for specific problem instances, where the solution sought is (approximately) low-rank. Though such parameterization reduces the number of variables and is more efficient in computational speed and memory requirement (of particular interest is the case r min{m,n}), it comes at a cost: f(UV >) becomes a non-convex function w.r.t. U and V . In this paper, we study such parameterization in optimization of generic convex f and focus on first-order, gradient descent algorithmic solutions. We propose an algorithm we call the Bi-Factored Gradient Descent (BFGD) algorithm, an efficient first-order method that operates on the U, V factors. We show that when f is smooth, BFGD has local sublinear convergence, and linear convergence when f is both smooth and strongly convex. Moreover, for several key applications, we provide simple and efficient initialization schemes that provide approximate solutions good enough for the above convergence results to hold.",
"title": ""
},
{
"docid": "6bce7698f908721da38a3c6e6916a30e",
"text": "For learning in big datasets, the classification performance of ELM might be low due to input samples are not extracted features properly. To address this problem, the hierarchical extreme learning machine (H-ELM) framework was proposed based on the hierarchical learning architecture of multilayer perceptron. H-ELM composes of two parts; the first is the unsupervised multilayer encoding part and the second part is the supervised feature classification part. H-ELM can give higher accuracy rate than of the traditional ELM. However, it still has to enhance its classification performance. Therefore, this paper proposes a new method namely as the extending hierarchical extreme learning machine (EH-ELM). For the extended supervisor part of EH-ELM, we have got an idea from the two-layers extreme learning machine. To evaluate the performance of EH-ELM, three different image datasets; Semeion, MNIST, and NORB, were studied. The experimental results show that EH-ELM achieves better performance than of H-ELM and the other multi-layer framework.",
"title": ""
},
{
"docid": "f41e653e7e5bc694639634733e50d04b",
"text": "The private sector is often seen as a driver of exclusionary processes rather than a partner in improving the health and welfare of socially-excluded populations. However, private-sector initiatives and partnerships- collectively labelled corporate social responsibility (CSR) initiatives-may be able to positively impact social status, earning potential, and access to services and resources for socially-excluded populations. This paper presents case studies of CSR projects in Bangladesh that are designed to reduce social exclusion among marginalized populations and explores whether CSR initiatives can increase economic and social capabilities to reduce exclusion. The examples provide snapshots of projects that (a) increase job-skills and employment opportunities for women, disabled women, and rehabilitated drug-users and (b) provide healthcare services to female workers and their communities. The CSR case studies cover a limited number of people but characteristics and practices replicable and scaleable across different industries, countries, and populations are identified. Common success factors from the case studies form the basis for recommendations to design and implement more CSR initiatives targeting socially-excluded groups. The analysis found that CSR has potential for positive and lasting impact on developing countries, especifically on socially-excluded populations. However, there is a need for additional monitoring and critical evaluation.",
"title": ""
},
{
"docid": "e579b056407e01cc42b5d898ab06fd72",
"text": "Convolutional Neural Networks (ConvNets) are a powerful Deep Learning model, providing state-of-the-art accuracy to many emerging classification problems. However, ConvNet classification is a computationally heavy task, suffering from rapid complexity scaling. This paper presents fpgaConvNet, a novel domain-specific modelling framework together with an automated design methodology for the mapping of ConvNets onto reconfigurable FPGA-based platforms. By interpreting ConvNet classification as a streaming application, the proposed framework employs the Synchronous Dataflow (SDF) model of computation as its basis and proposes a set of transformations on the SDF graph that explore the performance-resource design space, while taking into account platform-specific resource constraints. A comparison with existing ConvNet FPGA works shows that the proposed fully-automated methodology yields hardware designs that improve the performance density by up to 1.62× and reach up to 90.75% of the raw performance of architectures that are hand-tuned for particular ConvNets.",
"title": ""
},
{
"docid": "555fbded31f3972c097e0c94d70e1a4e",
"text": "Predicting business process behaviour is an important aspect of business process management. Motivated by research in natural language processing, this paper describes an application of deep learning with recurrent neural networks to the problem of predicting the next event in a business process. This is both a novel method in process prediction, which has largely relied on explicit process models, and also a novel application of deep learning methods. The approach is evaluated on two real datasets and our results surpass the state-of-the-art in prediction precision.",
"title": ""
},
{
"docid": "7e2c5184ca6c738f3db3c0ada7cdf37a",
"text": "DNA microarray technology has led to an explosion of oncogenomic analyses, generating a wealth of data and uncovering the complex gene expression patterns of cancer. Unfortunately, due to the lack of a unifying bioinformatic resource, the majority of these data sit stagnant and disjointed following publication, massively underutilized by the cancer research community. Here, we present ONCOMINE, a cancer microarray database and web-based data-mining platform aimed at facilitating discovery from genome-wide expression analyses. To date, ONCOMINE contains 65 gene expression datasets comprising nearly 48 million gene expression measurements form over 4700 microarray experiments. Differential expression analyses comparing most major types of cancer with respective normal tissues as well as a variety of cancer subtypes and clinical-based and pathology-based analyses are available for exploration. Data can be queried and visualized for a selected gene across all analyses or for multiple genes in a selected analysis. Furthermore, gene sets can be limited to clinically important annotations including secreted, kinase, membrane, and known gene-drug target pairs to facilitate the discovery of novel biomarkers and therapeutic targets.",
"title": ""
},
{
"docid": "a466b8da35f820eaaf597e1768b3e3f4",
"text": "The Internet of Things technology has been widely used in the quality tracking of agricultural products, however, the safety of storage for tracked data is still a serious challenge. Recently, with the expansion of blockchain technology applied in cross-industry field, the unchangeable features of its stored data provide us new vision about ensuring the storage safety for tracked data. Unfortunately, when the blockchain technology is directly applied in agricultural products tracking and data storage, it is difficult to automate storage and obtain the hash data stored in the blockchain in batches base on the identity. Addressing this issue, we propose a double-chain storage structure, and design a secured data storage scheme for tracking agricultural products based on blockchain. Specifically, the chained data structure is utilized to store the blockchain transaction hash, and together with the chain of the blockchain to form a double-chain storage, which ensures the data of agricultural products will not be maliciously tampered or destructed. Finally, in the practical application system, we verify the correctness and security of the proposed storage scheme.",
"title": ""
},
{
"docid": "35d3dcb77620a69388e90318085c744d",
"text": "2-D face recognition in the presence of large pose variations presents a significant challenge. When comparing a frontal image of a face to a near profile image, one must cope with large occlusions, non-linear correspondences, and significant changes in appearance due to viewpoint. Stereo matching has been used to handle these problems, but performance of this approach degrades with large pose changes. We show that some of this difficulty is due to the effect that foreshortening of slanted surfaces has on window-based matching methods, which are needed to provide robustness to lighting change. We address this problem by designing a new, dynamic programming stereo algorithm that accounts for surface slant. We show that on the CMU PIE dataset this method results in significant improvements in recognition performance.",
"title": ""
},
{
"docid": "d35458a28d159b8c54721b7d88780431",
"text": "In this paper we present seven techniques that everybody should know to improve example-based single image super resolution (SR): 1) augmentation of data, 2) use of large dictionaries with efficient search structures, 3) cascading, 4) image self-similarities, 5) back projection refinement, 6) enhanced prediction by consistency check, and 7) context reasoning. We validate our seven techniques on standard SR benchmarks (i.e. Set5, Set14, B100) and methods (i.e. A+, SRCNN, ANR, Zeyde, Yang) and achieve substantial improvements. The techniques are widely applicable and require no changes or only minor adjustments of the SR methods. Moreover, our Improved A+ (IA) method sets new stateof-the-art results outperforming A+ by up to 0.9dB on average PSNR whilst maintaining a low time complexity.",
"title": ""
},
{
"docid": "a01302cad4754ecf162d485e00c72e38",
"text": "The problem of creating fair ship design curves is of major importance in Computer Aided Ship Design environment. The fairness of these curves is generally considered a subjective notion depending on the judgement of the designer (eg., visually pleasing, minimum variation of curvature, devoid of unnecessary bumps or wiggles, satisfying certain continuity requirements). Thus an automated fairing process based on objective criteria is clearly desirable. This paper presents an automated fairing algorithm for ship curves to satisfy objective geometric constraints. This procedure is based on the use of optimisation tools and cubic B-spline functions. The aim is to produce curves with a more gradual variation of curvature without deteriorating initial shapes. The optimisation based fairing procedure is applied to a variety of plane ship sections to demonstrate the capability and flexibility of the methodology. The resulting curves, with their corresponding curvature plots indicate that, provided that the designer can specify his objectives and constraints clearly, the procedure will generate fair ship definition curves within the constrained design space.",
"title": ""
},
{
"docid": "6c1b0b62855df7a22a4b3a92d9006605",
"text": "In an ideal robotic telepresence system the visitor would perceive and be perceived as being fully present at a remote location. This can happen if the visitor has a first person experience of the destination location including full sensory stimulation, and if the people at the remote location would be able to naturally perceive and interact with the visitor as a regular human being. In this paper, we report on the requirements and design considerations for a fully immersive robotic telepresence system that were gathered for an FP7 EU telepresence project aiming to address these issues. Based on these requirements we list a set of user tasks that such a system should support and discuss some possible design tradeoffs.",
"title": ""
},
{
"docid": "e0bb1bdcba38bcfbcc7b2da09cd05a3f",
"text": "Reconstructing the 3D surface from a set of provided range images – acquired by active or passive sensors – is an important step to generate faithful virtual models of real objects or environments. Since several approaches for high quality fusion of range images are already known, the runtime efficiency of the respective methods are of increased interest. In this paper we propose a highly efficient method for range image fusion resulting in very accurate 3D models. We employ a variational formulation for the surface reconstruction task. The global optimal solution can be found by gradient descent due to the convexity of the underlying energy functional. Further, the gradient descent procedure can be parallelized, and consequently accelerated by graphics processing units. The quality and runtime performance of the proposed method is demonstrated on wellknown multi-view stereo benchmark datasets.",
"title": ""
},
{
"docid": "922ce3f0662c9d1f374f17d90f2d4926",
"text": "Mathematical word problems (MWP) test critical aspects of reading comprehension in conjunction with generating a solution that agrees with the “story” in the problem. In this paper we design and construct an MWP solver in a systematic manner, as a step towards enabling comprehension in mathematics and teaching problem solving for children in the elementary grades. We do this by (a) identifying the discourse structure of MWPs that will enable comprehension in mathematics, and (b) utilizing the information in the discourse structure towards generating the solution in a systematic manner. We build a multistage software prototype that predicts the problem type, identifies the function of sentences in each problem, and extracts the necessary information from the question to generate the corresponding mathematical equation. Our prototype has an accuracy of 86% on a large corpus of MWPs of three problem types from elementary grade mathematics curriculum.",
"title": ""
},
{
"docid": "73581b5a936a75f936112747bd05003e",
"text": "We consider the problem of creating secure and resourceefficient blockchain networks i.e., enable a group of mutually distrusting participants to efficiently share state and then agree on an append-only history of valid operations on that shared state. This paper proposes a new approach to build such blockchain networks. Our key observation is that an append-only, tamper-resistant ledger (when used as a communication medium for messages sent by participants in a blockchain network) offers a powerful primitive to build a simple, flexible, and efficient consensus protocol, which in turn serves as a solid foundation for building secure and resource-efficient blockchain networks. A key ingredient in our approach is the abstraction of a blockchain service provider (BSP), which oversees creating and updating an append-only, tamper-resistant ledger, and a new distributed protocol called Caesar consensus, which leverages the BSP’s interface to enable members of a blockchain network to reach consensus on the BSP’s ledger—even when the BSP or a threshold number of members misbehave arbitrarily. By design, the BSP is untrusted, so it can run on any untrusted infrastructure and can be optimized for better performance without affecting end-to-end security. We implement our proposal in a system called VOLT. Our experimental evaluation suggests that VOLT incurs low resource costs and provides better performance compared to alternate approaches.",
"title": ""
},
{
"docid": "47c6de1c81b484204abfbd1f070ad03f",
"text": "Ti-based metal-organic frameworks (MOFs) are demonstrated as promising photosensitizers for photoelectrochemical (PEC) water splitting. Photocurrents of TiO2 nano wire photoelectrodes can be improved under visible light through sensitization with aminated Ti-based MOFs. As a host, other sensitizers or catalysts such as Au nanoparticles can be incorporated into the MOF layer thus further improving the PEC water splitting efficiency.",
"title": ""
},
{
"docid": "06574b1a35aef36494726f91dfe8f909",
"text": "This paper presents the extension of a birth simulator for medical training with an augmented reality system. The system presents an add-on of the user interface for our previous work on a mixed reality delivery simulator system [1]. This simulation system comprised direct haptic and auditory feedback, and provided important physiological data including values of blood pressure, heart rates, pain and oxygen supply, necessary for training physicians. Major drawback of the system was the indirect viewing of both the virtual models and the final delivery process. The current paper extends the existing system by bringing in the in-situ visualization. This plays an important role in increasing the efficiency of the training, since the physician now concentrates on the vaginal delivery rather than the remote computer screen. In addition, forceps are modeled and an external optical tracking system is integrated in order to provide visual feedback while training with the simulator for complicated procedures such as forceps delivery.",
"title": ""
},
{
"docid": "9eabe9a867edbceee72bd20d483ad886",
"text": "Inspired by recent advances of deep learning in instance segmentation and object tracking, we introduce the concept of convnet-based guidance applied to video object segmentation. Our model proceeds on a per-frame basis, guided by the output of the previous frame towards the object of interest in the next frame. We demonstrate that highly accurate object segmentation in videos can be enabled by using a convolutional neural network (convnet) trained with static images only. The key component of our approach is a combination of offline and online learning strategies, where the former produces a refined mask from the previous frame estimate and the latter allows to capture the appearance of the specific object instance. Our method can handle different types of input annotations such as bounding boxes and segments while leveraging an arbitrary amount of annotated frames. Therefore our system is suitable for diverse applications with different requirements in terms of accuracy and efficiency. In our extensive evaluation, we obtain competitive results on three different datasets, independently from the type of input annotation.",
"title": ""
},
{
"docid": "bd1c8fbc89bd53f390b3e50fb3bdca57",
"text": "Either by ensuring the continuing availability of information, or by deliberately caching content that might get deleted or removed, Web archiving services play an increasingly important role in today’s information ecosystem. Among these, the Wayback Machine has been proactively archiving, since 2001, versions of a large number of Web pages, while newer services like archive.is allow users to create on-demand snapshots of specific Web pages, which serve as time capsules that can be shared across the Web. In this paper, we present a large-scale analysis of Web archiving services and their use on social media, aiming to shed light on the actors involved in this ecosystem, the content that gets archived, and how it is shared. To this end, we crawl and study: 1) 21M URLs, spanning almost two years, from archive.is; and 2) 356K archive.is plus 391K Wayback Machine URLs that were shared on four social networks: Reddit, Twitter, Gab, and 4chan’s Politically Incorrect board (/pol/) over 14 months. We observe that news and social media posts are the most common types of content archived, likely due to their perceived ephemeral and/or controversial nature. Moreover, URLs of archiving services are extensively shared on “fringe” communities within Reddit and 4chan to preserve possibly contentious content. Lastly, we find evidence of moderators nudging or even forcing users to use archives, instead of direct links, for news sources with opposing ideologies, potentially depriving them of ad revenue.",
"title": ""
}
] |
scidocsrr
|
41fa7afba2491f466414c8ca9ddc4dc4
|
AES-256 Encryption in Communication using LabVIEW
|
[
{
"docid": "45c1907ce72b0100a3afa8f58e2e39b6",
"text": "Advanced Encryption Standard (AES), a Federal Information Processing Standard (FIPS), is an approved cryptographic algorithm that can be used to protect electronic data. The AES can be programmed in software or built with pure hardware. However Field Programmable Gate Arrays (FPGAs) offer a quicker and more customizable solution. This paper presents the AES algorithm with regard to FPGA and the Very High Speed Integrated Circuit Hardware Description language (VHDL). ModelSim SE PLUS 5.7g software is used for simulation and optimization of the synthesizable VHDL code. Synthesizing and implementation (i.e. Translate, Map and Place and Route) of the code is carried out on Xilinx - Project Navigator, ISE 8.2i suite. All the transformations of both Encryption and Decryption are simulated using an iterative design approach in order to minimize the hardware consumption. Xilinx XC3S400 device of Spartan Family is used for hardware evaluation. This paper proposes a method to integrate the AES encrypter and the AES decrypter. This method can make it a very low-complexity architecture, especially in saving the hardware resource in implementing the AES (Inv) Sub Bytes module and (Inv) Mix columns module etc. Most designed modules can be used for both AES encryption and decryption. Besides, the architecture can still deliver a high data rate in both encryption/decryption operations. The proposed architecture is suited for hardware-critical applications, such as smart card, PDA, and mobile phone, etc.",
"title": ""
}
] |
[
{
"docid": "592ccb18cfc7770fcb8b8adeea1b4b92",
"text": "We show the existence of a Locality-Sensitive Hashing (LSH) family for the angular distance that yields an approximate Near Neighbor Search algorithm with the asymptotically optimal running time exponent. Unlike earlier algorithms with this property (e.g., Spherical LSH [1, 2]), our algorithm is also practical, improving upon the well-studied hyperplane LSH [3] in practice. We also introduce a multiprobe version of this algorithm and conduct an experimental evaluation on real and synthetic data sets. We complement the above positive results with a fine-grained lower bound for the quality of any LSH family for angular distance. Our lower bound implies that the above LSH family exhibits a trade-off between evaluation time and quality that is close to optimal for a natural class of LSH functions.",
"title": ""
},
{
"docid": "840d7c9e0507bac0103f526a4c5d74d7",
"text": "http://dx.doi.org/10.1016/j.paid.2014.08.026 0191-8869/ 2014 Elsevier Ltd. All rights reserved. q This study was funded by a seed grant to the first author from the University of Western Sydney. ⇑ Corresponding author. Address: School of Social Sciences and Psychology, University of Western Sydney, Milperra, NSW 2214, Australia. Tel.: +61 (02) 9772 6447; fax: +61 (02) 9772 6757. E-mail address: p.jonason@uws.edu.au (P.K. Jonason). Peter K. Jonason a,⇑, Serena Wee , Norman P. Li b",
"title": ""
},
{
"docid": "911bc5b111e0c0454c155804e060b29e",
"text": "Graphical models have become the basic framework for topic based probabilistic modeling. Especially models with latent variables have proved to be effective in capturing hidden structures in the data. In this paper, we survey an important subclass Directed Probabilistic Topic Models (DPTMs) with soft clustering abilities and their applications for knowledge discovery in text corpora. From an unsupervised learning perspective, “topics are semantically related probabilistic clusters of words in text corpora; and the process for finding these topics is called topic modeling”. In topic modeling, a document consists of different hidden topics and the topic probabilities provide an explicit representation of a document to smooth data from the semantic level. It has been an active area of research during the last decade. Many models have been proposed for handling the problems of modeling text corpora with different characteristics, for applications such as document classification, hidden association finding, expert finding, community discovery and temporal trend analysis. We give basic concepts, advantages and disadvantages in a chronological order, existing models classification into different categories, their parameter estimation and inference making algorithms with models performance evaluation measures. We also discuss their applications, open challenges and future directions in this dynamic area of research.",
"title": ""
},
{
"docid": "a6463cb2add05c9b32744381184624e4",
"text": "Gathering, understanding and managing requirements is a key factor to the success of a software development effort. Requirement engineering is a critical task in all development methods including the agile development method. There are several requirement techniques available for requirement gathering which can be",
"title": ""
},
{
"docid": "b71f5e8673678757a4e6f3f10b3e1966",
"text": "Interior permanent magnet (IPM) synchronous machines result to be a valid motor topology in case of both high efficiency and high flux-weakening range. In industry applications the design of interior permanent magnet (IPM) synchronous machines requires to satisfy an increasingly number of limitations. For an IPM machine the key parameters that can be used for a performance maximization refer to many aspects: geometry, material property, cost, control strategy. According to the size and geometry limitations of an IPM motor for a very high flux-weakening speed range is required, the paper analyzes how to maximize the performance modifying the PM quantity. FE simulations, are firstly verified comparing the results with measurements on a prototype, and then they are used to evaluate the tradeoffs of the different cases.",
"title": ""
},
{
"docid": "f0090c4fe37833f5b1505cbf93fae2f8",
"text": "This paper presents an operational semantics for UML activity diagrams. The purpose of this semantics is three-fold: to give a robust basis for verifying model correctness; to help validate model transformations; and to provide a well-formed basis for assessing whether a proposed extension/interpretation of the modeling language is consistent with the standard. The challenges of a general formal framework for UML models include the semi-formality of the semantics specification, the extensibility of the language, and (sometimes deliberate, sometimes accidental) under-specification of model behavior in the standard. Our approach is based on structural operational semantics, which can be extended according to domain-specific needs. The presented semantics has been implemented and tested.",
"title": ""
},
{
"docid": "5d2230e6d7f560576231f52209703595",
"text": "This paper presents a twofold tunable planar hairpin filter to simultaneously control center frequency and bandwidth. Tunability is achieved by using functional thick film layers of the ferroelectric material Barium-Strontium-Titanate (BST). The center frequency of the filter is adjusted by varactors which are loading the hairpin resonators. Coupling varactors between the hairpin resonators enable the control of the bandwidth. The proposed filter structure is designed for a center frequency range from 650 MHz to 920 MHz and a bandwidth between 25 MHz and 85 MHz. This covers the specifications of the lower GSM bands. The functionality of the design is experimentally validated and confirmed by simulation results.",
"title": ""
},
{
"docid": "15906c9bd84e55aec215843ef9e542a0",
"text": "Recent growing interest in predicting and influencing consu mer behavior has generated a parallel increase in research efforts on Recommend er Systems. Many of the state-of-the-art Recommender Systems algorithms rely on o btaining user ratings in order to later predict unknown ratings. An underlying assumpt ion in this approach is that the user ratings can be treated as ground truth of the user’s t aste. However, users are inconsistent in giving their feedback, thus introducing an un known amount of noise that challenges the validity of this assumption. In this paper, we tackle the problem of analyzing and charact e izing the noise in user feedback through ratings of movies. We present a user st udy aimed at quantifying the noise in user ratings that is due to inconsistencies. We m easure RMSE values that range from0.557 to 0.8156. We also analyze how factors such as item sorting and time of rating affect this noise.",
"title": ""
},
{
"docid": "ccf6f5d7b73054752b45d753454130f7",
"text": "Emerging non-volatile memories such as phase-change RAM (PCRAM) offer significant advantages but suffer from write endurance problems. However, prior solutions are oblivious to soft errors (recently raised as a potential issue even for PCRAM) and are incompatible with high-level fault tolerance techniques such as chipkill. To additionally address such failures requires unnecessarily high costs for techniques that focus singularly on wear-out tolerance. In this paper, we propose fine-grained remapping with ECC and embedded pointers (FREE-p). FREE-p remaps fine-grained worn-out NVRAM blocks without requiring large dedicated storage. We discuss how FREE-p protects against both hard and soft errors and can be extended to chipkill. Further, FREE-p can be implemented purely in the memory controller, avoiding custom NVRAM devices. In addition to these benefits, FREE-p increases NVRAM lifetime by up to 26% over the state-of-the-art even with severe process variation while performance degradation is less than 2% for the initial 7 years.",
"title": ""
},
{
"docid": "72e9e772ede3d757122997d525d0f79c",
"text": "Deep learning systems, such as Convolutional Neural Networks (CNNs), can infer a hierarchical representation of input data that facilitates categorization. In this paper, we propose to learn affect-salient features for Speech Emotion Recognition (SER) using semi-CNN. The training of semi-CNN has two stages. In the first stage, unlabeled samples are used to learn candidate features by contractive convolutional neural network with reconstruction penalization. The candidate features, in the second step, are used as the input to semi-CNN to learn affect-salient, discriminative features using a novel objective function that encourages the feature saliency, orthogonality and discrimination. Our experiment results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and environment distortion), and outperforms several well-established SER features.",
"title": ""
},
{
"docid": "f7e63d994615d0a2902483bb2409f653",
"text": "A novel half-rate source-series-terminated (SST) transmitter in 65nm bulk CMOS technology is presented in this paper. Compared to previous half-rate SST transmitters, the proposed one consists of four binary-weighted slices increasing proportionally as 1x, 2x, 4x and 8x and the range of pre-emphasis level is increased greatly by the clock-match block to adapt to different channel. The half-rate transmitter can adjust the pre-emphasis level from 1.2dB to 23dB. The transmitter output impedance is adjustable from 33ohms to 64ohms. A power consumption of 24mW is measured at a transmit rate of 6 GB/s which is power-efficient compared to previous half-rate SST transmitter.",
"title": ""
},
{
"docid": "b6dbccc6b04c282ca366eddea77d0107",
"text": "Current methods for annotating and interpreting human genetic variation tend to exploit a single information type (for example, conservation) and/or are restricted in scope (for example, to missense changes). Here we describe Combined Annotation–Dependent Depletion (CADD), a method for objectively integrating many diverse annotations into a single measure (C score) for each variant. We implement CADD as a support vector machine trained to differentiate 14.7 million high-frequency human-derived alleles from 14.7 million simulated variants. We precompute C scores for all 8.6 billion possible human single-nucleotide variants and enable scoring of short insertions-deletions. C scores correlate with allelic diversity, annotations of functionality, pathogenicity, disease severity, experimentally measured regulatory effects and complex trait associations, and they highly rank known pathogenic variants within individual genomes. The ability of CADD to prioritize functional, deleterious and pathogenic variants across many functional categories, effect sizes and genetic architectures is unmatched by any current single-annotation method.",
"title": ""
},
{
"docid": "be689d89e1e5182895a473a52a1950cd",
"text": "This paper designs a Continuous Data Level Auditing system utilizing business process based analytical procedures and evaluates the system’s performance using disaggregated transaction records of a large healthcare management firm. An important innovation in the proposed architecture of the CDA system is the utilization of analytical monitoring as the second (rather than the first) stage of data analysis. The first component of the system utilizes automatic transaction verification to filter out exceptions, defined as transactions violating formal business process rules. The second component of the system utilizes business process based analytical procedures, denoted here ―Continuity Equations‖, as the expectation models for creating business process audit benchmarks. Our first objective is to examine several expectation models that can serve as the continuity equation benchmarks: a Linear Regression Model, a Simultaneous Equation Model, two Vector Autoregressive models, and a GARCH model. The second objective is to examine the impact of the choice of the level of data aggregation on anomaly detection performance. The third objective is to design a set of online learning and error correction protocols for automatic model inference and updating. Using a seeded error simulation approach, we demonstrate that the use of disaggregated business process data allows the detection of anomalies that slip through the analytical procedures applied to more aggregated data. Furthermore, the results indicate that under most circumstances the use of real time error correction results in superior performance, thus showing the benefit of continuous auditing.",
"title": ""
},
{
"docid": "6d41b17506d0e8964f850c065b9286cb",
"text": "Representation learning is a key issue for most Natural Language Processing (NLP) tasks. Most existing representation models either learn little structure information or just rely on pre-defined structures, leading to degradation of performance and generalization capability. This paper focuses on learning both local semantic and global structure representations for text classification. In detail, we propose a novel Sandwich Neural Network (SNN) to learn semantic and structure representations automatically without relying on parsers. More importantly, semantic and structure information contribute unequally to the text representation at corpus and instance level. To solve the fusion problem, we propose two strategies: Adaptive Learning Sandwich Neural Network (AL-SNN) and Self-Attention Sandwich Neural Network (SA-SNN). The former learns the weights at corpus level, and the latter further combines attention mechanism to assign the weights at instance level. Experimental results demonstrate that our approach achieves competitive performance on several text classification tasks, including sentiment analysis, question type classification and subjectivity classification. Specifically, the accuracies are MR (82.1%), SST-5 (50.4%), TREC (96%) and SUBJ (93.9%).",
"title": ""
},
{
"docid": "a4ad254998fb765f3048158915855413",
"text": "The ability to detect small objects and the speed of the object detector are very important for the application of autonomous driving, and in this paper, we propose an effective yet efficient one-stage detector, which gained the second place in the Road Object Detection competition of CVPR2018 workshop Workshop of Autonomous Driving(WAD). The proposed detector inherits the architecture of SSD and introduces a novel Comprehensive Feature Enhancement(CFE) module into it. Experimental results on this competition dataset as well as the MSCOCO dataset demonstrate that the proposed detector (named CFENet) performs much better than the original SSD and the stateof-the-art method RefineDet especially for small objects, while keeping high efficiency close to the original SSD. Specifically, the single scale version of the proposed detector can run at the speed of 21 fps, while the multi-scale version with larger input size achieves the mAP 29.69, ranking second on the leaderboard.",
"title": ""
},
{
"docid": "eae5470d2b5cfa6a595ee335a25c7b68",
"text": "For uplink large-scale MIMO systems, linear minimum mean square error (MMSE) signal detection algorithm is near-optimal but involves matrix inversion with high complexity. In this paper, we propose a low-complexity signal detection algorithm based on the successive overrelaxation (SOR) method to avoid the complicated matrix inversion. We first prove a special property that the MMSE filtering matrix is symmetric positive definite for uplink large-scale MIMO systems, which is the premise for the SOR method. Then a low-complexity iterative signal detection algorithm based on the SOR method as well as the convergence proof is proposed. The analysis shows that the proposed scheme can reduce the computational complexity from O(K3) to O(K2), where K is the number of users. Finally, we verify through simulation results that the proposed algorithm outperforms the recently proposed Neumann series approximation algorithm, and achieves the near-optimal performance of the classical MMSE algorithm with a small number of iterations.",
"title": ""
},
{
"docid": "4fb76fb4daa5490dca902c9177c9b465",
"text": "An improved faster region-based convolutional neural network (R-CNN) [same object retrieval (SOR) faster R-CNN] is proposed to retrieve the same object in different scenes with few training samples. By concatenating the feature maps of shallow and deep convolutional layers, the ability of Regions of Interest (RoI) pooling to extract more detailed features is improved. In the training process, a pretrained CNN model is fine-tuned using a query image data set, so that the confidence score can identify an object proposal to the object level rather than the classification level. In the query process, we first select the ten images for which the object proposals have the closest confidence scores to the query object proposal. Then, the image for which the detected object proposal has the minimum cosine distance to the query object proposal is considered as the query result. The proposed SOR faster R-CNN is applied to our Coke cans data set and three public image data sets, i.e., Oxford Buildings 5k, Paris Buildings 6k, and INS 13. The experimental results confirm that SOR faster R-CNN has better identification performance than fine-tuned faster R-CNN. Moreover, SOR faster R-CNN achieves much higher accuracy for detecting low-resolution images than the fine-tuned faster R-CNN on the Coke cans (0.094 mAP higher), Oxford Buildings (0.043 mAP higher), Paris Buildings (0.078 mAP higher), and INS 13 (0.013 mAP higher) data sets.",
"title": ""
},
{
"docid": "98d40e5a6df5b6a3ab39a04bf04c6a65",
"text": "T Internet has increased the flexibility of retailers, allowing them to operate an online arm in addition to their physical stores. The online channel offers potential benefits in selling to customer segments that value the convenience of online shopping, but it also raises new challenges. These include the higher likelihood of costly product returns when customers’ ability to “touch and feel” products is important in determining fit. We study competing retailers that can operate dual channels (“bricks and clicks”) and examine how pricing strategies and physical store assistance levels change as a result of the additional Internet outlet. A central result we obtain is that when differentiation among competing retailers is not too high, having an online channel can actually increase investment in store assistance levels (e.g., greater shelf display, more-qualified sales staff, floor samples) and decrease profits. Consequently, when the decision to open an Internet channel is endogenized, there can exist an asymmetric equilibrium where only one retailer elects to operate an online arm but earns lower profits than its bricks-only rival. We also characterize equilibria where firms open an online channel, even though consumers only use it for research and learning purposes but buy in stores. A number of extensions are discussed, including retail settings where firms carry multiple product categories, shipping and handling costs, and the role of store assistance in impacting consumer perceived benefits.",
"title": ""
},
{
"docid": "e67a7ba82594e024f96fc1deb4ff7498",
"text": "The software industry is more than ever facing the challenge of delivering WYGIWYW software (what you get is what you want). A well-structured document specifying adequate, complete, consistent, precise, and measurable requirements is a critical prerequisite for such software. Goals have been recognized to be among the driving forces for requirements elicitation, elaboration, organization, analysis, negotiation, documentation, and evolution. Growing experience with goal-oriented requirements engineering suggests synergistic links between research in this area and good practice. We discuss one journey along this road from influencing ideas and research results to tool developments to good practice in industrial projects. On the way, we discuss some lessons learnt, obstacles to technology transfer, and challenges for better requirements engineering research and practice.",
"title": ""
},
{
"docid": "4c3805a6db1d43d01196efe50c14822f",
"text": "Relational tables collected from HTML pages (\"web tables\") are used for a variety of tasks including table extension, knowledge base completion, and data transformation. Most of the existing algorithms for these tasks assume that the data in the tables has the form of binary relations, i.e., relates a single entity to a value or to another entity. Our exploration of a large public corpus of web tables, however, shows that web tables contain a large fraction of non-binary relations which will likely be misinterpreted by the state-of-the-art algorithms. In this paper, we propose a categorisation scheme for web table columns which distinguishes the different types of relations that appear in tables on the Web and may help to design algorithms which better deal with these different types. Designing an automated classifier that can distinguish between different types of relations is non-trivial, because web tables are relatively small, contain a high level of noise, and often miss partial key values. In order to be able to perform this distinction, we propose a set of features which goes beyond probabilistic functional dependencies by using the union of multiple tables from the same web site and from different web sites to overcome the problem that single web tables are too small for the reliable calculation of functional dependencies.",
"title": ""
}
] |
scidocsrr
|
0c5c416eb192436182e557ef2b9c75ab
|
A Study of Lexical Distribution in Citation Contexts through the IMRaD Standard
|
[
{
"docid": "96669cea810d2918f2d35875f87d45f2",
"text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.",
"title": ""
},
{
"docid": "e291f7ada6890ae9db8417b29f35d061",
"text": "This study proposes a new framework for citation content analysis (CCA), for syntactic and semantic analysis of citation content that can be used to better analyze the rich sociocultural context of research behavior. This framework could be considered the next generation of citation analysis. The authors briefly review the history and features of content analysis in traditional social sciences and its previous application in library and information science (LIS). Based on critical discussion of the theoretical necessity of a new method as well as the limits of citation analysis, the nature and purposes of CCA are discussed, and potential procedures to conduct CCA, including principles to identify the reference scope, a two-dimensional (citing and cited) and two-module (syntactic and semantic) codebook, are provided and described. Future work and implications are also suggested.",
"title": ""
}
] |
[
{
"docid": "8562e72ac7efd5d1861ae946c8c048b4",
"text": "Metacognition refers to any knowledge or cognitive process that monitors or controls cognition. We highlight similarities between metacognitive and executive control functions, and ask how these processes might be implemented in the human brain. A review of brain imaging studies reveals a circuitry of attentional networks involved in these control processes, with its source located in midfrontal areas. These areas are active during conflict resolution, error correction, and emotional regulation. A developmental approach to the organization of the anatomy involved in executive control provides an added perspective on how these mechanisms are influenced by maturation and learning, and how they relate to metacognitive activity.",
"title": ""
},
{
"docid": "ba24dcaa32589e5fed4c80f1f6b10fd2",
"text": "This paper presents a new zero-voltage-switching (ZVS) bidirectional dc-dc converter. Compared to the traditional full and half bridge bidirectional dc-dc converters for the similar applications, the new topology has the advantages of simple circuit topology with no total device rating (TDR) penalty, soft-switching implementation without additional devices, high efficiency and simple control. These advantages make the new converter promising for medium and high power applications especially for auxiliary power supply in fuel cell vehicles and power generation where the high power density, low cost, lightweight and high reliability power converters are required. The operating principle, theoretical analysis, and design guidelines are provided in this paper. The simulation and the experimental verifications are also presented.",
"title": ""
},
{
"docid": "76c19c70f11244be16248a1b4de2355a",
"text": "We have recently witnessed the emerging of cloud computing on one hand and robotics platforms on the other hand. Naturally, these two visions have been merging to give birth to the Cloud Robotics paradigm in order to offer even more remote services. But such a vision is still in its infancy. Architectures and platforms are still to be defined to efficiently program robots so they can provide different services, in a standardized way masking their heterogeneity. This paper introduces Open Mobile Cloud Robotics Interface (OMCRI), a Robot-as-a-Service vision based platform, which offers a unified easy access to remote heterogeneous mobile robots. OMCRI encompasses an extension of the Open Cloud Computing Interface (OCCI) standard and a gateway hosting mobile robot resources. We then provide an implementation of OMCRI based on the open source model-driven Eclipse-based OCCIware tool chain and illustrates its use for three off-the-shelf mobile robots: Lego Mindstorm NXT, Turtlebot, and Parrot AR. Drone.",
"title": ""
},
{
"docid": "59af45fa33fd70d044f9749e59ba3ca7",
"text": "Retweeting is the key mechanism for information diffusion in Twitter. It emerged as a simple yet powerful way of disseminating useful information. Even though a lot of information is shared via its social network structure in Twitter, little is known yet about how and why certain information spreads more widely than others. In this paper, we examine a number of features that might affect retweetability of tweets. We gathered content and contextual features from 74M tweets and used this data set to identify factors that are significantly associated with retweet rate. We also built a predictive retweet model. We found that, amongst content features, URLs and hashtags have strong relationships with retweetability. Amongst contextual features, the number of followers and followees as well as the age of the account seem to affect retweetability, while, interestingly, the number of past tweets does not predict retweetability of a user’s tweet. We believe that this research would inform the design of sensemaking tools for Twitter streams as well as other general social media collections. Keywords-Twitter; retweet; tweet; follower; social network; social media; factor analysis",
"title": ""
},
{
"docid": "d895b939ea60b41f7de7e64eb60e3b07",
"text": "Deep learning (DL) based semantic segmentation methods have been providing state-of-the-art performance in the last few years. More specifically, these techniques have been successfully applied to medical image classification, segmentation, and detection tasks. One deep learning technique, U-Net, has become one of the most popular for these applications. In this paper, we propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net models, which are named RU-Net and R2U-Net respectively. The proposed models utilize the power of U-Net, Residual Network, as well as RCNN. There are several advantages of these proposed architectures for segmentation tasks. First, a residual unit helps when training deep architecture. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation tasks. Third, it allows us to design better U-Net architecture with same number of network parameters with better performance for medical image segmentation. The proposed models are tested on three benchmark datasets such as blood vessel segmentation in retina images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models including UNet and residual U-Net (ResU-Net).",
"title": ""
},
{
"docid": "e0d040efd131db568d875b80c6adc111",
"text": "Familism is a cultural value that emphasizes interdependent family relationships that are warm, close, and supportive. We theorized that familism values can be beneficial for romantic relationships and tested whether (a) familism would be positively associated with romantic relationship quality and (b) this association would be mediated by less attachment avoidance. Evidence indicates that familism is particularly relevant for U.S. Latinos but is also relevant for non-Latinos. Thus, we expected to observe the hypothesized pattern in Latinos and explored whether the pattern extended to non-Latinos of European and East Asian cultural background. A sample of U.S. participants of Latino (n 1⁄4 140), European (n 1⁄4 176), and East Asian (n 1⁄4 199) cultural background currently in a romantic relationship completed measures of familism, attachment, and two indices of romantic relationship quality, namely, partner support and partner closeness. As predicted, higher familism was associated with higher partner support and partner closeness, and these associations were mediated by lower attachment avoidance in the Latino sample. This pattern was not observed in the European or East Asian background samples. The implications of familism for relationships and psychological processes relevant to relationships in Latinos and non-Latinos are discussed. 1 University of California, Irvine, USA 2 University of California, Los Angeles, USA Corresponding author: Belinda Campos, Department of Chicano/Latino Studies, University of California, Irvine, 3151 Social Sciences Plaza A, Irvine, CA 92697, USA. Email: bcampos@uci.edu Journal of Social and Personal Relationships 1–20 a The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav DOI: 10.1177/0265407514562564 spr.sagepub.com J S P R at UNIV CALIFORNIA IRVINE on January 5, 2015 spr.sagepub.com Downloaded from",
"title": ""
},
{
"docid": "f9b60eaec9320b61db6edae9baeacbe2",
"text": "The latest two international educational assessments found global prevalence of sleep deprivation in students, consistent with what has been reported in sleep research. However, despite the fundamental role of adequate sleep in cognitive and social functioning, this important issue has been largely overlooked by educational researchers. Drawing upon evidence from sleep research, literature on the heavy media use by children and adolescents, and data from web analytics on youth-oriented game sites and mobile analytics on youth-oriented game apps, we argue that heavy media use, particularly digital game play, may be an important contributor to sleep deprivation in students. Therefore, educational researchers, policy makers, teachers, and parents should pay greater attention to student sleep and develop programs and interventions to improve both quality and quantity of student sleep.",
"title": ""
},
{
"docid": "8b3962dc5895a46c913816f208aa8e60",
"text": "Glaucoma is the second leading cause of blindness worldwide. It is a disease in which fluid pressure in the eye increases continuously, damaging the optic nerve and causing vision loss. Computational decision support systems for the early detection of glaucoma can help prevent this complication. The retinal optic nerve fiber layer can be assessed using optical coherence tomography, scanning laser polarimetry, and Heidelberg retina tomography scanning methods. In this paper, we present a novel method for glaucoma detection using a combination of texture and higher order spectra (HOS) features from digital fundus images. Support vector machine, sequential minimal optimization, naive Bayesian, and random-forest classifiers are used to perform supervised classification. Our results demonstrate that the texture and HOS features after z-score normalization and feature selection, and when combined with a random-forest classifier, performs better than the other classifiers and correctly identifies the glaucoma images with an accuracy of more than 91%. The impact of feature ranking and normalization is also studied to improve results. Our proposed novel features are clinically significant and can be used to detect glaucoma accurately.",
"title": ""
},
{
"docid": "54546694b5b43b561237d50ce4a67dfc",
"text": "We describe a load balancing system for parallel intrusion detection on multi-core systems using a novel model allowing fine-grained selection of the network traffic to be analyzed. The system receives data from a network and distributes it to multiple IDSs running on individual CPU cores. In contrast to related approaches, we do not assume a static association of flows to IDS processes but adaptively determine the load of each IDS process to allocate network flows for a limited time window. We developed a priority model for the selection of network data and the assignment process. Special emphasis is given to environments with highly dynamic network traffic, where only a fraction of all data can be analyzed due to system constraints. We show that IDSs analyzing packet payload data disproportionately suffer from random packet drops due to overload. Our proposed system ensures loss-free analysis for selected data streams in a specified time interval. Our primary focus lies on the treatment of dynamic network behavior: neither data should be lost unintentionally, nor analysis processes should be needlessly idle. To evaluate the priority model and assignment systems, we implemented a prototype and evaluated it with real network traffic.",
"title": ""
},
{
"docid": "f97af106e447f761aa65eeec960ce6ee",
"text": "Graphitic carbon nitride (g-C3N4) behaving as a layered feature with graphite was indexed as a high-content nitrogen-doping carbon material, attracting increasing attention for application in energy storage devices. However, poor conductivity and resulting serious irreversible capacity loss were pronounced for g-C3N4 material due to its high nitrogen content. In this work, magnesiothermic denitriding technology is demonstrated to reduce the nitrogen content of g-C3N4 (especially graphitic nitrogen) for enhanced lithium storage properties as lithium ion battery anodes. The obtained nitrogen-deficient g-C3N4 (ND-g-C3N4) exhibits a thinner and more porous structure composed of an abundance of relatively low nitrogen doping wrinkled graphene nanosheets. A highly reversible lithium storage capacity of 2753 mAh/g was obtained after the 300th cycle with an enhanced cycling stability and rate capability. The presented nitrogen-deficient g-C3N4 with outstanding electrochemical performances may unambiguously promote the application of g-C3N4 materials in energy-storage devices.",
"title": ""
},
{
"docid": "c77494588aa7fb12235e131b20faa4e4",
"text": "A multiband planar monopole antenna fed by microstrip line feed with Defected Ground Structure (DGS) is presented for simultaneously satisfying wireless local area network (WLAN) and worldwide interoperability for microwave access (WiMAX) applications. The proposed antenna consists of a rectangular microstrip patch with rectangular slit, including the circular defect etched on the ground plane forming DGS structure. The soft nature of the DGS facilitates improvement in the performance of microstrip antennas. The simulated -10 dB bandwidth for return loss is from 2. 9-3. 77 GHz, 3. 91-6. 36, covering the WLAN: 5. 15–5. 35 and 5. 725–5. 85 GHz and WiMAX: 3. 3–3. 8 and 5. 25–5. 85 GHz bands. The design and optimization of DGS structures along with the parametric study were carried out using IE3D ZELAND which is based on method of moment.",
"title": ""
},
{
"docid": "4ede3f2caa829e60e4f87a9b516e28bd",
"text": "This report describes the difficulties of training neural networks and in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders and provides an outreach on further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part, initial experiments using some of the covered methods are performed on two databases. In particular, experiments are performed on the MNIST hand-written digit dataset and on facial emotion data from a Kaggle competition. The results are discussed in the context of results reported in other research papers. An error rate lower than the best contribution to the Kaggle competition is achieved using an optimized Stacked Autoencoder.",
"title": ""
},
{
"docid": "edf78a6b10d018a476e79dd34df1fef1",
"text": "STATEMENT OF THE PROBLEM\nResin bonding is essential for clinical longevity of indirect restorations. Especially in light of the increasing popularity of computer-aided design/computer-aided manufacturing-fabricated indirect restorations, there is a need to assess optimal bonding protocols for new ceramic/polymer materials and indirect composites.\n\n\nPURPOSE OF THE STUDY\nThe aim of this article was to review and assess the current scientific evidence on the resin bond to indirect composite and new ceramic/polymer materials.\n\n\nMATERIALS AND METHODS\nAn electronic PubMed database search was conducted from 1966 to September 2013 for in vitro studies pertaining the resin bond to indirect composite and new ceramic/polymer materials.\n\n\nRESULTS\nThe search revealed 198 titles. Full-text screening was carried out for 43 studies, yielding 18 relevant articles that complied with inclusion criteria. No relevant studies could be identified regarding new ceramic/polymer materials. Most common surface treatments are aluminum-oxide air-abrasion, silane treatment, and hydrofluoric acid-etching for indirect composite restoration. Self-adhesive cements achieve lower bond strengths in comparison with etch-and-rinse systems. Thermocycling has a greater impact on bonding behavior than water storage.\n\n\nCONCLUSIONS\nAir-particle abrasion and additional silane treatment should be applied to enhance the resin bond to laboratory-processed composites. However, there is an urgent need for in vitro studies that evaluate the bond strength to new ceramic/polymer materials.\n\n\nCLINICAL SIGNIFICANCE\nThis article reviews the available dental literature on resin bond of laboratory composites and gives scientifically based guidance for their successful placement. Furthermore, this review demonstrated that future research for new ceramic/polymer materials is required.",
"title": ""
},
{
"docid": "aaf30f184fcea3852f73a5927100cac7",
"text": "Dyslexia is a neurodevelopmental reading disability estimated to affect 5-10% of the population. While there is yet no full understanding of the cause of dyslexia, or agreement on its precise definition, it is certain that many individuals suffer persistent problems in learning to read for no apparent reason. Although it is generally agreed that early intervention is the best form of support for children with dyslexia, there is still a lack of efficient and objective means to help identify those at risk during the early years of school. Here we show that it is possible to identify 9-10 year old individuals at risk of persistent reading difficulties by using eye tracking during reading to probe the processes that underlie reading ability. In contrast to current screening methods, which rely on oral or written tests, eye tracking does not depend on the subject to produce some overt verbal response and thus provides a natural means to objectively assess the reading process as it unfolds in real-time. Our study is based on a sample of 97 high-risk subjects with early identified word decoding difficulties and a control group of 88 low-risk subjects. These subjects were selected from a larger population of 2165 school children attending second grade. Using predictive modeling and statistical resampling techniques, we develop classification models from eye tracking records less than one minute in duration and show that the models are able to differentiate high-risk subjects from low-risk subjects with high accuracy. Although dyslexia is fundamentally a language-based learning disability, our results suggest that eye movements in reading can be highly predictive of individual reading ability and that eye tracking can be an efficient means to identify children at risk of long-term reading difficulties.",
"title": ""
},
{
"docid": "3fbea8b5feb0c5a471aa0ec91d2e2d1a",
"text": "Neural models combining representation learning and reasoning in an end-to-end trainable manner are receiving increasing interest. However, their use is severely limited by their computational complexity, which renders them unusable on real world datasets. We focus on the Neural Theorem Prover (NTP) model proposed by Rocktäschel and Riedel (2017), a continuous relaxation of the Prolog backward chaining algorithm where unification between terms is replaced by the similarity between their embedding representations. For answering a given query, this model needs to consider all possible proof paths, and then aggregate results – this quickly becomes infeasible even for small Knowledge Bases (KBs). We observe that we can accurately approximate the inference process in this model by considering only proof paths associated with the highest proof scores. This enables inference and learning on previously impracticable KBs.",
"title": ""
},
{
"docid": "c65cebec214fc6c45e266bfcce731676",
"text": "Creativity is central to much human problem solving and innovation. Brainstorming processes attempt to leverage group creativity, but group dynamics sometimes limit their utility. We present IdeaExpander, a tool to support group brainstorming by intelligently selecting pictorial stimuli based on the group's conversation The design is based on theories of how perception, thinking, and communication interact; a pilot study (N=16) suggests that it increases individuals' idea production and that people value it.",
"title": ""
},
{
"docid": "48150241f1e40c0ae87d235c697440fb",
"text": "Poly(3-hydroxybutyrate) (PHB) biodegradable polymeric membranes were evaluated as platform for progesterone (Prg)-controlled release. In the design of new drug delivery systems, it is important to understand the mass transport mechanism involved, as well as predict the process kinetics. Drug release experiments were conducted and the experimental results were evaluated using engineering approaches that were extrapolated to the pharmaceutical field by our research group. Membranes were loaded with different Prg concentrations and characterized by scanning electron microscopy (SEM), differential scanning calorimetry (DSC), and Fourier transform infrared spectroscopy (FTIR). SEM images showed that membranes have a dense structure before and after the progesterone addition. DSC and FTIR allowed determining the influence of the therapeutic agent in the membrane properties. The in vitro experiments were performed using two different techniques: (A) returning the sample to the receptor solution (constant volume of the delivery medium) and (B) extracting total volume of the receptor solution. In this work, we present a simple and accurate “lumped” second-order kinetic model. This lumped model considers the different mass transport steps involved in drug release systems. The model fits very well the experimental data using any of the two experimental procedures, in the range 0 ≤ t ≤ ∞ or 0 ≤ M t ≤ M ∞. The drug release analysis using our proposed approaches is relevant for establishing in vitro–in vivo correlations in future tests in animals.",
"title": ""
},
{
"docid": "a58d2058fd310ca553aee16a84006f96",
"text": "This systematic literature review describes the epidemiology of dengue disease in Mexico (2000-2011). The annual number of uncomplicated dengue cases reported increased from 1,714 in 2000 to 15,424 in 2011 (incidence rates of 1.72 and 14.12 per 100,000 population, respectively). Peaks were observed in 2002, 2007, and 2009. Coastal states were most affected by dengue disease. The age distribution pattern showed an increasing number of cases during childhood, a peak at 10-20 years, and a gradual decline during adulthood. All four dengue virus serotypes were detected. Although national surveillance is in place, there are knowledge gaps relating to asymptomatic cases, primary/secondary infections, and seroprevalence rates of infection in all age strata. Under-reporting of the clinical spectrum of the disease is also problematic. Dengue disease remains a serious public health problem in Mexico.",
"title": ""
},
{
"docid": "b712552d760c887131f012e808dca253",
"text": "To the same utterance, people’s responses in everyday dialogue may be diverse largely in terms of content semantics, speaking styles, communication intentions and so on. Previous generative conversational models ignore these 1-to-n relationships between a post to its diverse responses, and tend to return high-frequency but meaningless responses. In this study we propose a mechanism-aware neural machine for dialogue response generation. It assumes that there exists some latent responding mechanisms, each of which can generate different responses for a single input post. With this assumption we model different responding mechanisms as latent embeddings, and develop a encoder-diverter-decoder framework to train its modules in an end-to-end fashion. With the learned latent mechanisms, for the first time these decomposed modules can be used to encode the input into mechanism-aware context, and decode the responses with the controlled generation styles and topics. Finally, the experiments with human judgements, intuitive examples, detailed discussions demonstrate the quality and diversity of the generated responses with 9.80% increase of acceptable ratio over the best of six baseline methods.",
"title": ""
},
{
"docid": "7ac9b7bc77ffa229d448b2234857dca8",
"text": "How do neurons in a decision circuit integrate time-varying signals, in favor of or against alternative choice options? To address this question, we used a recurrent neural circuit model to simulate an experiment in which monkeys performed a direction-discrimination task on a visual motion stimulus. In a recent study, it was found that brief pulses of motion perturbed neural activity in the lateral intraparietal area (LIP), and exerted corresponding effects on the monkey's choices and response times. Our model reproduces the behavioral observations and replicates LIP activity which, depending on whether the direction of the pulse is the same or opposite to that of a preferred motion stimulus, increases or decreases persistently over a few hundred milliseconds. Furthermore, our model accounts for the observation that the pulse exerts a weaker influence on LIP neuronal responses when the pulse is late relative to motion stimulus onset. We show that this violation of time-shift invariance (TSI) is consistent with a recurrent circuit mechanism of time integration. We further examine time integration using two consecutive pulses of the same or opposite motion directions. The induced changes in the performance are not additive, and the second of the paired pulses is less effective than its standalone impact, a prediction that is experimentally testable. Taken together, these findings lend further support for an attractor network model of time integration in perceptual decision making.",
"title": ""
}
] |
scidocsrr
|
fc6dfc1fcb067b73cf71c4ce294ea912
|
A Neural Network Approach to Missing Marker Reconstruction in Human Motion Capture
|
[
{
"docid": "c210a68c57d7bfb15c7f646c3d890cd8",
"text": "Motion capture is frequently used for studies in biomechanics, and has proved particularly useful in understanding human motion. Unfortunately, motion capture approaches often fail when markers are occluded or missing and a mechanism by which the position of missing markers can be estimated is highly desirable. Of particular interest is the problem of estimating missing marker positions when no prior knowledge of marker placement is known. Existing approaches to marker completion in this scenario can be broadly divided into tracking approaches using dynamical modelling, and low rank matrix completion. This paper shows that these approaches can be combined to provide a marker completion algorithm that not only outperforms its respective components, but also solves the problem of incremental position error typically associated with tracking approaches.",
"title": ""
}
] |
[
{
"docid": "95365d5f04b2cefcca339fbc19464cbb",
"text": "Manipulation and re-use of images in scientific publications is a concerning problem that currently lacks a scalable solution. Current tools for detecting image duplication are mostly manual or semi-automated, despite the availability of an overwhelming target dataset for a learning-based approach. This paper addresses the problem of determining if, given two images, one is a manipulated version of the other by means of copy, rotation, translation, scale, perspective transform, histogram adjustment, or partial erasing. We propose a data-driven solution based on a 3-branch Siamese Convolutional Neural Network. The ConvNet model is trained to map images into a 128-dimensional space, where the Euclidean distance between duplicate images is smaller than or equal to 1, and the distance between unique images is greater than 1. Our results suggest that such an approach has the potential to improve surveillance of the published and in-peer-review literature for image manipulation.",
"title": ""
},
{
"docid": "8727ee03a7b9ba26f38b9da0d5ed4fa7",
"text": "This paper explores the new and growing field of topological data analysis (TDA). TDA is a data analysis method that provides information about the ’shape’ of data. The paper describes what types of shapes TDA detects and why these shapes having meaning. Additionally, concepts from algebraic topology, the mathematics behind TDA, will be discussed. Specifically, the concepts of persistent homology and barcodes will be developed. Finally, the paper will show how these concepts from algebraic topology can be applied to analyze data. Acknowledgments I would first like to thank my advisor, Scott Taylor. He provided me with guidance and support throughout this year long project. Second, I would like to acknowledge my reader, Jim Scott. His input, especially concerning statistical matters, was very valuable. Additionally, I would like to thank Randy Downer for helping me with Perseus, a software packages for topological data analysis. Finally, I would like to thank Xiaojie (Jackie) Chen and Dan Medici, both of whom provided me with support and helped me with LaTeX, the system that was used to typeset this document.",
"title": ""
},
{
"docid": "e054c2d3b52441eaf801e7d2dd54dce9",
"text": "The concept of centrality is often invoked in social network analysis, and diverse indices have been proposed to measure it. This paper develops a unified framework for the measurement of centrality. All measures of centrality assess a node’s involvement in the walk structure of a network. Measures vary along four key dimensions: type of nodal involvement assessed, type of walk considered, property of walk assessed, and choice of summary measure. If we cross-classify measures by type of nodal involvement (radial versus medial) and property of walk assessed (volume versus length), we obtain a four-fold polychotomization with one cell empty which mirrors Freeman’s 1979 categorization. At a more substantive level, measures of centrality summarize a node’s involvement in or contribution to the cohesiveness of the network. Radial measures in particular are reductions of pair-wise proximities/cohesion to attributes of nodes or actors. The usefulness and interpretability of radial measures depend on the fit of the cohesion matrix to the onedimensional model. In network terms, a network that is fit by a one-dimensional model has a core-periphery structure in which all nodes revolve more or less closely around a single core. This in turn implies that the network does not contain distinct cohesive subgroups. Thus, centrality is shown to be intimately connected with the cohesive subgroup structure of a network. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "342fbe3f670653ffb93fa62c7f6953fc",
"text": "That the initialization can have a significant impact on the performance of evolutionary algorithms (EAs) is a well known fact in the empirical evolutionary computation literature. Surprisingly, it has nevertheless received only little attention from the theoretical community.\n We bridge this gap by providing a thorough runtime analysis for a simple iterated random sampling initialization. In the latter, instead of starting an EA with a random sample, it is started in the best of k search points that are taken from the search space uniformly at random. Implementing this strategy comes at almost no cost, neither in the actual coding work nor in terms of wall-clock time.\n Taking the best of two random samples already decreases the Θ(n log n) expected runtime of the (1+1)~EA and Randomized Local Search on OneMax by an additive term of order √n. The optimal gain that one can achieve with iterated random sampling is an additive term of order √n log n}. This also determines the best possible mutation-based EA for OneMax, a question left open in [Sudholt, IEEE TEC 2013].\n At the heart of our analysis is a very precise bound for the maximum of k independent Binomially distributed variables with success probability 1/2.",
"title": ""
},
{
"docid": "e5e1146fd0704357d865574da45ab2e5",
"text": "This paper presents a compact low-loss tunable X-band bandstop filter implemented on a quartz substrate using both miniature RF microelectromechanical systems (RF-MEMS) capacitive switches and GaAs varactors. The two-pole filter is based on capacitively loaded folded-λ/2 resonators that are coupled to a microstrip line, and the filter analysis includes the effects of nonadjacent inter-resonator coupling. The RF-MEMS filter tunes from 11.34 to 8.92 GHz with a - 20-dB rejection bandwidth of 1.18%-3.51% and a filter quality factor of 60-135. The GaAs varactor loaded filter tunes from 9.56 to 8.66 GHz with a - 20-dB bandwidth of 1.65%-2% and a filter quality factor of 55-90. Nonlinear measurements at the filter null with Δf = 1 MHz show that the RF-MEMS loaded filter results in > 25-dBm higher third-order intermodulation intercept point and P-1 dB compared with the varactor loaded filter. Both filters show high rejection levels ( > 24 dB) and low passband insertion loss ( <; 0.8 dB) from dc to the first spurious response at 19.5 GHz. The filter topology can be extended to higher order designs with an even number of poles.",
"title": ""
},
{
"docid": "37249acdade38893c6d026ba1961ccc1",
"text": "High performance PMOSFETs with gate length as short as 18-nm are reported. A self-aligned double-gate MOSFET structure (FinFET) is used to suppress the short channel effect. A 45 nm gate-length PMOS FinEET has an I/sub dsat/ of 410 /spl mu/A//spl mu/m (or 820 /spl mu/A//spl mu/m depending on the definition of the width of a double-gate device) at Vd=Vg=1.2 V and Tox=2.5 nm. The quasi-planar nature of this variant of the double-gate MOSFETs makes device fabrication relatively easy using the conventional planar MOSFET process technologies. Simulation shows possible scaling to 10-nm gate length.",
"title": ""
},
{
"docid": "8548c2a61f4854be6b8d4fb0bc315518",
"text": "The use of covert-channel methods to bypass security policies has increased considerably in the recent years. Malicious users neutralize security restriction by encapsulating protocols like peer-to-peer, chat or http proxy into other allowed protocols like Domain Name Server (DNS) or HTTP. This paper illustrates a machine learning approach to detect one particular covert-channel technique: DNS tunneling. Despite packet inspection may guarantee reliable intrusion detection in this context, it may suffer of scalability performance when a large set of sockets should be monitored in real time. Detecting the presence of DNS intruders by an aggregation-based monitoring is of main interest as it avoids packet inspection, thus preserving privacy and scalability. The proposed monitoring mechanism looks at simple statistical properties of protocol messages, such as statistics of packets inter-arrival times and of packets sizes. The analysis is complicated by two drawbacks: silent intruders (generating small statistical variations of legitimate traffic) and quick statistical fingerprints generation (to obtain a detection tool really applicable in the field). Results from experiments conducted on a live network are obtained by replicating individual detections over successive samples over time and by making a global decision through a majority voting scheme. The technique overcomes traditional classifier limitations. An insightful analysis of the performance leads to discover a unique intrusion detection tool, applicable in the presence of different tunneled applications. Copyright © 2014 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "b27a7921ce2005727f1bf768802d660c",
"text": "Four methods for reviewing a body of research literature – narrative review, descriptive review, vote-counting, and meta-analysis – are compared. Meta-analysis as a formalized, systematic review method is discussed in detail in terms of its history, current status, advantages, common analytic methods, and recent developments. Meta-analysis is found to be underutilized in IS. Suggestions on encouraging the use of metaanalysis in IS research and procedures recommended for meta-analysis are also provided.",
"title": ""
},
{
"docid": "2ddd492da2191f685daa111d5f89eedd",
"text": "Given the abundance of cameras and LCDs in today's environment, there exists an untapped opportunity for using these devices for communication. Specifically, cameras can tune to nearby LCDs and use them for network access. The key feature of these LCD-camera links is that they are highly directional and hence enable a form of interference-free wireless communication. This makes them an attractive technology for dense, high contention scenarios. The main challenge however, to enable such LCD-camera links is to maximize coverage, that is to deliver multiple Mb/s over multi-meter distances, independent of the view angle. To do so, these links need to address unique types of channel distortions, such as perspective distortion and blur.\n This paper explores this novel communication medium and presents PixNet, a system for transmitting information over LCD-camera links. PixNet generalizes the popular OFDM transmission algorithms to address the unique characteristics of the LCD-camera link which include perspective distortion, blur, and sensitivity to ambient light. We have built a prototype of PixNet using off-the-shelf LCDs and cameras. An extensive evaluation shows that a single PixNet link delivers data rates of up to 12 Mb/s at a distance of 10 meters, and works with view angles as wide as 120 degree°.",
"title": ""
},
{
"docid": "43c619d24864eb97498700315aea2d45",
"text": "BACKGROUND\nThe central nervous system (CNS) is involved in organic integration. Nervous modulation via bioactive compounds can modify metabolism in order to prevent systemic noncommunicable diseases (NCDs). Concerning this, plant polyphenols are proposed as neurotropic chemopreventive/ therapeutic agents, given their redox and regulating properties.\n\n\nOBJECTIVE\nTo review polyphenolic pharmacology and potential neurological impact on NCDs.\n\n\nMETHOD\nFirst, polyphenolic chemistry was presented, as well as pharmacology, i.e. kinetics and dynamics. Toxicology was particularly described. Then, functional relevance of these compounds was reviewed focusing on the metabolic CNS participation to modulate NCDs, with data being finally integrated.\n\n\nRESULTS\nOxidative stress is a major risk factor for NCDs. Polyphenols regulate the redox biology of different organic systems including the CNS, which participates in metabolic homeostasis. Polyphenolic neurotropism is determined by certain pharmacological characteristics, modifying nervous and systemic physiopathology, acting on several biological targets. Nonetheless, because these phytochemicals can trigger toxic effects, they should not be recommended indiscriminately.\n\n\nCONCLUSION\nSumming up, the modulating effects of polyphenols allow for the physiological role of CNS on metabolism and organic integration to be utilized in order to prevent NCDs, without losing sight of the risks.",
"title": ""
},
{
"docid": "343ad5204ee034972654aba86439730f",
"text": "This paper presents a Doppler radar vital sign detection system with random body movement cancellation (RBMC) technique based on adaptive phase compensation. An ordinary camera was integrated with the system to measure the subject's random body movement (RBM) that is fed back as phase information to the radar system for RBMC. The linearity of the radar system, which is strictly related to the circuit saturation problem in noncontact vital sign detection, has been thoroughly analyzed and discussed. It shows that larger body movement does not necessarily mean larger radar baseband output. High gain configuration at baseband is required for acceptable SNR in noncontact vital sign detection. The phase compensation at radar RF front-end helps to relieve the high-gain baseband from potential saturation in the presence of large body movement. A simple video processing algorithm was presented to extract the RBM without using any marker. Both theoretical analysis and simulation have been carried out to validate the linearity analysis and the proposed RBMC technique. Two experiments were carried out in the lab environment. One is the phase compensation at RF front end to extract a phantom motion in the presence of another large shaker motion, and the other one is to measure the subject person breathing normally but randomly moving his body back and forth. The experimental results show that the proposed radar system is effective to relieve the linearity burden of the baseband circuit and help compensate the RBM.",
"title": ""
},
{
"docid": "f0a5d33084588ed4b7fc4905995f91e2",
"text": "A new microstrip dual-band polarization reconfigurable antenna is presented for wireless local area network (WLAN) systems operating at 2.4 and 5.8 GHz. The antenna consists of a square microstrip patch that is aperture coupled to a microstrip line located along the diagonal line of the patch. The dual-band operation is realized by employing the TM10 and TM30 modes of the patch antenna. Four shorting posts are inserted into the patch to adjust the frequency ratio of the two modes. The center of each edge of the patch is connected to ground via a PIN diode for polarization switching. By switching between the different states of PIN diodes, the proposed antenna can radiate either horizontal, vertical, or 45° linear polarization in the two frequency bands. Measured results on reflection coefficients and radiation patterns agree well with numerical simulations.",
"title": ""
},
{
"docid": "18c90883c96b85dc8b3ef6e1b84c3494",
"text": "Data Selection is a popular step in Machine Translation pipelines. Feature Decay Algorithms (FDA) is a technique for data selection that has shown a good performance in several tasks. FDA aims to maximize the coverage of n-grams in the test set. However, intuitively, more ambiguous n-grams require more training examples in order to adequately estimate their translation probabilities. This ambiguity can be measured by alignment entropy. In this paper we propose two methods for calculating the alignment entropies for n-grams of any size, which can be used for improving the performance of FDA. We evaluate the substitution of the n-gramspecific entropy values computed by these methods to the parameters of both the exponential and linear decay factor of FDA. The experiments conducted on German-to-English and Czechto-English translation demonstrate that the use of alignment entropies can lead to an increase in the quality of the results of FDA.",
"title": ""
},
{
"docid": "d6edac3a6675c9edb2b36e75ac356ebd",
"text": "Ranking web pages for presenting the most relevant web pages to user’s queries is one of the main issues in any search engine. In this paper, two new ranking algorithms are offered, using Reinforcement Learning (RL) concepts. RL is a powerful technique of modern artificial intelligence that tunes agent’s parameters, interactively. In the first step, with formulation of ranking as an RL problem, a new connectivity-based ranking algorithm, called RL Rank, is proposed. In RL Rank, agent is considered as a surfer who travels between web pages by clicking randomly on a link in the current page. Each web page is considered as a",
"title": ""
},
{
"docid": "2e98d3fba411bcf5d73029a0fb933a88",
"text": "The ability of a cell to respond to a particular hormone depends on the presence of specific receptors for those hormones. Once the hormone has bound to its receptor, and following structural and biochemical modifications to the receptor, it separates from cytoplasmic chaperone proteins, thereby exposing the nuclear localization sequences that result in the activation of the receptor and initiation of the biological actions of the hormone on the target cell. In addition, recent work has demonstrated new pathways of steroid signaling through orphan and cell surface receptors that contribute to more rapid, “non-nuclear” or non-transcriptional effects of steroid hormones, often involving G-protein-mediated pathways. This review will summarize some of these studies for estrogens, androgens and progestins.",
"title": ""
},
{
"docid": "658e219b7b7a9b057b8b3ceb04565301",
"text": "The term \"bone\" refers to a family of materials that have complex hierarchically organized structures. These structures are primarily adapted to the variety of mechanical functions that bone fulfills. Here we review the structure-mechanical relations of one bone structural type, lamellar bone. This is the most abundant type in many mammals, including humans. A lamellar unit is composed of five sublayers. Each sublayer is an array of aligned mineralized collagen fibrils. The orientations of these arrays differ in each sublayer with respect to both collagen fibril axes and crystal layers, such that a complex rotated plywood-like structure is formed. Specific functions for lamellar bone, as opposed to the other bone types, could not be identified. It is therefore proposed that the lamellar structure is multifunctional-the \"concrete\" of the bone family of materials. Experimentally measured mechanical properties of lamellar bone demonstrate a clear-cut anisotropy with respect to the axis direction of long bones. A comparison of the elastic and ultimate properties of parallel arrays of lamellar units formed in primary bone with cylindrically shaped osteonal structures in secondary formed bone shows that most of the intrinsic mechanical properties are built into the lamellar structure. The major advantages of osteonal bone are its fracture properties. Mathematical modeling of the elastic properties based on the lamellar structure and using a rule-of-mixtures approach can closely simulate the measured mechanical properties, providing greater insight into the structure-mechanical relations of lamellar bone.",
"title": ""
},
{
"docid": "b52d6922473adaf4485df96af75baf55",
"text": "Hepatoblastoma is the most common primary hepatic tumor in children. Precocious puberty is a rare paraneoplastic syndrome that can occur in male children with hepatoblastoma as a result of elevated human chorionic gonadotropin (HCG). The clinical signs of precocious puberty may be detected months before the physical effects of tumor growth such as abdominal pain and distension. Therefore, consideration of hepatoblastoma in young boys presenting with precocious puberty can lead to earlier detection, diagnosis, treatment and thus arrest further virilization. This report describes the presentation, imaging and relevant laboratory findings in two pediatric patients with hepatoblastoma presenting with precocious puberty.",
"title": ""
},
{
"docid": "358e4c55233f3837cf95b8c269447cd2",
"text": "In this correspondence, the construction of low-density parity-check (LDPC) codes from circulant permutation matrices is investigated. It is shown that such codes cannot have a Tanner graph representation with girth larger than 12, and a relatively mild necessary and sufficient condition for the code to have a girth of 6, 8,10, or 12 is derived. These results suggest that families of LDPC codes with such girth values are relatively easy to obtain and, consequently, additional parameters such as the minimum distance or the number of redundant check sums should be considered. To this end, a necessary condition for the codes investigated to reach their maximum possible minimum Hamming distance is proposed.",
"title": ""
},
{
"docid": "c49ae120bca82ef0d9e94115ad7107f2",
"text": "An evaluation and comparison of seven of the world’s major building codes and standards is conducted in this study, with specific discussion of their estimations of the alongwind, acrosswind, and torsional response, where applicable, for a given building. The codes and standards highlighted by this study are those of the United States, Japan, Australia, the United Kingdom, Canada, China, and Europe. In addition, the response predicted by using the measured power spectra of the alongwind, acrosswind, and torsional responses for several building shapes tested in a wind tunnel are presented, and a comparison between the response predicted by wind tunnel data and that estimated by some of the standards is conducted. This study serves not only as a comparison of the response estimates by international codes and standards, but also introduces a new set of wind tunnel data for validation of wind tunnel-based empirical expressions. 1.0 Introduction Under the influence of dynamic wind loads, typical high-rise buildings oscillate in the alongwind, acrosswind, and torsional directions. The alongwind motion primarily results from pressure fluctuations on the windward and leeward faces, which generally follows the fluctuations in the approach flow, at least in the low frequency range. Therefore, alongwind aerodynamic loads may be quantified analytically utilizing quasi-steady and strip theories, with dynamic effects customarily represented by a random-vibrationbased “Gust Factor Approach” (Davenport 1967, Vellozzi & Cohen 1968, Vickery 1970, Simiu 1976, Solari 1982, ESDU 1989, Gurley & Kareem 1993). However, the acrosswind motion is introduced by pressure fluctuations on the side faces which are influenced by fluctuations in the separated shear layers and wake dynamics (Kareem 1982). This renders the applicability of strip and quasi-steady theories rather doubtful. Similarly, the wind-induced torsional effects result from an imbalance in the instantaneous pressure distribution on the building surface. These load effects are further amplified in asymmetric buildings as a result of inertial coupling (Kareem 1985). Due to the complexity of the acrosswind and torsional responses, physical modeling of fluid-structure interactions remains the only viable means of obtaining information on wind loads, though recently, research in the area of computational fluid dynam1. Graduate Student & Corresponding Author, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556. e-mail: Tracy.L.Kijewski.1@nd.edu 2. Professor, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556",
"title": ""
}
] |
scidocsrr
|
a5f8eb914b8230b0374a716ebe7c939c
|
Artificial Intelligence – Consumers and Industry Impact
|
[
{
"docid": "2fc294f2ab50b917f36155c0b9e1847d",
"text": "Social and cultural conventions are an often-neglected aspect of intelligent-machine development.",
"title": ""
}
] |
[
{
"docid": "f8209a4b6cb84b63b1f034ec274fe280",
"text": "A major challenge in topic classification (TC) is the high dimensionality of the feature space. Therefore, feature extraction (FE) plays a vital role in topic classification in particular and text mining in general. FE based on cosine similarity score is commonly used to reduce the dimensionality of datasets with tens or hundreds of thousands of features, which can be impossible to process further. In this study, TF-IDF term weighting is used to extract features. Selecting relevant features and determining how to encode them for a learning machine method have a vast impact on the learning machine methods ability to extract a good model. Two different weighting methods (TF-IDF and TF-IDF Global) were used and tested on the Reuters-21578 text categorization test collection. The obtained results emerged a good candidate for enhancing the performance of English topics FE. Simulation results the Reuters-21578 text categorization show the superiority of the proposed algorithm.",
"title": ""
},
{
"docid": "8d9fbeda9f6a77e927ac14b0d426d1d3",
"text": "This paper describes a new detector for finding perspective rectangle structural features that runs in real-time. Given the vanishing points within an image, the algorithm recovers the edge points that are aligned along the vanishing lines. We then efficiently recover the intersections of pairs of lines corresponding to different vanishing points. The detector has been designed for robot visual mapping, and we present the application of this detector to real-time stereo matching and reconstruction over a corridor sequence for this goal.",
"title": ""
},
{
"docid": "e8ebec3b64e05cad3ab4c9b3d2bfa191",
"text": "Multidimensional databases have recently gained widespread acceptance in the commercial world for supporting on-line analytical processing (OLAP) applications. We propose a hypercube-based data model and a few algebraic operations that provide semantic foundation to multidimensional databases and extend their current functionality. The distinguishing feature of the proposed model is the symmetric treatment not only of all dimensions but also measures. The model also is very exible in that it provides support for multiple hierarchies along each dimension and support for adhoc aggregates. The proposed operators are composable, reorderable, and closed in application. These operators are also minimal in the sense that none can be expressed in terms of others nor can any one be dropped without sacri cing functionality. They make possible the declarative speci cation and optimization of multidimensional database queries that are currently speci ed operationally. The operators have been designed to be translated to SQL and can be implemented either on top of a relational database system or within a special purpose multidimensional database engine. In e ect, they provide an algebraic application programming interface (API) that allows the separation of the frontend from the backend. Finally, the proposed model provides a framework in which to study multidimensional databases and opens several new research problems. Current Address: Oracle Corporation, Redwood City, California. Current Address: University of California, Berkeley, California.",
"title": ""
},
{
"docid": "afffadc35ac735d11e1a415c93d1c39f",
"text": "We examine self-control problems — modeled as time-inconsistent, presentbiased preferences—in a model where a person must do an activity exactly once. We emphasize two distinctions: Do activities involve immediate costs or immediate rewards, and are people sophisticated or naive about future self-control problems? Naive people procrastinate immediate-cost activities and preproperate—do too soon—immediate-reward activities. Sophistication mitigates procrastination, but exacerbates preproperation. Moreover, with immediate costs, a small present bias can severely harm only naive people, whereas with immediate rewards it can severely harm only sophisticated people. Lessons for savings, addiction, and elsewhere are discussed. (JEL A12, B49, C70, D11, D60, D74, D91, E21)",
"title": ""
},
{
"docid": "bbf5561f88f31794ca95dd991c074b98",
"text": "O CTO B E R 2014 | Volume 18, Issue 4 GetMobile Every time you use a voice command on your smartphone, you are benefitting from a technique called cloud offload. Your speech is captured by a microphone, pre-processed, then sent over a wireless network to a cloud service that converts speech to text. The result is then forwarded to another cloud service or sent back to your mobile device, depending on the application. Speech recognition and many other resource-intensive mobile services require cloud offload. Otherwise, the service would be too slow and drain too much of your battery. Research projects on cloud offload are hot today, with MAUI [4] in 2010, Odessa [13] and CloneCloud [2] in 2011, and COMET [8] in 2012. These build on a rich heritage of work dating back to the mid-1990s on a theme that is broadly characterized as cyber foraging. They are also relevant to the concept of cloudlets [18] that has emerged as an important theme in mobile-cloud convergence. Reflecting my participation in this evolution from its origins, this article is a personal account of the key developments in this research area. It focuses on mobile computing, ignoring many other uses of remote execution since the 1980s such as distributed processing, query processing, distributed object systems, and distributed partitioning.",
"title": ""
},
{
"docid": "c158fbbcf592ff372d0d317494f79537",
"text": "The concept of no- or minimal-preparation veneers is more than 25 years old, yet there is no classification system categorizing the extent of preparation for different veneer treatments. The lack of veneer preparation classifications creates misunderstanding and miscommunication with patients and within the dental profession. Such a system could be indicated in various clinical scenarios and would benefit dentists and patients, providing a guide for conservatively preparing and placing veneers. A classification system is proposed to divide preparation and veneering into reduction--referred to as space requirement, working thickness, or material room--volume of enamel remaining, and percentage of dentin exposed. Using this type of metric provides an accurate measurement system to quantify tooth structure removal, with preferably no reduction, on a case-by-case basis, dissolve uncertainty, and aid with multiple aspects of treatment planning and communication.",
"title": ""
},
{
"docid": "a7bd7a5b7d79ce8c5691abfdcecfeec7",
"text": "We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.",
"title": ""
},
{
"docid": "08c6752ef763f74eb63b2546889f0860",
"text": "Subspace clustering refers to the problem of grouping data points that lie in a union of low-dimensional subspaces. One successful approach for solving this problem is sparse subspace clustering, which is based on a sparse representation of the data. In this paper, we extend SSC to non-linear manifolds by using the kernel trick. We show that the alternating direction method of multipliers can be used to efficiently find kernel sparse representations. Various experiments on synthetic as well real datasets show that non-linear mappings lead to sparse representation that give better clustering results than state-of-the-art methods.",
"title": ""
},
{
"docid": "db433a01dd2a2fd80580ffac05601f70",
"text": "While depth tends to improve network performances, it also m akes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed a t obtaining small and fast-to-execute models, and it has shown that a student netw ork could imitate the soft output of a larger teacher network or ensemble of networ ks. In this paper, we extend this idea to allow the training of a student that is d eeper and thinner than the teacher, using not only the outputs but also the inte rmediate representations learned by the teacher as hints to improve the traini ng process and final performance of the student. Because the student intermedia te hidden layer will generally be smaller than the teacher’s intermediate hidde n layer, additional parameters are introduced to map the student hidden layer to th e prediction of the teacher hidden layer. This allows one to train deeper studen s that can generalize better or run faster, a trade-off that is controlled by the ch osen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teache r network.",
"title": ""
},
{
"docid": "557b718f65e68f3571302e955ddb74d7",
"text": "Synthetic aperture radar (SAR) has been an unparalleled tool in cloudy and rainy regions as it allows observations throughout the year because of its all-weather, all-day operation capability. In this paper, the influence of Wenchuan Earthquake on the Sichuan Giant Panda habitats was evaluated for the first time using SAR interferometry and combining data from C-band Envisat ASAR and L-band ALOS PALSAR data. Coherence analysis based on the zero-point shifting indicated that the deforestation process was significant, particularly in habitats along the Min River approaching the epicenter after the natural disaster, and as interpreted by the vegetation deterioration from landslides, avalanches and debris flows. Experiments demonstrated that C-band Envisat ASAR data were sensitive to vegetation, resulting in an underestimation of deforestation; in contrast, L-band PALSAR data were capable of evaluating the deforestation process owing to a better penetration and the significant coherence gain on damaged forest areas. The percentage of damaged forest estimated by PALSAR decreased from 20.66% to 17.34% during 2009–2010, implying an approximate 3% recovery rate of forests in the earthquake OPEN ACCESS Remote Sens. 2014, 6 6284 impacted areas. This study proves that long-wavelength SAR interferometry is promising for rapid assessment of disaster-induced deforestation, particularly in regions where the optical acquisition is constrained.",
"title": ""
},
{
"docid": "a1ccca52f1563a2e208afcaa37e209d1",
"text": "BACKGROUND\nImplicit biases involve associations outside conscious awareness that lead to a negative evaluation of a person on the basis of irrelevant characteristics such as race or gender. This review examines the evidence that healthcare professionals display implicit biases towards patients.\n\n\nMETHODS\nPubMed, PsychINFO, PsychARTICLE and CINAHL were searched for peer-reviewed articles published between 1st March 2003 and 31st March 2013. Two reviewers assessed the eligibility of the identified papers based on precise content and quality criteria. The references of eligible papers were examined to identify further eligible studies.\n\n\nRESULTS\nForty two articles were identified as eligible. Seventeen used an implicit measure (Implicit Association Test in fifteen and subliminal priming in two), to test the biases of healthcare professionals. Twenty five articles employed a between-subjects design, using vignettes to examine the influence of patient characteristics on healthcare professionals' attitudes, diagnoses, and treatment decisions. The second method was included although it does not isolate implicit attitudes because it is recognised by psychologists who specialise in implicit cognition as a way of detecting the possible presence of implicit bias. Twenty seven studies examined racial/ethnic biases; ten other biases were investigated, including gender, age and weight. Thirty five articles found evidence of implicit bias in healthcare professionals; all the studies that investigated correlations found a significant positive relationship between level of implicit bias and lower quality of care.\n\n\nDISCUSSION\nThe evidence indicates that healthcare professionals exhibit the same levels of implicit bias as the wider population. The interactions between multiple patient characteristics and between healthcare professional and patient characteristics reveal the complexity of the phenomenon of implicit bias and its influence on clinician-patient interaction. The most convincing studies from our review are those that combine the IAT and a method measuring the quality of treatment in the actual world. Correlational evidence indicates that biases are likely to influence diagnosis and treatment decisions and levels of care in some circumstances and need to be further investigated. Our review also indicates that there may sometimes be a gap between the norm of impartiality and the extent to which it is embraced by healthcare professionals for some of the tested characteristics.\n\n\nCONCLUSIONS\nOur findings highlight the need for the healthcare profession to address the role of implicit biases in disparities in healthcare. More research in actual care settings and a greater homogeneity in methods employed to test implicit biases in healthcare is needed.",
"title": ""
},
{
"docid": "9cb832657be4d4d80682c1a49249a319",
"text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.08.023 ⇑ Corresponding author. Tel.: +47 73593602; fax: + E-mail address: Marielle.Christiansen@iot.ntnu.no This paper considers a maritime inventory routing problem faced by a major cement producer. A heterogeneous fleet of bulk ships transport multiple non-mixable cement products from producing factories to regional silo stations along the coast of Norway. Inventory constraints are present both at the factories and the silos, and there are upper and lower limits for all inventories. The ship fleet capacity is limited, and in peak periods the demand for cement products at the silos exceeds the fleet capacity. In addition, constraints regarding the capacity of the ships’ cargo holds, the depth of the ports and the fact that different cement products cannot be mixed must be taken into consideration. A construction heuristic embedded in a genetic algorithmic framework is developed. The approach adopted is used to solve real instances of the problem within reasonable solution time and with good quality solutions. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7334904bb8b95fbf9668c388d30d4d72",
"text": "Write-optimized data structures like Log-Structured Merge-tree (LSM-tree) and its variants are widely used in key-value storage systems like Big Table and Cassandra. Due to deferral and batching, the LSM-tree based storage systems need background compactions to merge key-value entries and keep them sorted for future queries and scans. Background compactions play a key role on the performance of the LSM-tree based storage systems. Existing studies about the background compaction focus on decreasing the compaction frequency, reducing I/Os or confining compactions on hot data key-ranges. They do not pay much attention to the computation time in background compactions. However, the computation time is no longer negligible, and even the computation takes more than 60% of the total compaction time in storage systems using flash based SSDs. Therefore, an alternative method to speedup the compaction is to make good use of the parallelism of underlying hardware including CPUs and I/O devices. In this paper, we analyze the compaction procedure, recognize the performance bottleneck, and propose the Pipelined Compaction Procedure (PCP) to better utilize the parallelism of CPUs and I/O devices. Theoretical analysis proves that PCP can improve the compaction bandwidth. Furthermore, we implement PCP in real system and conduct extensive experiments. The experimental results show that the pipelined compaction procedure can increase the compaction bandwidth and storage system throughput by 77% and 62% respectively.",
"title": ""
},
{
"docid": "87bd2fc53cbe92823af786e60e82f250",
"text": "Cyc is a bold attempt to assemble a massive knowledge base (on the order of 108 axioms) spanning human consensus knowledge. This article examines the need for such an undertaking and reviews the authos' efforts over the past five years to begin its construction. The methodology and history of the project are briefly discussed, followed by a more developed treatment of the current state of the representation language used (epistemological level), techniques for efficient inferencing and default reasoning (heuristic level), and the content and organization of the knowledge base.",
"title": ""
},
{
"docid": "6960f780dfc491c6cdcbb6c53fd32363",
"text": "We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96% smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (",
"title": ""
},
{
"docid": "baafff8270bf3d33d70544130968f6d3",
"text": "The authors present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with magnetic resonance imaging (MRI) or computed tomography (CT). Because the authors allow for mixtures of materials and treat voxels as regions, their technique reduces errors that other classification techniques can create along boundaries between materials and is particularly useful for creating accurate geometric models and renderings from volume data. It also has the potential to make volume measurements more accurately and classifies noisy, low-resolution data well. There are two unusual aspects to the authors' approach. First, they assume that, due to partial-volume effects, or blurring, voxels can contain more than one material, e.g., both muscle and fat; the authors compute the relative proportion of each material in the voxels. Second, they incorporate information from neighboring voxels into the classification process by reconstructing a continuous function, /spl rho/(x), from the samples and then looking at the distribution of values that /spl rho/(x) takes on within the region of a voxel. This distribution of values is represented by a histogram taken over the region of the voxel; the mixture of materials that those values measure is identified within the voxel using a probabilistic Bayesian approach that matches the histogram by finding the mixture of materials within each voxel most likely to have created the histogram. The size of regions that the authors classify is chosen to match the sparing of the samples because the spacing is intrinsically related to the minimum feature size that the reconstructed continuous function can represent.",
"title": ""
},
{
"docid": "f4da31cf831dd3db5f3063c5ea1fca62",
"text": "SUMMARY Backtrack algorithms are applicable to a wide variety of problems. An efficient but readable version of such an algorithm is presented and its use in the problem of finding the maximal common subgraph of two graphs is described. Techniques available in this application area for ordering and pruning the backtrack search are discussed. This algorithm has been used successfully as a component of a program for analysing chemical reactions and enumerating the bond changes which have taken place.",
"title": ""
},
{
"docid": "32fad05dacb750e5539c66bb222b0e09",
"text": "Radio Frequency Identification (RFID) technology has received considerable attention from practitioners, driven by mandates from major retailers and the United States Department of Defense. RFID technology promises numerous benefits in the supply chain, such as increased visibility, security and efficiency. Despite such attentions and the anticipated benefits, RFID is not well-understood and many problems exist in the adoption and implementation of RFID. The purpose of this paper is to introduce RFID technology to practitioners and academicians by systematically reviewing the relevant literature, discussing how RFID systems work, their advantages, supply chain impacts, and the implementation challenges and the corresponding strategies, in the hope of providing guidance for practitioners in the implementation of RFID technology and offering a springboard for academicians to conduct future research in this area.",
"title": ""
},
{
"docid": "b0a24593396ef5f8029c560f87a07c45",
"text": "BACKGROUND\nYouth with disabilities are at risk of poor health outcomes as they transition to adult healthcare. Although space and place play an important role in accessing healthcare little is known about the spatial aspects of youth's transition from pediatric to adult healthcare.\n\n\nOBJECTIVE\nTo understand the spaces of well-being as youth with physical disabilities transition from pediatric to adult healthcare.\n\n\nMETHODS\nThis study draws on a qualitative design involving 63 in-depth interviews with young adults (n = 22), parents (n = 17), and clinicians (n = 24) involved in preparing young adults for transition. All participants were recruited from a pediatric rehabilitation hospital within a metropolitan area of Ontario, Canada. Data were analyzed using an inductive content analysis approach that was informed by the spaces of well-being framework.\n\n\nRESULTS\nThe results highlight that within the 'spaces of capability' those with more disability-related complications and/or those using a mobility device encountered challenges in their transition to adult care. The 'spaces of security' influencing youth's well-being during their transition included: temporary (in)security while they were away at college, and health (in)security. Most of the focus on youth's transition included 'integrative spaces', which can enhance or hinder their well-being. Such spaces included: spatial (dis)connections (distance to access care), embeddedness (family and community), physical access, and distance. Meanwhile, therapeutic spaces involved having spaces that youth were satisfied with and enhanced their well-being as they transitioned to adult care.\n\n\nCONCLUSIONS\nIn applying the spaces of well-being framework, the findings showed that youth had varied experiences regarding spaces of capability, security, integrative, and therapeutic spaces.",
"title": ""
},
{
"docid": "de84b1b739da8e272f8bf88889b1c4ad",
"text": "Stock market is the most popular investment scheme promising high returns albeit some risks. An intelligent stock prediction model would thus be desirable. So, this paper aims at surveying recent literature in the area of Neural Network, Hidden Markov Model and Support Vector Machine used to predict the stock market fluctuation. Neural networks and SVM are identified to be the leading machine learning techniques in stock market prediction area. Also, a model for predicting stock market using HMM is presented. Traditional techniques lack in covering stock price fluctuations and so new approaches have been developed for analysis of stock price variations. Markov Model is one such recent approach promising better results. In this paper a predicting method using Hidden Markov Model is proposed to provide better accuracy and a comparison of the existing techniques is also done.",
"title": ""
}
] |
scidocsrr
|
99ec25d15b4010422aae1ab34bb01b55
|
Towards an Engine for Lifelong Interactive Knowledge Learning in Human-Machine Conversations
|
[
{
"docid": "ffa5989436b8783314d60f7fb47c447a",
"text": "A long-term goal of machine learning research is to build an intelligent dialog agent. Most research in natural language understanding has focused on learning from fixed training sets of labeled data, with supervision either at the word level (tagging, parsing tasks) or sentence level (question answering, machine translation). This kind of supervision is not realistic of how humans learn, where language is both learned by, and used for, communication. In this work, we study dialog-based language learning, where supervision is given naturally and implicitly in the response of the dialog partner during the conversation. We study this setup in two domains: the bAbI dataset of [30] and large-scale question answering from [4]. We evaluate a set of baseline learning strategies on these tasks, and show that a novel model incorporating predictive lookahead is a promising approach for learning from a teacher’s response. In particular, a surprising result is that it can learn to answer questions correctly without any reward-based supervision at all.",
"title": ""
},
{
"docid": "75e14669377727660391ab3870d1627e",
"text": "Knowledge base (KB) completion aims to infer missing facts from existing ones in a KB. Among various approaches, path ranking (PR) algorithms have received increasing attention in recent years. PR algorithms enumerate paths between entitypairs in a KB and use those paths as features to train a model for missing fact prediction. Due to their good performances and high model interpretability, several methods have been proposed. However, most existing methods suffer from scalability (high RAM consumption) and feature explosion (trains on an exponentially large number of features) problems. This paper proposes a Context-aware Path Ranking (C-PR) algorithm to solve these problems by introducing a selective path exploration strategy. C-PR learns global semantics of entities in the KB using word embedding and leverages the knowledge of entity semantics to enumerate contextually relevant paths using bidirectional random walk. Experimental results on three large KBs show that the path features (fewer in number) discovered by C-PR not only improve predictive performance but also are more interpretable than existing baselines.",
"title": ""
}
] |
[
{
"docid": "7dbb7d378eae5c4b77076aa9504ba871",
"text": "The authors present a Markov random field model which allows realistic edge modeling while providing stable maximum a posterior (MAP) solutions. The model, referred to as a generalized Gaussian Markov random field (GGMRF), is named for its similarity to the generalized Gaussian distribution used in robust detection and estimation. The model satisfies several desirable analytical and computational properties for map estimation, including continuous dependence of the estimate on the data, invariance of the character of solutions to scaling of data, and a solution which lies at the unique global minimum of the a posteriori log-likelihood function. The GGMRF is demonstrated to be useful for image reconstruction in low-dosage transmission tomography.",
"title": ""
},
{
"docid": "cc291cfa92227d97784702bd108edae1",
"text": "Graphene's optical properties in the infrared and terahertz can be tailored and enhanced by patterning graphene into periodic metamaterials with sub-wavelength feature sizes. Here we demonstrate polarization-sensitive and gate-tunable photodetection in graphene nanoribbon arrays. The long-lived hybrid plasmon-phonon modes utilized are coupled excitations of electron density oscillations and substrate (SiO2) surface polar phonons. Their excitation by s-polarization leads to an in-resonance photocurrent, an order of magnitude larger than the photocurrent observed for p-polarization, which excites electron-hole pairs. The plasmonic detectors exhibit photo-induced temperature increases up to four times as large as comparable two-dimensional graphene detectors. Moreover, the photocurrent sign becomes polarization sensitive in the narrowest nanoribbon arrays owing to differences in decay channels for photoexcited hybrid plasmon-phonons and electrons. Our work provides a path to light-sensitive and frequency-selective photodetectors based on graphene's plasmonic excitations.",
"title": ""
},
{
"docid": "61b6021f99649010437096abc13119ed",
"text": "Given electroencephalogram (EEG) data measured from several subjects under the same conditions, our goal is to estimate common task-related bases in a linear model that capture intra-subject variations as well as inter-subject variations. Such bases capture the common phenomenon in group data, which is a core of group analysis. In this paper we present a method of nonnegative matrix factorization (NMF) that is well suited to analyzing EEG data of multiple subjects. The method is referred to as group nonnegative matrix factorization (GNMF) where we seek task-related common bases reflecting both intra-subject and inter-subject variations, as well as bases involving individual characteristics. We compare GNMF with NMF and some modified NMFs, in the task of learning spectral features from EEG data. Experiments on brain computer interface (BCI) competition data indicate that GNMF improves the EEG classification performance. In addition, we also show that GNMF is useful in the task of subject-tosubject transfer where the prediction for an unseen subject is performed based on a linear model learned from different subjects in the same group.",
"title": ""
},
{
"docid": "2ab6bc212e45c3d5775e760e5a01c0ef",
"text": "The face recognition systems are used to recognize the person by using merely a person’s image. The face detection scheme is the primary method which is used to extract the region of interest (ROI). The ROI is further processed under the face recognition scheme. In the proposed model, we are going to use the cross-correlation algorithm along with the viola jones for the purpose of face recognition to recognize the person. The proposed model is proposed using the Cross-correlation algorithm along with cross correlation scheme in order to recognize the person by evaluating the facial features.",
"title": ""
},
{
"docid": "8b971925c3a9a70b6c3eaffedf5a3985",
"text": "We consider the NP-complete problem of finding an enclosing rectangle of minimum area that will contain a given a set of rectangles. We present two different constraintsatisfaction formulations of this problem. The first searches a space of absolute placements of rectangles in the enclosing rectangle, while the other searches a space of relative placements between pairs of rectangles. Both approaches dramatically outperform previous approaches to optimal rectangle packing. For problems where the rectangle dimensions have low precision, such as small integers, absolute placement is generally more efficient, whereas for rectangles with high-precision dimensions, relative placement will be more effective. In two sets of experiments, we find both the smallest rectangles and squares that can contain the set of squares of size 1 × 1, 2 × 2, . . . ,N × N , for N up to 27. In addition, we solve an open problem dating to 1966, concerning packing the set of consecutive squares up to 24 × 24 in a square of size 70 × 70. Finally, we find the smallest enclosing rectangles that can contain a set of unoriented rectangles of size 1 × 2, 2 × 3, 3 × 4, . . . ,N × (N + 1), for N up to 25.",
"title": ""
},
{
"docid": "59a98a769d8aa5565f522369e65f02fc",
"text": "Common nonlinear activation functions used in neural networks can cause training difficulties due to the saturation behavior of the activation function, which may hide dependencies that are not visible to vanilla-SGD (using first order gradients only). Gating mechanisms that use softly saturating activation functions to emulate the discrete switching of digital logic circuits are good examples of this. We propose to exploit the injection of appropriate noise so that the gradients may flow easily, even if the noiseless application of the activation function would yield zero gradient. Large noise will dominate the noise-free gradient and allow stochastic gradient descent to explore more. By adding noise only to the problematic parts of the activation function, we allow the optimization procedure to explore the boundary between the degenerate (saturating) and the well-behaved parts of the activation function. We also establish connections to simulated annealing, when the amount of noise is annealed down, making it easier to optimize hard objective functions. We find experimentally that replacing such saturating activation functions by noisy variants helps training in many contexts, yielding state-of-the-art or competitive results on different datasets and task, especially when training seems to be the most difficult, e.g., when curriculum learning is necessary to obtain good results.",
"title": ""
},
{
"docid": "20edbb4e0d7ba85da7427b4f6b8c28d9",
"text": "The use of visual models such as pictures, diagrams and animations in science education is increasing. This is because of the complex nature associated with the concepts in the field. Students, especially entrant students, often report misconceptions and learning difficulties associated with various concepts especially those that exist at a microscopic level, such as DNA, the gene and meiosis as well as those that exist in relatively large time scales such as evolution. However the role of visual literacy in the construction of knowledge in science education has not been investigated much. This article explores the theoretical process of visualization answering the question \"how can visual literacy be understood based on the theoretical cognitive process of visualization in order to inform the understanding, teaching and studying of visual literacy in science education?\" Based on various theories on cognitive processes during learning for science and general education the author argues that the theoretical process of visualization consists of three stages, namely, Internalization of Visual Models, Conceptualization of Visual Models and Externalization of Visual Models. The application of this theoretical cognitive process of visualization and the stages of visualization in science education are discussed.",
"title": ""
},
{
"docid": "bd100b77d129163277b9ea6225fd3af3",
"text": "Spatial interactions (or flows), such as population migration and disease spread, naturally form a weighted location-to-location network (graph). Such geographically embedded networks (graphs) are usually very large. For example, the county-to-county migration data in the U.S. has thousands of counties and about a million migration paths. Moreover, many variables are associated with each flow, such as the number of migrants for different age groups, income levels, and occupations. It is a challenging task to visualize such data and discover network structures, multivariate relations, and their geographic patterns simultaneously. This paper addresses these challenges by developing an integrated interactive visualization framework that consists three coupled components: (1) a spatially constrained graph partitioning method that can construct a hierarchy of geographical regions (communities), where there are more flows or connections within regions than across regions; (2) a multivariate clustering and visualization method to detect and present multivariate patterns in the aggregated region-to-region flows; and (3) a highly interactive flow mapping component to map both flow and multivariate patterns in the geographic space, at different hierarchical levels. The proposed approach can process relatively large data sets and effectively discover and visualize major flow structures and multivariate relations at the same time. User interactions are supported to facilitate the understanding of both an overview and detailed patterns.",
"title": ""
},
{
"docid": "3b88cd186023cc5d4a44314cdb521d0e",
"text": "RATIONALE, AIMS AND OBJECTIVES\nThis article aims to provide evidence to guide multidisciplinary clinical practitioners towards successful initiation and long-term maintenance of oral feeding in preterm infants, directed by the individual infant maturity.\n\n\nMETHOD\nA comprehensive review of primary research, explorative work, existing guidelines, and evidence-based opinions regarding the transition to oral feeding in preterm infants was studied to compile this document.\n\n\nRESULTS\nCurrent clinical hospital practices are described and challenged and the principles of cue-based feeding are explored. \"Traditional\" feeding regimes use criteria, such as the infant's weight, gestational age and being free of illness, and even caregiver intuition to initiate or delay oral feeding. However, these criteria could compromise the infant and increase anxiety levels and frustration for parents and caregivers. Cue-based feeding, opposed to volume-driven feeding, lead to improved feeding success, including increased weight gain, shorter hospital stay, fewer adverse events, without increasing staff workload while simultaneously improving parents' skills regarding infant feeding. Although research is available on cue-based feeding, an easy-to-use clinical guide for practitioners could not be found. A cue-based infant feeding regime, for clinical decision making on providing opportunities to support feeding success in preterm infants, is provided in this article as a framework for clinical reasoning.\n\n\nCONCLUSIONS\nCue-based feeding of preterm infants requires care providers who are trained in and sensitive to infant cues, to ensure optimal feeding success. An easy-to-use clinical guideline is presented for implementation by multidisciplinary team members. This evidence-based guideline aims to improve feeding outcomes for the newborn infant and to facilitate the tasks of nurses and caregivers.",
"title": ""
},
{
"docid": "13503c2cb633e162f094727df62092d3",
"text": "In this article, we investigate word sense distributions in noun compounds (NCs). Our primary goal is to disambiguate the word sense of component words in NCs, based on investigation of “semantic collocation” between them. We use sense collocation and lexical substitution to build supervised and unsupervised word sense disambiguation (WSD) classifiers, and show our unsupervised learner to be superior to a benchmark WSD system. Further, we develop a word sense-based approach to interpreting the semantic relations in NCs.",
"title": ""
},
{
"docid": "427796f5c37e41363c1664b47596eacf",
"text": "A trading and portfolio management system called QSR is proposed. It uses Q-learning and Sharpe ratio maximization algorithm. We use absolute proot and relative risk-adjusted proot as performance function to train the system respectively, and employ a committee of two networks to do the testing. The new proposed algorithm makes use of the advantages of both parts and can be used in a more general case. We demonstrate with experimental results that the proposed approach generates appreciable proots from trading in the foreign exchange markets.",
"title": ""
},
{
"docid": "638e0059bf390b81de2202c22427b937",
"text": "Oral and gastrointestinal mucositis is a toxicity of many forms of radiotherapy and chemotherapy. It has a significant impact on health, quality of life and economic outcomes that are associated with treatment. It also indirectly affects the success of antineoplastic therapy by limiting the ability of patients to tolerate optimal tumoricidal treatment. The complex pathogenesis of mucositis has only recently been appreciated and reflects the dynamic interactions of all of the cell and tissue types that comprise the epithelium and submucosa. The identification of the molecular events that lead to treatment-induced mucosal injury has provided targets for mechanistically based interventions to prevent and treat mucositis.",
"title": ""
},
{
"docid": "9117bb0ed6ab5fb573f16b5a09798711",
"text": "When does knowledge transfer benefit performance? Combining field data from a global consulting firm with an agent-based model, we examine how efforts to supplement one’s knowledge from coworkers interact with individual, organizational, and environmental characteristics to impact organizational performance. We find that once cost and interpersonal exchange are included in the analysis, the impact of knowledge transfer is highly contingent. Depending on specific characteristics and circumstances, knowledge transfer can better, matter little to, or even harm performance. Three illustrative studies clarify puzzling past results and offer specific boundary conditions: (1) At the individual level, better organizational support for employee learning diminishes the benefit of knowledge transfer for organizational performance. (2) At the organization level, broader access to organizational memory makes global knowledge transfer less beneficial to performance. (3) When the organizational environment becomes more turbulent, the organizational performance benefits of knowledge transfer decrease. The findings imply that organizations may forgo investments in both organizational memory and knowledge exchange, that wide-ranging knowledge exchange may be unimportant or even harmful for performance, and that organizations operating in turbulent environments may find that investment in knowledge exchange undermines performance rather than enhances it. At a time when practitioners are urged to make investments in facilitating knowledge transfer and collaboration, appreciation of the complex relationship between knowledge transfer and performance will help in reaping benefits while avoiding liabilities.",
"title": ""
},
{
"docid": "4e54ca27e8f28deefac8219cb8d02d16",
"text": "The design, simulation studies, and experimental verification of an electrically small, low-profile, broadside-radiating Huygens circularly polarized (HCP) antenna are reported. To realize its unique circular polarization cardioid-shaped radiation characteristics in a compact structure, two pairs of the metamaterial-inspired near-field resonant parasitic elements, the Egyptian axe dipole (EAD) and the capacitively loaded loop (CLL), are integrated into a crossed-dipole configuration. The EAD (CLL) elements act as the orthogonal electric dipole (magnetic dipole) radiators. Balanced broadside-radiated electric and magnetic field amplitudes with the requisite 90° phase difference between them are realized by exciting these two pairs of electric and magnetic dipoles with a specially designed, unbalanced crossed-dipole structure. The electrically small (ka = 0.73) design operates at 1575 MHz. It is low profile $0.04\\lambda _{\\mathbf {0}}$ , and its entire volume is only $0.0018\\lambda _{\\mathbf {0}}^{\\mathbf {3}}$ . A prototype of this optimized HCP antenna system was fabricated, assembled, and tested. The measured results are in good agreement with their simulated values. They demonstrate that the prototype HCP antenna resonates at 1584 MHz with a 0.6 dB axial ratio, and produces the predicted Huygens cardioid-shaped radiation patterns. The measured peak realized LHCP gain was 2.7 dBic, and the associated front-to-back ratio was 17.7 dB.",
"title": ""
},
{
"docid": "29dcdc7c19515caad04c6fb58e7de4ea",
"text": "The standard way to procedurally generate random terrain for video games and other applications is to post-process the output of a fast noise generator such as Perlin noise. Tuning the post-processing to achieve particular types of terrain requires game designers to be reasonably well-trained in mathematics. A well-known variant of Perlin noise called value noise is used in a process accessible to designers trained in geography to generate geotypical terrain based on elevation statistics drawn from widely available sources such as the United States Geographical Service. A step-by-step process for downloading and creating terrain from realworld USGS elevation data is described, and an implementation in C++ is given.",
"title": ""
},
{
"docid": "7b7e7db68753dc40fce611ce06dc7c74",
"text": "Ontology learning is the process of acquiring (constructing or integrating) an ontology (semi-) automatically. Being a knowledge acquisition task, it is a complex activity, which becomes even more complex in the context of the BOEMIE project, due to the management of multimedia resources and the multi-modal semantic interpretation that they require. The purpose of this chapter is to present a survey of the most relevant methods, techniques and tools used for the task of ontology learning. Adopting a practical perspective, an overview of the main activities involved in ontology learning is presented. This breakdown of the learning process is used as a basis for the comparative analysis of existing tools and approaches. The comparison is done along dimensions that emphasize the particular interests of the BOEMIE project. In this context, ontology learning in BOEMIE is treated and compared to the state of the art, explaining how BOEMIE addresses problems observed in existing systems and contributes to issues that are not frequently considered by existing approaches.",
"title": ""
},
{
"docid": "17ae550374220164f05c3421b6ff7cd1",
"text": "Abstract meaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclicmeaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclic graphs. AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We introduce a neural parser which treats alignments as latent variables within a joint probabilistic model of concepts, relations and alignments. As exact inference requires marginalizing over alignments and is infeasible, we use the variational autoencoding framework and a continuous relaxation of the discrete alignments. We show that joint modeling is preferable to using a pipeline of align and parse. The parser achieves the best reported results on the standard benchmark (73.6% on LDC2016E25).",
"title": ""
},
{
"docid": "b70716877c23701d0897ab4a42a5beba",
"text": "We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image. Limited by the nature of deep neural network, previous methods usually represent a 3D shape in volume or point cloud, and it is non-trivial to convert them to the more ready-to-use mesh model. Unlike the existing methods, our network represents 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various of mesh related losses to capture properties of different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh model with better details, but also achieves higher 3D shape estimation accuracy compared to the state-of-the-art.",
"title": ""
},
{
"docid": "dc4e9b951f83843b17c620a4b766282d",
"text": "Security threats have been a major concern as a result of emergence of technology in every aspect including internet market, computational and communication technologies. To solve this issue effective mechanism of “cryptography” is used to ensure integrity, privacy, availability, authentication, computability, identification and accuracy. Cryptology techniques like PKC and SKC are used of data recovery. In current work, we describe exploration of efficient approach of private key architecture on the basis of attributes: effectiveness, scalability, flexibility, reliability and degree of security issues essential for safe wired and wireless communication. The work explores efficient private key algorithm based on security of individual system and scalability under criteria of memory-cpu utilization together with encryption performance. The exploration results in AES as superior over other algorithm. The work opens a new direction over cloud security and internet of things.",
"title": ""
},
{
"docid": "dfcf58ee43773271d01cd5121c60fde0",
"text": "Semantic segmentation as a pixel-wise segmentation task provides rich object information, and it has been widely applied in many fields ranging from autonomous driving to medical image analysis. There are two main challenges on existing approaches: the first one is the obfuscation between objects resulted from the prediction of the network and the second one is the lack of localization accuracy. Hence, to tackle these challenges, we proposed global encoding module (GEModule) and dilated decoder module (DDModule). Specifically, the GEModule that integrated traditional dictionary learning and global semantic context information is to select discriminative features and improve performance. DDModule that combined dilated convolution and dense connection is used to decoder module and to refine the prediction results. We evaluated our proposed architecture on two public benchmarks, Cityscapes and CamVid data set. We conducted a series of ablation studies to exploit the effectiveness of each module, and our approach has achieved an intersection-over-union scores of 71.3% on the Cityscapes data set and 60.4% on the CamVid data set.",
"title": ""
}
] |
scidocsrr
|
27afdfd8d91bdc3eed280c1076da7782
|
A Soft-label Method for Noise-tolerant Distantly Supervised Relation Extraction
|
[
{
"docid": "3ca057959a24245764953a6aa1b2ed84",
"text": "Distant supervision for relation extraction is an efficient method to scale relation extraction to very large corpora which contains thousands of relations. However, the existing approaches have flaws on selecting valid instances and lack of background knowledge about the entities. In this paper, we propose a sentence-level attention model to select the valid instances, which makes full use of the supervision information from knowledge bases. And we extract entity descriptions from Freebase and Wikipedia pages to supplement background knowledge for our task. The background knowledge not only provides more information for predicting relations, but also brings better entity representations for the attention module. We conduct three experiments on a widely used dataset and the experimental results show that our approach outperforms all the baseline systems significantly.",
"title": ""
}
] |
[
{
"docid": "85d7ff422f9753543494f6a1c4bdf21c",
"text": "Early in the last century, 3 events put Colorado in the orthodontic spotlight: the discovery-by an orthodontist-of the caries-preventive powers of fluoridated water, the formation of dentistry's first specialty board, and the founding of a supply company by and for orthodontists. Meanwhile, inventive practitioners were giving the profession more choices of treatment modalities, and stainless steel was making its feeble debut.",
"title": ""
},
{
"docid": "4d9f0cf629cd3695a2ec249b81336d28",
"text": "We introduce an over-sketching interface for feature-preserving surface mesh editing. The user sketches a stroke that is the suggested position of part of a silhouette of the displayed surface. The system then segments all image-space silhouettes of the projected surface, identifies among all silhouette segments the best matching part, derives vertices in the surface mesh corresponding to the silhouette part, selects a sub-region of the mesh to be modified, and feeds appropriately modified vertex positions together with the sub-mesh into a mesh deformation tool. The overall algorithm has been designed to enable interactive modification of the surface --- yielding a surface editing system that comes close to the experience of sketching 3D models on paper.",
"title": ""
},
{
"docid": "7fd396ca8870c3a2fe99e63f24aaf9f7",
"text": "This paper presents a one-point calibration gaze tracking method based on eyeball kinematics using stereo cameras. By using two cameras and two light sources, the optic axis of the eye can be estimated. One-point calibration is required to estimate the angle of the visual axis from the optic axis. The eyeball rotates with optic and visual axes based on the eyeball kinematics (Listing's law). Therefore, we introduced eyeball kinematics to the one-point calibration process in order to properly estimate the visual axis. The prototype system was developed and it was found that the accuracy was under 1° around the center and bottom of the display.",
"title": ""
},
{
"docid": "bf9e828c9e3ee8d64d387cd518fb6b2d",
"text": "As smartphone penetration saturates, we are witnessing a new trend in personal mobile devices—wearable mobile devices or simply wearables as it is often called. Wearables come in many different forms and flavors targeting different accessories and clothing that people wear. Although small in size, they are often expected to continuously sense, collect, and upload various physiological data to improve quality of life. These requirements put significant demand on improving communication security and reducing power consumption of the system, fueling new research in these areas. In this paper, we first provide a comprehensive survey and classification of commercially available wearables and research prototypes. We then examine the communication security issues facing the popular wearables followed by a survey of solutions studied in the literature. We also categorize and explain the techniques for improving the power efficiency of wearables. Next, we survey the research literature in wearable computing. We conclude with future directions in wearable market and research.",
"title": ""
},
{
"docid": "ae804d35d922d4bd3e00a913e15ab053",
"text": "Brain tumor is an uncontrolled growth of tissues in human brain. This tumor, when turns in to cancer become life-threatening. So medical imaging, it is necessary to detect the exact location of tumor and its type. For locating tumor in magnetic resonance image (MRI) segmentation of MRI plays an important role. This paper includes survey on different segmentation techniques applied to MR Images for locating tumor. It also includes a proposed method for the same using Fuzzy C-Means algorithm and an algorithm to find area of tumor which is usefull to decide type of brain tumor.",
"title": ""
},
{
"docid": "968797472eeedd75ff9b89909bc4f84d",
"text": "In this paper, we investigate the issue of minimizing data center energy usage. In particular, we formulate a problem of virtual machine placement with the objective of minimizing the total power consumption of all the servers. To do this, we examine a CPU power consumption model and then incorporate the model into an mixed integer programming formulation. In order to find optimal or near-optimal solutions fast, we resolve two difficulties: non-linearity of the power model and integer decision variables. We first show how to linearize the problem, and then give a relaxation and iterative rounding algorithm. Computation experiments have shown that the algorithm can solve the problem much faster than the standard integer programming algorithms, and it consistently yields near-optimal solutions. We also provide a heuristic min-cost algorithm, which finds less optimal solutions but works even faster.",
"title": ""
},
{
"docid": "a2e192b3b17b261e525ed7abc3543d26",
"text": "A new version of a special-purpose processor for running lazy functional programs is presented. This processor – the Reduceron – exploits parallel memories and dynamic analyses to increase evaluation speed, and is implemented using reconfigurable hardware. Compared to a more conventional functional language implementation targeting a standard RISC processor running on the same reconfigurable hardware, the Reduceron offers a significant improvement in run-time performance.",
"title": ""
},
{
"docid": "88244630956a9b83e689f3b7a42731a1",
"text": "An electrically small electric-based metamaterial-in- spired antenna that is designed for narrow bandwidth operations in the VHF and UHF bands is presented. It is demonstrated that the idealized lossless versions have overall efficiencies at 100% without any external matching circuits and quality factors approaching the Chu limit. When conductor losses are introduced, the overall efficiencies remain high.",
"title": ""
},
{
"docid": "d32c13d9d2338cdfd63686ce0adf1960",
"text": "Mobility has always been a big challenge in cellular networks, because it is responsible for traffic fluctuations that eventually result into inconstant resource usage and the need for proper Quality of Service management. When applications get deployed at the network edge, the challenges even grow because software is harder to hand-over than traffic streams. Cloud technologies have been designed with different specifications, and should be properly revised to balance efficiency and effectiveness in distributed and capillary infrastructures. In this paper, we propose some extensions to OpenStack for power management and Quality of Service. Our framework provides additional APIs for setting the service level and interacting with power-saving mechanisms. It is designed to be easily integrated with modern software orchestration tools and workload consolidation algorithms. We report real measurements from an experimental proof-of-concept.",
"title": ""
},
{
"docid": "642dc4e6c10dd2ed6c5cf913f4da2738",
"text": "Nonaka’s paper [1994. A dynamic theory of organizational knowledge creation. Organ. Sci. 5(1) 14–37] contributed to the concepts of “tacit knowledge” and “knowledge conversion” in organization science. We present work that shaped the development of organizational knowledge creation theory and identify two premises upon which more than 15 years of extensive academic work has been conducted: (1) tacit and explicit knowledge can be conceptually distinguished along a continuum; (2) knowledge conversion explains, theoretically and empirically, the interaction between tacit and explicit knowledge. Recently, scholars have raised several issues regarding the understanding of tacit knowledge as well as the interaction between tacit and explicit knowledge in the theory. The purpose of this article is to introduce and comment on the debate about organizational knowledge creation theory. We aim to help scholars make sense of this debate by synthesizing six fundamental questions on organizational knowledge creation theory. Next, we seek to elaborate and advance the theory by responding to questions and incorporating new research. Finally, we discuss implications of our endeavor for organization science.",
"title": ""
},
{
"docid": "758978c4b8f3bdd0a57fe9865892fbc3",
"text": "The foundation of a process model lies in its structural specifications. Using a generic process modeling language for workflows, we show how a structural specification may contain deadlock and lack of synchronization conflicts that could compromise the correct execution of workflows. In general, identification of such conflicts is a computationally complex problem and requires development of effective algorithms specific for the target modeling language. We present a visual verification approach and algorithm that employs a set of graph reduction rules to identify structural conflicts in process models for the given workflow modeling language. We also provide insights into the correctness and complexity of the reduction process. Finally, we show how the reduction algorithm may be used to count possible instance subgraphs of a correct process model. The main contribution of the paper is a new technique for satisfying well-defined correctness criteria in process models.",
"title": ""
},
{
"docid": "36a694668a10bc0475f447adb1e09757",
"text": "Previous findings indicated that when people observe someone’s behavior, they spontaneously infer the traits and situations that cause the target person’s behavior. These inference processes are called spontaneous trait inferences (STIs) and spontaneous situation inferences (SSIs). While both patterns of inferences have been observed, no research has examined the extent to which people from different cultural backgrounds produce these inferences when information affords both trait and situation inferences. Based on the theoretical frameworks of social orientations and thinking styles, we hypothesized that European Canadians would be more likely to produce STIs than SSIs because of the individualistic/independent social orientation and the analytic thinking style dominant in North America, whereas Japanese would produce both STIs and SSIs equally because of the collectivistic/interdependent social orientation and the holistic thinking style dominant in East Asia. Employing the savings-in-relearning paradigm, we presented information that affords both STIs and SSIs and examined cultural differences in the extent of both inferences. The results supported our hypotheses. The relationships between culturally dominant styles of thought and the inference processes in impression formation are discussed.",
"title": ""
},
{
"docid": "0db0761e87cf381b3b214f6cb56e26fc",
"text": "This study explores the geographic dependencies of echo-chamber communication on Twitter during the Brexit referendum campaign. We review the literature on filter bubbles, echo chambers, and polarization to test five hypotheses positing that echo-chamber communication is associated with homophily in the physical world, chiefly the geographic proximity between users advocating sides of the campaign. The results support the hypothesis that echo chambers in the Leave campaign are associated with geographic propinquity, whereas in the Remain campaign the reverse relationship was found. This study presents evidence that geographically proximate social enclaves interact with polarized political discussion where echo-chamber communication is observed. The article concludes with a discussion of these findings and the contribution to research on filter bubbles and echo chambers.",
"title": ""
},
{
"docid": "9b44cee4e65922bb07682baf0d395730",
"text": "Zero-shot learning has gained popularity due to its potential to scale recognition models without requiring additional training data. This is usually achieved by associating categories with their semantic information like attributes. However, we believe that the potential offered by this paradigm is not yet fully exploited. In this work, we propose to utilize the structure of the space spanned by the attributes using a set of relations. We devise objective functions to preserve these relations in the embedding space, thereby inducing semanticity to the embedding space. Through extensive experimental evaluation on five benchmark datasets, we demonstrate that inducing semanticity to the embedding space is beneficial for zero-shot learning. The proposed approach outperforms the state-of-the-art on the standard zero-shot setting as well as the more realistic generalized zero-shot setting. We also demonstrate how the proposed approach can be useful for making approximate semantic inferences about an image belonging to a category for which attribute information is not available.",
"title": ""
},
{
"docid": "c07eedc87181fa7af8494b95a0c454d3",
"text": "Studies on fault detection and diagnosis of planetary gearboxes are quite limited compared with those of fixed-axis gearboxes. Different from fixed-axis gearboxes, planetary gearboxes exhibit unique behaviors, which invalidate fault diagnosis methods that work well for fixed-axis gearboxes. It is a fact that for systems as complex as planetary gearboxes, multiple sensors mounted on different locations provide complementary information on the health condition of the systems. On this basis, a fault detection method based on multi-sensor data fusion is introduced in this paper. In this method, two features developed for planetary gearboxes are used to characterize the gear health conditions, and an adaptive neuro-fuzzy inference system (ANFIS) is utilized to fuse all features from different sensors. In order to demonstrate the effectiveness of the proposed method, experiments are carried out on a planetary gearbox test rig, on which multiple accelerometers are mounted for data collection. The comparisons between the proposed method and the methods based on individual sensors show that the former achieves much higher accuracies in detecting planetary gearbox faults.",
"title": ""
},
{
"docid": "25c2bab5bd1d541629c23bb6a929f968",
"text": "A novel transition from coaxial cable to microstrip is presented in which the coax connector is perpendicular to the substrate of the printed circuit. Such a right-angle transition has practical advantages over more common end-launch geometries in some situations. The design is compact, easy to fabricate, and provides repeatable performance of better than 14 dB return loss and 0.4 dB insertion loss from DC to 40 GHz.",
"title": ""
},
{
"docid": "7342475811dd69ef812e2b2f91c283ba",
"text": "Detecting pedestrians in cluttered scenes is a challenging problem in computer vision. The difficulty is added when several pedestrians overlap in images and occlude each other. We observe, however, that the occlusion/visibility statuses of overlapping pedestrians provide useful mutual relationship for visibility estimation—the visibility estimation of one pedestrian facilitates the visibility estimation of another. In this paper, we propose a mutual visibility deep model that jointly estimates the visibility statuses of overlapping pedestrians. The visibility relationship among pedestrians is learned from the deep model for recognizing co-existing pedestrians. Then the evidence of co-existing pedestrians is used for improving the single pedestrian detection results. Compared with existing image-based pedestrian detection approaches, our approach has the lowest average miss rate on the Caltech-Train dataset and the ETH dataset. Experimental results show that the mutual visibility deep model effectively improves the pedestrian detection results. The mutual visibility deep model leads to 6–15 % improvements on multiple benchmark datasets.",
"title": ""
},
{
"docid": "c17b60594f81c3e5edaaff6dd2088b24",
"text": "Photoreduction of dioxygen in photosystem I (PSI) of chloroplasts generates superoxide radicals as the primary product. In intact chloroplasts, the superoxide and the hydrogen peroxide produced via the disproportionation of superoxide are so rapidly scavenged at the site of their generation that the active oxygens do not inactivate the PSI complex, the stromal enzymes, or the scavenging system itself. The overall reaction for scavenging of active oxygens is the photoreduction of dioxygen to water via superoxide and hydrogen peroxide in PSI by the electrons derived from water in PSII, and the water-water cycle is proposed for these sequences. An overview is given of the molecular mechanism of the water-water cycle and microcompartmentalization of the enzymes participating in it. Whenever the water-water cycle operates properly for scavenging of active oxygens in chloroplasts, it also effectively dissipates excess excitation energy under environmental stress. The dual functions of the water-water cycle for protection from photoinihibition are discussed.",
"title": ""
},
{
"docid": "25a4932881c7d20b101bc94210c48753",
"text": "Multilevel inverters have been widely accepted for high-power high-voltage applications. Their performance is highly superior to that of conventional two-level inverters due to reduced harmonic distortion, lower electromagnetic interference, and higher dc link voltages. However, it has some disadvantages such as increased number of components, complex pulsewidth modulation control method, and voltage-balancing problem. In this paper, a new topology with a reversing-voltage component is proposed to improve the multilevel performance by compensating the disadvantages mentioned. This topology requires fewer components compared to existing inverters (particularly in higher levels) and requires fewer carrier signals and gate drives. Therefore, the overall cost and complexity are greatly reduced particularly for higher output voltage levels. Finally, a prototype of the seven-level proposed topology is built and tested to show the performance of the inverter by experimental results.",
"title": ""
},
{
"docid": "c4346bf13f8367fe3046ab280ac94183",
"text": "Human world is becoming more and more dependent on computers and information technology (IT). The autonomic capabilities in computers and IT have become the need of the day. These capabilities in software and systems increase performance, accuracy, availability and reliability with less or no human intervention (HI). Database has become the integral part of information system in most of the organizations. Databases are growing w.r.t size, functionality, heterogeneity and due to this their manageability needs more attention. Autonomic capabilities in Database Management Systems (DBMSs) are also essential for ease of management, cost of maintenance and hide the low level complexities from end users. With autonomic capabilities administrators can perform higher-level tasks. The DBMS that has the ability to manage itself according to the environment and resources without any human intervention is known as Autonomic DBMS (ADBMS). The paper explores and analyzes the autonomic components of Oracle by considering autonomic characteristics. This analysis illustrates how different components of Oracle manage itself autonomically. The research is focused to find and earmark those areas in Oracle where the human intervention is required. We have performed the same type of research over Microsoft SQL Server and DB2 [1, 2]. A comparison of autonomic components of Oracle with SQL Server is provided to show their autonomic status.",
"title": ""
}
] |
scidocsrr
|
2f09e5cc12555ce12fbbd972cc9f2776
|
Combining Data Owner-Side and Cloud-Side Access Control for Encrypted Cloud Storage
|
[
{
"docid": "ef7d2afe9206e56479a4098b6255aa4b",
"text": "Cloud is becoming a dominant computing platform. Naturally, a question that arises is whether we can beat notorious DDoS attacks in a cloud environment. Researchers have demonstrated that the essential issue of DDoS attack and defense is resource competition between defenders and attackers. A cloud usually possesses profound resources and has full control and dynamic allocation capability of its resources. Therefore, cloud offers us the potential to overcome DDoS attacks. However, individual cloud hosted servers are still vulnerable to DDoS attacks if they still run in the traditional way. In this paper, we propose a dynamic resource allocation strategy to counter DDoS attacks against individual cloud customers. When a DDoS attack occurs, we employ the idle resources of the cloud to clone sufficient intrusion prevention servers for the victim in order to quickly filter out attack packets and guarantee the quality of the service for benign users simultaneously. We establish a mathematical model to approximate the needs of our resource investment based on queueing theory. Through careful system analysis and real-world data set experiments, we conclude that we can defeat DDoS attacks in a cloud environment.",
"title": ""
},
{
"docid": "70cc8c058105b905eebdf941ca2d3f2e",
"text": "Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings forth many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as data owners. To keep sensitive user data confidential against untrusted servers, existing solutions usually apply cryptographic methods by disclosing data decryption keys only to authorized users. However, in doing so, these solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control actually still remains unresolved. This paper addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and uniquely combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy re-encryption. Our proposed scheme also has salient properties of user access privilege confidentiality and user secret key accountability. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under existing security models.",
"title": ""
}
] |
[
{
"docid": "14f3ecd814f5affe186146288d83697c",
"text": "Accidental intra-arterial filler injection may cause significant tissue injury and necrosis. Hyaluronic acid (HA) fillers, currently the most popular, are the focus of this article, which highlights complications and their symptoms, risk factors, and possible treatment strategies. Although ischemic events do happen and are therefore important to discuss, they seem to be exceptionally rare and represent a small percentage of complications in individual clinical practices. However, the true incidence of this complication is unknown because of underreporting by clinicians. Typical clinical findings include skin blanching, livedo reticularis, slow capillary refill, and dusky blue-red discoloration, followed a few days later by blister formation and finally tissue slough. Mainstays of treatment (apart from avoidance by meticulous technique) are prompt recognition, immediate treatment with hyaluronidase, topical nitropaste under occlusion, oral acetylsalicylic acid (aspirin), warm compresses, and vigorous massage. Secondary lines of treatment may involve intra-arterial hyaluronidase, hyperbaric oxygen therapy, and ancillary vasodilating agents such as prostaglandin E1. Emergency preparedness (a \"filler crash cart\") is emphasized, since early intervention is likely to significantly reduce morbidity. A clinical summary chart is provided, organized by complication presentation.",
"title": ""
},
{
"docid": "a2e91a00e2f3bc23b5de83ca39566c84",
"text": "This paper addresses an emerging new field of research that combines the strengths and capabilities of electronics and textiles in one: electronic textiles, or e-textiles. E-textiles, also called Smart Fabrics, have not only \"wearable\" capabilities like any other garment, but also local monitoring and computation, as well as wireless communication capabilities. Sensors and simple computational elements are embedded in e-textiles, as well as built into yarns, with the goal of gathering sensitive information, monitoring vital statistics and sending them remotely (possibly over a wireless channel) for further processing. Possible applications include medical (infant or patient) monitoring, personal information processing systems, or remote monitoring of deployed personnel in military or space applications. We illustrate the challenges imposed by the dual textile/electronics technology on their modeling and optimization methodology.",
"title": ""
},
{
"docid": "e559afd57c31b67f30942a519d079109",
"text": "We show how to use a variational approximation to the logistic function to perform approximate inference in Bayesian networks containing discrete nodes with continuous parents. Essentially, we convert the logistic function to a Gaussian, which facilitates exact inference, and then iteratively adjust the variational parameters to improve the quality of the approximation. We demonstrate experimentally that this approximation is much faster than sampling, but comparable in accuracy. We also introduce a simple new technique for handling evidence, which allows us to handle arbitrary distributionson observed nodes, as well as achieving a significant speedup in networks with discrete variables of large cardinality.",
"title": ""
},
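As an illustrative aside on the preceding abstract: the variational approximation to the logistic function it describes is presumably of the Jaakkola-Jordan type, a lower bound that is exponential-quadratic (hence Gaussian-shaped) in its argument, with a variational parameter xi that is re-tuned iteratively. The bound, stated here from general knowledge rather than taken from the passage itself, is:

```latex
% Jaakkola-Jordan lower bound on the logistic sigmoid \sigma(x) = 1/(1 + e^{-x}),
% with variational parameter \xi (notation assumed, not taken from the passage).
\sigma(x) \;\ge\; \sigma(\xi)\,
  \exp\!\Big(\tfrac{x-\xi}{2} \;-\; \lambda(\xi)\,\big(x^{2}-\xi^{2}\big)\Big),
\qquad
\lambda(\xi) \;=\; \frac{1}{2\xi}\Big(\sigma(\xi)-\tfrac{1}{2}\Big) \;=\; \frac{\tanh(\xi/2)}{4\xi}.
```

Because the exponent is quadratic in x, the bound can be absorbed into a Gaussian potential, which is what permits exact Gaussian inference before xi is adjusted to tighten the approximation, matching the iterative step described in the abstract.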
{
"docid": "de24242bef4464a0126ce3806b795ac8",
"text": "Music must first be defined and distinguished from speech, and from animal and bird cries. We discuss the stages of hominid anatomy that permit music to be perceived and created, with the likelihood of both Homo neanderthalensis and Homo sapiens both being capable. The earlier hominid ability to emit sounds of variable pitch with some meaning shows that music at its simplest level must have predated speech. The possibilities of anthropoid motor impulse suggest that rhythm may have preceded melody, though full control of rhythm may well not have come any earlier than the perception of music above. There are four evident purposes for music: dance, ritual, entertainment personal, and communal, and above all social cohesion, again on both personal and communal levels. We then proceed to how instruments began, with a brief survey of the surviving examples from the Mousterian period onward, including the possible Neanderthal evidence and the extent to which they showed “artistic” potential in other fields. We warn that our performance on replicas of surviving instruments may bear little or no resemblance to that of the original players. We continue with how later instruments, strings, and skin-drums began and developed into instruments we know in worldwide cultures today. The sound of music is then discussed, scales and intervals, and the lack of any consistency of consonant tonality around the world. This is followed by iconographic evidence of the instruments of later antiquity into the European Middle Ages, and finally, the history of public performance, again from the possibilities of early humanity into more modern times. This paper draws the ethnomusicological perspective on the entire development of music, instruments, and performance, from the times of H. neanderthalensis and H. sapiens into those of modern musical history, and it is written with the deliberate intention of informing readers who are without special education in music, and providing necessary information for inquiries into the origin of music by cognitive scientists.",
"title": ""
},
{
"docid": "aff44289b241cdeef627bba97b68a505",
"text": "Personalization is a ubiquitous phenomenon in our daily online experience. While such technology is critical for helping us combat the overload of information we face, in many cases, we may not even realize that our results are being tailored to our personal tastes and preferences. Worse yet, when such a system makes a mistake, we have little recourse to correct it.\n In this work, we propose a framework for addressing this problem by developing a new user-interpretable feature set upon which to base personalized recommendations. These features, which we call badges, represent fundamental traits of users (e.g., \"vegetarian\" or \"Apple fanboy\") inferred by modeling the interplay between a user's behavior and self-reported identity. Specifically, we consider the microblogging site Twitter, where users provide short descriptions of themselves in their profiles, as well as perform actions such as tweeting and retweeting. Our approach is based on the insight that we can define badges using high precision, low recall rules (e.g., \"Twitter profile contains the phrase 'Apple fanboy'\"), and with enough data, generalize to other users by observing shared behavior. We develop a fully Bayesian, generative model that describes this interaction, while allowing us to avoid the pitfalls associated with having positive-only data.\n Experiments on real Twitter data demonstrate the effectiveness of our model at capturing rich and interpretable user traits that can be used to provide transparency for personalization.",
"title": ""
},
{
"docid": "4ddbdf0217d13c8b349137f1e59910d6",
"text": "In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.",
"title": ""
},
{
"docid": "7c1691fd1140b3975b61f8e2ce3dcd9b",
"text": "In this paper, we consider the evolution of structure within large online social networks. We present a series of measurements of two such networks, together comprising in excess of five million people and ten million friendship links, annotated with metadata capturing the time of every event in the life of the network. Our measurements expose a surprising segmentation of these networks into three regions: singletons who do not participate in the network; isolated communities which overwhelmingly display star structure; and a giant component anchored by a well-connected core region which persists even in the absence of stars.We present a simple model of network growth which captures these aspects of component structure. The model follows our experimental results, characterizing users as either passive members of the network; inviters who encourage offline friends and acquaintances to migrate online; and linkers who fully participate in the social evolution of the network.",
"title": ""
},
{
"docid": "86e2873956b79e6bc9826763096e639c",
"text": "ever do anything that is a waste of time – and be prepared to wage long, tedious wars over this principle, \" said Michael O'Connor, project manager at Trimble Navigation in Christchurch, New Zealand. This product group at Trimble is typical of the homegrown approach to agile software development methodologies. While interest in agile methodologies has blossomed in the past two years, its roots go back more than a decade. Teams using early versions of Scrum, Dynamic Systems Development Methodology (DSDM), and adaptive software development (ASD) were delivering successful projects in the early-to mid-1990s. This article attempts to answer the question, \" What constitutes agile software development? \" Because of the breadth of agile approaches and the people who practice them, this is not as easy a question to answer as one might expect. I will try to answer this question by first focusing on the sweet-spot problem domain for agile approaches. Then I will delve into the three dimensions that I refer to as agile ecosystems: barely sufficient methodology, collaborative values, and chaordic perspective. Finally, I will examine several of these agile ecosystems. All problems are different and require different strategies. While battlefield commanders plan extensively, they realize that plans are just a beginning; probing enemy defenses (creating change) and responding to enemy actions (responding to change) are more important. Battlefield commanders succeed by defeating the enemy (the mission), not conforming to a plan. I cannot imagine a battlefield commander saying, \" We lost the battle, but by golly, we were successful because we followed our plan to the letter. \" Battlefields are messy, turbulent, uncertain, and full of change. No battlefield commander would say, \" If we just plan this battle long and hard enough, and put repeatable processes in place, we can eliminate change early in the battle and not have to deal with it later on. \" A growing number of software projects operate in the equivalent of a battle zone – they are extreme projects. This is where agile approaches shine. Project teams operating in this zone attempt to utilize leading or bleeding-edge technologies , respond to erratic requirements changes, and deliver products quickly. Projects may have a relatively clear mission , but the specific requirements can be volatile and evolving as customers and development teams alike explore the unknown. These projects, which I call high-exploration factor projects, do not succumb to rigorous, plan-driven methods. …",
"title": ""
},
{
"docid": "9827845631238f79060345a4e86bd185",
"text": "We formulate and investigate the novel problem of finding the skyline k-tuple groups from an n-tuple dataset - i.e., groups of k tuples which are not dominated by any other group of equal size, based on aggregate-based group dominance relationship. The major technical challenge is to identify effective anti-monotonic properties for pruning the search space of skyline groups. To this end, we show that the anti-monotonic property in the well-known Apriori algorithm does not hold for skyline group pruning. We then identify order-specific property which applies to SUM, MIN, and MAX and weak candidate-generation property which applies to MIN and MAX only. Experimental results on both real and synthetic datasets verify that the proposed algorithms achieve orders of magnitude performance gain over a baseline method.",
"title": ""
},
{
"docid": "b30a31d14e226eea0bc00b68c3f38607",
"text": "String matching plays an important role in field of Computer Science and there are many algorithm of String matching, the important aspect is that which algorithm is to be used in which condition. BM(Boyer-Moore) algorithm is standard benchmark of string matching algorithm so here we explain the BM(Boyer-Moore) algorithm and then explain its improvement as BMH (Boyer-Moore-Horspool), BMHS (Boyer-Moore-Horspool-Sundays), BMHS2 (Boyer-MooreHorspool-Sundays 2), improved BMHS( improved BoyerMoore-Horspool-Sundays) ,BMI (Boyer-Moore improvement) and CBM (composite Boyer-Moore).And also analyze and compare them using a example and find which one is better in which conditions. Keywords-String Matching: BM; BMH; BMHS; BMHS2; improved BMHS; BMI; CBM",
"title": ""
},
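As a side note to the string-matching abstract above: the passage names the Boyer-Moore-Horspool family but does not reproduce any of the algorithms. The following is a minimal illustrative sketch of plain BMH (the baseline the listed variants build on), written in Python from general knowledge; it is not code from the cited paper.

```python
def bmh_search(text: str, pattern: str) -> int:
    """Boyer-Moore-Horspool: index of the first occurrence of pattern in text, or -1."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Bad-character shift table: for each character of the pattern except the last,
    # store its distance to the pattern's end; any other character shifts by m.
    shift = {}
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        # Shift according to the character aligned with the pattern's last position.
        i += shift.get(text[i + m - 1], m)
    return -1


assert bmh_search("here is a simple example", "example") == 17
assert bmh_search("abcabc", "cab") == 2
assert bmh_search("abcabc", "zzz") == -1
```

The variants compared in the abstract differ mainly in how this shift is computed (e.g., Sunday's rule looks at the character just past the window), so the skeleton above is a reasonable mental model for all of them.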
{
"docid": "e06e917918a60a6452ee0b0037d3f284",
"text": "In this paper, we examine what types of reputation information users find valuable when selecting someone to interact with in online environments. In an online experiment, we asked users to imagine that they were looking for a partner for a social chat. We found that similarity to the user and ratings from the user's friends were the most valuable pieces of reputation information when selecting chat partners. The context in which reputations were used (social chat, game or newsgroup) affected the self-reported utility of the pieces of reputation information",
"title": ""
},
{
"docid": "b63e88701018a80a7815ee43b62e90fd",
"text": "Educational data mining and learning analytics promise better understanding of student behavior and knowledge, as well as new information on the tacit factors that contribute to student actions. This knowledge can be used to inform decisions related to course and tool design and pedagogy, and to further engage students and guide those at risk of failure. This working group report provides an overview of the body of knowledge regarding the use of educational data mining and learning analytics focused on the teaching and learning of programming. In a literature survey on mining students' programming processes for 2005-2015, we observe a significant increase in work related to the field. However, the majority of the studies focus on simplistic metric analysis and are conducted within a single institution and a single course. This indicates the existence of further avenues of research and a critical need for validation and replication to better understand the various contributing factors and the reasons why certain results occur. We introduce a novel taxonomy to analyse replicating studies and discuss the importance of replicating and reproducing previous work. We describe what is the state of the art in collecting and sharing programming data. To better understand the challenges involved in replicating or reproducing existing studies, we report our experiences from three case studies using programming data. Finally, we present a discussion of future directions for the education and research community.",
"title": ""
},
{
"docid": "feee488a72016554ebf982762d51426e",
"text": "Optical imaging sensors, such as television or infrared cameras, collect information about targets or target regions. It is thus necessary to control the sensor's line-of-sight (LOS) to achieve accurate pointing. Maintaining sensor orientation toward a target is particularly challenging when the imaging sensor is carried on a mobile vehicle or when the target is highly dynamic. Controlling an optical sensor LOS with an inertially stabilized platform (ISP) can meet these challenges.A target tracker is a process, typically involving image processing techniques, for detecting targets in optical imagery. This article describes the use and design of ISPs and target trackers for imaging optical sensors.",
"title": ""
},
{
"docid": "8cfce71cc96c98063b29ec0603f5d18c",
"text": "Time-series of count data are generated in many different contexts, such as web access logging, freeway traffic monitoring, and security logs associated with buildings. Since this data measures the aggregated behavior of individual human beings, it typically exhibits a periodicity in time on a number of scales (daily, weekly,etc.) that reflects the rhythms of the underlying human activity and makes the data appear non-homogeneous. At the same time, the data is often corrupted by a number of bursty periods of unusual behavior such as building events, traffic accidents, and so forth. The data mining problem of finding and extracting these anomalous events is made difficult by both of these elements. In this paper we describe a framework for unsupervised learning in this context, based on a time-varying Poisson process model that can also account for anomalous events. We show how the parameters of this model can be learned from count time series using statistical estimation techniques. We demonstrate the utility of this model on two datasets for which we have partial ground truth in the form of known events, one from freeway traffic data and another from building access data, and show that the model performs significantly better than a non-probabilistic, threshold-based technique. We also describe how the model can be used to investigate different degrees of periodicity in the data, including systematic day-of-week and time-of-day effects, and make inferences about the detected events (e.g., popularity or level of attendance). Our experimental results indicate that the proposed time-varying Poisson model provides a robust and accurate framework for adaptively and autonomously learning how to separate unusual bursty events from traces of normal human activity.",
"title": ""
},
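As a rough illustration of the idea in the abstract above (a periodic Poisson rate for normal activity, with bursts flagged as counts that are improbable under that rate), here is a simplified sketch. It uses plain per-bucket maximum-likelihood rates and a tail-probability threshold, which is a simplification of the paper's probabilistic model; the bucketing by day-of-week and hour and the threshold value are assumptions, not details taken from the passage.

```python
import math
from collections import defaultdict


def fit_periodic_rates(observations):
    """MLE of a periodic Poisson rate: lambda(day-of-week, hour) = mean count in that bucket.
    observations: iterable of (datetime, count) pairs."""
    totals, slots = defaultdict(float), defaultdict(int)
    for t, c in observations:
        key = (t.weekday(), t.hour)
        totals[key] += c
        slots[key] += 1
    return {k: totals[k] / slots[k] for k in totals}


def flag_bursts(observations, rates, p_threshold=1e-3):
    """Flag time slots whose count is improbably large under the fitted periodic Poisson rate."""
    flagged = []
    for t, c in observations:
        lam = rates.get((t.weekday(), t.hour), 0.0)
        if lam <= 0.0 or c <= 0:
            continue
        # Tail probability P(X >= c) = 1 - P(X <= c - 1), summed in log space for stability.
        cdf = sum(math.exp(k * math.log(lam) - lam - math.lgamma(k + 1)) for k in range(int(c)))
        if 1.0 - cdf < p_threshold:
            flagged.append((t, c))
    return flagged
```

A full treatment along the lines of the abstract would additionally model the event process itself (e.g., its duration and attendance); this sketch only shows the detection idea.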
{
"docid": "d48e3a276417392ae14c06f4fc7927ac",
"text": "A recently discovered Early Cretaceous (early late Albian) dinosaur tracksite at Parede beach (Cascais, Portugal) reveals evidence of dinoturbation and at least two sauropod trackways. One of these trackways can be classified as narrow-gauge, which represents unique evidence in the Albian of the Iberian Peninsula and provides for the improvement of knowledge of this kind of trackway and its probable trackmaker, in an age when the sauropod record is scarce. These dinosaur tracks are preserved on the upper surface of a marly limestone bed that belongs to the Galé Formation (Água Doce Member, middle to lower upper Albian). The study of thin-sections of the beds C22/24 and C26 in the Parede section has revealed a microfacies composed of foraminifers, radiolarians, ostracods, corals, bivalves, gastropods, and echinoids in a mainly wackestone texture with biomicritic matrix. These assemblages match with the lithofacies, marine molluscs, echinids, and ichnofossils sampled from the section and indicate a shallow marine, inner shelf palaeoenvironment with a shallowing-upward trend. The biofacies and the sequence analysis are compatible with the early late Albian age attributed to the tracksite. These tracks and the moderate dinoturbation index indicate sauropod activity in this palaeoenvironment. Titanosaurs can be dismissed as possible trackmakers on the basis of the narrow-gauge trackway, and probably by the kidney-shaped manus morphology and the pes-dominated configuration of the trackway. Narrow-gauge sauropod trackways have been positively associated with coastal palaeoenvironments, and the Parede tracksite supports this interpretation. In addition, this tracksite adds new data about the presence of sauropod pes-dominated trackways in cohesive substrates. As the Portuguese Cretaceous sauropod osteological remains are very scarce, the Parede tracksite yields new and relevant evidence of these dinosaurs. Furthermore, the Parede tracksite is the youngest evidence of sauropods in the Portuguese record and some of the rare evidence of sauropods in Europe during the Albian. This discovery enhances the palaeobiological data for the Early Cretaceous Sauropoda of the Iberian Peninsula, where the osteological remains of these dinosaurs are relatively scarce in this region of southwestern Europe. Therefore, this occurrence is also of overall interest due to its impact on Cretaceous Sauropoda palaeobiogeography.",
"title": ""
},
{
"docid": "f4da31cf831dd3db5f3063c5ea1fca62",
"text": "SUMMARY Backtrack algorithms are applicable to a wide variety of problems. An efficient but readable version of such an algorithm is presented and its use in the problem of finding the maximal common subgraph of two graphs is described. Techniques available in this application area for ordering and pruning the backtrack search are discussed. This algorithm has been used successfully as a component of a program for analysing chemical reactions and enumerating the bond changes which have taken place.",
"title": ""
},
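The backtracking abstract above refers to an "efficient but readable" algorithm without showing it. As a generic illustration of the backtrack pattern it relies on (choose a value, recurse, undo), here is a minimal sketch with hypothetical function and parameter names; it is not the maximal-common-subgraph algorithm from the paper, whose ordering and pruning rules are problem-specific.

```python
def backtrack(partial, domains, is_feasible, on_solution):
    """Generic depth-first backtracking: assign each position a feasible value,
    recurse, then undo the assignment (the choose / explore / un-choose pattern)."""
    if len(partial) == len(domains):
        on_solution(list(partial))
        return
    for value in domains[len(partial)]:
        if is_feasible(partial, value):
            partial.append(value)                                   # choose
            backtrack(partial, domains, is_feasible, on_solution)   # explore
            partial.pop()                                           # un-choose


# Tiny demo: all permutations of 0..2, with feasibility = "value not used yet".
solutions = []
backtrack([], [range(3)] * 3, lambda p, v: v not in p, solutions.append)
assert sorted(solutions) == [[0, 1, 2], [0, 2, 1], [1, 0, 2],
                             [1, 2, 0], [2, 0, 1], [2, 1, 0]]
```

Ordering the domains well and making `is_feasible` as strict as possible is exactly the kind of pruning the abstract says is available in the common-subgraph application.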
{
"docid": "7afe4444a805f1994a40f98e01908509",
"text": "It is well known that CMOS scaling trends are now accompanied by less desirable byproducts such as increased energy dissipation. To combat the aforementioned challenges, solutions are sought at both the device and architectural levels. With this context, this work focuses on embedding a low voltage device, a Tunneling Field Effect Transistor (TFET) within a Cellular Neural Network (CNN) -- a low power analog computing architecture. Our study shows that TFET-based CNN systems, aside from being fully functional, also provide significant power savings when compared to the conventional resistor-based CNN. Our initial studies suggest that power savings are possible by carefully engineering lower voltage, lower current TFET devices without sacrificing performance. Moreover, TFET-based CNN reduces implementation footprints by eliminating the hardware required to realize output transfer functions. Application dynamics are verified through simulations. We conclude the paper with a discussion of desired device characteristics for CNN architectures with enhanced functionality.",
"title": ""
},
{
"docid": "f6333ab767879cf1673bb50aeeb32533",
"text": "Github facilitates the pull-request mechanism as an outstanding social coding paradigm by integrating with social media. The review process of pull-requests is a typical crowd sourcing job which needs to solicit opinions of the community. Recommending appropriate reviewers can reduce the time between the submission of a pull-request and the actual review of it. In this paper, we firstly extend the traditional Machine Learning (ML) based approach of bug triaging to reviewer recommendation. Furthermore, we analyze social relations between contributors and reviewers, and propose a novel approach to recommend highly relevant reviewers by mining comment networks (CN) of given projects. Finally, we demonstrate the effectiveness of these two approaches with quantitative evaluations. The results show that CN-based approach achieves a significant improvement over the ML-based approach, and on average it reaches a precision of 78% and 67% for top-1 and top-2 recommendation respectively, and a recall of 77% for top-10 recommendation.",
"title": ""
},
{
"docid": "e8010fdc14ace06ffad91561694dd310",
"text": "This paper describes the performance comparison of a wind power systems based on two different induction generators as well as the experimental demonstration of a wind turbine simulator for the maximum power extraction. The two induction machines studied for the comparison are the squirrel-cage induction generator (SCIG) and the doubly fed induction generator (DFIG). The techniques of direct grid integration, independent power control, and the droop phenomenon of distribution line are studied and compared between the SCIG and DFIG systems. Both systems are modeled in Matlab/Simulink environment, and the operation is tested for the wind turbine maximum power extraction algorithm results. Based on the simulated wind turbine parameters, a commercial induction motor drive was programmed to emulate the wind turbine and is coupled to the experimental generator systems. The turbine experimental results matched well with the theoretical turbine operation.",
"title": ""
},
{
"docid": "02d5d8e3ebdee2a2c1919e3fc9862109",
"text": "Biometric systems are vulnerable to the diverse attacks that emerged as a challenge to assure the reliability in adopting these systems in real-life scenario. In this work, we propose a novel solution to detect a presentation attack based on exploring both statistical and Cepstral features. The proposed Presentation Attack Detection (PAD) algorithm will extract the statistical features that can capture the micro-texture variation using Binarized Statistical Image Features (BSIF) and Cepstral features that can reflect the micro changes in frequency using 2D Cepstrum analysis. We then fuse these features to form a single feature vector before making a decision on whether a capture attempt is a normal presentation or an artefact presentation using linear Support Vector Machine (SVM). Extensive experiments carried out on a publicly available face and iris spoof database show the efficacy of the proposed PAD algorithm with an Average Classification Error Rate (ACER) = 10.21% on face and ACER = 0% on the iris biometrics.",
"title": ""
}
] |
scidocsrr
|
c560534d1277a7f650d71830605b38be
|
Skin picking and trichotillomania in adults with obsessive-compulsive disorder.
|
[
{
"docid": "f84f279b6ef3b112a0411f5cba82e1b0",
"text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed",
"title": ""
}
] |
[
{
"docid": "dbf694e11b78835dbc31ef4249bfff73",
"text": "Insider attacks are a well-known problem acknowledged as a threat as early as 1980s. The threat is attributed to legitimate users who abuse their privileges, and given their familiarity and proximity to the computational environment, can easily cause significant damage or losses. Due to the lack of tools and techniques, security analysts do not correctly perceive the threat, and hence consider the attacks as unpreventable. In this paper, we present a theory of insider threat assessment. First, we describe a modeling methodology which captures several aspects of insider threat, and subsequently, show threat assessment methodologies to reveal possible attack strategies of an insider.",
"title": ""
},
{
"docid": "233c63982527a264b91dfb885361b657",
"text": "One unfortunate consequence of the success story of wireless sensor networks (WSNs) in separate research communities is an evergrowing gap between theory and practice. Even though there is a increasing number of algorithmic methods for WSNs, the vast majority has never been tried in practice; conversely, many practical challenges are still awaiting efficient algorithmic solutions. The main cause for this discrepancy is the fact that programming sensor nodes still happens at a very technical level. We remedy the situation by introducing Wiselib, our algorithm library that allows for simple implementations of algorithms onto a large variety of hardware and software. This is achieved by employing advanced C++ techniques such as templates and inline functions, allowing to write generic code that is resolved and bound at compile time, resulting in virtually no memory or computation overhead at run time. The Wiselib runs on different host operating systems, such as Contiki, iSense OS, and ScatterWeb. Furthermore, it runs on virtual nodes simulated by Shawn. For any algorithm, the Wiselib provides data structures that suit the specific properties of the target platform. Algorithm code does not contain any platform-specific specializations, allowing a single implementation to run natively on heterogeneous networks. In this paper, we describe the building blocks of the Wiselib, and analyze the overhead. We demonstrate the effectiveness of our approach by showing how routing algorithms can be implemented. We also report on results from experiments with real sensor-node hardware.",
"title": ""
},
{
"docid": "4650411615ad68be9596e5de3c0613f1",
"text": "Based on the limitations of traditional English class, an English listening class was designed by Edmodo platform through making use of the advantages of flipped classroom. On this class, students will carry out online autonomous learning before class, teacher will guide students learning collaboratively in class, as well as after-school reflection and summary will be realized. By analyzing teaching effect on flipped classroom, it can provide reference and teaching model for English listening classes in local universities.",
"title": ""
},
{
"docid": "107d6605a6159d5a278b49b8c020cdd9",
"text": "Internet applications increasingly rely on scalable data structures that must support high throughput and store huge amounts of data. These data structures can be hard to implement efficiently. Recent proposals have overcome this problem by giving up on generality and implementing specialized interfaces and functionality (e.g., Dynamo [4]). We present the design of a more general and flexible solution: a fault-tolerant and scalable distributed B-tree. In addition to the usual B-tree operations, our B-tree provides some important practical features: transactions for atomically executing several operations in one or more B-trees, online migration of B-tree nodes between servers for load-balancing, and dynamic addition and removal of servers for supporting incremental growth of the system. Our design is conceptually simple. Rather than using complex concurrency and locking protocols, we use distributed transactions to make changes to B-tree nodes. We show how to extend the B-tree and keep additional information so that these transactions execute quickly and efficiently. Our design relies on an underlying distributed data sharing service, Sinfonia [1], which provides fault tolerance and a light-weight distributed atomic primitive. We use this primitive to commit our transactions. We implemented our B-tree and show that it performs comparably to an existing open-source B-tree and that it scales to hundreds of machines. We believe that our approach is general and can be used to implement other distributed data structures easily.",
"title": ""
},
{
"docid": "58710f81203e204bf0fcbd19bc57b921",
"text": "In this demo, we demonstrate a functional prototype of an air quality monitoring box (AQBox) built from cheap/commodity off- the-shelf (COTS) sensors. We use a set of MQ gas sensors, a temperature and humidity sensor, a dust sensor and a GPS. We instrument the box, powered by an on-board battery, with a 3G cellular connection to upload sensed data to the cloud. The box is suitable for deploying in developing countries where other means to monitor air quality, such as large expensive environmental sensors affixed to certain locations (such as at weather stations) and use of satellite, is not available or not viable. We shall demonstrate the construction and function of the box as well as the collection and analysis of captured data (both in real-time and offline). Built and deployed in large numbers, we believe, these boxes can be a cheap solution to perpetual air quality monitoring for modern cities.",
"title": ""
},
{
"docid": "c56c392e1a7d58912eeeb1718379fa37",
"text": "The changing face of technology has played an integral role in the development of the hotel and restaurant industry. The manuscript investigated the impact that technology has had on the hotel and restaurant industry. A detailed review of the literature regarding the growth of technology in the industry was linked to the development of strategic direction. The manuscript also looked at the strategic analysis methodology for evaluating and taking advantage of current and future technological innovations for the hospitality industry. Identification and implementation of these technologies can help in building a sustainable competitive advantage for hotels and restaurants.",
"title": ""
},
{
"docid": "19e070089a8495a437e81da50f3eb21c",
"text": "Mobile payment refers to the use of mobile devices to conduct payment transactions. Users can use mobile devices for remote and proximity payments; moreover, they can purchase digital contents and physical goods and services. It offers an alternative payment method for consumers. However, there are relative low adoption rates in this payment method. This research aims to identify and explore key factors that affect the decision of whether to use mobile payments. Two well-established theories, the Technology Acceptance Model (TAM) and the Innovation Diffusion Theory (IDT), are applied to investigate user acceptance of mobile payments. Survey data from mobile payments users will be used to test the proposed hypothesis and the model.",
"title": ""
},
{
"docid": "6d925c32d3900512e0fd0ed36b683c69",
"text": "This paper presents a detailed design process of an ultra-high speed, switched reluctance machine for micro machining. The performance goal of the machine is to reach a maximum rotation speed of 750,000 rpm with an output power of 100 W. The design of the rotor involves reducing aerodynamic drag, avoiding mechanical resonance, and mitigating excessive stress. The design of the stator focuses on meeting the torque requirement while minimizing core loss and copper loss. The performance of the machine and the strength of the rotor structure are both verified through finite-element simulations The final design is a 6/4 switched reluctance machine with a 6mm diameter rotor that is wrapped in a carbon fiber sleeve and exhibits 13.6 W of viscous loss. The stator has shoeless poles and exhibits 19.1 W of electromagnetic loss.",
"title": ""
},
{
"docid": "837c34e3999714c0aa0dcf901aa278cf",
"text": "A novel high temperature superconducting interdigital bandpass filter is proposed by using coplanar waveguide quarter-wavelength resonators. The CPW resonators are arranged in parallel, and consequently the filter becomes very compact. The filter is a 5-pole Chebyshev BPF with a midband frequency of 5.0GHz and an equal-ripple fractional bandwidth of 3.2%. It is fabricated using a YBCO film deposited on an MgO substrate. The measured filtering characteristics agree well with EM simulations and show a low insertion loss in spite of the small size of the filter.",
"title": ""
},
{
"docid": "a4dd8ab8b45a8478ca4ac7e19debf777",
"text": "Most sensory, cognitive and motor functions depend on the interactions of many neurons. In recent years, there has been rapid development and increasing use of technologies for recording from large numbers of neurons, either sequentially or simultaneously. A key question is what scientific insight can be gained by studying a population of recorded neurons beyond studying each neuron individually. Here, we examine three important motivations for population studies: single-trial hypotheses requiring statistical power, hypotheses of population response structure and exploratory analyses of large data sets. Many recent studies have adopted dimensionality reduction to analyze these populations and to find features that are not apparent at the level of individual neurons. We describe the dimensionality reduction methods commonly applied to population activity and offer practical advice about selecting methods and interpreting their outputs. This review is intended for experimental and computational researchers who seek to understand the role dimensionality reduction has had and can have in systems neuroscience, and who seek to apply these methods to their own data.",
"title": ""
},
{
"docid": "db534e232e485f83d9808cde9052cdb0",
"text": "Due to conformal capability, research on transmission lines has received much attention lately. Many studies have been reported in the last decade in which transmission lines have been analyzed extensively using various techniques. It is well known that transmission lines are used for transmission of information, but in this case the main aim is to deliver information from generator to receiver with low attenuation. To achieve this, the load should be matched to the characteristic impedance of the line, meaning that the wave coefficient should be near 1 (one). One of the most important methods for line matching is through quarter-wavelength line (quarter-wave transformer). Analysis of transmission lines using numerical methods is difficult because of any possible error that can occur. Therefore, the best solution in this case would be the use of any software package which is designed for analysis of transmission lines. In this paper we will use Sonet software which is generally used for the analysis of planar lines.",
"title": ""
},
{
"docid": "410aa6bb03299e5fda9c28f77e37bc5b",
"text": "Spamming has been a widespread problem for social networks. In recent years there is an increasing interest in the analysis of anti-spamming for microblogs, such as Twitter. In this paper we present a systematic research on the analysis of spamming in Sina Weibo platform, which is currently a dominant microblogging service provider in China. Our research objectives are to understand the specific spamming behaviors in Sina Weibo and find approaches to identify and block spammers in Sina Weibo based on spamming behavior classifiers. To start with the analysis of spamming behaviors we devise several effective methods to collect a large set of spammer samples, including uses of proactive honeypots and crawlers, keywords based searching and buying spammer samples directly from online merchants. We processed the database associated with these spammer samples and interestingly we found three representative spamming behaviors: aggressive advertising, repeated duplicate reposting and aggressive following. We extract various features and compare the behaviors of spammers and legitimate users with regard to these features. It is found that spamming behaviors and normal behaviors have distinct characteristics. Based on these findings we design an automatic online spammer identification system. Through tests with real data it is demonstrated that the system can effectively detect the spamming behaviors and identify spammers in Sina Weibo.",
"title": ""
},
{
"docid": "fe4046a3cf32de51c9ff75be49b34648",
"text": "A method of preventing the degradation in the isolation between the orthogonal polarization ports caused by beamforming network routing in combined edge/aperture fed dual-polarized microstrip-patch planar array antennas is described. The simulated and measured performance of such planar arrays is demonstrated. Measured port isolations of 50 dB at center frequency, and more than 40 dB over a 4% bandwidth, are achieved. In addition, insight into the physical reasons for the improved port-to-port isolation levels, of the proposed element geometry and beamforming network layout, is obtained through prudent use of the electromagnetic modelling.",
"title": ""
},
{
"docid": "aeadbf476331a67bec51d5d6fb6cc80b",
"text": "Gamification, an emerging idea for using game-design elements and principles to make everyday tasks more engaging, is permeating many different types of information systems. Excitement surrounding gamification results from its many potential organizational benefits. However, little research and design guidelines exist regarding gamified information systems. We therefore write this commentary to call upon information systems scholars to investigate the design and use of gamified information systems from a variety of disciplinary perspectives and theories, including behavioral economics, psychology, social psychology, information systems, etc. We first explicate the idea of gamified information systems, provide real-world examples of successful and unsuccessful systems, and based on a synthesis of the available literature, present a taxonomy of gamification design elements. We then develop a framework for research and design: its main theme is to create meaningful engagement for users, that is, gamified information systems should be designed to address the dual goals of instrumental and experiential outcomes. Using this framework, we develop a set of design principles and research questions, using a running case to illustrate some of our ideas. We conclude with a summary of opportunities for IS researchers to extend our knowledge of gamified information systems, and at the same time, advance",
"title": ""
},
{
"docid": "a5a0e1b984eac30c225190c0cba63ab4",
"text": "The traditional intrusion detection system is not flexible in providing security in cloud computing because of the distributed structure of cloud computing. This paper surveys the intrusion detection and prevention techniques and possible solutions in Host Based and Network Based Intrusion Detection System. It discusses DDoS attacks in Cloud environment. Different Intrusion Detection techniques are also discussed namely anomaly based techniques and signature based techniques. It also surveys different approaches of Intrusion Prevention System.",
"title": ""
},
{
"docid": "298c3b480f44be031c0c4262816298c1",
"text": "Information extraction (IE) - the problem of extracting structured information from unstructured text - has become an increasingly important topic in recent years. A SIGMOD 2006 tutorial [3] outlined challenges and opportunities for the database community to advance the state of the art in information extraction, and posed the following grand challenge: \"Can we build a System R for information extraction?\n Our tutorial gives an overview of progress the database community has made towards meeting this challenge. In particular, we start by discussing design requirements in building an enterprise IE system. We then survey recent technological advances towards addressing these requirements, broadly categorized as: (1) Languages for specifying extraction programs in a declarative way, thus allowing database-style performance optimizations; (2) Infrastructure needed to ensure scalability, and (3) Development support for enterprise IE systems. Finally, we outline several open challenges and opportunities for the database community to further advance the state of the art in enterprise IE systems. The tutorial is intended for students and researchers interested in information extraction and its applications, and assumes no prior knowledge of the area.",
"title": ""
},
{
"docid": "3f49f74eabc407b1b5b5899badefce3d",
"text": "The purpose of this study is to determine restaurant service quality. The aims are to: (a) assess customers’ expectations and perceptions, (b) establish the significance of difference between perceived and expected service quality, (c) identify the number of dimensions for expectations and perceptions scales of modified DINESERV model, (d) test the reliability of the applied DINESERV model. The empirical research was conducted using primary data. The questionnaire is based on Stevens et al. (1995) and Andaleeb and Conway’s (2006) research. In order to meet survey goals, descriptive, bivariate and multivariate (exploratory factor analysis and reliability analysis) statistical analyses were conducted. The empirical results show that expectations scores are higher than perceptions scores, which indicate low level of service quality. Furthermore, this study identified seven factors that best explain customers’ expectations and two factors that best explain customers’ perceptions regarding restaurant service. The results of this study would help management identify the strengths and weaknesses of service quality and implement an effective strategy to meet the customers’ expectations.",
"title": ""
},
{
"docid": "4c00cf339ccc28708c19cf8feec767ec",
"text": "This paper presents vCorfu, a strongly consistent cloudscale object store built over a shared log. vCorfu augments the traditional replication scheme of a shared log to provide fast reads and leverages a new technique, composable state machine replication, to compose large state machines from smaller ones, enabling the use of state machine replication to be used to efficiently in huge data stores. We show that vCorfu outperforms Cassandra, a popular state-of-the art NOSQL stores while providing strong consistency (opacity, read-own-writes), efficient transactions, and global snapshots at cloud scale.",
"title": ""
},
{
"docid": "376646286bea50e173cc3c928d3f96a3",
"text": "We formulate an integer program to solve a highly constrained academic timetabling problem at the United States Merchant Marine Academy. The IP instance that results from our real case study has approximately both 170,000 rows and columns and solves to optimality in 4–24 hours using a commercial solver on a portable computer (near optimal feasible solutions were often found in 4–12 hours). Our model is applicable to both high schools and small colleges who wish to deviate from group scheduling. We also solve a necessary preprocessing student subgrouping problem, which breaks up big groups of students into small groups so they can optimally fit into small capacity classes.",
"title": ""
}
] |
scidocsrr
|
fb15e46d9fef1ba05834fafc4e559629
|
A Smart Safety Helmet using IMU and EEG sensors for worker fatigue detection
|
[
{
"docid": "36ad6c19ba2c6525e75c8d42835c3b9f",
"text": "Characteristics of physical activity are indicative of one's mobility level, latent chronic diseases and aging process. Accelerometers have been widely accepted as useful and practical sensors for wearable devices to measure and assess physical activity. This paper reviews the development of wearable accelerometry-based motion detectors. The principle of accelerometry measurement, sensor properties and sensor placements are first introduced. Various research using accelerometry-based wearable motion detectors for physical activity monitoring and assessment, including posture and movement classification, estimation of energy expenditure, fall detection and balance control evaluation, are also reviewed. Finally this paper reviews and compares existing commercial products to provide a comprehensive outlook of current development status and possible emerging technologies.",
"title": ""
},
{
"docid": "8b5bf8cf3832ac9355ed5bef7922fb5c",
"text": "Determining one's own position by means of a smartphone is an important issue for various applications in the fields of personal navigation or location-based services. Places like large airports, shopping malls or extensive underground parking lots require personal navigation but satellite signals and GPS connection cannot be obtained. Thus, alternative or complementary systems are needed. In this paper a system concept to integrate a foot-mounted inertial measurement unit (IMU) with an Android smartphone is presented. We developed a prototype to demonstrate and evaluate the implementation of pedestrian strapdown navigation on a smartphone. In addition to many other approaches we also fuse height measurements from a barometric sensor in order to stabilize height estimation over time. A very low-cost single-chip IMU is used to demonstrate applicability of the outlined system concept for potential commercial applications. In an experimental study we compare the achievable accuracy with a commercially available IMU. The evaluation shows very competitive results on the order of a few percent of traveled distance. Comparing performance, cost and size of the presented IMU the outlined approach carries an enormous potential in the field of indoor pedestrian navigation.",
"title": ""
}
] |
[
{
"docid": "f6d3157155868f5fafe2533dfd8768b8",
"text": "Over the past few years, the task of conceiving effective attacks to complex networks has arisen as an optimization problem. Attacks are modelled as the process of removing a number k of vertices, from the graph that represents the network, and the goal is to maximise or minimise the value of a predefined metric over the graph. In this work, we present an optimization problem that concerns the selection of nodes to be removed to minimise the maximum betweenness centrality value of the residual graph. This metric evaluates the participation of the nodes in the communications through the shortest paths of the network. To address the problem we propose an artificial bee colony algorithm, which is a swarm intelligence approach inspired in the foraging behaviour of honeybees. In this framework, bees produce new candidate solutions for the problem by exploring the vicinity of previous ones, called food sources. The proposed method exploits useful problem knowledge in this neighbourhood exploration by considering the partial destruction and heuristic reconstruction of selected solutions. The performance of the method, with respect to other models from the literature that can be adapted to face this problem, such as sequential centrality-based attacks, module-based attacks, a genetic algorithm, a simulated annealing approach, and a variable neighbourhood search, is empirically shown. E-mail addresses: lozano@decsai.ugr.es (M. Lozano), cgarcia@uco.es (C. GarćıaMart́ınez), fjrodriguez@unex.es (F.J. Rodŕıguez), humberto@ugr.es (H.M. Trujillo). Preprint submitted to Information Sciences August 17, 2016 *Manuscript (including abstract) Click here to view linked References",
"title": ""
},
{
"docid": "c3dae434ea177ec6f2d2247f8fea3cb4",
"text": "The crack and large deflection of long-span PC box Girder Bridge is common in their service stage. The lower estimation for the loss of prestress is one of possible reasons. Combined with a long-span prestressed concrete box girder bridges, the loss of longitudinal and vertical prestress of a long-span PC box girder bridge is measured during the construction and service stage. Based on the measured results, the loss of the longitudinal and vertical prestress of the long-span PC box girder bridge is analyzed by GA-ANN method and the code of JTG D62-2004, respectively. The results demonstrate that the instantaneous loss of the longitudinal prestress is more than 40% of the designed value, and the total loss of the vertical prestress is about 40% more than the designed one as well. The value of the prestress loss is much higher than the expected one of the design. The reason lies in the inappropriate construction technology, such as not tension the prestress to the designed value, tension at an early age, and excessive different of the tube wobble and so on. Therefore, the engineering quality control must be strengthened during the construction stage of the long-span PC box girder bridge for avoiding excessive prestress loss.",
"title": ""
},
{
"docid": "822e9be6fa3440640d4b3153ed5e1678",
"text": "Knowledge tracing serves as a keystone in delivering personalized education. However, few works attempted to model students’ knowledge state in the setting of Second Language Acquisition. The Duolingo Shared Task on Second Language Acquisition Modeling (Settles et al., 2018) provides students’ trace data that we extensively analyze and engineer features from for the task of predicting whether a student will correctly solve a vocabulary exercise. Our analyses of students’ learning traces reveal that factors like exercise format and engagement impact their exercise performance to a large extent. Overall, we extracted 23 different features as input to a Gradient Tree Boosting framework, which resulted in an AUC score of between 0.80 and 0.82 on the official test set.",
"title": ""
},
{
"docid": "e2ed03468a61a529f498646485cdbee6",
"text": "Statistical classification of byperspectral data is challenging because the inputs are high in dimension and represent multiple classes that are sometimes quite mixed, while the amount and quality of ground truth in the form of labeled data is typically limited. The resulting classifiers are often unstable and have poor generalization. This work investigates two approaches based on the concept of random forests of classifiers implemented within a binary hierarchical multiclassifier system, with the goal of achieving improved generalization of the classifier in analysis of hyperspectral data, particularly when the quantity of training data is limited. A new classifier is proposed that incorporates bagging of training samples and adaptive random subspace feature selection within a binary hierarchical classifier (BHC), such that the number of features that is selected at each node of the tree is dependent on the quantity of associated training data. Results are compared to a random forest implementation based on the framework of classification and regression trees. For both methods, classification results obtained from experiments on data acquired by the National Aeronautics and Space Administration (NASA) Airborne Visible/Infrared Imaging Spectrometer instrument over the Kennedy Space Center, Florida, and by Hyperion on the NASA Earth Observing 1 satellite over the Okavango Delta of Botswana are superior to those from the original best basis BHC algorithm and a random subspace extension of the BHC.",
"title": ""
},
{
"docid": "e9d987351816570b29d0144a6a7bd2ae",
"text": "One’s state of mind will influence her perception of the world and people within it. In this paper, we explore attitudes and behaviors toward online social media based on whether one is depressed or not. We conducted semistructured face-to-face interviews with 14 active Twitter users, half of whom were depressed and the other half non-depressed. Our results highlight key differences between the two groups in terms of perception towards online social media and behaviors within such systems. Non-depressed individuals perceived Twitter as an information consuming and sharing tool, while depressed individuals perceived it as a tool for social awareness and emotional interaction. We discuss several design implications for future social networks that could better accommodate users with depression and provide insights towards helping depressed users meet their needs through online social media.",
"title": ""
},
{
"docid": "95350d45a65cb6932f26be4c4d417a30",
"text": "This paper presents a detailed performance comparison (including efficiency, EMC performance and component electrical stress) between boost and buck type PFC under critical conduction mode (CRM). In universal input (90–265Vac) applications, the CRM buck PFC has around 1% higher efficiency compared to its counterpart at low-line (90Vac) condition. Due to the low voltage swing of switch, buck PFC has a better CM EMI performance than boost PFC. It seems that the buck PFC is more attractive in low power applications which only need to meet the IEC61000-3-2 Class D standard based on the comparison. The experimental results from two 100-W prototypes are also presented for side by side comparison.",
"title": ""
},
{
"docid": "cf1beda3b3f03b59cefba4aecff92fe2",
"text": "Multi-modal data is becoming more common in big data background. Finding the semantically similar objects from different modality is one of the heart problems of multi-modal learning. Most of the current methods try to learn the intermodal correlation with extrinsic supervised information, while intrinsic structural information of each modality is neglected. The performance of these methods heavily depends on the richness of training samples. However, obtaining the multi-modal training samples is still a labor and cost intensive work. In this paper, we bring a extrinsic correlation between the space structures of each modalities in coreference resolution. With this correlation, a semisupervised learning model for multi-modal coreference resolution is proposed. We firstly extract high-level features of images and text, then compute the distances of each object from some reference points to build the space structure of each modality. With a shared reference point set, the space structures of each modality are correlated. We employ the correlation to build a commonly shared space that the semantic distance between multimodal objects can be computed directly. The experiments on two multi-modal datasets show that our model performs better than the existing methods with insufficient training data.",
"title": ""
},
{
"docid": "f00b6fafc57f121af7b510f53f26dad5",
"text": "Knowledge based question answering (KBQA) has attracted much attention from both academia and industry in the field of Artificial Intelligence. However, many existing knowledge bases (KBs) are built by static triples. It is hard to answer user questions with different conditions, which will lead to significant answer variances in questions with similar intent. In this work, we propose to extract conditional knowledge base (CKB) from user question-answer pairs for answering user questions with different conditions through dialogue. Given a subject, we first learn user question patterns and conditions. Then we propose an embedding based co-clustering algorithm to simultaneously group the patterns and conditions by leveraging the answers as supervisor information. After that, we extract the answers to questions conditioned on both question pattern clusters and condition clusters as a CKB. As a result, when users ask a question without clearly specifying the conditions, we use dialogues in natural language to chat with users for question specification and answer retrieval. Experiments on real question answering (QA) data show that the dialogue model using automatically extracted CKB can more accurately answer user questions and significantly improve user satisfaction for questions with missing conditions.",
"title": ""
},
{
"docid": "013f9499b9a3e1ffdd03aa4de48d233b",
"text": "We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a large data set containing private information, releases to the public a \"sanitization\" of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst. The sanitization may be in the form of an arbitrary data structure, accompanied by a computational procedure for determining approximate answers to queries on the original data set, or it may be a \"synthetic data set\" consisting of data items drawn from the same universe as items in the original data set; queries are carried out as if the synthetic data set were the actual input. In either case the process is non-interactive; once the sanitization has been released the original data and the curator play no further role.\n For the task of sanitizing with a synthetic dataset output, we map the boundary between computational feasibility and infeasibility with respect to a variety of utility measures. For the (potentially easier) task of sanitizing with unrestricted output format, we show a tight qualitative and quantitative connection between hardness of sanitizing and the existence of traitor tracing schemes.",
"title": ""
},
{
"docid": "347278d002cdea4fe830b5d1a6b7bc62",
"text": "The question of what function is served by the cortical column has occupied neuroscientists since its original description some 60years ago. The answer seems tractable in the somatosensory cortex when considering the inputs to the cortical column and the early stages of information processing, but quickly breaks down once the multiplicity of output streams and their sub-circuits are brought into consideration. This article describes the early stages of information processing in the barrel cortex, through generation of the center and surround receptive field components of neurons that subserve integration of multi whisker information, before going on to consider the diversity of properties exhibited by the layer 5 output neurons. The layer 5 regular spiking (RS) neurons differ from intrinsic bursting (IB) neurons in having different input connections, plasticity mechanisms and corticofugal projections. In particular, layer 5 RS cells employ noise reduction and homeostatic plasticity mechanism to preserve and even increase information transfer, while IB cells use more conventional Hebbian mechanisms to achieve a similar outcome. It is proposed that the rodent analog of the dorsal and ventral streams, a division reasonably well established in primate cortex, might provide a further level of organization for RS cell function and hence sub-circuit specialization.",
"title": ""
},
{
"docid": "0879f749188cbb88a8cefff60d0d4f6e",
"text": "Raw tomato contains a high level of lycopene, which has been reported to have many important health benefits. However, information on the changes of the lycopene content in tomato during cooking is limited. In this study, the lycopene content in raw and thermally processed (baked, microwaved, and fried) tomato slurries was investigated and analyzed using a high-performance liquid chromatography (HPLC) method. In the thermal stability study using a pure lycopene standard, 50% of lycopene was degraded at 100 ◦C after 60 min, 125 ◦C after 20 min, and 150 ◦C after less than 10 min. Only 64.1% and 51.5% lycopene was retained when the tomato slurry was baked at 177 ◦C and 218 ◦C for 15 min, respectively. At these temperatures, only 37.3% and 25.1% of lycopene was retained after baking for 45 min. In 1 min of the high power of microwave heating, 64.4% of lycopene still remained. However, more degradation of lycopene in the slurry was found in the frying study. Only 36.6% and 35.5% of lycopene was retained after frying at 145 and 165 ◦C for 1 min, respectively.",
"title": ""
},
{
"docid": "935c404529b02cee2620e52f7a09b84d",
"text": "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: 1) learning the inverse kinematics in a highly-redundant robotic arm, 2) learning omnidirectional locomotion with motor primitives in a quadruped robot, 3) an arm learning to control a fishing rod with a flexible wire. We show that 1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; 2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; 3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.",
"title": ""
},
{
"docid": "4c9313e27c290ccc41f3874108593bf6",
"text": "Very few standards exist for fitting products to people. Footwear is a noteworthy example. This study is an attempt to evaluate the quality of footwear fit using two-dimensional foot outlines. Twenty Hong Kong Chinese students participated in an experiment that involved three pairs of dress shoes and one pair of athletic shoes. The participants' feet were scanned using a commercial laser scanner, and each participant wore and rated the fit of each region of each shoe. The shoe lasts were also scanned and were used to match the foot scans with the last scans. The ANOVA showed significant (p < 0.05) differences among the four pairs of shoes for the overall, fore-foot and rear-foot fit ratings. There were no significant differences among shoes for mid-foot fit rating. These perceived differences were further analysed after matching the 2D outlines of both last and feet. The point-wise dimensional difference between foot and shoe outlines were computed and analysed after normalizing with foot perimeter. The dimensional difference (DD) plots along the foot perimeter showed that fore-foot fit was strongly correlated (R(2) > 0.8) with two of the minimums in the DD-plot while mid-foot fit was strongly correlated (R(2) > 0.9) with the dimensional difference around the arch region and a point on the lateral side of the foot. The DD-plots allow the designer to determine the critical locations that may affect footwear fit in addition to quantifying the nature of misfit so that design changes to shape and material may be possible.",
"title": ""
},
{
"docid": "5213ed67780b194a609220677b9c1dd4",
"text": "Cardiovascular diseases (CVD) are initiated by endothelial dysfunction and resultant expression of adhesion molecules for inflammatory cells. Inflammatory cells secrete cytokines/chemokines and growth factors and promote CVD. Additionally, vascular cells themselves produce and secrete several factors, some of which can be useful for the early diagnosis and evaluation of disease severity of CVD. Among vascular cells, abundant vascular smooth muscle cells (VSMCs) secrete a variety of humoral factors that affect vascular functions in an autocrine/paracrine manner. Among these factors, we reported that CyPA (cyclophilin A) is secreted mainly from VSMCs in response to Rho-kinase activation and excessive reactive oxygen species (ROS). Additionally, extracellular CyPA augments ROS production, damages vascular functions, and promotes CVD. Importantly, a recent study in ATVB demonstrated that ambient air pollution increases serum levels of inflammatory cytokines. Moreover, Bell et al reported an association of air pollution exposure with high-density lipoprotein (HDL) cholesterol and particle number. In a large, multiethnic cohort study of men and women free of prevalent clinical CVD, they found that higher concentrations of PM2.5 over a 3-month time period was associated with lower HDL particle number, and higher annual concentrations of black carbon were associated with lower HDL cholesterol. Together with the authors’ previous work on biomarkers of oxidative stress, they provided evidence for potential pathways that may explain the link between air pollution exposure and acute cardiovascular events. The objective of this review is to highlight the novel research in the field of biomarkers for CVD.",
"title": ""
},
{
"docid": "c6f3d4b2a379f452054f4220f4488309",
"text": "3D Morphable Models (3DMMs) are powerful statistical models of 3D facial shape and texture, and among the state-of-the-art methods for reconstructing facial shape from single images. With the advent of new 3D sensors, many 3D facial datasets have been collected containing both neutral as well as expressive faces. However, all datasets are captured under controlled conditions. Thus, even though powerful 3D facial shape models can be learnt from such data, it is difficult to build statistical texture models that are sufficient to reconstruct faces captured in unconstrained conditions (in-the-wild). In this paper, we propose the first, to the best of our knowledge, in-the-wild 3DMM by combining a powerful statistical model of facial shape, which describes both identity and expression, with an in-the-wild texture model. We show that the employment of such an in-the-wild texture model greatly simplifies the fitting procedure, because there is no need to optimise with regards to the illumination parameters. Furthermore, we propose a new fast algorithm for fitting the 3DMM in arbitrary images. Finally, we have captured the first 3D facial database with relatively unconstrained conditions and report quantitative evaluations with state-of-the-art performance. Complementary qualitative reconstruction results are demonstrated on standard in-the-wild facial databases.",
"title": ""
},
{
"docid": "c56b670fb75b9b17dce43f748bb748a5",
"text": "Extreme-scale scientific applications are at a significant risk of being hit by soft errors on supercomputers as the scale of these systems and the component density continues to increase. In order to better understand the specific soft error vulnerabilities in scientific applications, we have built an empirical fault injection and consequence analysis tool - BIFIT - that allows us to evaluate how soft errors impact applications. In particular, BIFIT is designed with capability to inject faults at very specific targets: an arbitrarily-chosen execution point and any specific data structure. We apply BIFIT to three mission-critical scientific applications and investigate the applications vulnerability to soft errors by performing thousands of statistical tests. We, then, classify each applications individual data structures based on their sensitivity to these vulnerabilities, and generalize these classifications across applications. Subsequently, these classifications can be used to apply appropriate resiliency solutions to each data structure within an application. Our study reveals that these scientific applications have a wide range of sensitivities to both the time and the location of a soft error; yet, we are able to identify intrinsic relationships between application vulnerabilities and specific types of data objects. In this regard, BIFIT enables new opportunities for future resiliency research.",
"title": ""
},
{
"docid": "8f7375f788d7d152477c7816852dee0d",
"text": "Many decentralized, inter-organizational environments such as supply chains are characterized by high transactional uncertainty and risk. At the same time, blockchain technology promises to mitigate these issues by introducing certainty into economic transactions. This paper discusses the findings of a Design Science Research project involving the construction and evaluation of an information technology artifact in collaboration with Maersk, a leading international shipping company, where central documents in shipping, such as the Bill of Lading, are turned into a smart contract on blockchain. Based on our insights from the project, we provide first evidence for preliminary design principles for applications that aim to mitigate the transactional risk and uncertainty in decentralized environments using blockchain. Both the artifact and the first evidence for emerging design principles are novel, contributing to the discourse on the implications that the advent of blockchain technology poses for governing economic activity.",
"title": ""
},
{
"docid": "510881bfca7005dcc32fce2162e7e225",
"text": "Across many disciplines, interest is increasing in the use of computational text analysis in the service of social science questions. We survey the spectrum of current methods, which lie on two dimensions: (1) computational and statistical model complexity; and (2) domain assumptions. This comparative perspective suggests directions of research to better align new methods with the goals of social scientists. 1 Use cases for computational text analysis in the social sciences The use of computational methods to explore research questions in the social sciences and humanities has boomed over the past several years, as the volume of data capturing human communication (including text, audio, video, etc.) has risen to match the ambitious goal of understanding the behaviors of people and society [1]. Automated content analysis of text, which draws on techniques developed in natural language processing, information retrieval, text mining, and machine learning, should be properly understood as a class of quantitative social science methodologies. Employed techniques range from simple analysis of comparative word frequencies to more complex hierarchical admixture models. As this nascent field grows, it is important to clearly present and characterize the assumptions of techniques currently in use, so that new practitioners can be better informed as to the range of available models. To illustrate the breadth of current applications, we list a sampling of substantive questions and studies that have developed or applied computational text analysis to address them. • Political Science: How do U.S. Senate speeches reflect agendas and attention? How are Senate institutions changing [27]? What are the agendas expressed in Senators’ press releases [28]? Do U.S. Supreme Court oral arguments predict justices’ voting behavior [29]? Does social media reflect public political opinion, or forecast elections [12, 30]? What determines international conflict and cooperation [31, 32, 33]? How much did racial attitudes affect voting in the 2008 U.S. presidential election [34]? • Economics: How does sentiment in the media affect the stock market [2, 3]? Does sentiment in social media associate with stocks [4, 5, 6]? Do a company’s SEC filings predict aspects of stock performance [7, 8]? What determines a customer’s trust in an online merchant [9]? How can we measure macroeconomic variables with search queries and social media text [10, 11, 12]? How can we forecast consumer demand for movies [13, 14]? • Psychology: How does a person’s mental and affective state manifest in their language [15]? Are diurnal and seasonal mood cycles cross-cultural [16]?",
"title": ""
},
{
"docid": "072f3152a93eb2a75f716dd1aec131c4",
"text": "Research has not verified the theoretical or practical value of the brand attachment construct in relation to alternative constructs, particularly brand attitude strength. The authors make conceptual, measurement, and managerial contributions to this research issue. Conceptually, they define brand attachment, articulate its defining properties, and differentiate it from brand attitude strength. From a measurement perspective, they develop and validate a parsimonious measure of brand attachment, test the assumptions that underlie it, and demonstrate that it indicates the concept of attachment. They also demonstrate the convergent and discriminant validity of this measure in relation to brand attitude strength. Managerially, they demonstrate that brand attachment offers value over brand attitude strength in predicting (1) consumers’ intentions to perform difficult behaviors (those they regard as using consumer resources), (2) actual purchase behaviors, (3) brand purchase share (the share of a brand among directly competing brands), and (4) need share (the extent to which consumers rely on a brand to address relevant needs, including those brands in substitutable product categories).",
"title": ""
}
] |
scidocsrr
|
6b49d02c6be3abe3fe2462fdb907c502
|
Auto-patching DOM-based XSS at scale
|
[
{
"docid": "dde76ca0ed14039e77f09a9238d5e4a2",
"text": "JavaScript is widely used for writing client-side web applications and is getting increasingly popular for writing mobile applications. However, unlike C, C++, and Java, there are not that many tools available for analysis and testing of JavaScript applications. In this paper, we present a simple yet powerful framework, called Jalangi, for writing heavy-weight dynamic analyses. Our framework incorporates two key techniques: 1) selective record-replay, a technique which enables to record and to faithfully replay a user-selected part of the program, and 2) shadow values and shadow execution, which enables easy implementation of heavy-weight dynamic analyses. Our implementation makes no special assumption about JavaScript, which makes it applicable to real-world JavaScript programs running on multiple platforms. We have implemented concolic testing, an analysis to track origins of nulls and undefined, a simple form of taint analysis, an analysis to detect likely type inconsistencies, and an object allocation profiler in Jalangi. Our evaluation of Jalangi on the SunSpider benchmark suite and on five web applications shows that Jalangi has an average slowdown of 26X during recording and 30X slowdown during replay and analysis. The slowdowns are comparable with slowdowns reported for similar tools, such as PIN and Valgrind for x86 binaries. We believe that the techniques proposed in this paper are applicable to other dynamic languages.",
"title": ""
}
] |
[
{
"docid": "ca7e7fa988bf2ed1635e957ea6cd810d",
"text": "Knowledge graph (KG) is known to be helpful for the task of question answering (QA), since it provides well-structured relational information between entities, and allows one to further infer indirect facts. However, it is challenging to build QA systems which can learn to reason over knowledge graphs based on question-answer pairs alone. First, when people ask questions, their expressions are noisy (for example, typos in texts, or variations in pronunciations), which is non-trivial for the QA system to match those mentioned entities to the knowledge graph. Second, many questions require multi-hop logic reasoning over the knowledge graph to retrieve the answers. To address these challenges, we propose a novel and unified deep learning architecture, and an end-to-end variational learning algorithm which can handle noise in questions, and learn multi-hop reasoning simultaneously. Our method achieves state-of-the-art performance on a recent benchmark dataset in the literature. We also derive a series of new benchmark datasets, including questions for multi-hop reasoning, questions paraphrased by neural translation model, and questions in human voice. Our method yields very promising results on all these challenging datasets.",
"title": ""
},
{
"docid": "86aa31d70e44137ff16e81f79e1dac74",
"text": "The bee genus Lasioglossum Curtis is a model taxon for studying the evolutionary origins of and reversals in eusociality. This paper presents a phylogenetic analysis of Lasioglossum species and subgenera based on a data set consisting of 1240 bp of the mitochondrial cytochrome oxidase I (COI) gene for seventy-seven taxa (sixty-six ingroup and eleven outgroup taxa). Maximum parsimony was used to analyse the data set (using PAUP*4.0) by a variety of weighting methods, including equal weights, a priori weighting and a posteriori weighting. All methods yielded roughly congruent results. Michener's Hemihalictus series was found to be monophyletic in all analyses but one, while his Lasioglossum series formed a basal, paraphyletic assemblage in all analyses but one. Chilalictus was consistently found to be a basal taxon of Lasioglossum sensu lato and Lasioglossum sensu stricto was found to be monophyletic. Within the Hemihalictus series, major lineages included Dialictus + Paralictus, the acarinate Evylaeus + Hemihalictus + Sudila and the carinate Evylaeus + Sphecodogastra. Relationships within the Hemihalictus series were highly stable to altered weighting schemes, while relationships among the basal subgenera in the Lasioglossum series (Lasioglossum s.s., Chilalictus, Parasphecodes and Ctenonomia) were unclear. The social parasite of Dialictus, Paralictus, is consistently and unambiguously placed well within Dialictus, thus rendering Dialictus paraphyletic. The implications of this for understanding the origins of social parasitism are discussed.",
"title": ""
},
{
"docid": "f6a1d7b206ca2796d4e91f3e8aceeed8",
"text": "Objective To develop a classifier that tackles the problem of determining the risk of a patient of suffering from a cardiovascular disease within the next ten years. The system has to provide both a diagnosis and an interpretable model explaining the decision. In this way, doctors are able to analyse the usefulness of the information given by the system. Methods Linguistic fuzzy rule-based classification systems are used, since they provide a good classification rate and a highly interpretable model. More specifically, a new methodology to combine fuzzy rule-based classification systems with interval-valued fuzzy sets is proposed, which is composed of three steps: 1) the modelling of the linguistic labels of the classifier using interval-valued fuzzy sets; 2) the use of theKα operator in the inference process and 3) the application of a genetic tuning to find the best ignorance degree that each interval-valued fuzzy set represents as well as the best value for the parameter α of theKα operator in each rule. Results Correspondingauthor. Tel:+34-948166048. Fax:+34-948168924 Email addresses: joseantonio.sanz@unavarra.es (Jośe Antonio Sanz ), mikel.galar@unavarra.es (Mikel Galar),aranzazu.jurio@unavarra.es (Aranzazu Jurio), antonio.brugos@unavarra.es (Antonio Brugos), miguel.pagola@unavarra.es (Miguel Pagola),bustince@unavarra.es (Humberto Bustince) Preprint submitted to Elsevier November 13, 2013 © 2013. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/",
"title": ""
},
{
"docid": "b82facfc85ef2ae55f03beef7d1bb968",
"text": "Stock movements are essentially driven by new information. Market data, financial news, and social sentiment are believed to have impacts on stock markets. To study the correlation between information and stock movements, previous works typically concatenate the features of different information sources into one super feature vector. However, such concatenated vector approaches treat each information source separately and ignore their interactions. In this article, we model the multi-faceted investors’ information and their intrinsic links with tensors. To identify the nonlinear patterns between stock movements and new information, we propose a supervised tensor regression learning approach to investigate the joint impact of different information sources on stock markets. Experiments on CSI 100 stocks in the year 2011 show that our approach outperforms the state-of-the-art trading strategies.",
"title": ""
},
{
"docid": "43850ef433d1419ed37b7b12f3ff5921",
"text": "We have seen ten years of the application of AI planning to the problem of narrative generation in Interactive Storytelling (IS). In that time planning has emerged as the dominant technology and has featured in a number of prototype systems. Nevertheless key issues remain, such as how best to control the shape of the narrative that is generated (e.g., by using narrative control knowledge, i.e., knowledge about narrative features that enhance user experience) and also how best to provide support for real-time interactive performance in order to scale up to more realistic sized systems. Recent progress in planning technology has opened up new avenues for IS and we have developed a novel approach to narrative generation that builds on this. Our approach is to specify narrative control knowledge for a given story world using state trajectory constraints and then to treat these state constraints as landmarks and to use them to decompose narrative generation in order to address scalability issues and the goal of real-time performance in larger story domains. This approach to narrative generation is fully implemented in an interactive narrative based on the “Merchant of Venice.” The contribution of the work lies both in our novel use of state constraints to specify narrative control knowledge for interactive storytelling and also our development of an approach to narrative generation that exploits such constraints. In the article we show how the use of state constraints can provide a unified perspective on important problems faced in IS.",
"title": ""
},
{
"docid": "22658b675b501059ec5a7905f6b766ef",
"text": "The purpose of this study was to compare the physiological results of 2 incremental graded exercise tests (GXTs) and correlate these results with a short-distance laboratory cycle time trial (TT). Eleven men (age 25 +/- 5 years, Vo(2)max 62 +/- 8 ml.kg(-1).min(-1)) randomly underwent 3 laboratory tests performed on a cycle ergometer. The first 2 tests consisted of a GXT consisting of either 3-minute (GXT(3-min)) or 5-minute (GXT(5-min)) workload increments. The third test involved 1 laboratory 30-minute TT. The peak power output, lactate threshold, onset of blood lactate accumulation, and maximum displacement threshold (Dmax) determined from each GXT was not significantly different and in agreement when measured from the GXT(3-min) or GXT(5-min). Furthermore, similar correlation coefficients were found among the results of each GXT and average power output in the 30-minute cycling TT. Hence, the results of either GXT can be used to predict performance or for training prescription.",
"title": ""
},
{
"docid": "af2e881acf6744469389d3e81570341f",
"text": "Although smoking cessation is the primary goal for the control of cancer and other smoking-related diseases, chemoprevention provides a complementary approach applicable to high risk individuals such as current smokers and ex-smokers. The thiol N-acetylcysteine (NAC) works per se in the extracellular environment, and is a precursor of intracellular cysteine and glutathione (GSH). Almost 40 years of experience in the prophylaxis and therapy of a variety of clinical conditions, mostly involving GSH depletion and alterations of the redox status, have established the safety of this drug, even at very high doses and for long-term treatments. A number of studies performed since 1984 have indicated that NAC has the potential to prevent cancer and other mutation-related diseases. N-Acetylcysteine has an impressive array of mechanisms and protective effects towards DNA damage and carcinogenesis, which are related to its nucleophilicity, antioxidant activity, modulation of metabolism, effects in mitochondria, decrease of the biologically effective dose of carcinogens, modulation of DNA repair, inhibition of genotoxicity and cell transformation, modulation of gene expression and signal transduction pathways, regulation of cell survival and apoptosis, anti-inflammatory activity, anti-angiogenetic activity, immunological effects, inhibition of progression to malignancy, influence on cell cycle progression, inhibition of pre-neoplastic and neoplastic lesions, inhibition of invasion and metastasis, and protection towards adverse effects of other chemopreventive agents or chemotherapeutical agents. These mechanisms are herein reviewed and commented on with special reference to smoking-related end-points, as evaluated in in vitro test systems, experimental animals and clinical trials. It is important that all protective effects of NAC were observed under a range of conditions produced by a variety of treatments or imbalances of homeostasis. However, our recent data show that, at least in mouse lung, under physiological conditions NAC does not alter per se the expression of multiple genes detected by cDNA array technology. On the whole, there is overwhelming evidence that NAC has the ability to modulate a variety of DNA damage- and cancer-related end-points.",
"title": ""
},
{
"docid": "77281793a88329ca2cf9fd8eeaf01524",
"text": "This paper describes a new circuit integrated on silicon, which generates temperature-independent bias currents. Such a circuit is firstly employed to obtain a current reference with first-order temperature compensation, then it is modified to obtain second-order temperature compensation. The operation principle of the new circuits is described and the relationships between design and technology process parameters are derived. These circuits have been designed by a 0.35 /spl mu/m BiCMOS technology process and the thermal drift of the reference current has been evaluated by computer simulations. They show good thermal performance and in particular, the new second-order temperature-compensated current reference has a mean temperature drift of only 28 ppm//spl deg/C in the temperature range between -30/spl deg/C and 100/spl deg/C.",
"title": ""
},
{
"docid": "786f6c09777788c3456e6613729c0292",
"text": "An experimental approach to studying the properties of word embeddings is proposed. Controlled experiments, achieved through modifications of the training corpus, permit the demonstration of direct relations between word properties and word vector direction and length. The approach is demonstrated using the word2vec CBOW model with experiments that independently vary word frequency and word co-occurrence noise. The experiments reveal that word vector length depends more or less linearly on both word frequency and the level of noise in the co-occurrence distribution of the word. The coefficients of linearity depend upon the word. The special point in feature space, defined by the (artificial) word with pure noise in its co-occurrence distribution, is found to be small but non-zero.",
"title": ""
},
{
"docid": "d114f37ccb079106a728ad8fe1461919",
"text": "This paper describes a stochastic hill climbing algorithm named SHCLVND to optimize arbitrary vectorial < n ! < functions. It needs less parameters. It uses normal (Gaussian) distributions to represent probabilities which are used for generating more and more better argument vectors. The-parameters of the normal distributions are changed by a kind of Hebbian learning. Kvasnicka et al. KPP95] used algorithm Stochastic Hill Climbing with Learning (HCwL) to optimize a highly multimodal vectorial function on real numbers. We have tested proposed algorithm by optimizations of the same and a similar function and show the results in comparison to HCwL. In opposite to it algorithm SHCLVND desribed here works directly on vectors of numbers instead their bit-vector representations and uses normal distributions instead of numbers to represent probabilities. 1 Overview In Section 2 we give an introduction with the way to the algorithm. Then we describe it exactly in Section 3. There is also given a compact notation in pseudo PASCAL-code, see Section 3.4. After that we give an example: we optimize highly multimodal functions with the proposed algorithm and give some visualisations of the progress in Section 4. In Section 5 there are a short summary and some ideas for future works. At last in Section 6 we give some hints for practical use of the algorithm. 2 Introduction This paper describes a hill climbing algorithm to optimize vectorial functions on real numbers. 2.1 Motivation Flexible algorithms for optimizing any vectorial function are interesting if there is no or only a very diicult mathematical solution known, e.g. parameter adjustments to optimize with respect to some relevant property the recalling behavior of a (trained) neuronal net HKP91, Roj93], or the resulting image of some image-processing lter.",
"title": ""
},
{
"docid": "7f2dff96e9c1742842fea6a43d17f93e",
"text": "We study shock-based methods for credible causal inference in corporate finance research. We focus on corporate governance research, survey 13,461 papers published between 2001 and 2011 in 22 major accounting, economics, finance, law, and management journals; and identify 863 empirical studies in which corporate governance is associated with firm value or other characteristics. We classify the methods used in these studies and assess whether they support a causal link between corporate governance and firm value or another outcome. Only a stall minority of studies have convincing causal inference strategies. The convincing strategies largely rely on external shocks – usually from legal rules – often called “natural experiments”. We examine the 74 shock-based papers and provide a guide to shock-based research design, which stresses the common features across different designs and the value of using combined designs.",
"title": ""
},
{
"docid": "24f68da70b879cc74b00e2bc9cae6f96",
"text": "This paper presents the power management scheme for a power electronics based low voltage microgrid in islanding operation. The proposed real and reactive power control is based on the virtual frequency and voltage frame, which can effectively decouple the real and reactive power flows and improve the system transient and stability performance. Detailed analysis of the virtual frame operation range is presented, and a control strategy to guarantee that the microgrid can be operated within the predetermined voltage and frequency variation limits is also proposed. Moreover, a reactive power control with adaptive voltage droop method is proposed, which automatically updates the maximum reactive power limit of a DG unit based on its current rating and actual real power output and features enlarged power output range and further improved system stability. Both simulation and experimental results are provided in this paper.",
"title": ""
},
{
"docid": "a5d568b4a86dcbda2c09894c778527ea",
"text": "INTRODUCTION\nHypoglycemia (Hypo) is the most common side effect of insulin therapy in people with type 1 diabetes (T1D). Over time, patients with T1D become unaware of signs and symptoms of Hypo. Hypo unawareness leads to morbidity and mortality. Diabetes alert dogs (DADs) represent a unique way to help patients with Hypo unawareness. Our group has previously presented data in abstract form which demonstrates the sensitivity and specificity of DADS. The purpose of our current study is to expand evaluation of DAD sensitivity and specificity using a method that reduces the possibility of trainer bias.\n\n\nMETHODS\nWe evaluated 6 dogs aging 1-10 years old who had received an average of 6 months of training for Hypo alert using positive training methods. Perspiration samples were collected from patients during Hypo (BG 46-65 mg/dL) and normoglycemia (BG 85-136 mg/dl) and were used in training. These samples were placed in glass vials which were then placed into 7 steel cans (1 Hypo, 2 normal, 4 blank) randomly placed by roll of a dice. The dogs alerted by either sitting in front of, or pushing, the can containing the Hypo sample. Dogs were rewarded for appropriate recognition of the Hypo samples using a food treat via a remote control dispenser. The results were videotaped and statistically evaluated for sensitivity (proportion of lows correctly alerted, \"true positive rate\") and specificity (proportion of blanks + normal samples not alerted, \"true negative rate\") calculated after pooling data across all trials for all dogs.\n\n\nRESULTS\nAll DADs displayed statistically significant (p value <0.05) greater sensitivity (min 50.0%-max 87.5%) to detect the Hypo sample than the expected random correct alert of 14%. Specificity ranged from a min of 89.6% to a max of 97.9% (expected rate is not defined in this scenario).\n\n\nCONCLUSIONS\nOur results suggest that properly trained DADs can successfully recognize and alert to Hypo in an in vitro setting using smell alone.",
"title": ""
},
{
"docid": "48a8cfc2ac8c8c63bbd15aba5a830ef9",
"text": "We extend prior research on masquerade detection using UNIX commands issued by users as the audit source. Previous studies using multi-class training requires gathering data from multiple users to train specific profiles of self and non-self for each user. Oneclass training uses data representative of only one user. We apply one-class Naïve Bayes using both the multivariate Bernoulli model and the Multinomial model, and the one-class SVM algorithm. The result shows that oneclass training for this task works as well as multi-class training, with the great practical advantages of collecting much less data and more efficient training. One-class SVM using binary features performs best among the oneclass training algorithms.",
"title": ""
},
{
"docid": "7dde662184f9dc0363df5cfeffc4724e",
"text": "WordNet is a lexical reference system, developed by the university of Princeton. This paper gives a detailed documentation of the Prolog database of WordNet and predicates to interface it. 1",
"title": ""
},
{
"docid": "d126bfbab45f7e942947b30806045123",
"text": "Despite increasing amounts of data and ever improving natural language generation techniques, work on automated journalism is still relatively scarce. In this paper, we explore the field and challenges associated with building a journalistic natural language generation system. We present a set of requirements that should guide system design, including transparency, accuracy, modifiability and transferability. Guided by the requirements, we present a data-driven architecture for automated journalism that is largely domain and language independent. We illustrate its practical application in the production of news articles upon a user request about the 2017 Finnish municipal elections in three languages, demonstrating the successfulness of the data-driven, modular approach of the design. We then draw some lessons for future automated journalism.",
"title": ""
},
{
"docid": "83ac82ef100fdf648a5214a50d163fe3",
"text": "We consider the problem of multi-robot taskallocation when robots have to deal with uncertain utility estimates. Typically an allocation is performed to maximize expected utility; we consider a means for measuring the robustness of a given optimal allocation when robots have some measure of the uncertainty (e.g., a probability distribution, or moments of such distributions). We introduce a new O(n) algorithm, the Interval Hungarian algorithm, that extends the classic KuhnMunkres Hungarian algorithm to compute the maximum interval of deviation (for each entry in the assignment matrix) which will retain the same optimal assignment. This provides an efficient measurement of the tolerance of the allocation to the uncertainties, for both a specific interval and a set of interrelated intervals. We conduct experiments both in simulation and with physical robots to validate the approach and to gain insight into the effect of location uncertainty on allocations for multi-robot multi-target navigation tasks.",
"title": ""
},
{
"docid": "385789e37297644dc79ce9e39ee0f7cd",
"text": "A key issue in Low Voltage (LV) distribution systems is to identify strategies for the optimal management and control in the presence of Distributed Energy Resources (DERs). To reduce the number of variables to be monitored and controlled, virtual levels of aggregation, called Virtual Microgrids (VMs), are introduced and identified by using new models of the distribution system. To this aim, this paper, revisiting and improving the approach outlined in a conference paper, presents a sensitivity-based model of an LV distribution system, supplied by an Medium/Low Voltage (MV/LV) substation and composed by several feeders, which is suitable for the optimal management and control of the grid and for VM definition. The main features of the proposed method are: it evaluates the sensitivity coefficients in a closed form; it provides an overview of the sensitivity of the network to the variations of each DER connected to the grid; and it presents a limited computational burden. A comparison of the proposed method with both the exact load flow solutions and a perturb-and-observe method is discussed in a case study. Finally, the method is used to evaluate the impact of the DERs on the nodal voltages of the network.",
"title": ""
},
{
"docid": "e964a46706179a92b775307166a64c8a",
"text": "I general, perceptions of information systems (IS) success have been investigated within two primary research streams—the user satisfaction literature and the technology acceptance literature. These two approaches have been developed in parallel and have not been reconciled or integrated. This paper develops an integrated research model that distinguishes beliefs and attitudes about the system (i.e., object-based beliefs and attitudes) from beliefs and attitudes about using the system (i.e., behavioral beliefs and attitudes) to build the theoretical logic that links the user satisfaction and technology acceptance literature. The model is then tested using a sample of 465 users from seven different organizations who completed a survey regarding their use of data warehousing software. The proposed model was supported, providing preliminary evidence that the two perspectives can and should be integrated. The integrated model helps build the bridge from design and implementation decisions to system characteristics (a core strength of the user satisfaction literature) to the prediction of usage (a core strength of the technology acceptance literature).",
"title": ""
}
] |
scidocsrr
|
02bfd62edbd54cc6b98240cb572aa784
|
On the Application of Danskin's Theorem to Derivative-Free Minimax Optimization
|
[
{
"docid": "a3099df83149b84e113d0f12b66e1ab7",
"text": "We propose a multistart CMA-ES with equal budgets for two interlaced restart strategies, one with an increasing population size and one with varying small population sizes. This BI-population CMA-ES is benchmarked on the BBOB-2009 noiseless function testbed and could solve 23, 22 and 20 functions out of 24 in search space dimensions 10, 20 and 40, respectively, within a budget of less than $10^6 D$ function evaluations per trial.",
"title": ""
}
] |
[
{
"docid": "09c808f014ff9b93795a5e040b2ad7de",
"text": "The Internet of Things (IoT) concept proposes that everyday objects are globally accessible from the Internet and integrate into new services having a remarkable impact on our society. Opposite to Internet world, things usually belong to resource-challenged environmentswhere energy, data throughput, and computing resources are scarce. Building upon existing standards in the field such as IEEE1451 and ZigBee and rooted in context semantics, this paper proposes CTP (CommunicationThings Protocol) as a protocol specification to allow interoperability among things with different communication standards as well as simplicity and functionality to build IoT systems. Also, this paper proposes the use of the IoT gateway as a fundamental component in IoT architectures to provide seamless connectivity and interoperability among things and connect two different worlds to build the IoT: the Things world and the Internet world. Both CTP and IoT gateway constitute a middleware content-centric architecture presented as the mechanism to achieve a balance between the intrinsic limitations of things in the physical world and what is required from them in the virtual world. Saidmiddleware content-centric architecture is implementedwithin the frame of two European projects targeting smart environments and proving said CTP’s objectives in real scenarios.",
"title": ""
},
{
"docid": "49e8c5d0aac226bbd5c81d467e632c4f",
"text": "After decades of study, automatic face detection and recognition systems are now accurate and widespread. Naturally, this means users who wish to avoid automatic recognition are becoming less able to do so. Where do we stand in this cat-and-mouse race? We currently live in a society where everyone carries a camera in their pocket. Many people willfully upload most or all of the pictures they take to social networks which invest heavily in automatic face recognition systems. In this setting, is it still possible for privacy-conscientious users to avoid automatic face detection and recognition? If so, how? Must evasion techniques be obvious to be effective, or are there still simple measures that users can use to protect themselves? In this work, we find ways to evade face detection on Facebook, a representative example of a popular social network that uses automatic face detection to enhance their service. We challenge widely-held beliefs about evading face detection: do our old techniques such as blurring the face region or wearing \"privacy glasses\" still work? We show that in general, state-of-the-art detectors can often find faces even if the subject wears occluding clothing or even if the uploader damages the photo to prevent faces from being detected.",
"title": ""
},
{
"docid": "9deea0426461c72df7ee56353ecf1d88",
"text": "This paper presents a novel approach to visualizing the time structure of musical waveforms. The acoustic similarity between any two instants of an audio recording is displayed in a static 2D representation, which makes structural and rhythmic characteristics visible. Unlike practically all prior work, this method characterizes self-similarity rather than specific audio attributes such as pitch or spectral features. Examples are presented for classical and popular music.",
"title": ""
},
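The self-similarity visualization described in the passage above can be sketched in a few lines. This is an illustrative reconstruction rather than the paper's implementation; the magnitude-spectrum features, frame length, hop size, and cosine similarity are assumptions of the sketch.

```python
import numpy as np

def self_similarity_matrix(y, frame_len=2048, hop=512):
    """Frame-by-frame acoustic self-similarity of a 1-D numpy audio array.

    Sketch only: features are magnitude spectra of windowed frames and
    similarity is cosine similarity between feature vectors.
    """
    # Slice the signal into overlapping, windowed frames (assumes len(y) >= frame_len).
    n_frames = 1 + (len(y) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([y[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Magnitude spectrum of each frame is its feature vector.
    feats = np.abs(np.fft.rfft(frames, axis=1))
    # Normalize rows so that a matrix product gives cosine similarity.
    feats /= np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12
    return feats @ feats.T  # S[i, j] close to 1 means instants i and j sound alike

# Usage sketch: S = self_similarity_matrix(audio_samples); display S as an image.
```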
{
"docid": "477af6326b8d51afcb15ef6107fe3cd7",
"text": "BACKGROUND\nThe few studies that have investigated the relationship between mobile phone use and sleep have mainly been conducted among children and adolescents. In adults, very little is known about mobile phone usage in bed our after lights out. This cross-sectional study set out to examine the association between bedtime mobile phone use and sleep among adults.\n\n\nMETHODS\nA sample of 844 Flemish adults (18-94 years old) participated in a survey about electronic media use and sleep habits. Self-reported sleep quality, daytime fatigue and insomnia were measured using the Pittsburgh Sleep Quality Index (PSQI), the Fatigue Assessment Scale (FAS) and the Bergen Insomnia Scale (BIS), respectively. Data were analyzed using hierarchical and multinomial regression analyses.\n\n\nRESULTS\nHalf of the respondents owned a smartphone, and six out of ten took their mobile phone with them to the bedroom. Sending/receiving text messages and/or phone calls after lights out significantly predicted respondents' scores on the PSQI, particularly longer sleep latency, worse sleep efficiency, more sleep disturbance and more daytime dysfunction. Bedtime mobile phone use predicted respondents' later self-reported rise time, higher insomnia score and increased fatigue. Age significantly moderated the relationship between bedtime mobile phone use and fatigue, rise time, and sleep duration. An increase in bedtime mobile phone use was associated with more fatigue and later rise times among younger respondents (≤ 41.5 years old and ≤ 40.8 years old respectively); but it was related to an earlier rise time and shorter sleep duration among older respondents (≥ 60.15 years old and ≥ 66.4 years old respectively).\n\n\nCONCLUSION\nFindings suggest that bedtime mobile phone use is negatively related to sleep outcomes in adults, too. It warrants continued scholarly attention as the functionalities of mobile phones evolve rapidly and exponentially.",
"title": ""
},
{
"docid": "4cd09cc6aa67d1314ca5de09d1240b65",
"text": "A new class of metrics appropriate for measuring effective similarity relations between sequences, say one type of similarity per metric, is studied. We propose a new \"normalized information distance\", based on the noncomputable notion of Kolmogorov complexity, and show that it minorizes every metric in the class (that is, it is universal in that it discovers all effective similarities). We demonstrate that it too is a metric and takes values in [0, 1]; hence it may be called the similarity metric. This is a theory foundation for a new general practical tool. We give two distinctive applications in widely divergent areas (the experiments by necessity use just computable approximations to the target notions). First, we computationally compare whole mitochondrial genomes and infer their evolutionary history. This results in a first completely automatic computed whole mitochondrial phylogeny tree. Secondly, we give fully automatically computed language tree of 52 different language based on translated versions of the \"Universal Declaration of Human Rights\".",
"title": ""
},
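The normalized information distance in the passage above is defined through Kolmogorov complexity and is therefore not computable; in practice it is commonly approximated by the normalized compression distance (NCD) using a real compressor. The sketch below is one such approximation (zlib is an arbitrary choice), not the authors' implementation.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, a computable stand-in for the
    normalized information distance: values near 0 mean very similar
    sequences, values near 1 mean essentially unrelated sequences."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Usage sketch: ncd(seq_a.encode(), seq_b.encode()) gives a small value for
# closely related sequences (e.g., similar genomes or translated texts).
```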
{
"docid": "9846794c512f847ca16c43bcf055a757",
"text": "Sensing and presenting on-road information of moving vehicles is essential for fully and semi-automated driving. It is challenging to track vehicles from affordable on-board cameras in crowded scenes. The mismatch or missing data are unavoidable and it is ineffective to directly present uncertain cues to support the decision-making. In this paper, we propose a physical model based on incompressible fluid dynamics to represent the vehicle’s motion, which provides hints of possible collision as a continuous scalar riskmap. We estimate the position and velocity of other vehicles from a monocular on-board camera located in front of the ego-vehicle. The noisy trajectories are then modeled as the boundary conditions in the simulation of advection and diffusion process. We then interactively display the animating distribution of substances, and show that the continuous scalar riskmap well matches the perception of vehicles even in presence of the tracking failures. We test our method on real-world scenes and discuss about its application for driving assistance and autonomous vehicle in the future.",
"title": ""
},
{
"docid": "693d9ee4f286ef03175cb302ef1b2a93",
"text": "We explore the question of whether phase-based time-of-flight (TOF) range cameras can be used for looking around corners and through scattering diffusers. By connecting TOF measurements with theory from array signal processing, we conclude that performance depends on two primary factors: camera modulation frequency and the width of the specular lobe (“shininess”) of the wall. For purely Lambertian walls, commodity TOF sensors achieve resolution on the order of meters between targets. For seemingly diffuse walls, such as posterboard, the resolution is drastically reduced, to the order of 10cm. In particular, we find that the relationship between reflectance and resolution is nonlinear—a slight amount of shininess can lead to a dramatic improvement in resolution. Since many realistic scenes exhibit a slight amount of shininess, we believe that off-the-shelf TOF cameras can look around corners.",
"title": ""
},
{
"docid": "b92851e1c50db1af8ec26734f472d989",
"text": "A new reflection-type phase shifter with a full 360deg relative phase shift range and constant insertion loss is presented. This feature is obtained by incorporating a new cascaded connection of varactors into the impedance-transforming quadrature coupler. The required reactance variation of a varactor can be reduced by controlling the impedance ratio of the quadrature coupler. The implemented phase shifter achieves a measured maximal relative phase shift of 407deg, an averaged insertion loss of 4.4 dB and return losses better than 19 dB at 2 GHz. The insertion-loss variation is within plusmn0.1 and plusmn0.2 dB over the 360deg and 407deg relative phase shift tuning range, respectively.",
"title": ""
},
{
"docid": "4736ae77defc37f96b235b3c0c2e56ff",
"text": "This review highlights progress over the past decade in research on the effects of mass trauma experiences on children and youth, focusing on natural disasters, war, and terrorism. Conceptual advances are reviewed in terms of prevailing risk and resilience frameworks that guide basic and translational research. Recent evidence on common components of these models is evaluated, including dose effects, mediators and moderators, and the individual or contextual differences that predict risk or resilience. New research horizons with profound implications for health and well-being are discussed, particularly in relation to plausible models for biological embedding of extreme stress. Strong consistencies are noted in this literature, suggesting guidelines for disaster preparedness and response. At the same time, there is a notable shortage of evidence on effective interventions for child and youth victims. Practical and theory-informative research on strategies to protect children and youth victims and promote their resilience is a global priority.",
"title": ""
},
{
"docid": "a8e6e1fc36c762744d45221430414035",
"text": "As with a quantitative study, critical analysis of a qualitative study involves an in-depth review of how each step of the research was undertaken. Qualitative and quantitative studies are, however, fundamentally different approaches to research and therefore need to be considered differently with regard to critiquing. The different philosophical underpinnings of the various qualitative research methods generate discrete ways of reasoning and distinct terminology; however, there are also many similarities within these methods. Because of this and its subjective nature, qualitative research it is often regarded as more difficult to critique. Nevertheless, an evidenced-based profession such as nursing cannot accept research at face value, and nurses need to be able to determine the strengths and limitations of qualitative as well as quantitative research studies when reviewing the available literature on a topic.",
"title": ""
},
{
"docid": "74aec9316b9b05c50fca7e121f419fa5",
"text": "Let H := (S0, A0, R0, S1, . . . , St−1, At−1, Rt−1, St) be the first t transitions in the episode H . We call H a partial trajectory of length t. Notice that we use subscripts on trajectories to denote the trajectory’s index in D and superscripts to denote partial trajectories—H i is the first t transitions of the ith trajectory in D. Let H be the set of all possible partial trajectories of length t.",
"title": ""
},
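A minimal sketch of the partial-trajectory notation in the passage above, assuming each trajectory is stored as a list of (state, action, reward) transitions plus a terminal state; that data layout is an assumption of the sketch, not taken from the paper.

```python
def partial_trajectory(trajectory, t):
    """Return H^t, the first t transitions (S_0, A_0, R_0, ..., S_t) of an
    episode stored as ([(s_0, a_0, r_0), (s_1, a_1, r_1), ...], terminal_state)."""
    transitions, terminal = trajectory
    head = transitions[:t]
    # S_t is the state of the (t+1)-th transition, or the terminal state
    # if the episode has exactly t transitions.
    s_t = transitions[t][0] if t < len(transitions) else terminal
    return head, s_t

# H_i^t for the i-th trajectory in a dataset D (hypothetical variable names):
# head, s_t = partial_trajectory(D[i], t)
```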
{
"docid": "845ee0b77e30a01d87e836c6a84b7d66",
"text": "This paper proposes an efficient and effective scheme to applying the sliding window approach popular in computer vision to 3D data. Specifically, the sparse nature of the problem is exploited via a voting scheme to enable a search through all putative object locations at any orientation. We prove that this voting scheme is mathematically equivalent to a convolution on a sparse feature grid and thus enables the processing, in full 3D, of any point cloud irrespective of the number of vantage points required to construct it. As such it is versatile enough to operate on data from popular 3D laser scanners such as a Velodyne as well as on 3D data obtained from increasingly popular push-broom configurations. Our approach is “embarrassingly parallelisable” and capable of processing a point cloud containing over 100K points at eight orientations in less than 0.5s. For the object classes car, pedestrian and bicyclist the resulting detector achieves best-in-class detection and timing performance relative to prior art on the KITTI dataset as well as compared to another existing 3D object detection approach.",
"title": ""
},
{
"docid": "7a8979f96411ef37c079d85c77c03bac",
"text": "Ankle-foot orthoses (AFOs) are orthotic devices that support the movement of the ankles of disabled people, for example, those suffering from hemiplegia or peroneal nerve palsy. We have developed an intelligently controllable AFO (i-AFO) in which the ankle torque is controlled by a compact magnetorheological fluid brake. Gait-control tests with the i-AFO were performed for a patient with flaccid paralysis of the ankles, who has difficulty in voluntary movement of the peripheral part of the inferior limb, and physical limitations on his ankles. By using the i-AFO, his gait control was improved by prevention of drop foot in the swing phase and by forward promotion in the stance phase.",
"title": ""
},
{
"docid": "7b806cbde7cd0c2682402441a578ec9c",
"text": "We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to diierent classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the diierent classes of basis functions correspond to diierent classes of prior probabilities on the approximating function spaces, and therefore to diierent types of smoothness assumptions. In summary, diierent multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to diierent classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one-hidden layer.",
"title": ""
},
{
"docid": "e97f74244a032204e49d9306032f09a7",
"text": "For the discovery of biomarkers in the retinal vasculature it is essential to classify vessels into arteries and veins. We automatically classify retinal vessels as arteries or veins based on colour features using a Gaussian Mixture Model, an Expectation-Maximization (GMM-EM) unsupervised classifier, and a quadrant-pairwise approach. Classification is performed on illumination-corrected images. 406 vessels from 35 images were processed resulting in 92% correct classification (when unlabelled vessels are not taken into account) as compared to 87.6%, 90.08%, and 88.28% reported in [12] [14] and [15]. The classifier results were compared against two trained human graders to establish performance parameters to validate the success of classification method. The proposed system results in specificity of (0.8978, 0.9591) and precision (positive predicted value) of (0.9045, 0.9408) as compared to specificity of (0.8920, 0.7918) and precision of (0.8802, 0.8118) for (arteries, veins) respectively as reported in [13]. The classification accuracy was found to be 0.8719 and 0.8547 for veins and arteries, respectively.",
"title": ""
},
{
"docid": "cd5a1c3b3dd0de3571132404ec7646a9",
"text": "The use of plant metabolites for medicinal and cosmetic purpose is today gaining popularity. The most important step in this exploitation of metabolites is extraction and isolation of compound of interest. These day we can identified two group of extraction technique called conventional technique using cheaper equipment, high amount of solvent and takes long extracting time, and new or green technique using costly equipment, elevated pressure and / or temperatures with short extracting time. After extracting secondary metabolites a step of purification and isolation are required using Chromatographic or NonChromatographic techniques. This paper reviews the different technique of extraction and identification of plant metabolites.",
"title": ""
},
{
"docid": "d4ea4a718837db4ecdfd64896661af77",
"text": "Laboratory studies have documented that women often respond less favorably to competition than men. Conditional on performance, men are often more eager to compete, and the performance of men tends to respond more positively to an increase in competition. This means that few women enter and win competitions. We review studies that examine the robustness of these differences as well the factors that may give rise to them. Both laboratory and field studies largely confirm these initial findings, showing that gender differences in competitiveness tend to result from differences in overconfidence and in attitudes toward competition. Gender differences in risk aversion, however, seem to play a smaller and less robust role. We conclude by asking what could and should be done to encourage qualified males and females to compete. 601 A nn u. R ev . E co n. 2 01 1. 3: 60 163 0. D ow nl oa de d fr om w w w .a nn ua lre vi ew s.o rg by $ {i nd iv id ua lU se r.d is pl ay N am e} o n 08 /1 6/ 11 . F or p er so na l u se o nl y.",
"title": ""
},
{
"docid": "77f5c568ed065e4f23165575c0a05da6",
"text": "Localization is the problem of determining the position of a mobile robot from sensor data. Most existing localization approaches are passive, i.e., they do not exploit the opportunity to control the robot's effectors during localization. This paper proposes an active localization approach. The approach provides rational criteria for (1) setting the robot's motion direction (exploration), and (2) determining the pointing direction of the sensors so as to most efficiently localize the robot. Furthermore, it is able to deal with noisy sensors and approximative world models. The appropriateness of our approach is demonstrated empirically using a mobile robot in a structured office environment.",
"title": ""
},
{
"docid": "74f021ad22d78c8fac9b0dcfd6294224",
"text": "__________________________ This paper provides an overview of the research related to second language learners and reading strategies. It also considers the more recent research focusing on the role of metacognitive awareness in the reading comprehension process. The following questions are addressed: 1) How can the relationship between reading strategies, metacognitive awareness, and reading proficiency be characterized? 2) What does research in this domain indicate about the reading process? 3) What research methodologies can be used to investigate metacognitive awareness and reading strategies? 4) What open questions still remain from the perspective of research in this domain, and what are some of the research and methodological concerns that need to be addressed in this area in order to advance the current conceptual understanding of the reading process in an L2. Since so much of second language research is grounded in first language research, findings from both L1 and L2 contexts are discussed. _________________________ Introduction The current explosion of research in second language reading has begun to focus on readers’ strategies. Reading strategies are of interest for what they reveal about the way readers manage their interaction with written text and how these strategies are related to text comprehension. Research in second language reading suggests that learners use a variety of strategies to assist them with the acquisition, storage, and retrieval of information (Rigney, 1978). Strategies are defined as learning techniques, behaviors, problem-solving or study skills which make learning more effective and efficient (Oxford and Crookall, 1989). In the context of second language learning, a distinction can be made between strategies that make learning more effective, versus strategies that improve comprehension. The former are generally referred to as learning strategies in the second language literature. Comprehension or reading strategies on the other hand, indicate how readers conceive of a task, how they make sense of what they read, and",
"title": ""
},
{
"docid": "6b07b8cbc1f583de85e21f8b90fdf183",
"text": "Extramedullary (EM) manifestations of acute leukemia include a wide variety of clinically significant phenomena that often pose therapeutic dilemmas. Myeloid sarcoma (MS) and leukemia cutis (LC) represent 2 well-known EM manifestations with a range of clinical presentations. MS (also known as granulocytic sarcoma or chloroma) is a rare EM tumor of immature myeloid cells. LC specifically refers to the infiltration of the epidermis, dermis, or subcutis by neoplastic leukocytes (leukemia cells), resulting in clinically identifiable cutaneous lesions. The molecular mechanisms underlying EM involvement are not well defined, but recent immunophenotyping, cytogenetic, and molecular analysis are beginning to provide some understanding. Certain cytogenetic abnormalities are associated with increased risk of EM involvement, potentially through altering tissue-homing pathways. The prognostic significance of EM involvement is not fully understood. Therefore, it has been difficult to define the optimal treatment of patients with MS or LC. The timing of EM development at presentation versus relapse, involvement of the marrow, and AML risk classification help to determine our approach to treatment of EM disease.",
"title": ""
}
] |
scidocsrr
|
0cee4633028e7e08868d6c88198fb65c
|
RtGender: A Corpus for Studying Differential Responses to Gender
|
[
{
"docid": "6ee134d05811540cfedfd467daa11342",
"text": "The framing of an action influences how we perceive its actor. We introduce connotation frames of power and agency, a pragmatic formalism organized using frame semantic representations, to model how different levels of power and agency are implicitly projected on actors through their actions. We use the new power and agency frames to measure the subtle, but prevalent, gender bias in the portrayal of modern film characters and provide insights that deviate from the well-known Bechdel test. Our contributions include an extended lexicon of connotation frames along with a web interface that provides a comprehensive analysis through the lens of connotation frames.",
"title": ""
},
{
"docid": "f5bc721d2b63912307c4ad04fb78dd2c",
"text": "When women perform math, unlike men, they risk being judged by the negative stereotype that women have weaker math ability. We call this predicament st reotype threat and hypothesize that the apprehension it causes may disrupt women’s math performance. In Study 1 we demonstrated that the pattern observed in the literature that women underperform on difficult (but not easy) math tests was observed among a highly selected sample of men and women. In Study 2 we demonstrated that this difference in performance could be eliminated when we lowered stereotype threat by describing the test as not producing gender differences. However, when the test was described as producing gender differences and stereotype threat was high, women performed substantially worse than equally qualified men did. A third experiment replicated this finding with a less highly selected population and explored the mediation of the effect. The implication that stereotype threat may underlie gender differences in advanced math performance, even",
"title": ""
}
] |
[
{
"docid": "38297fe227780c10979988c648dc7574",
"text": "Homomorphic signal processing techniques are used to place information imperceivably into audio data streams by the introduction of synthetic resonances in the form of closely spaced echoes These echoes can be used to place digital identi cation tags directly into an audio signal with minimal objectionable degradation of the original signal",
"title": ""
},
{
"docid": "7f5ff39232cd491e648d40b070e0709c",
"text": "Synthesizing terrain or adding detail to terrains manually is a long and tedious process. With procedural synthesis methods this process is faster but more difficult to control. This paper presents a new technique of terrain synthesis that uses an existing terrain to synthesize new terrain. To do this we use multi-resolution analysis to extract the high-resolution details from existing models and apply them to increase the resolution of terrain. Our synthesized terrains are more heterogeneous than procedural results, are superior to terrains created by texture transfer, and retain the large-scale characteristics of the original terrain.",
"title": ""
},
{
"docid": "8075cc962ce18cea46a8df4396512aa5",
"text": "In the last few years, neural representation learning approaches have achieved very good performance on many natural language processing tasks, such as language modelling and machine translation. This suggests that neural models will also achieve good performance on information retrieval (IR) tasks, such as relevance ranking, addressing the query-document vocabulary mismatch problem by using a semantic rather than lexical matching. Although initial iterations of neural models do not outperform traditional lexical-matching baselines, the level of interest and effort in this area is increasing, potentially leading to a breakthrough. The popularity of the recent SIGIR 2016 workshop on Neural Information Retrieval provides evidence to the growing interest in neural models for IR. While recent tutorials have covered some aspects of deep learning for retrieval tasks, there is a significant scope for organizing a tutorial that focuses on the fundamentals of representation learning for text retrieval. The goal of this tutorial will be to introduce state-of-the-art neural embedding models and bridge the gap between these neural models with early representation learning approaches in IR (e.g., LSA). We will discuss some of the key challenges and insights in making these models work in practice, and demonstrate one of the toolsets available to researchers interested in this area.",
"title": ""
},
{
"docid": "c7d69faeac74bcf85f28b2c61dab6af1",
"text": "STATEMENT OF THE PROBLEM Thoracic trauma is a notable cause of morbidity and mortality in American trauma centers, where 25% of traumatic deaths are related to injuries sustained within the thoracic cage.1 Chest injuries occur in 60% of polytrauma cases; therefore, a rough estimate of the occurrence of hemothorax related to trauma in the United States approaches 300,000 cases per year.2 The management of hemothorax and pneumothorax has been a complex problem since it was first described over 200 years ago. Although the majority of chest trauma can be managed nonoperatively, there are several questions surrounding the management of hemothorax and occult pneumothorax that are not as easily answered. The technologic advances have raised the question of what to do with incidentally found hemothorax and pneumothorax discovered during the trauma evaluation. Previously, we were limited by our ability to visualize quantities 500 mL of blood on chest radiograph. Now that smaller volumes of blood can be visualized via chest computed tomography (CT), the management of these findings presents interesting clinical questions. In addition to early identification of these processes, these patients often find themselves with late complications such as retained hemothorax and empyema. The approach to these complex problems continues to evolve. Finally, as minimally invasive surgery grows and finds new applications, there are reproducible benefits to the patients in pursuing these interventions as both a diagnostic and therapeutic interventions. Video-assisted thoracoscopic surgery (VATS) has a growing role in the management of trauma patients.",
"title": ""
},
{
"docid": "bf6ec95013dd55e1514bb4e260b15c80",
"text": "Binary classifiers are routinely evaluated with performance measures such as sensitivity and specificity, and performance is frequently illustrated with Receiver Operating Characteristics (ROC) plots. Alternative measures such as positive predictive value (PPV) and the associated Precision/Recall (PRC) plots are used less frequently. Many bioinformatics studies develop and evaluate classifiers that are to be applied to strongly imbalanced datasets in which the number of negatives outweighs the number of positives significantly. While ROC plots are visually appealing and provide an overview of a classifier's performance across a wide range of specificities, one can ask whether ROC plots could be misleading when applied in imbalanced classification scenarios. We show here that the visual interpretability of ROC plots in the context of imbalanced datasets can be deceptive with respect to conclusions about the reliability of classification performance, owing to an intuitive but wrong interpretation of specificity. PRC plots, on the other hand, can provide the viewer with an accurate prediction of future classification performance due to the fact that they evaluate the fraction of true positives among positive predictions. Our findings have potential implications for the interpretation of a large number of studies that use ROC plots on imbalanced datasets.",
"title": ""
},
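The point made in the passage above, that ROC summaries can look reassuring on strongly imbalanced data while precision-based summaries expose unreliable positive predictions, can be reproduced with a small experiment. The sketch below uses scikit-learn on synthetic data; the classifier and class ratio are placeholders, not the study's setup.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic, strongly imbalanced problem (about 1% positives).
X, y = make_classification(n_samples=20000, weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# ROC AUC is dominated by the abundant negatives, so it can look high even
# when many positive predictions are false alarms; average precision
# (area under the precision-recall curve) makes that cost visible.
print("ROC AUC:", roc_auc_score(y_te, scores))
print("PR  AUC:", average_precision_score(y_te, scores))
```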
{
"docid": "5297929e65e662360d8ff262e877b08a",
"text": "Frontal electroencephalographic (EEG) alpha asymmetry is widely researched in studies of emotion, motivation, and psychopathology, yet it is a metric that has been quantified and analyzed using diverse procedures, and diversity in procedures muddles cross-study interpretation. The aim of this article is to provide an updated tutorial for EEG alpha asymmetry recording, processing, analysis, and interpretation, with an eye towards improving consistency of results across studies. First, a brief background in alpha asymmetry findings is provided. Then, some guidelines for recording, processing, and analyzing alpha asymmetry are presented with an emphasis on the creation of asymmetry scores, referencing choices, and artifact removal. Processing steps are explained in detail, and references to MATLAB-based toolboxes that are helpful for creating and investigating alpha asymmetry are noted. Then, conceptual challenges and interpretative issues are reviewed, including a discussion of alpha asymmetry as a mediator/moderator of emotion and psychopathology. Finally, the effects of two automated component-based artifact correction algorithms-MARA and ADJUST-on frontal alpha asymmetry are evaluated.",
"title": ""
},
{
"docid": "69eac200c7ef5e656e9fb28c13efa9b6",
"text": "A differential RF-DC CMOS converter for RF energy scavenging based on a reconfigurable voltage rectifier topology is presented. The converter efficiency and sensitivity are optimized thanks to the proposed reconfigurable architecture. Prototypes, realized in 130 nm, provide a regulated output voltage of ~2 V when working at 868 MHz, with a -21 dBm sensitivity. The circuit efficiency peaks at 60%, remaining above the 40% for a 18 dB input power range.",
"title": ""
},
{
"docid": "802d66fda1701252d1addbd6d23f6b4c",
"text": "Powered wheelchair users often struggle to drive safely and effectively and, in more critical cases, can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists users as and when they require help. The system uses a multiple-hypothesis method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but also, perhaps more importantly, characterize the user performance in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions while driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely but also they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.",
"title": ""
},
{
"docid": "4b0fcab3e9599f24cae499a4a2cbbd55",
"text": "In June 2016, Apple made a bold announcement that it will deploy local differential privacy for some of their user data collection in order to ensure privacy of user data, even from Apple [21, 23]. The details of Apple’s approach remained sparse. Although several patents [17–19] have since appeared hinting at the algorithms that may be used to achieve differential privacy, they did not include a precise explanation of the approach taken to privacy parameter choice. Such choice and the overall approach to privacy budget use and management are key questions for understanding the privacy protections provided by any deployment of differential privacy. In this work, through a combination of experiments, static and dynamic code analysis of macOS Sierra (Version 10.12) implementation, we shed light on the choices Apple made for privacy budget management. We discover and describe Apple’s set-up for differentially private data processing, including the overall data pipeline, the parameters used for differentially private perturbation of each piece of data, and the frequency with which such data is sent to Apple’s servers. We find that although Apple’s deployment ensures that the (differential) privacy loss per each datum submitted to its servers is 1 or 2, the overall privacy loss permitted by the system is significantly higher, as high as 16 per day for the four initially announced applications of Emojis, New words, Deeplinks and Lookup Hints [21]. Furthermore, Apple renews the privacy budget available every day, which leads to a possible privacy loss of 16 times the number of days since user opt-in to differentially private data collection for those four applications. We applaud Apple’s deployment of differential privacy for its bold demonstration of feasibility of innovation while guaranteeing rigorous privacy. However, we argue that in order to claim the full benefits of differentially private data collection, Apple must give full transparency of its implementation and privacy loss choices, enable user choice in areas related to privacy loss, and set meaningful defaults on the daily and device lifetime privacy loss permitted. ACM Reference Format: Jun Tang, Aleksandra Korolova, Xiaolong Bai, XueqiangWang, and Xiaofeng Wang. 2017. Privacy Loss in Apple’s Implementation of Differential Privacy",
"title": ""
},
{
"docid": "cc124a93db48348e37aacac87081e3d4",
"text": "The design of an ultra-wideband crossover for use in printed microwave circuits is presented. It employs a pair of broadside-coupled microstrip-to-coplanar waveguide (CPW) transitions, and a pair of uniplanar microstrip-to-CPW transitions. A lumped-element equivalent circuit is used to explain the operation of the proposed crossover. Its performance is evaluated via full-wave electromagnetic simulations and measurements. The designed device is constructed on a single substrate, and thus, it is fully compatible with microstrip-based microwave circuits. The crossover is shown to operate across the frequency band from 3.1 to 11 GHz with more than 15 dB of isolation, less than 1 dB of insertion loss, and less than 0.1 ns of deviation in the group delay.",
"title": ""
},
{
"docid": "1ac4ac9b112c2554db37de2070d7c2df",
"text": "This paper studies empirically the effect of sampling and threshold-moving in training cost-sensitive neural networks. Both oversampling and undersampling are considered. These techniques modify the distribution of the training data such that the costs of the examples are conveyed explicitly by the appearances of the examples. Threshold-moving tries to move the output threshold toward inexpensive classes such that examples with higher costs become harder to be misclassified. Moreover, hard-ensemble and soft-ensemble, i.e., the combination of above techniques via hard or soft voting schemes, are also tested. Twenty-one UCl data sets with three types of cost matrices and a real-world cost-sensitive data set are used in the empirical study. The results suggest that cost-sensitive learning with multiclass tasks is more difficult than with two-class tasks, and a higher degree of class imbalance may increase the difficulty. It also reveals that almost all the techniques are effective on two-class tasks, while most are ineffective and even may cause negative effect on multiclass tasks. Overall, threshold-moving and soft-ensemble are relatively good choices in training cost-sensitive neural networks. The empirical study also suggests that some methods that have been believed to be effective in addressing the class imbalance problem may, in fact, only be effective on learning with imbalanced two-class data sets.",
"title": ""
},
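Threshold-moving, as described in the passage above, amounts to choosing the class with the lowest expected cost given the network's class-probability outputs. The sketch below illustrates the idea; the cost matrix and probabilities are made-up examples, not values from the study.

```python
import numpy as np

def cost_sensitive_predict(probs, cost):
    """Threshold-moving for cost-sensitive classification.

    probs: (n_samples, n_classes) predicted class probabilities.
    cost:  (n_classes, n_classes) matrix where cost[true, pred] is the cost
           of predicting `pred` when the true class is `true`.
    Picks the class with the lowest expected misclassification cost, which
    shifts the effective decision threshold toward classes that are
    expensive to miss.
    """
    expected_cost = probs @ cost          # shape (n_samples, n_classes)
    return expected_cost.argmin(axis=1)

# Example: missing class 1 (a false negative) costs 10x a false positive.
probs = np.array([[0.85, 0.15]])
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])
print(cost_sensitive_predict(probs, cost))  # -> [1], even though p(1) is only 0.15
```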
{
"docid": "bb19e6b00fca27c455316f09a626407c",
"text": "On the basis of the most recent epidemiologic research, Autism Spectrum Disorder (ASD) affects approximately 1% to 2% of all children. (1)(2) On the basis of some research evidence and consensus, the Modified Checklist for Autism in Toddlers isa helpful tool to screen for autism in children between ages 16 and 30 months. (11) The Diagnostic Statistical Manual of Mental Disorders, Fourth Edition, changes to a 2-symptom category from a 3-symptom category in the Diagnostic Statistical Manual of Mental Disorders, Fifth Edition(DSM-5): deficits in social communication and social interaction are combined with repetitive and restrictive behaviors, and more criteria are required per category. The DSM-5 subsumes all the previous diagnoses of autism (classic autism, Asperger syndrome, and pervasive developmental disorder not otherwise specified) into just ASDs. On the basis of moderate to strong evidence, the use of applied behavioral analysis and intensive behavioral programs has a beneficial effect on language and the core deficits of children with autism. (16) Currently, minimal or no evidence is available to endorse most complementary and alternative medicine therapies used by parents, such as dietary changes (gluten free), vitamins, chelation, and hyperbaric oxygen. (16) On the basis of consensus and some studies, pediatric clinicians should improve their capacity to provide children with ASD a medical home that is accessible and provides family-centered, continuous, comprehensive and coordinated, compassionate, and culturally sensitive care. (20)",
"title": ""
},
{
"docid": "a39fb4e8c15878ba4fdac54f02451789",
"text": "The Cloud computing system can be easily threatened by various attacks, because most of the cloud computing systems provide service to so many people who are not proven to be trustworthy. Due to their distributed nature, cloud computing environment are easy targets for intruders[1]. There are various Intrusion Detection Systems having various specifications to each. Cloud computing have two approaches i. e. Knowledge-based IDS and Behavior-Based IDS to detect intrusions in cloud computing. Behavior-Based IDS assumes that an intrusion can be detected by observing a deviation from normal to expected behavior of the system or user[2]s. Knowledge-based IDS techniques apply knowledge",
"title": ""
},
{
"docid": "0134e8cb0f5043a2a7ce281bca6399f2",
"text": "In recent years, internet revolution resulted in an explosive growth in multimedia applications. The rapid advancement of internet has made it easier to send the data/image accurate and faster to the destination. Besides this, it is easier to modify and misuse the valuable information through hacking at the same time. Digital watermarking is one of the proposed solutions for copyright protection of multimedia data. A watermark is a form, image or text that is impressed onto paper, which provides evidence of its authenticity. In this paper an invisible watermarking technique (least significant bit) and a visible watermarking technique is implemented. This paper presents the general overview of image watermarking and different security issues. Various attacks are also performed on watermarked images and their impact on quality of images is also studied. In this paper, Image Watermarking using Least Significant Bit (LSB) algorithm has been used for embedding the message/logo into the image. This work has been implemented through MATLAB.",
"title": ""
},
{
"docid": "dfc03b016fe0b920479e335cef71a6ab",
"text": "Rowland Atkinson and John Flint are researchers at the Department of Urban Studies, University of Glasgow. Both have an interest in the spatial distribution and experience of social exclusion and have been commissioned to devise a methodology for tracing residents who leave regeneration areas in Scotland. •In its simplest formulation snowball sampling consists of identifying respondents who are then used to refer researchers on to other respondents. •Snowball sampling contradicts many of the assumptions underpinning conventional notions of sampling but has a number of advantages for sampling University of Surrey Sociology at Surrey",
"title": ""
},
{
"docid": "703caecb3069fa0d8718dd853f47788a",
"text": "Cloud computing is a newly emerging distributed computing which is evolved from Grid computing. Task scheduling is the core research of cloud computing which studies how to allocate the tasks among the physical nodes so that the tasks can get a balanced allocation or each task’s execution cost decreases to the minimum or the overall system performance is optimal. Unlike the previous task slices’ sequential execution of an independent task in the model of which the target is processing time, we build a model that targets at the response time, in which the task slices are executed in parallel. Then we give its solution with a method based on an improved adjusting entropy function. At last, we design a new task scheduling algorithm. Experimental results show that the response time of our proposed algorithm is much lower than the game-theoretic algorithm and balanced scheduling algorithm and compared with the balanced scheduling algorithm, game-theoretic algorithm is not necessarily superior in parallel although its objective function value is better.",
"title": ""
},
{
"docid": "c7e17d88cebf76f16434c9acbd492e5e",
"text": "Most recent studies on coreference resolution advocate accurate yet relatively complex models, relying on, for example, entitymention or graph-based representations. As it has been convincingly demonstrated at the recent CoNLL 2012 shared task, such algorithms considerably outperform popular basic approaches, in particular mention-pair models. This study advocates a novel approach that keeps the simplicity of a mention-pair framework, while showing state-of-the-art results. Apart from being very efficient and straightforward to implement, our model facilitates experimental work on the pairwise classifier, in particular on feature engineering. The proposed model achieves the performance level of up to 61.82% (MELA F, v4 scorer) on the CoNLL test data, on par with complex state-of-the-art systems.",
"title": ""
},
{
"docid": "a446793baa99390a00ea58e799fbf6e3",
"text": "A survey has been carried out to study the occurrence and distribution of Trichodorus primitivus, T. sparsus and T. viruliferus in the Czech Republic under the rhizosphere of orchards, forests, vineyards and strawberry. Total 208 sites were surveyed and only 29 sites were found positive for these species. All three species are reported in the Czech Republic for the first time.",
"title": ""
},
{
"docid": "7f06370a81e7749970cd0359c5b5f993",
"text": "The use of virtualization technologies in high performance computing (HPC) environments has traditionally been avoided due to their inherent performance overhead. However, with the rise of container-based virtualization implementations, such as Linux VServer, OpenVZ and Linux Containers (LXC), it is possible to obtain a very low overhead leading to near-native performance. In this work, we conducted a number of experiments in order to perform an in-depth performance evaluation of container-based virtualization for HPC. We also evaluated the trade-off between performance and isolation in container-based virtualization systems and compared them with Xen, which is a representative of the traditional hypervisor-based virtualization systems used today.",
"title": ""
}
] |
scidocsrr
|
615937f2f08bc7992cf6445b8fcd46c3
|
Study of Wind Turbine Fault Diagnosis Based on Unscented Kalman Filter and SCADA Data
|
[
{
"docid": "1cc81fa2fbfc2a47eb07bb7ef969d657",
"text": "Wind Turbines (WT) are one of the fastest growing sources of power production in the world today and there is a constant need to reduce the costs of operating and maintaining them. Condition monitoring (CM) is a tool commonly employed for the early detection of faults/failures so as to minimise downtime and maximize productivity. This paper provides a review of the state-of-the-art in the CM of wind turbines, describing the different maintenance strategies, CM techniques and methods, and highlighting in a table the various combinations of these that have been reported in the literature. Future research opportunities in fault diagnostics are identified using a qualitative fault tree analysis. Crown Copyright 2012 Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "93a6c94a3ecb3fcaf363b07c077e5579",
"text": "The state-of-the-art advancement in wind turbine condition monitoring and fault diagnosis for the recent several years is reviewed. Since the existing surveys on wind turbine condition monitoring cover the literatures up to 2006, this review aims to report the most recent advances in the past three years, with primary focus on gearbox and bearing, rotor and blades, generator and power electronics, as well as system-wise turbine diagnosis. There are several major trends observed through the survey. Due to the variable-speed nature of wind turbine operation and the unsteady load involved, time-frequency analysis tools such as wavelets have been accepted as a key signal processing tool for such application. Acoustic emission has lately gained much more attention in order to detect incipient failures because of the low-speed operation for wind turbines. There has been an increasing trend of developing model based reasoning algorithms for fault detection and isolation as cost-effective approach for wind turbines as relatively complicated system. The impact of unsteady aerodynamic load on the robustness of diagnostic signatures has been notified. Decoupling the wind load from condition monitoring decision making will reduce the associated down-time cost.",
"title": ""
},
{
"docid": "fdc6de60d4564efc3b94b44873ecd179",
"text": "Fault detection and diagnosis is an important problem in process engineering. It is the central component of abnormal event management (AEM) which has attracted a lot of attention recently. AEM deals with the timely detection, diagnosis and correction of abnormal conditions of faults in a process. Early detection and diagnosis of process faults while the plant is still operating in a controllable region can help avoid abnormal event progression and reduce productivity loss. Since the petrochemical industries lose an estimated 20 billion dollars every year, they have rated AEM as their number one problem that needs to be solved. Hence, there is considerable interest in this field now from industrial practitioners as well as academic researchers, as opposed to a decade or so ago. There is an abundance of literature on process fault diagnosis ranging from analytical methods to artificial intelligence and statistical approaches. From a modelling perspective, there are methods that require accurate process models, semi-quantitative models, or qualitative models. At the other end of the spectrum, there are methods that do not assume any form of model information and rely only on historic process data. In addition, given the process knowledge, there are different search techniques that can be applied to perform diagnosis. Such a collection of bewildering array of methodologies and alternatives often poses a difficult challenge to any aspirant who is not a specialist in these techniques. Some of these ideas seem so far apart from one another that a non-expert researcher or practitioner is often left wondering about the suitability of a method for his or her diagnostic situation. While there have been some excellent reviews in this field in the past, they often focused on a particular branch, such as analytical models, of this broad discipline. The basic aim of this three part series of papers is to provide a systematic and comparative study of various diagnostic methods from different perspectives. We broadly classify fault diagnosis methods into three general categories and review them in three parts. They are quantitative model-based methods, qualitative model-based methods, and process history based methods. In the first part of the series, the problem of fault diagnosis is introduced and approaches based on quantitative models are reviewed. In the remaining two parts, methods based on qualitative models and process history data are reviewed. Furthermore, these disparate methods will be compared and evaluated based on a common set of criteria introduced in the first part of the series. We conclude the series with a discussion on the relationship of fault diagnosis to other process operations and on emerging trends such as hybrid blackboard-based frameworks for fault diagnosis. # 2002 Published by Elsevier Science Ltd.",
"title": ""
},
{
"docid": "e6fcb4594a371af53cd990b2c9ee9493",
"text": "High operations and maintenance costs for wind turbines reduce their overall cost effectiveness. One of the biggest drivers of maintenance cost is unscheduled maintenance due to unexpected failures. Continuous monitoring of wind turbine health using automated failure detection algorithms can improve turbine reliability and reduce maintenance costs by detecting failures before they reach a catastrophic stage and by eliminating unnecessary scheduled maintenance. A SCADAdata based condition monitoring system uses data already collected at the wind turbine controller. It is a cost-effective way to monitor wind turbines for early warning of failures and performance issues. In this paper, we describe our exploration of existing wind turbine SCADA data for development of fault detection and diagnostic techniques for wind turbines. We used a number of measurements to develop anomaly detection algorithms and investigated classification techniques using clustering algorithms and principal components analysis for capturing fault signatures. Anomalous signatures due to a reported gearbox failure are identified from a set of original measurements including rotor speeds and produced power. INTRODUCTION Among the challenges, noted in the DOE-issued report ‘20% Wind Energy by 2030’ [1], are improvement of wind turbine performance and reduction in operating and maintenance costs. After the capital costs of commissioning wind turbine generators, the biggest costs are operations, maintenance, and insurance [1-3]. Reducing maintenance and operating costs can considerably reduce the payback period and provide the impetus for investment and widespread acceptance of this clean energy source. Maintenance costs can be reduced through continuous, automated monitoring of wind turbines. Wind turbines often operate in severe, remote environments and require frequent scheduled maintenance. Unscheduled maintenance due to unexpected failures can be costly, not only for maintenance support but also for lost production time. In addition, as wind turbines age, parts fail, and power production performance degrades, maintenance costs increase as a percentage of production. Monitoring and data analysis enables conditionbased rather than time-interval-based maintenance and performance tune-ups. Experience from other industries shows that condition monitoring detects failures before they reach a catastrophic or secondary-damage stage, extends asset life, keeps assets working at initial capacity factors, enables better maintenance planning and logistics, and can reduce routine maintenance. Traditionally, condition monitoring systems for wind turbines have focused on the detection of failures in the main bearing, generator, and gearbox, some of the highest cost components on a wind turbine [4–6]. Two widely-used methods are vibration analysis and oil monitoring [4, 5, 7, 8]. These are standalone systems that require installation of sensors and hardware. A supervisory control and data acquisition (SCADA) -data based condition monitoring system uses data already being collected at the wind turbine controller and is a costeffective way to monitor for early warning of failures and performance issues. In this paper, we describe our exploration of existing wind turbine SCADA data for development of fault detection and diagnostic techniques. Our ultimate goal is to be able to use SCADA-recorded data to provide advance warning of failures or performance issues. 
For the work described here, we used data from the Controls Advanced Research Turbine 2 (CART2) at the National Wind Technology Center (NWTC) at the National Renewable Energy Laboratory (NREL). A number of measurements from the turbine are used to develop anomaly detection algorithms. Classification techniques such as clustering and principal components analysis were investigated",
"title": ""
}
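One common way to turn the PCA-based fault-signature idea in the passage above into a detector is to score new SCADA samples by their reconstruction error in a subspace fitted on healthy-operation data. The sketch below illustrates that approach and is not the authors' pipeline; the channel layout, component count, and threshold rule are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fit_pca_monitor(healthy, n_components=3):
    """Fit a scaler + PCA model on healthy-operation SCADA samples
    (rows = time steps, columns = channels such as rotor speed and power)."""
    scaler = StandardScaler().fit(healthy)
    pca = PCA(n_components=n_components).fit(scaler.transform(healthy))
    return scaler, pca

def anomaly_scores(scaler, pca, samples):
    """Squared reconstruction error per sample; large values flag operating
    points that do not fit the healthy-operation subspace."""
    z = scaler.transform(samples)
    recon = pca.inverse_transform(pca.transform(z))
    return ((z - recon) ** 2).sum(axis=1)

# Usage sketch (hypothetical arrays): raise an alarm when the score exceeds,
# say, the 99th percentile of scores observed during healthy operation.
# scaler, pca = fit_pca_monitor(healthy_scada)
# alarm = anomaly_scores(scaler, pca, new_scada) > threshold
```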
] |
[
{
"docid": "da4c173dc7dc9cec8c2ad507b4540192",
"text": "For Parkinson's disease (PD) detection at its best, early diagnosis of the disorder is the key criteria to be focused [5]. Tremors are one of the primary characteristics of PD [11][13]. The paper presents some of the effective biomechanical techniques that can aid in measuring PD tremors with a high grade of accuracy and sensitivity. The paper describes the application of salient tracking tools like magnetic trackers and optical markers, electrical diagnostic practices like Electromyogram (EMG) and the use of Microelectromechanical systems (MEMS) based inertial sensor modules(a combination of 3 axes accelerometer + 3 axes gyroscope) to record movement. The IMU and electrodes were suitably mounted on the hands of subjects who were made to perform certain pre-defined actions to record the wrist movement and electrical activity simultaneously. The ability of a subject to precisely follow a pre-defined task's motion trajectory is a crucial indicator of tremor. In this paper we study two techniques of trajectory measurement, in terms of the accuracy of measurement of the slightest deviations from a prescribed trajectory. It is shown that an IMU based motion tracking is more accurate than that tracked by EMG sensors.",
"title": ""
},
{
"docid": "de66a8238e9c71471ada4cf19ccfe15b",
"text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org. SUMMARY In this paper we investigate the out-of-sample forecasting ability of feedforward and recurrent neural networks based on empirical foreign exchange rate data. A two-step procedure is proposed to construct suitable networks, in which networks are selected based on the predictive stochastic complexity (PSC) criterion, and the selected networks are estimated using both recursive Newton algorithms and the method of nonlinear least squares. Our results show that PSC is a sensible criterion for selecting networks and for certain exchange rate series, some selected network models have significant market timing ability and/or significantly lower out-of-sample mean squared prediction error relative to the random walk model.",
"title": ""
},
{
"docid": "9ef71280f129a524c66c183a609bd764",
"text": "The present study investigated the role specific types of alcohol-related problems and life satisfaction play in predicting motivation to change alcohol use. Participants were 548 college students mandated to complete a brief intervention following an alcohol-related policy violation. Using hierarchical multiple regression, we tested for the presence of interaction and quadratic effects on baseline data collected prior to the intervention. A significant interaction indicated that the relationship between a respondent's personal consequences and his/her motivation to change differs depending upon the level of concurrent social consequences. Additionally quadratic effects for abuse/dependence symptoms and life satisfaction were found. The quadratic probes suggest that abuse/dependence symptoms and poor life satisfaction are both positively associated with motivation to change for a majority of the sample; however, the nature of these relationships changes for participants with more extreme scores. Results support the utility of using a multidimensional measure of alcohol related problems and assessing non-linear relationships when assessing predictors of motivation to change. The results also suggest that the best strategies for increasing motivation may vary depending on the types of alcohol-related problems and level of life satisfaction the student is experiencing and highlight potential directions for future research.",
"title": ""
},
{
"docid": "05f2a86b58758d2b9fbdbd4ecdde01b2",
"text": "In this paper we discuss the use of Data Mining to provide a solution to the problem of cross-sales. We define and analyse the cross-sales problem and develop a hybrid methodology to solve it, using characteristic rule discovery and deviation detection. Deviation detection is used as a measure of interest to filter out the less interesting characteristic roles and only retain the best characteristic rules discovered. The effect of domain knowledge on the interestingness value of the discovered rules is discussed and techniques for relining the knowledge to increase this interestingness measure are studied. We also investigate the use of externally procured lifestyle and other survey data for data enrichment and discuss its use as additional domain knowledge. The developed methodology has been applied to a real world cross-sales problem within the financial sector, and the results are also presented in this paper. Although the application described is in the financial sector, the methodology is generic in nature and can be applied to other sectors. © 1998 Elsevier Science B.V. All rights reserved. Kevwords: Cross-sales: Data Mining; Characteristic rule discovery: Deviation detection",
"title": ""
},
{
"docid": "13d7abc974d44c8c3723c3b9c8534fec",
"text": "We propose a novel approach to automatically produce multiple colorized versions of a grayscale image. Our method results from the observation that the task of automated colorization is relatively easy given a low-resolution version of the color image. We first train a conditional PixelCNN to generate a low resolution color for a given grayscale image. Then, given the generated low-resolution color image and the original grayscale image as inputs, we train a second CNN to generate a high-resolution colorization of an image. We demonstrate that our approach produces more diverse and plausible colorizations than existing methods, as judged by human raters in a ”Visual Turing Test”.",
"title": ""
},
{
"docid": "e50ce59ede6ad5c7a89309aed6aa06aa",
"text": "In this paper, we discuss our ongoing efforts to construct a scientific paper browsing system that helps users to read and understand advanced technical content distributed in PDF. Since PDF is a format specifically designed for printing, layout and logical structures of documents are indistinguishably embedded in the file. It requires much effort to extract natural language text from PDF files, and reversely, display semantic annotations produced by NLP tools on the original page layout. In our browsing system, we tackle these issues caused by the gap between printable document and plain text. Our system provides ways to extract natural language sentences from PDF files together with their logical structures, and also to map arbitrary textual spans to their corresponding regions on page images. We setup a demonstration system using papers published in ACL anthology and demonstrate the enhanced search and refined recommendation functions which we plan to make widely available to NLP researchers.",
"title": ""
},
{
"docid": "71817d7adba74a7804767a5bc74e2d81",
"text": "We propose a novel 3D integration method, called Vertical integration after Stacking (ViaS) process. The process enables 3D integration at significantly low cost, since it eliminates costly processing steps such as chemical vapor deposition used to form inorganic insulator layers and Cu plating used for via filling of vertical conductors. Furthermore, the technique does not require chemical-mechanical polishing (CMP) nor temporary bonding to handle thin wafers. The integration technique consists of forming through silicon via (TSV) holes in pre-multi-stacked wafers (> 2 wafers) which have no initial vertical electrical interconnections, followed by insulation of holes by polymer coating and via filling by molten metal injection. In the technique, multiple wafers are etched at once to form TSV holes followed by coating of the holes by conformal thin polymer layers. Finally the holes are filled by using molten metal injection so that a formation of interlayer connections of arbitrary choice is possible. In this paper, we demonstrate 3-chip-stacked test vehicle with 50 × 50 μm-square TSVs assembled by using this technique.",
"title": ""
},
{
"docid": "10b9516ef7302db13dcf46e038b3f744",
"text": "A new fake iris detection method based on 3D feature of iris pattern is proposed. In pervious researches, they did not consider 3D structure of iris pattern, but only used 2D features of iris image. However, in our method, by using four near infra-red (NIR) illuminators attached on the left and right sides of iris camera, we could obtain the iris image in which the 3D structure of iris pattern could be shown distinctively. Based on that, we could determine the live or fake iris by wavelet analysis of the 3D feature of iris pattern. Experimental result showed that the Equal Error Rate (EER) of determining the live or fake iris was 0.33%. VC 2010 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 20, 162–166, 2010; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20227",
"title": ""
},
{
"docid": "b02b09e54aa574c6ce3a87356a3e9beb",
"text": "As we move to higher data rates, the performance of clock and data recovery (CDR) circuits becomes increasingly important in maintaining low bit error rates (BER) in wireline links. Digital CDRs are popular in part for their robustness, but their use of bang-bang phase detectors (BB-PD) makes their performance sensitive to changes in jitter caused by PVT variations, crosstalk or power supply noise. This is because the gain of a BB-PD depends on the CDR input jitter, causing the loop gain of the CDR to change if the jitter magnitude or spectrum varies. This problem is illustrated in Fig. 6.7.1 where small jitter leads to excessive loop gain and hence to an underdamped behaviour in the CDR jitter tolerance (JTOL), while large jitter leads to insufficient loop gain and hence to low overall JTOL. To prevent this, we propose a CDR with an adaptive loop gain, KG, as shown in Fig. 6.7.1.",
"title": ""
},
{
"docid": "13ecd39b2b49fb108ed03e28e8a0578b",
"text": "Optional stopping refers to the practice of peeking at data and then, based on the results, deciding whether or not to continue an experiment. In the context of ordinary significance-testing analysis, optional stopping is discouraged, because it necessarily leads to increased type I error rates over nominal values. This article addresses whether optional stopping is problematic for Bayesian inference with Bayes factors. Statisticians who developed Bayesian methods thought not, but this wisdom has been challenged by recent simulation results of Yu, Sprenger, Thomas, and Dougherty (2013) and Sanborn and Hills (2013). In this article, I show through simulation that the interpretation of Bayesian quantities does not depend on the stopping rule. Researchers using Bayesian methods may employ optional stopping in their own research and may provide Bayesian analysis of secondary data regardless of the employed stopping rule. I emphasize here the proper interpretation of Bayesian quantities as measures of subjective belief on theoretical positions, the difference between frequentist and Bayesian interpretations, and the difficulty of using frequentist intuition to conceptualize the Bayesian approach.",
"title": ""
},
{
"docid": "b3a7b289cd54ef0d8a8c175c40449577",
"text": "Global Internet threats have undergone a profound transformation from attacks designed solely to disable infrastructure to those that also target people and organizations. At the center of many of these attacks are collections of compromised computers, or Botnets, remotely controlled by the attackers, and whose members are located in homes, schools, businesses, and governments around the world [6]. In this survey paper we provide a brief look at how existing botnet research, the evolution and future of botnets, as well as the goals and visibility of today’s networks intersect to inform the field of botnet technology and defense.",
"title": ""
},
{
"docid": "3256b2050c603ca16659384a0e98a22c",
"text": "In this paper, we propose a Hough transform-based method to identify low-contrast defects in unevenly illuminated images, and especially focus on the inspection of mura defects in liquid crystal display (LCD) panels. The proposed method works on 1-D gray-level profiles in the horizontal and vertical directions of the surface image. A point distinctly deviated from the ideal line of a profile can be identified as a defect one. A 1-D gray-level profile in the unevenly illuminated image results in a nonstationary line signal. The most commonly used technique for straight line detection in a noisy image is Hough transform (HT). The standard HT requires a sufficient number of points lie exactly on the same straight line at a given parameter resolution so that the accumulator will show a distinct peak in the parameter space. It fails to detect a line in a nonstationary signal. In the proposed HT scheme, the points that contribute to the vote do not have to lie on a line. Instead, a distance tolerance to the line sought is first given. Any point with the distance to the line falls within the tolerance will be accumulated by taking the distance as the voting weight. A fast search procedure to tighten the possible ranges of line parameters is also proposed for mura detection in LCD images.",
"title": ""
},
{
"docid": "30decb72388cd024661c552670a28b11",
"text": "The increasing volume and unstructured nature of data available on the World Wide Web (WWW) makes information retrieval a tedious and mechanical task. Lots of this information is not semantic driven, and hence not machine process able, but its only in human readable form. The WWW is designed to builds up a source of reference for web of meaning. Ontology information on different subjects spread globally is made available at one place. The Semantic Web (SW), moreover as an extension of WWW is designed to build as a foundation of vocabularies and effective communication of Semantics. The promising area of Semantic Web is logical and lexical semantics. Ontology plays a major role to represent information more meaningfully for humans and machines for its later effective retrieval. This paper constitutes the requisite with a unique approach for a representation and reasoning with ontology for semantic analysis of various type of document and also surveys multiple approaches for ontology learning that enables reasoning with uncertain, incomplete and contradictory information in a domain context.",
"title": ""
},
{
"docid": "c4e3e580dc532e2e80c54da698005619",
"text": "Proximity search on heterogeneous graphs aims to measure the proximity between two nodes on a graph w.r.t. some semantic relation for ranking. Pioneer work often tries to measure such proximity by paths connecting the two nodes. However, paths as linear sequences have limited expressiveness for the complex network connections. In this paper, we explore a more expressive DAG (directed acyclic graph) data structure for modeling the connections between two nodes. Particularly, we are interested in learning a representation for the DAGs to encode the proximity between two nodes. We face two challenges to use DAGs, including how to efficiently generate DAGs and how to effectively learn DAG embedding for proximity search. We find distance-awareness as important for proximity search and the key to solve the above challenges. Thus we develop a novel Distance-aware DAG Embedding (D2AGE) model. We evaluate D2AGE on three benchmark data sets with six semantic relations, and we show that D2AGE outperforms the state-of-the-art baselines. We release the code on https://github.com/shuaiOKshuai.",
"title": ""
},
{
"docid": "1ae735b903b6d2bfae8a304544342064",
"text": "Deep neural networks have achieved significant success for image recognition problems. Despite the wide success, recent experiments demonstrated that neural networks are sensitive to small input perturbations, or adversarial noise. The lack of robustness is intuitively undesirable and limits neural networks applications in adversarial settings, and for image search and retrieval problems. Current approaches consider augmenting training dataset using adversarial examples to improve robustness. However, when using data augmentation, the model fails to anticipate changes in an adversary. In this paper, we consider maximizing the geometric margin of the classifier. Intuitively, a large margin relates to classifier robustness. We introduce novel margin maximization objective for deep neural networks. We theoretically show that the proposed objective is equivalent to the robust optimization problem for a neural network. Our work seamlessly generalizes SVM margin objective to deep neural networks. In the experiments, we extensively verify the effectiveness of the proposed margin maximization objective to improve neural network robustness and to reduce overfitting on MNIST and CIFAR-10 dataset.",
"title": ""
},
{
"docid": "33ae678f51e12da626e3ff9542654630",
"text": "Input-output examples are a simple and accessible way of describing program behaviour. Program synthesis from input-output examples has the potential of extending the range of computational tasks achievable by end-users who have no programming knowledge, but can articulate their desired computations by describing input-output behaviour. In this paper, we present Escher, a generic and efficient algorithm that interacts with the user via input-output examples, and synthesizes recursive programs implementing intended behaviour. Escher is parameterized by the components (instructions) that can be used in the program, thus providing a generic synthesis algorithm that can be instantiated to suit different domains. To search through the space of programs, Escher adopts a novel search strategy that utilizes special data structures for inferring conditionals and synthesizing recursive procedures. Our experimental evaluation of Escher demonstrates its ability to efficiently synthesize a wide range of programs, manipulating integers, lists, and trees. Moreover, we show that Escher outperforms a state-ofthe-art SAT-based synthesis tool from the literature.",
"title": ""
},
{
"docid": "b08ea654e0d5ab7286013207a522a708",
"text": "Recent advances in sensing and computing technologies have inspired a new generation of data analysis and visualization systems for video surveillance applications. We present a novel visualization system for video surveillance based on an Augmented Virtual Environment (AVE) that fuses dynamic imagery with 3D models in a real-time display to help observers comprehend multiple streams of temporal data and imagery from arbitrary views of the scene. This paper focuses on our recent technical extensions to our AVE system, including moving object detection, tracking, and 3D display for effective dynamic event comprehension and situational awareness. Moving objects are detected and tracked in video sequences and visualized as pseudo-3D elements in the AVE scene display in real-time. We show results that illustrate the utility and benefits of these new capabilities.",
"title": ""
},
{
"docid": "14024a813302548d0bd695077185de1c",
"text": "In this paper, we propose an innovative touch-less palm print recognition system. This project is motivated by the public’s demand for non-invasive and hygienic biometric technology. For various reasons, users are concerned about touching the biometric scanners. Therefore, we propose to use a low-resolution web camera to capture the user’s hand at a distance for recognition. The users do not need to touch any device for their palm print to be acquired. A novel hand tracking and palm print region of interest (ROI) extraction technique are used to track and capture the user’s palm in real-time video stream. The discriminative palm print features are extracted based on a new method that applies local binary pattern (LBP) texture descriptor on the palm print directional gradient responses. Experiments show promising result using the proposed method. Performance can be further improved when a modified probabilistic neural network (PNN) is used for feature matching. Verification can be performed in less than one second in the proposed system. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3bcd11082fc70d52da15a5e087ab5375",
"text": "The problem of maximizing information diffusion through a network is a topic of considerable recent interest. A conventional problem is to select a set of any arbitrary k nodes as the initial influenced nodes so that they can effectively disseminate the information to the rest of the network. However, this model is usually unrealistic in online social networks since we cannot typically choose arbitrary nodes in the network as the initial influenced nodes. From the point of view of an individual user who wants to spread information as much as possible, a more reasonable model is to try to initially share the information with only some of its neighbours rather than a set of any arbitrary nodes; but how can these neighbours be effectively chosen? We empirically study how to design more effective neighbours selection strategies to maximize information diffusion. Our experimental results through intensive simulation on several real- world network topologies show that an effective neighbours selection strategy is to use node degree information for short-term propagation while a naive random selection is also adequate for long-term propagation to cover more than half of a network. We also discuss the effects of the number of initial activated neighbours. If we particularly select the highest degree nodes as initial activated neighbours, the number of initial activated neighbours is not an important factor at least for long-term propagation of information.",
"title": ""
},
{
"docid": "70b6779247f28ddc2e153c7bc159c98d",
"text": "Radio-frequency identification (RFID) is a wireless technology for automatic identification using electromagnetic fields in the radio frequency spectrum. In addition to the easy deployment and decreasing prices for tags, this technology has many advantages to bar codes and other common identification methods, such as no required line of sight and the ability to read several tags simultaneously. Therefore it enjoys large popularity among large businesses and continues to spread in the consumer market. Common applications include the fields of electronic article surveillance, access control, tracking, and identification of objects and animals. This paper introduces RFID technology, analyzes modern applications, and tries to point out strengths and weaknesses of RFID systems.",
"title": ""
}
] |
scidocsrr
|
ec884e833879f5686f1bac13bf82fa9c
|
Collective intelligence in law enforcement - The WikiCrimes system
|
[
{
"docid": "11bc0abc0aec11c1cf189eb23fd1be9d",
"text": "Web spamming describes behavior that attempts to deceive search engine’s ranking algorithms. TrustRank is a recent algorithm that can combat web spam by propagating trust among web pages. However, TrustRank propagates trust among web pages based on the number of outgoing links, which is also how PageRank propagates authority scores among Web pages. This type of propagation may be suited for propagating authority, but it is not optimal for calculating trust scores for demoting spam sites. In this paper, we propose several alternative methods to propagate trust on the web. With experiments on a real web data set, we show that these methods can greatly decrease the number of web spam sites within the top portion of the trust ranking. In addition, we investigate the possibility of propagating distrust among web pages. Experiments show that combining trust and distrust values can demote more spam sites than the sole use of trust values.",
"title": ""
}
] |
[
{
"docid": "c9cae26169a89ad8349889b3fd221d32",
"text": "Dense kernel matrices Θ ∈ RN×N obtained from point evaluations of a covariance function G at locations {xi}1≤i≤N arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green’s functions of elliptic boundary value problems and approximately equally spaced sampling points, we show how to identify a subset S ⊂ {1, . . . , N} × {1, . . . , N}, with #S = O(N log(N) log(N/ )), such that the zero fill-in incomplete Cholesky factorisation of Θi,j1(i,j)∈S is an -approximation of Θ. This blockfactorisation can provably be obtained in complexity O(N log(N) log(N/ )) in space and O(N log(N) log(N/ )) in time. The algorithm only needs to know the spatial configuration of the xi and does not require an analytic representation of G. Furthermore, an approximate PCA with optimal rate of convergence in the operator norm can be easily read off from this decomposition. Hence, by using only subsampling and the incomplete Cholesky decomposition, we obtain at nearly linear complexity the compression, inversion and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky decomposition we also obtain a solver for elliptic PDE with complexity O(N log(N) log(N/ )) in space and O(N log(N) log(N/ )) in time.",
"title": ""
},
{
"docid": "96ca10848805887b31c106c5550e1c48",
"text": "Introduction In the middle of the last century, Dolff [1], Paquin [2] and Zimmermann et al. [3] developed principles for ureteroneocystostomy after gynaecological ureter injuries. Turner-Warwick and Worth [4] adopted these techniques, named it the ‘Psoas Bladder-Hitch Procedure’ and applied this technique of ureteroneocystostomy for the treatment of distal ureteric obstruction, ureteric fistulas and ‘distended duplication’ of the upper urinary tract.",
"title": ""
},
{
"docid": "b4c8dc55e3e8978996f7db0319501a08",
"text": "We develop a robust multi-scale structure-aware neural network for human pose estimation. This method improves the recent deep conv-deconv hourglass models with four key improvements: (1) multiscale supervision to strengthen contextual feature learning in matching body keypoints by combining feature heatmaps across scales, (2) multiscale regression network at the end to globally optimize the structural matching of the multi-scale features, (3) structure-aware loss used in the intermediate supervision and at the regression to improve the matching of keypoints and respective neighbors to infer a higher-order matching configurations, and (4) a keypoint masking training scheme that can effectively fine-tune our network to robustly localize occluded keypoints via adjacent matches. Our method can effectively improve state-of-theart pose estimation methods that suffer from difficulties in scale varieties, occlusions, and complex multi-person scenarios. This multi-scale supervision tightly integrates with the regression network to effectively (i) localize keypoints using the ensemble of multi-scale features, and (ii) infer global pose configuration by maximizing structural consistencies across multiple keypoints and scales. The keypoint masking training enhances these advantages to focus learning on hard occlusion samples. Our method achieves the leading position in the MPII challenge leaderboard among the state-of-the-art methods.",
"title": ""
},
{
"docid": "e86ad4e9b61df587d9e9e96ab4eb3978",
"text": "This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.",
"title": ""
},
{
"docid": "beb8cb3566af719308c9ec249c955ff0",
"text": " Abstract—This article presents the review of the computing models applied for solving problems of midterm load forecasting. The load forecasting results can be used in electricity generation such as energy reservation and maintenance scheduling. Principle, strategy and results of short term, midterm, and long term load forecasting using statistic methods and artificial intelligence technology (AI) are summaried, Which, comparison between each method and the articles have difference feature input and strategy. The last, will get the idea or literature review conclusion to solve the problem of mid term load forecasting (MTLF).",
"title": ""
},
{
"docid": "b07978a3871f0ba26fd6d1eb568b1b0a",
"text": "This paper presents an intermodulation distortion measurement system based on automated feedforward cancellation that achieves 113 dB of broadband spurious-free dynamic range for discrete tone separations down to 100 Hz. For 1-Hz tone separation, the dynamic range is 106 dB, limited by carrier phase noise. A single-tone cancellation formula is developed requiring only the power of the probing signal and the power of the combined probe and cancellation signal so that the phase shift required for cancellation can be predicted. The technique is applied to a two-path feedforward cancellation system in a bridge configuration. The effects of reflected signals and of group delay on system performance is discussed. Spurious frequency content and interchannel coupling are analyzed with respect to system linearity. Feedforward cancellation and consideration of electromagnetic radiation coupling and reverse-wave isolation effects extends the dynamic range of spectrum and vector analyzers by at least 40 dB. Application of the technique to the measurement of correlated and uncorrelated nonlinear distortion of an amplified wideband code-division multiple-access signal is presented.",
"title": ""
},
{
"docid": "f0846b4e74110ed469704c4a24407cc6",
"text": "Presently, a very large number of public and private data sets are available from local governments. In most cases, they are not semantically interoperable and a huge human effort would be needed to create integrated ontologies and knowledge base for smart city. Smart City ontology is not yet standardized, and a lot of research work is needed to identify models that can easily support the data reconciliation, the management of the complexity, to allow the data reasoning. In this paper, a system for data ingestion and reconciliation of smart cities related aspects as road graph, services available on the roads, traffic sensors etc., is proposed. The system allows managing a big data volume of data coming from a variety of sources considering both static and dynamic data. These data are mapped to a smart-city ontology, called KM4City (Knowledge Model for City), and stored into an RDF-Store where they are available for applications via SPARQL queries to provide new services to the users via specific applications of public administration and enterprises. The paper presents the process adopted to produce the ontology and the big data architecture for the knowledge base feeding on the basis of open and private data, and the mechanisms adopted for the data verification, reconciliation and validation. Some examples about the possible usage of the coherent big data knowledge base produced are also offered and are accessible from the RDF-store and related services. The article also presented the work performed about reconciliation algorithms and their comparative assessment and selection. & 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).",
"title": ""
},
{
"docid": "d5007c061227ec76a4e8ea795471db00",
"text": "The ramp loss is a robust but non-convex loss for classification. Compared with other non-convex losses, a local minimum of the ramp loss can be effectively found. The effectiveness of local search comes from the piecewise linearity of the ramp loss. Motivated by the fact that the `1-penalty is piecewise linear as well, the `1-penalty is applied for the ramp loss, resulting in a ramp loss linear programming support vector machine (rampLPSVM). The proposed ramp-LPSVM is a piecewise linear minimization problem and the related optimization techniques are applicable. Moreover, the `1-penalty can enhance the sparsity. In this paper, the corresponding misclassification error and convergence behavior are discussed. Generally, the ramp loss is a truncated hinge loss. Therefore ramp-LPSVM possesses some similar properties as hinge loss SVMs. A local minimization algorithm and a global search strategy are discussed. The good optimization capability of the proposed algorithms makes ramp-LPSVM perform well in numerical experiments: the result of rampLPSVM is more robust than that of hinge SVMs and is sparser than that of ramp-SVM, which consists of the ‖ · ‖K-penalty and the ramp loss.",
"title": ""
},
{
"docid": "484ddecc4ebcf33da0c3655034e47e37",
"text": "Determining the optimal thresholding for image segmentation has got more attention in recent years since it has many applications. There are several methods used to find the optimal thresholding values such as Otsu and Kapur based methods. These methods are suitable for bi-level thresholding case and they can be easily extended to the multilevel case, however, the process of determining the optimal thresholds in the case of multilevel thresholding is time-consuming. To avoid this problem, this paper examines the ability of two nature inspired algorithms namely: Whale Optimization Algorithm (WOA) and Moth-Flame Optimization (MFO) to determine the optimal multilevel thresholding for image segmentation. The MFO algorithm is inspired from the natural behavior of moths which have a special navigation style at night since they fly using the moonlight, whereas, the WOA algorithm emulates the natural cooperative behaviors of whales. The candidate solutions in the adapted algorithms were created using the image histogram, and then they were updated based on the characteristics of each algorithm. The solutions are assessed using the Otsu’s fitness function during the optimization operation. The performance of the proposed algorithms has been evaluated using several of benchmark images and has been compared with five different swarm algorithms. The results have been analyzed based on the best fitness values, PSNR, and SSIM measures, as well as time complexity and the ANOVA test. The experimental results showed that the proposed methods outperformed the other swarm algorithms; in addition, the MFO showed better results than WOA, as well as provided a good balance between exploration and exploitation in all images at small and high threshold numbers. © 2017 Elsevier Ltd. All rights reserved. r t e p m a h o r b t K",
"title": ""
},
{
"docid": "5495ed83b98364af094efa735b391ff1",
"text": "In this review we integrate results of long term experimental study on ant ”language” and intelligence which were fully based on fundamental ideas of Information Theory, such as the Shannon entropy, the Kolmogorov complexity, and the Shannon’s equation connecting the length of a message (l) and its frequency (p), i.e. l = − log p for rational communication systems. This approach, new for studying biological communication systems, enabled us to obtain the following important results on ants’ communication and intelligence: i) to reveal ”distant homing” in ants, that is, their ability to transfer information about remote events; ii) to estimate the rate of information transmission; iii) to reveal that ants are able to grasp regularities and to use them for ”compression” of information; iv) to reveal that ants are able to transfer to each other the information about the number of objects; v) to discover that ants can add and subtract small numbers. The obtained results show that Information Theory is not only wonderful mathematical theory, but many its results may be considered as Nature laws.",
"title": ""
},
{
"docid": "78e21364224b9aa95f86ac31e38916ef",
"text": "Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web based businesses to increase user engagement. Some researchers suggest that it could also be used in web based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have made an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9f9493c695ca8ed62447f4ce1a0c4907",
"text": "Our focus in this research is on the use of deep learning approaches for human activity recognition (HAR) scenario, in which inputs are multichannel time series signals acquired from a set of body-worn inertial sensors and outputs are predefined human activities. Here, we present a feature learning method that deploys convolutional neural networks (CNN) to automate feature learning from the raw inputs in a systematic way. The influence of various important hyper-parameters such as number of convolutional layers and kernel size on the performance of CNN was monitored. Experimental results indicate that CNNs achieved significant speed-up in computing and deciding the final class and marginal improvement in overall classification accuracy compared to the baseline models such as Support Vector Machines and Multi-layer perceptron networks.",
"title": ""
},
{
"docid": "a6ff0eb31f2beb3fb7d8d731ca96da03",
"text": "The paper analyses the entry strategies of software firms that adopt the Open Source production model. A new definition of business model is proposed. Empirical evidence, based on an exploratory survey taken on 146 Italian software firms, shows that firms adapted to an environment dominated by incumbent standards by combining Open Source and proprietary software. The paper examines the determinants of business models and discusses the stability of hybrid models in the evolution of the industry.",
"title": ""
},
{
"docid": "0d020e98448f2413e271c70e2a321fb4",
"text": "Material classification is an important application in computer vision. The inherent property of materials to partially polarize the reflected light can serve as a tool to classify them. In this paper, a real-time polarization sensing CMOS image sensor using a wire grid polarizer is proposed. The image sensor consist of an array of 128 × 128 pixels, occupies an area of 5 × 4 mm2 and it has been designed and fabricated in a 180-nm CMOS process. We show that this image sensor can be used to differentiate between metal and dielectric surfaces in real-time due to the different nature in partially polarizing the specular and diffuse reflection components of the reflected light. This is achieved by calculating the Fresnel reflection coefficients, the degree of polarization and the variations in the maximum and minimum transmitted intensities for varying specular angle of incidence. Differences in the physical parameters for various metal surfaces result in different surface reflection behavior, influencing the Fresnel reflection coefficients. It is also shown that the image sensor can differentiate among various metals by sensing the change in the polarization Fresnel ratio.",
"title": ""
},
{
"docid": "60cac74e5feffb45f3b926ce2ec8b0b9",
"text": "Battery power is an important resource in ad hoc networks. It has been observed that in ad hoc networks, energy consumption does not reflect the communication activities in the network. Many existing energy conservation protocols based on electing a routing backbone for global connectivity are oblivious to traffic characteristics. In this paper, we propose an extensible on-demand power management framework for ad hoc networks that adapts to traffic load. Nodes maintain soft-state timers that determine power management transitions. By monitoring routing control messages and data transmission, these timers are set and refreshed on-demand. Nodes that are not involved in data delivery may go to sleep as supported by the MAC protocol. This soft state is aggregated across multiple flows and its maintenance requires no additional out-of-band messages. We implement a prototype of our framework in the ns-2 simulator that uses the IEEE 802.11 MAC protocol. Simulation studies using our scheme with the Dynamic Source Routing protocol show a reduction in energy consumption near 50% when compared to a network without power management under both long-lived CBR traffic and on-off traffic loads, with comparable throughput and latency. Preliminary results also show that it outperforms existing routing backbone election approaches.",
"title": ""
},
{
"docid": "2b6f95a75b116150311153fe0e55c11a",
"text": "Gene–gene interactions (GGIs) are important markers for determining susceptibility to a disease. Multifactor dimensionality reduction (MDR) is a popular algorithm for detecting GGIs and primarily adopts the correct classification rate (CCR) to assess the quality of a GGI. However, CCR measurement alone may not successfully detect certain GGIs because of potential model preferences and disease complexities. In this study, multiple-criteria decision analysis (MCDA) based on MDR was named MCDA-MDR and proposed for detecting GGIs. MCDA facilitates MDR to simultaneously adopt multiple measures within the two-way contingency table of MDR to assess GGIs; the CCR and rule utility measure were employed. Cross-validation consistency was adopted to determine the most favorable GGIs among the Pareto sets. Simulation studies were conducted to compare the detection success rates of the MDR-only-based measure and MCDA-MDR, revealing that MCDA-MDR had superior detection success rates. The Wellcome Trust Case Control Consortium dataset was analyzed using MCDA-MDR to detect GGIs associated with coronary artery disease, and MCDA-MDR successfully detected numerous significant GGIs (p < 0.001). MCDA-MDR performance assessment revealed that the applied MCDA successfully enhanced the GGI detection success rate of the MDR-based method compared with MDR alone.",
"title": ""
},
{
"docid": "e3d8ce945e727e8b31a764ffd226353b",
"text": "Epilepsy is a neurological disorder with prevalence of about 1-2% of the world’s population (Mormann, Andrzejak, Elger & Lehnertz, 2007). It is characterized by sudden recurrent and transient disturbances of perception or behaviour resulting from excessive synchronization of cortical neuronal networks; it is a neurological condition in which an individual experiences chronic abnormal bursts of electrical discharges in the brain. The hallmark of epilepsy is recurrent seizures termed \"epileptic seizures\". Epileptic seizures are divided by their clinical manifestation into partial or focal, generalized, unilateral and unclassified seizures (James, 1997; Tzallas, Tsipouras & Fotiadis, 2007a, 2009). Focal epileptic seizures involve only part of cerebral hemisphere and produce symptoms in corresponding parts of the body or in some related mental functions. Generalized epileptic seizures involve the entire brain and produce bilateral motor symptoms usually with loss of consciousness. Both types of epileptic seizures can occur at all ages. Generalized epileptic seizures can be subdivided into absence (petit mal) and tonic-clonic (grand mal) seizures (James, 1997).",
"title": ""
},
{
"docid": "e48941f23ee19ec4b26c4de409a84fe2",
"text": "Object recognition is challenging especially when the objects from different categories are visually similar to each other. In this paper, we present a novel joint dictionary learning (JDL) algorithm to exploit the visual correlation within a group of visually similar object categories for dictionary learning where a commonly shared dictionary and multiple category-specific dictionaries are accordingly modeled. To enhance the discrimination of the dictionaries, the dictionary learning problem is formulated as a joint optimization by adding a discriminative term on the principle of the Fisher discrimination criterion. As well as presenting the JDL model, a classification scheme is developed to better take advantage of the multiple dictionaries that have been trained. The effectiveness of the proposed algorithm has been evaluated on popular visual benchmarks.",
"title": ""
},
{
"docid": "5ac59d652605f728f65d474bcc53b8d7",
"text": "Research has revealed that the correlation between distance and RSSI (Received Signal Strength Indication) values is the key of ranging and localization technologies in wireless sensor networks (WSNs). In this paper, an RSSI model that estimates the distance between sensor nodes in WSNs is presented. The performance of this model is evaluated and analyzed in a real system deployment in an indoor and outdoor environment by performing an empirical measurement using Crossbow IRIS wireless sensor motes. Our result shows that there is less error in distance estimation in an outdoor environment compared to indoor environment. The results of these evaluations would contribute towards obtaining accurate locations of wireless sensor nodes.",
"title": ""
}
] |
scidocsrr
|
63d4e7f772e289140337e8befd22ba31
|
Big data, open government and e-government: Issues, policies and recommendations
|
[
{
"docid": "d9b19dd523fd28712df61384252d331c",
"text": "Purpose – The purpose of this paper is to examine the ways in which governments build social media and information and communication technologies (ICTs) into e-government transparency initiatives, to promote collaboration with members of the public and the ways in members of the public are able to employ the same social media to monitor government activities. Design/methodology/approach – This study used an iterative strategy that involved conducting a literature review, content analysis, and web site analysis, offering multiple perspectives on government transparency efforts, the role of ICTs and social media in these efforts, and the ability of e-government initiatives to foster collaborative transparency through embedded ICTs and social media. Findings – The paper identifies key initiatives, potential impacts, and future challenges for collaborative e-government as a means of transparency. Originality/value – The paper is one of the first to examine the interrelationships between ICTs, social media, and collaborative e-government to facilitate transparency.",
"title": ""
}
] |
[
{
"docid": "f9232e4a2d18a4cf6858b5739434273f",
"text": "Face spoofing detection (i.e. face anti-spoofing) is emerging as a new research area and has already attracted a good number of works during the past five years. This paper addresses for the first time the key problem of the variation in the input image quality and resolution in face anti-spoofing. In contrast to most existing works aiming at extracting multiscale descriptors from the original face images, we derive a new multiscale space to represent the face images before texture feature extraction. The new multiscale space representation is derived through multiscale filtering. Three multiscale filtering methods are considered including Gaussian scale space, Difference of Gaussian scale space and Multiscale Retinex. Extensive experiments on three challenging and publicly available face anti-spoofing databases demonstrate the effectiveness of our proposed multiscale space representation in improving the performance of face spoofing detection based on gray-scale and color texture descriptors.",
"title": ""
},
{
"docid": "da74e402f4542b6cbfb27f04c7640eb4",
"text": "Hand-built verb clusters such as the widely used Levin classes (Levin, 1993) have proved useful, but have limited coverage. Verb classes automatically induced from corpus data such as those from VerbKB (Wijaya, 2016), on the other hand, can give clusters with much larger coverage, and can be adapted to specific corpora such as Twitter. We present a method for clustering the outputs of VerbKB: verbs with their multiple argument types, e.g.“marry(person, person)”, “feel(person, emotion).” We make use of a novel lowdimensional embedding of verbs and their arguments to produce high quality clusters in which the same verb can be in different clusters depending on its argument type. The resulting verb clusters do a better job than hand-built clusters of predicting sarcasm, sentiment, and locus of control in tweets.",
"title": ""
},
{
"docid": "e1c298ea1c0a778a91e302202b8e1463",
"text": "Computational topology has recently seen an important development toward data analysis, giving birth to the field of topological data analysis. Topological persistence, or persistent homology, appears as a fundamental tool in this field. In this paper, we study topological persistence in general metric spaces, with a statistical approach. We show that the use of persistent homology can be naturally considered in general statistical frameworks and that persistence diagrams can be used as statistics with interesting convergence properties. Some numerical experiments are performed in various contexts to illustrate our results.",
"title": ""
},
{
"docid": "e99c12645fd14528a150f915b3849c2b",
"text": "Teaching in the cyberspace classroom requires moving beyond old models of. pedagogy into new practices that are more facilitative. It involves much more than simply taking old models of pedagogy and transferring them to a different medium. Unlike the face-to-face classroom, in online distance education, attention needs to be paid to the development of a sense of community within the group of participants in order for the learning process to be successful. The transition to the cyberspace classroom can be successfully achieved if attention is paid to several key areas. These include: ensuring access to and familiarity with the technology in use; establishing guidelines and procedures which are relatively loose and free-flowing, and generated with significant input from participants; striving to achieve maximum participation and \"buy-in\" from the participants; promoting collaborative learning; and creating a double or triple loop in the learning process to enable participants to reflect on their learning process. All of these practices significantly contribute to the development of an online learning community, a powerful tool for enhancing the learning experience. Each of these is reviewed in detail in the paper. (AEF) Reproductions supplied by EDRS are the best that can be made from the original document. Making the Transition: Helping Teachers to Teach Online Rena M. Palloff, Ph.D. Crossroads Consulting Group and The Fielding Institute Alameda, CA",
"title": ""
},
{
"docid": "27f3060ef96f1656148acd36d50f02ce",
"text": "Video sensors become particularly important in traffic applications mainly due to their fast response, easy installation, operation and maintenance, and their ability to monitor wide areas. Research in several fields of traffic applications has resulted in a wealth of video processing and analysis methods. Two of the most demanding and widely studied applications relate to traffic monitoring and automatic vehicle guidance. In general, systems developed for these areas must integrate, amongst their other tasks, the analysis of their static environment (automatic lane finding) and the detection of static or moving obstacles (object detection) within their space of interest. In this paper we present an overview of image processing and analysis tools used in these applications and we relate these tools with complete systems developed for specific traffic applications. More specifically, we categorize processing methods based on the intrinsic organization of their input data (feature-driven, area-driven, or model-based) and the domain of processing (spatial/frame or temporal/video). Furthermore, we discriminate between the cases of static and mobile camera. Based on this categorization of processing tools, we present representative systems that have been deployed for operation. Thus, the purpose of the paper is threefold. First, to classify image-processing methods used in traffic applications. Second, to provide the advantages and disadvantages of these algorithms. Third, from this integrated consideration, to attempt an evaluation of shortcomings and general needs in this field of active research. q 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "1de0fb2c19bf7a61ac2c89af49e3b386",
"text": "Many situations in human life present choices between (a) narrowly preferred particular alternatives and (b) narrowly less preferred (or aversive) particular alternatives that nevertheless form part of highly preferred abstract behavioral patterns. Such alternatives characterize problems of self-control. For example, at any given moment, a person may accept alcoholic drinks yet also prefer being sober to being drunk over the next few days. Other situations present choices between (a) alternatives beneficial to an individual and (b) alternatives that are less beneficial (or harmful) to the individual that would nevertheless be beneficial if chosen by many individuals. Such alternatives characterize problems of social cooperation; choices of the latter alternative are generally considered to be altruistic. Altruism, like self-control, is a valuable temporally-extended pattern of behavior. Like self-control, altruism may be learned and maintained over an individual's lifetime. It needs no special inherited mechanism. Individual acts of altruism, each of which may be of no benefit (or of possible harm) to the actor, may nevertheless be beneficial when repeated over time. However, because each selfish decision is individually preferred to each altruistic decision, people can benefit from altruistic behavior only when they are committed to an altruistic pattern of acts and refuse to make decisions on a case-by-case basis.",
"title": ""
},
{
"docid": "9cf470291ddde91679d8250797a740d2",
"text": "Decentralized blockchains offer attractive advantages over traditional payments such as the ability to operate without a trusted authority and increased user privacy. However, the verification of blockchain payments requires the user to download and process the entire chain which can be infeasible for resource-constrained devices, such as mobile phones. To address such concerns, most major blockchain systems support lightweight clients that outsource most of the computational and storage burden to full blockchain nodes. However, such payment verification methods leak considerable information about the underlying clients, thus defeating user privacy that is considered one of the main goals of decentralized cryptocurrencies. In this paper, we propose a new approach to protect the privacy of lightweight clients in blockchain systems like Bitcoin. Our main idea is to leverage commonly available trusted execution capabilities, such as SGX enclaves. We design and implement a system called Bite where enclaves on full nodes serve privacy-preserving requests from lightweight clients. As we will show, naive serving of client requests from within SGX enclaves still leaks user information. Bite therefore integrates several privacy preservation measures that address external leakage as well as SGX side-channels. We show that the resulting solution provides strong privacy protection and at the same time improves the performance of current lightweight clients.",
"title": ""
},
{
"docid": "ac6430e097fb5a7dc1f7864f283dcf47",
"text": "In the task of Object Recognition, there exists a dichotomy between the categorization of objects and estimating object pose, where the former necessitates a view-invariant representation, while the latter requires a representation capable of capturing pose information over different categories of objects. With the rise of deep architectures, the prime focus has been on object category recognition. Deep learning methods have achieved wide success in this task. In contrast, object pose regression using these approaches has received relatively much less attention. In this paper we show how deep architectures, specifically Convolutional Neural Networks (CNN), can be adapted to the task of simultaneous categorization and pose estimation of objects. We investigate and analyze the layers of various CNN models and extensively compare between them with the goal of discovering how the layers of distributed representations of CNNs represent object pose information and how this contradicts object category representations. We extensively experiment on two recent large and challenging multi-view datasets. Our models achieve better than state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "7db5807fc15aeb8dfe4669a8208a8978",
"text": "This document is an output from a project funded by the UK Department for International Development (DFID) for the benefit of developing countries. The views expressed are not necessarily those of DFID. Contents Contents i List of tables ii List of figures ii List of boxes ii Acronyms iii Acknowledgements iv Summary 1 1. Introduction: why worry about disasters? 7 Objectives of this Study 7 Global disaster trends 7 Why donors should be concerned 9 What donors can do 9 2. What makes a disaster? 11 Characteristics of a disaster 11 Disaster risk reduction 12 The diversity of hazards 12 Vulnerability and capacity, coping and adaptation 15 Resilience 16 Poverty and vulnerability: links and differences 16 'The disaster management cycle' 17 3. Why should disasters be a development concern? 19 3.1 Disasters hold back development 19 Disasters undermine efforts to achieve the Millennium Development Goals 19 Macroeconomic impacts of disasters 21 Reallocation of resources from development to emergency assistance 22 Disaster impact on communities and livelihoods 23 3.2 Disasters are rooted in development failures 25 Dominant development models and risk 25 Development can lead to disaster 26 Poorly planned attempts to reduce risk can make matters worse 29 Disaster responses can themselves exacerbate risk 30 3.3 'Disaster-proofing' development: what are the gains? 31 From 'vicious spirals' of failed development and disaster risk… 31 … to 'virtuous spirals' of risk reduction 32 Disaster risk reduction can help achieve the Millennium Development Goals 33 … and can be cost-effective 33 4. Why does development tend to overlook disaster risk? 36 4.1 Introduction 36 4.2 Incentive, institutional and funding structures 36 Political incentives and governance in disaster prone countries 36 Government-donor relations and moral hazard 37 Donors and multilateral agencies 38 NGOs 41 4.3 Lack of exposure to and information on disaster issues 41 4.4 Assumptions about the risk-reducing capacity of development 43 ii 5. Tools for better integrating disaster risk reduction into development 45 Introduction 45 Poverty Reduction Strategy Papers (PRSPs) 45 UN Development Assistance Frameworks (UNDAFs) 47 Country assistance plans 47 National Adaptation Programmes of Action (NAPAs) 48 Partnership agreements with implementing agencies and governments 49 Programme and project appraisal guidelines 49 Early warning and information systems 49 Risk transfer mechanisms 51 International initiatives and policy forums 51 Risk reduction performance targets and indicators for donors 52 6. Conclusions and recommendations 53 6.1 Main conclusions 53 6.2 Recommendations 54 Core recommendation …",
"title": ""
},
{
"docid": "3a12c19fce9d9fbde7fdb6afa161bb7e",
"text": "The accurate diagnosis of Alzheimer's disease (AD) is essential for patient care and will be increasingly important as disease modifying agents become available, early in the course of the disease. Although studies have applied machine learning methods for the computer-aided diagnosis of AD, a bottleneck in the diagnostic performance was shown in previous methods, due to the lacking of efficient strategies for representing neuroimaging biomarkers. In this study, we designed a novel diagnostic framework with deep learning architecture to aid the diagnosis of AD. This framework uses a zero-masking strategy for data fusion to extract complementary information from multiple data modalities. Compared to the previous state-of-the-art workflows, our method is capable of fusing multimodal neuroimaging features in one setting and has the potential to require less labeled data. A performance gain was achieved in both binary classification and multiclass classification of AD. The advantages and limitations of the proposed framework are discussed.",
"title": ""
},
{
"docid": "b37ecfa2f503035b91467aa7ce453869",
"text": "One of the important problems that our society faces is that people with disabilities are finding it hard to cope up with the fast growing technology. The access to communication technologies has become essential for the handicapped people. Generally deaf and dumb people use sign language for communication but they find difficulty in communicating with others who don’t understand sign language. Sign language is an expressive and natural way for communication between normal and dumb people (information majorly conveyed through the hand gesture). So, we need a translator to understand what they speak and communicate with us. The sign language translation system translates the normal sign language to speech and hence makes the communication between normal person and dumb people easier. So, the whole idea is to build a communication system that enables communications between speech-hearing impaired and a normal person.",
"title": ""
},
{
"docid": "e4adbb37f365197249d5e0aacb8f27d4",
"text": "Workplace stress can influence healthcare professionals' physical and emotional well-being by curbing their efficiency and having a negative impact on their overall quality of life. The aim of the present study was to investigate the impact that work environment in a local public general hospital can have on the health workers' mental-emotional health and find strategies in order to cope with negative consequences. The study took place from July 2010 to October 2010. Our sample consisted of 200 healthcare professionals aged 21-58 years working in a 240-bed general hospital and the response rate was 91.36%). Our research protocol was first approved by the hospital's review board. A standardized questionnaire that investigates strategies for coping with stressful conditions was used. A standardized questionnaire was used in the present study Coping Strategies for Stressful Events, evaluating the strategies that persons employ in order to overcome a stressful situation or event. The questionnaire was first tested for validity and reliability which were found satisfactory (Cronbach's α=0.862). Strict anonymity of the participants was guaranteed. The SPSS 16.0 software was used for the statistical analysis. Regression analysis showed that health professionals' emotional health can be influenced by strategies for dealing with stressful events, since positive re-assessment, quitting and seeking social support are predisposing factors regarding the three first quality of life factors of the World Health Organization Quality of Life - BREF. More specifically, for the physical health factor, positive re-assessment (t=3.370, P=0.001) and quitting (t=-2.564, P=0.011) are predisposing factors. For the 'mental health and spirituality' regression model, positive re-assessment (t=5.528, P=0.000) and seeking social support (t=-1.991, P=0.048) are also predisposing factors, while regarding social relationships positive re-assessment (t=4.289, P=0.000) is a predisposing factor. According to our findings, there was a notable lack of workplace stress management strategies, which the participants usually perceive as a lack of interest on behalf of the management regarding their emotional state. Some significant factors for lowering workplace stress were found to be the need to encourage and morally reward the staff and also to provide them with opportunities for further or continuous education.",
"title": ""
},
{
"docid": "1a8e346b6f2cd1c368f449f9a9474e5c",
"text": "Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs. In this paper, we formalize fuzzing as a reinforcement learning problem using the concept of Markov decision processes. This in turn allows us to apply state-of-the-art deep Q-learning algorithms that optimize rewards, which we define from runtime properties of the program under test. By observing the rewards caused by mutating with a specific set of actions performed on an initial program input, the fuzzing agent learns a policy that can next generate new higher-reward inputs. We have implemented this new approach, and preliminary empirical evidence shows that reinforcement fuzzing can outperform baseline random fuzzing.",
"title": ""
},
{
"docid": "ef239b2f40847b9670b3c4b08630535f",
"text": "When a page of a book is scanned or photocopied, textual noise (extraneous symbols from the neighboring page) and/or non-textual noise (black borders, speckles, ...) appear along the border of the document. Existing document analysis methods can handle non-textual noise reasonably well, whereas textual noise still presents a major issue for document analysis systems. Textual noise may result in undesired text in optical character recognition (OCR) output that needs to be removed afterwards. Existing document cleanup methods try to explicitly detect and remove marginal noise. This paper presents a new perspective for document image cleanup by detecting the page frame of the document. The goal of page frame detection is to find the actual page contents area, ignoring marginal noise along the page border. We use a geometric matching algorithm to find the optimal page frame of structured documents (journal articles, books, magazines) by exploiting their text alignment property. We evaluate the algorithm on the UW-III database. The results show that the error rates are below 4% each of the performance measures used. Further tests were run on a dataset of magazine pages and on a set of camera captured document images. To demonstrate the benefits of using page frame detection in practical applications, we choose OCR and layout-based document image retrieval as sample applications. Experiments using a commercial OCR system show that by removing characters outside the computed page frame, the OCR error rate is reduced from 4.3 to 1.7% on the UW-III dataset. The use of page frame detection in layout-based document image retrieval application decreases the retrieval error rates by 30%.",
"title": ""
},
{
"docid": "2f147f15054fb65047a1d25b8531915a",
"text": "Semantically, objects in unstructured document are related each other to perform a certain entity relation. This certain entity relation such: drug-drug interaction through their compounds, buyer-seller relationship through the goods or services, etc. Motivated by that kind of interaction, this study proposes a method to extract those objects and their interactions. It is presented a general framework of objectinteraction mining of large corpora. The framework is started with the initial step in extracting a single object in the unstructured document. In this study, the initial step is a pattern learning method that is applied to drug-label documents to extract drug-names. We utilize an existing external knowledge to identify a certain regular expressions surrounding the targeted object and the probabilities of that regular expression, to perform the pattern learning process. The performance of this pattern learning approach is promising to apply in this relation extraction area. As presented in the results of this study, the best f-score performance of this method is 0.78 f-score. With adjusting of some parameters and or improving the method, the performance can be potentially improved",
"title": ""
},
{
"docid": "179b4e560ce520b04bab91d82532337e",
"text": "Non-orthogonal multiple access (NOMA) is recognized today as a most promising technology for future 5G cellular networks and a large number of papers have been published on the subject over the past few years. Interestingly, none of these authors seems to be aware that the foundation of NOMA actually dates back to the year 2000, when a series of papers introduced and investigated multiple access schemes using two sets of orthogonal signal waveforms and iterative interference cancellation at the receiver. The purpose of this paper is to shed light on that early literature and to describe a practical scheme based on that concept, which is particularly attractive for machine-type communications (MTC) in future 5G cellular networks. Using this approach, NOMA appears as a convenient extension of orthogonal multiple access rather than a strictly competing technology, and most important of all, the power imbalance between the transmitted user signals that is required to make the receiver work in other NOMA schemes is not required here.",
"title": ""
},
{
"docid": "6e44c8087c82e2adce968bf97d2e7dc6",
"text": "We propose an algorithm that is based on the Ant Colony Optimization (ACO) metaheuristic for producing harmonized melodies. The algorithm works in two stages. In the first stage it creates a melody. This melody is then harmonized according to the rules of Baroque harmony in the second stage. This is the first ACO algorithm to create music that uses domain knowledge and the first employed for harmonization of a melody.",
"title": ""
},
{
"docid": "71205109d592933f063574286817589b",
"text": "Robot therapy for elderly residents in a care house has been conducted from June, 2005. Two therapeutic seal robots were introduced, and activated for over 9 hours every day to interact with the residents. This paper presents a progress report of this experiment. In order to investigate psychological and social influences of the robots, each subject was interviewed, and their social network was analysed. In addition, their hormones in urine: 17 Ketosteroid sulfate (17-KS-S) and 17-hydroxycorticosteroids (17-OHCS) were obtained and analysed. The results indicate that the density of the social networks was increased through interaction with the seal robots. Furthermore, urinary tests showed that the reactions of the subjects' vital organs to stress were improved after the introduction of the robots",
"title": ""
},
{
"docid": "d509c5dfcadc2a031433f2a4bcadf79c",
"text": "The role of different project management techniques to implement projects successfully has been widely established in areas such as the planning and control of time, cost and quality. In spite of this the distinction between the project and project management is less than precise. This paper aims to identify the overlap between the definition of the project and project management and to discuss how the confusion between the two may affect their relationship. It identifies the different individuals involved on the project and project management, together with their objectives, expectations and influences. It demonstrates how a better appreciation of the distinction between the two will bring a higher possibility of project success. Copyright © Elsevier Science Ltd and IPMA",
"title": ""
},
{
"docid": "e13dcab3abbd1abf159ed87ba67dc490",
"text": "A virtual keyboard takes a large portion of precious screen real estate. We have investigated whether an invisible keyboard is a feasible design option, how to support it, and how well it performs. Our study showed users could correctly recall relative key positions even when keys were invisible, although with greater absolute errors and overlaps between neighboring keys. Our research also showed adapting the spatial model in decoding improved the invisible keyboard performance. This method increased the input speed by 11.5% over simply hiding the keyboard and using the default spatial model. Our 3-day multi-session user study showed typing on an invisible keyboard could reach a practical level of performance after only a few sessions of practice: the input speed increased from 31.3 WPM to 37.9 WPM after 20 - 25 minutes practice on each day in 3 days, approaching that of a regular visible keyboard (41.6 WPM). Overall, our investigation shows an invisible keyboard with adapted spatial model is a practical and promising interface option for the mobile text entry systems.",
"title": ""
}
] |
scidocsrr
|
f10892e37c794358a651697a493161ad
|
NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors
|
[
{
"docid": "63bf62c5c0027980958a90481a18d642",
"text": "Spiking neural network simulators provide environments in which to implement and experiment with models of biological brain structures. Simulating large-scale models is computationally expensive, however, due to the number and interconnectedness of neurons in the brain. Furthermore, where such simulations are used in an embodied setting, the simulation must be real-time in order to be useful. In this paper we present a platform (nemo) for such simulations which achieves high performance on parallel commodity hardware in the form of graphics processing units (GPUs). This work makes use of the Izhikevich neuron model which provides a range of realistic spiking dynamics while being computationally efficient. Learning is facilitated through spike-timing dependent synaptic plasticity. Our GPU kernel can deliver up to 550 million spikes per second using a single device. This corresponds to a real-time simulation of around 55 000 neurons under biologically plausible conditions with 1000 synapses per neuron and a mean firing rate of 10 Hz.",
"title": ""
}
] |
[
{
"docid": "2b75aedec2f8acc52e22e0f22123fb1e",
"text": "Reinforcement Learning (RL) is a generic framework for modeling decision making processes and as such very suited to the task of automatic summarization. In this paper we present a RL method, which takes into account intermediate steps during the creation of a summary. Furthermore, we introduce a new feature set, which describes sentences with respect to already selected sentences. We carry out a range of experiments on various data sets – including several DUC data sets, but also scientific publications and encyclopedic articles. Our results show that our approach a) successfully adapts to data sets from various domains, b) outperforms previous RL-based methods for summarization and state-of-the-art summarization systems in general, and c) can be equally applied to singleand multidocument summarization on various domains and document lengths.",
"title": ""
},
{
"docid": "4c9d20c4d264a950cb89bd41401ec99a",
"text": "The primary goal of a recommender system is to generate high quality user-centred recommendations. However, the traditional evaluation methods and metrics were developed before researchers understood all the factors that increase user satisfaction. This study is an introduction to a novel user and item classification framework. It is proposed that this framework should be used during user-centred evaluation of recommender systems and the need for this framework is justified through experiments. User profiles are constructed and matched against other users’ profiles to formulate neighbourhoods and generate top-N recommendations. The recommendations are evaluated to measure the success of the process. In conjunction with the framework, a new diversity metric is presented and explained. The accuracy, coverage, and diversity of top-N recommendations is illustrated and discussed for groups of users. It is found that in contradiction to common assumptions, not all users suffer as expected from the data sparsity problem. In fact, the group of users that receive the most accurate recommendations do not belong to the least sparse area of the dataset.",
"title": ""
},
{
"docid": "488110f56eee525ae4f06f21da795f78",
"text": "Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions. In the present work, we extend the usage of LRP to recurrent neural networks. We propose a specific propagation rule applicable to multiplicative connections as they arise in recurrent network architectures such as LSTMs and GRUs. We apply our technique to a word-based bi-directional LSTM model on a five-class sentiment prediction task, and evaluate the resulting LRP relevances both qualitatively and quantitatively, obtaining better results than a gradient-based related method which was used in previous work.",
"title": ""
},
{
"docid": "526238c8369bb37048f3165b2ace0d15",
"text": "With their exceptional interactive and communicative capabilities, Online Social Networks (OSNs) allow destinations and companies to heighten their brand awareness. Many tourist destinations and hospitality brands are exploring the use of OSNs to form brand awareness and generate positive WOM. The purpose of this research is to propose and empirically test a theory-driven model of brand awareness in OSNs. A survey among 230 OSN users was deployed to test the theoretical model. The data was analyzed using SEM. Study results indicate that building brand awareness in OSNs increases WOM traffic. In order to foster brand awareness in OSN, it is important to create a virtually interactive environment, enabling users to exchange reliable, rich and updated information in a timely manner. Receiving financial and/or psychological rewards and accessing exclusive privileges in OSNs are important factors for users. Both system quality and information quality were found to be important precursors of brand awareness in OSNs. Study results support the importance of social media in online branding strategies. Virtual interactivity, system quality, information content quality, and rewarding activities influence and generate brand awareness, which in return, triggers WOM. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8f73870d5e999c0269059c73bb85e05c",
"text": "Placing the DRAM in the same package as a processor enables several times higher memory bandwidth than conventional off-package DRAM. Yet, the latency of in-package DRAM is not appreciably lower than that of off-package DRAM. A promising use of in-package DRAM is as a large cache. Unfortunately, most previous DRAM cache designs optimize mainly for cache hit latency and do not consider bandwidth efficiency as a first-class design constraint. Hence, as we show in this paper, these designs are suboptimal for use with in-package DRAM.\n We propose a new DRAM cache design, Banshee, that optimizes for both in-package and off-package DRAM bandwidth efficiency without degrading access latency. Banshee is based on two key ideas. First, it eliminates the tag lookup overhead by tracking the contents of the DRAM cache using TLBs and page table entries, which is efficiently enabled by a new lightweight TLB coherence protocol we introduce. Second, it reduces unnecessary DRAM cache replacement traffic with a new bandwidth-aware frequency-based replacement policy. Our evaluations show that Banshee significantly improves performance (15% on average) and reduces DRAM traffic (35.8% on average) over the best-previous latency-optimized DRAM cache design.",
"title": ""
},
{
"docid": "46533c7b42e2bad3fb0b65722479a552",
"text": "Agarwal, R., Krudys, G., and Tanniru, M. 1997. “Infusing Learning into the Information Systems Organization,” European Journal of Information Systems (6:1), pp. 25-40. Alavi, M., and Leidner, D. E. 2001. “Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues,” MIS Quarterly (25:1), pp. 107-136. Andersen, T. J. 2001. “Information Technology, Strategic Decision Making Approaches and Organizational Performance in Different Industrial Settings,” Journal of Strategic Information Systems (10:2), pp. 101-119. Andersen, T. J., and Segars, A. H. 2001. “The Impact of IT on Decision Structure and Firm Performance: Evidence from the Textile and Apparel Industry,” Information & Management (39:2), pp. 85-100. Andersson, M., Lindgren, R., and Henfridsson, A. 2008. “Architectural Knowledge in Inter-Organizational IT Innovation,” Journal of Strategic Information Systems (17:1), pp. 19-38. Armstrong, C. P., and Sambamurthy, V. 1999. “Information Technology Assimilation in Firms: The Influence of Senior Leadership and IT Infrastructures,” Information Systems Research (10:4), pp. 304-327. Auer, T. 1998. “Quality of IS Use,” European Journal of Information Systems (7:3), pp. 192-201. Bassellier, G., Benbasat, I., and Reich, B. H. 2003. “The Influence of Business Managers’ IT Competence on Championing IT,” Information Systems Research (14:4), pp. 317-336.",
"title": ""
},
{
"docid": "2c93fcf96c71c7c0a8dcad453da53f81",
"text": "Production cars are designed to understeer and rarely do they oversteer. If a car could automatically compensate for an understeer/oversteer problem, the driver would enjoy nearly neutral steering under varying operating conditions. Four-wheel steering is a serious effort on the part of automotive design engineers to provide near-neutral steering. Also in situations like low speed cornering, vehicle parking and driving in city conditions with heavy traffic in tight spaces, driving would be very difficult due to vehicle’s larger wheelbase and track width. Hence there is a requirement of a mechanism which result in less turning radius and it can be achieved by implementing four wheel steering mechanism instead of regular two wheel steering. In this project Maruti Suzuki 800 is considered as a benchmark vehicle. The main aim of this project is to turn the rear wheels out of phase to the front wheels. In order to achieve this, a mechanism which consists of two bevel gears and intermediate shaft which transmit 100% torque as well turns rear wheels in out of phase was developed. The mechanism was modelled using CATIA and the motion simulation was done using ADAMS. A physical prototype was realised. The prototype was tested for its cornering ability through constant radius test and was found 50% reduction in turning radius and the vehicle was operated at low speed of 10 kmph.",
"title": ""
},
{
"docid": "d83a90a3a080f4e3bce2a68d918d20ce",
"text": "We present a new class of low-bandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications’ data structures. Frequently used data structures have “average-case” expected running time that’s far more efficient than the worst case. For example, both binary trees and hash tables can degenerate to linked lists with carefully chosen input. We show how an attacker can effectively compute such input, and we demonstrate attacks against the hash table implementations in two versions of Perl, the Squid web proxy, and the Bro intrusion detection system. Using bandwidth less than a typical dialup modem, we can bring a dedicated Bro server to its knees; after six minutes of carefully chosen packets, our Bro server was dropping as much as 71% of its traffic and consuming all of its CPU. We show how modern universal hashing techniques can yield performance comparable to commonplace hash functions while being provably secure against these attacks.",
"title": ""
},
{
"docid": "dce032d1568e8012053de20fa7063c25",
"text": "Radial visualization continues to be a popular design choice in information visualization systems, due perhaps in part to its aesthetic appeal. However, it is an open question whether radial visualizations are truly more effective than their Cartesian counterparts. In this paper, we describe an initial user trial from an ongoing empirical study of the SQiRL (Simple Query interface with a Radial Layout) visualization system, which supports both radial and Cartesian projections of stacked bar charts. Participants were shown 20 diagrams employing a mixture of radial and Cartesian layouts and were asked to perform basic analysis on each. The participants' speed and accuracy for both visualization types were recorded. Our initial findings suggest that, in spite of the widely perceived advantages of Cartesian visualization over radial visualization, both forms of layout are, in fact, equally usable. Moreover, radial visualization may have a slight advantage over Cartesian for certain tasks. In a follow-on study, we plan to test users' ability to create, as well as read and interpret, radial and Cartesian diagrams in SQiRL.",
"title": ""
},
{
"docid": "98d6f207b9b032cd90f3b565b9e94fea",
"text": "The usage of machine learning techniques for the prediction of financial time series is investigated. Both discriminative and generative methods are considered and compared to more standard financial prediction techniques. Generative methods such as Switching Autoregressive Hidden Markov and changepoint models are found to be unsuccessful at predicting daily and minutely prices from a wide range of asset classes. Committees of discriminative techniques (Support Vector Machines (SVM), Relevance Vector Machines and Neural Networks) are found to perform well when incorporating sophisticated exogenous financial information in order to predict daily FX carry basket returns. The higher dimensionality that Electronic Communication Networks make available through order book data is transformed into simple features. These volumebased features, along with other price-based ones motivated by common trading rules, are used by Multiple Kernel Learning (MKL) to classify the direction of price movement for a currency over a range of time horizons. Outperformance relative to both individual SVM and benchmarks is found, along with an indication of which features are the most informative for financial prediction tasks. Fisher kernels based on three popular market microstructural models are added to the MKL set. Two subsets of this full set, constructed from the most frequently selected and highest performing individual kernels are also investigated. Furthermore, kernel learning is employed optimising hyperparameter and Fisher feature parameters with the aim of improving predictive performance. Significant improvements in out-of-sample predictive accuracy relative to both individual SVM and standard MKL is found using these various novel enhancements to the MKL algorithm.",
"title": ""
},
{
"docid": "7832707feef1e81c3a01e974c37a960b",
"text": "Most current commercial automated fingerprint-authentication systems on the market are based on the extraction of the fingerprint minutiae, and on medium resolution (500 dpi) scanners. Sensor manufacturers tend to reduce the sensing area in order to adapt it to low-power mobile hand-held communication systems and to lower the cost of their devices. An interesting alternative is designing a novel fingerprintauthentication system capable of dealing with an image from a small, high resolution (1000 dpi) sensor area based on combined level 2 (minutiae) and level 3 (sweat pores) feature extraction. In this paper, we propose a new strategy and implementation of a series of techniques for automatic level 2 and level 3 feature extraction in fragmentary fingerprint comparison. The main challenge in achieving high reliability while using a small portion of a fingerprint for matching is that there may not be a sufficient number of minutiae but the uniqueness of the pore configurations provides a powerful means to compensate for this insufficiency. A pilot study performed to test the presented approach confirms the efficacy of using pores in addition to the traditionally used minutiae in fragmentary fingerprint comparison.",
"title": ""
},
{
"docid": "4463a242a313f82527c4bdfff3d3c13c",
"text": "This paper examines the impact of capital structure on financial performance of Nigerian firms using a sample of thirty non-financial firms listed on the Nigerian Stock Exchange during the seven year period, 2004 – 2010. Panel data for the selected firms were generated and analyzed using ordinary least squares (OLS) as a method of estimation. The result shows that a firm’s capita structure surrogated by Debt Ratio, Dr has a significantly negative impact on the firm’s financial measures (Return on Asset, ROA, and Return on Equity, ROE). The study of these findings, indicate consistency with prior empirical studies and provide evidence in support of Agency cost theory.",
"title": ""
},
{
"docid": "07457116fbecf8e5182459961b8a87d0",
"text": "Modeling temporal sequences plays a fundamental role in various modern applications and has drawn more and more attentions in the machine learning community. Among those efforts on improving the capability to represent temporal data, the Long Short-Term Memory (LSTM) has achieved great success in many areas. Although the LSTM can capture long-range dependency in the time domain, it does not explicitly model the pattern occurrences in the frequency domain that plays an important role in tracking and predicting data points over various time cycles. We propose the State-Frequency Memory (SFM), a novel recurrent architecture that allows to separate dynamic patterns across different frequency components and their impacts on modeling the temporal contexts of input sequences. By jointly decomposing memorized dynamics into statefrequency components, the SFM is able to offer a fine-grained analysis of temporal sequences by capturing the dependency of uncovered patterns in both time and frequency domains. Evaluations on several temporal modeling tasks demonstrate the SFM can yield competitive performances, in particular as compared with the state-of-the-art LSTM models.",
"title": ""
},
{
"docid": "9a4cf33f429bd376be787feaa2881610",
"text": "By adopting a cultural transformation in its employees' approach to work and using manufacturing based continuous quality improvement methods, the surgical pathology division of Henry Ford Hospital, Detroit, MI, focused on reducing commonly encountered defects and waste in processes throughout the testing cycle. At inception, the baseline in-process defect rate was measured at nearly 1 in 3 cases (27.9%). After the year-long efforts of 77 workers implementing more than 100 process improvements, the number of cases with defects was reduced by 55% to 1 in 8 cases (12.5%), with a statistically significant reduction in the overall distribution of defects (P = .0004). Comparison with defects encountered in the pre-improvement period showed statistically significant reductions in pre-analytic (P = .0007) and analytic (P = .0002) test phase processes in the post-improvement period that included specimen receipt, specimen accessioning, grossing, histology slides, and slide recuts. We share the key improvements implemented that were responsible for the overall success in reducing waste and re-work in the broad spectrum of surgical pathology processes.",
"title": ""
},
{
"docid": "7d014f64578943f8ec8e5e27d313e148",
"text": "In this paper, we extend the Divergent Component of Motion (DCM, also called `Capture Point') to 3D. We introduce the “Enhanced Centroidal Moment Pivot point” (eCMP) and the “Virtual Repellent Point” (VRP), which allow for the encoding of both direction and magnitude of the external (e.g. leg) forces and the total force (i.e. external forces plus gravity) acting on the robot. Based on eCMP, VRP and DCM, we present a method for real-time planning and control of DCM trajectories in 3D. We address the problem of underactuation and propose methods to guarantee feasibility of the finally commanded forces. The capabilities of the proposed control framework are verified in simulations.",
"title": ""
},
{
"docid": "36165cb8c6690863ed98c490ba889a9e",
"text": "This paper presents a new low-cost digital control solution that maximizes the AC/DC flyback power supply efficiency. This intelligent digital approach achieves the combined benefits of high performance, low cost and high reliability in a single controller. It introduces unique multiple PWM and PFM operational modes adaptively based on the power supply load changes. While the multi-mode PWM/PFM control significantly improves the light-load efficiency and thus the overall average efficiency, it does not bring compromise to other system performance, such as audible noise, voltage ripples or regulations. It also seamlessly integrated an improved quasi-resonant switching scheme that enables valley-mode turn on in every switching cycle without causing modification to the main PWM/PFM control schemes. A digital integrated circuit (IC) that implements this solution, namely iW1696, has been fabricated and introduced to the industry recently. In addition to outlining the approach, this paper provides experimental results obtained on a 3-W (5V/550mA) cell phone charger that is built with the iW1696.",
"title": ""
},
{
"docid": "bf11641b432e551d61c56180d8f0e8eb",
"text": "Deep Reinforcement Learning algorithms lead to agents that can solve difficult decision making problems in complex environments. However, many difficult multi-agent competitive games, especially real-time strategy games are still considered beyond the capability of current deep reinforcement learning algorithms, although there has been a recent effort to change this (OpenAI, 2017; Vinyals et al., 2017). Moreover, when the opponents in a competitive game are suboptimal, the current Nash Equilibrium seeking, selfplay algorithms are often unable to generalize their strategies to opponents that play strategies vastly different from their own. This suggests that a learning algorithm that is beyond conventional self-play is necessary. We develop Hierarchical Agent with Self-Play , a learning approach for obtaining hierarchically structured policies that can achieve higher performance than conventional self-play on competitive games through the use of a diverse pool of sub-policies we get from Counter Self-Play (CSP). We demonstrate that the ensemble policy generated by Hierarchical Agent with Self-Play can achieve better performance while facing unseen opponents that use sub-optimal policies. On a motivating iterated Rock-Paper-Scissor game and a partially observable real-time strategic game (http://generals.io/), we are led to the conclusion that Hierarchical Agent with Self-Play can perform better than conventional self-play as well as achieve 77% win rate against FloBot, an open-source agent which has ranked at position number 2 on the online leaderboards.",
"title": ""
},
{
"docid": "3a8adb288f854fcb2193954cd45879d5",
"text": "In this paper, we introduce a novel concept for primary frequency-modulated continuous-wave (FMCW) radar. The approach is based on a phase locked loop-controlled interlaced chirp sequence (ICS) waveform and, in contrast to basic range-Doppler processing, it enables higher target velocities to be detected. It is thus very suitable for automotive applications. The interlaced ramps in the system are generated by two separate frequency synthesizers. These are combined by an RF switch to suppress transients caused by oscillator overshoot and to avoid incoherencies due to programming times of the phase locked loop (PLL) ICs. A prototype radar system was realized in K-Band. Promising test results bode well for other applications.",
"title": ""
},
{
"docid": "d6de2969e89e211f6faf8a47854ee43e",
"text": "Digital image forensics has attracted a lot of attention recently for its role in identifying the origin of digital image. Although different forensic approaches have been proposed, one of the most popular approaches is to rely on the imaging sensor pattern noise, where each sensor pattern noise uniquely corresponds to an imaging device and serves as the intrinsic fingerprint. The correlation-based detection is heavily dependent upon the accuracy of the extracted pattern noise. In this work, we discuss the way to extract the pattern noise, in particular, explore the way to make better use of the pattern noise. Unlike current methods that directly compare the whole pattern noise signal with the reference one, we propose to only compare the large components of these two signals. Our detector can better identify the images taken by different cameras. In the meantime, it needs less computational complexity.",
"title": ""
}
] |
scidocsrr
|
e56f8c998777186925b35880a48f91a6
|
Blind image deconvolution: theory and applications
|
[
{
"docid": "93297115eb5153a41a79efe582bd34b1",
"text": "Abslract Bayesian probabilily theory provides a unifying framework for dara modelling. In this framework the overall aims are to find models that are well-matched to, the &a, and to use &se models to make optimal predictions. Neural network laming is interpreted as an inference of the most probable parameters for Ihe model, given the training data The search in model space (i.e., the space of architectures, noise models, preprocessings, regularizes and weight decay constants) can then also be treated as an inference problem, in which we infer the relative probability of alternative models, given the data. This review describes practical techniques based on G ~ ~ s s ~ M approximations for implementation of these powerful methods for controlling, comparing and using adaptive network$.",
"title": ""
}
] |
[
{
"docid": "263e8b756862ab28d313578e3f6acbb1",
"text": "Goal posts detection is a critical robot soccer ability which is needed to be accurate, robust and efficient. A goal detection method using Hough transform to get the detailed goal features is presented in this paper. In the beginning, the image preprocessing and Hough transform implementation are described in detail. A new modification on the θ parameter range in Hough transform is explained and applied to speed up the detection process. Line processing algorithm is used to classify the line detected, and then the goal feature extraction method, including the line intersection calculation, is done. Finally, the goal distance from the robot body is estimated using triangle similarity. The experiment is performed on our university humanoid robot with the goal dimension of 225 cm in width and 110 cm in height, in yellow color. The result shows that the goal detection method, including the modification in Hough transform, is able to extract the goal features seen by the robot correctly, with the lowest speed of 5 frames per second. Additionally, the goal distance estimation is accomplished with maximum error of 20 centimeters.",
"title": ""
},
{
"docid": "94f11255e531a47969ba18112bf22777",
"text": "Basic scientific interest in using a semiconducting electrode in molecule-based electronics arises from the rich electrostatic landscape presented by semiconductor interfaces. Technological interest rests on the promise that combining existing semiconductor (primarily Si) electronics with (mostly organic) molecules will result in a whole that is larger than the sum of its parts. Such a hybrid approach appears presently particularly relevant for sensors and photovoltaics. Semiconductors, especially Si, present an important experimental test-bed for assessing electronic transport behavior of molecules, because they allow varying the critical interface energetics without, to a first approximation, altering the interfacial chemistry. To investigate semiconductor-molecule electronics we need reproducible, high-yield preparations of samples that allow reliable and reproducible data collection. Only in that way can we explore how the molecule/electrode interfaces affect or even dictate charge transport, which may then provide a basis for models with predictive power.To consider these issues and questions we will, in this Progress Report, review junctions based on direct bonding of molecules to oxide-free Si.describe the possible charge transport mechanisms across such interfaces and evaluate in how far they can be quantified.investigate to what extent imperfections in the monolayer are important for transport across the monolayer.revisit the concept of energy levels in such hybrid systems.",
"title": ""
},
{
"docid": "0ff76204fcdf1a7cf2a6d13a5d3b1597",
"text": "In this study, we found that the optimum take-off angle for a long jumper may be predicted by combining the equation for the range of a projectile in free flight with the measured relations between take-off speed, take-off height and take-off angle for the athlete. The prediction method was evaluated using video measurements of three experienced male long jumpers who performed maximum-effort jumps over a wide range of take-off angles. To produce low take-off angles the athletes used a long and fast run-up, whereas higher take-off angles were produced using a progressively shorter and slower run-up. For all three athletes, the take-off speed decreased and the take-off height increased as the athlete jumped with a higher take-off angle. The calculated optimum take-off angles were in good agreement with the athletes' competition take-off angles.",
"title": ""
},
{
"docid": "7741df913eece947fea6696fce89e139",
"text": "We survey the problem of comparing labeled trees based on simple local operations of deleting, inserting, and relabeling nodes. These operations lead to the tree edit distance, alignment distance, and inclusion problem. For each problem we review the results available and present, in detail, one or more of the central algorithms for solving the problem. keywords tree matching, edit distance",
"title": ""
},
{
"docid": "7d5300adb91df986d4fe94195422e35f",
"text": "This paper proposes a simple CNN model for creating general-purpose sentence embeddings that can transfer easily across domains and can also act as effective initialization for downstream tasks. Recently, averaging the embeddings of words in a sentence has proven to be a surprisingly successful and efficient way of obtaining sentence embeddings. However, these models represent a sentence, only in terms of features of words or uni-grams in it. In contrast, our model (CSE) utilizes both features of words and n-grams to encode sentences, which is actually a generalization of these bag-of-words models. The extensive experiments demonstrate that CSE performs better than average models in transfer learning setting and exceeds the state of the art in supervised learning setting by initializing the parameters with the pre-trained sentence embeddings.",
"title": ""
},
{
"docid": "e49dcbcb0bb8963d4f724513d66dd3a0",
"text": "To achieve general intelligence, agents must learn how to interact with others in a shared environment: this is the challenge of multiagent reinforcement learning (MARL). The simplest form is independent reinforcement learning (InRL), where each agent treats its experience as part of its (non-stationary) environment. In this paper, we first observe that policies learned using InRL can overfit to the other agents’ policies during training, failing to sufficiently generalize during execution. We introduce a new metric, joint-policy correlation, to quantify this effect. We describe an algorithm for general MARL, based on approximate best responses to mixtures of policies generated using deep reinforcement learning, and empirical game-theoretic analysis to compute meta-strategies for policy selection. The algorithm generalizes previous ones such as InRL, iterated best response, double oracle, and fictitious play. Then, we present a scalable implementation which reduces the memory requirement using decoupled meta-solvers. Finally, we demonstrate the generality of the resulting policies in two partially observable settings: gridworld coordination games and poker.",
"title": ""
},
{
"docid": "bf164afc6315bf29a07e6026a3db4a26",
"text": "iBeacons are a new way to interact with hardware. An iBeacon is a Bluetooth Low Energy device that only sends a signal in a specific format. They are like a lighthouse that sends light signals to boats. This paper explains what an iBeacon is, how it works and how it can simplify your daily life, what restriction comes with iBeacon and how to improve this restriction., as well as, how to use Location-based Services to track items. E.g., every time you touchdown at an airport and wait for your suitcase at the luggage reclaim, you have no information when your luggage will arrive at the conveyor belt. With an iBeacon inside your suitcase, it is possible to track the luggage and to receive a push notification about it even before you can see it. This is just one possible solution to use them. iBeacon can create a completely new shopping experience or make your home smarter. This paper demonstrates the luggage tracking use case and evaluates its possibilities and restrictions.",
"title": ""
},
{
"docid": "bb16c9b562a32993cd43d564f8d1d11e",
"text": "This research designs and realizes a zero-voltage switching (ZVS) three-phase DC-DC buck/boost converter that reduces the current ripple, switching losses and increases converter efficiency. The size and cost can be reduced when the proposed converter is designed with the coupled inductor scheme. This paper describes a three-phase DC-DC buck/boost converter with the coupled inductor and ZVS soft switching operation under different inductor current conduction modes. Simulation and experimental results are employed compare the performance of the proposed three-phase bidirectional converter in the PV system under battery charge and discharge operating modes for energy storage systems.",
"title": ""
},
{
"docid": "01fa6041e3a2c555c0e58a41a5521f8e",
"text": "This paper presents a detailed description of finite control set model predictive control (FCS-MPC) applied to power converters. Several key aspects related to this methodology are, in depth, presented and compared with traditional power converter control techniques, such as linear controllers with pulsewidth-modulation-based methods. The basic concepts, operating principles, control diagrams, and results are used to provide a comparison between the different control strategies. The analysis is performed on a traditional three-phase voltage source inverter, used as a simple and comprehensive reference frame. However, additional topologies and power systems are addressed to highlight differences, potentialities, and challenges of FCS-MPC. Among the conclusions are the feasibility and great potential of FCS-MPC due to present-day signal-processing capabilities, particularly for power systems with a reduced number of switching states and more complex operating principles, such as matrix converters. In addition, the possibility to address different or additional control objectives easily in a single cost function enables a simple, flexible, and improved performance controller for power-conversion systems.",
"title": ""
},
{
"docid": "9185a7823e699c758dde3a81f7d6d86d",
"text": "Reading text from photographs is a challenging problem that has received a significant amount of attention. Two key components of most systems are (i) text detection from images and (ii) character recognition, and many recent methods have been proposed to design better feature representations and models for both. In this paper, we apply methods recently developed in machine learning -- specifically, large-scale algorithms for learning the features automatically from unlabeled data -- and show that they allow us to construct highly effective classifiers for both detection and recognition to be used in a high accuracy end-to-end system.",
"title": ""
},
{
"docid": "a76071628d25db972127702b974d4849",
"text": "Surveying 3D scenes is a common task in robotics. Systems can do so autonomously by iteratively obtaining measurements. This process of planning observations to improve the model of a scene is called Next Best View (NBV) planning. NBV planning approaches often use either volumetric (e.g., voxel grids) or surface (e.g., triangulated meshes) representations. Volumetric approaches generalise well between scenes as they do not depend on surface geometry but do not scale to high-resolution models of large scenes. Surface representations can obtain high-resolution models at any scale but often require tuning of unintuitive parameters or multiple survey stages. This paper presents a scene-model-free NBV planning approach with a density representation. The Surface Edge Explorer (SEE) uses the density of current measurements to detect and explore observed surface boundaries. This approach is shown experimentally to provide better surface coverage in lower computation time than the evaluated state-of-the-art volumetric approaches while moving equivalent distances.",
"title": ""
},
{
"docid": "79727e2e749c620fcb6c6e1d03460ec1",
"text": "Human emotion and its electrophysiological correlates are still poorly understood. The present study examined whether the valence of perceived emotions would differentially influence EEG power spectra and heart rate (HR). Pleasant and unpleasant emotions were induced by consonant and dissonant music. Unpleasant (compared to pleasant) music evoked a significant decrease of HR, replicating the pattern of HR responses previously described for the processing of emotional pictures, sounds, and films. In the EEG, pleasant (contrasted to unpleasant) music was associated with an increase of frontal midline (Fm) theta power. This effect is taken to reflect emotional processing in close interaction with attentional functions. These findings show that Fm theta is modulated by emotion more strongly than previously believed.",
"title": ""
},
{
"docid": "b4fa57fec99131cdf0cb6fc4795fce43",
"text": "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.",
"title": ""
},
{
"docid": "848aa6e3691541cce0c4aeecaeb66504",
"text": "IP Geolocation databases are widely used in online services to map end user IP addresses to their geographical locations. However, they use proprietary geolocation methods and in some cases they have poor accuracy. We propose a systematic approach to use publicly accessible reverse DNS hostnames for geolocating IP addresses. Our method is designed to be combined with other geolocation data sources. We cast the task as a machine learning problem where for a given hostname, we generate and rank a list of potential location candidates. We evaluate our approach against three state of the art academic baselines and two state of the art commercial IP geolocation databases. We show that our work significantly outperforms the academic baselines, and is complementary and competitive with commercial databases. To aid reproducibility, we open source our entire approach.",
"title": ""
},
{
"docid": "4437a0241b825fddd280517b9ae3565a",
"text": "The levels of pregnenolone, dehydroepiandrosterone (DHA), androstenedione, testosterone, dihydrotestosterone (DHT), oestrone, oestradiol, cortisol and luteinizing hormone (LH) were measured in the peripheral plasma of a group of young, apparently healthy males before and after masturbation. The same steroids were also determined in a control study, in which the psychological antipation of masturbation was encouraged, but the physical act was not carried out. The plasma levels of all steroids were significantly increased after masturbation, whereas steroid levels remained unchanged in the control study. The most marked changes after masturbation were observed in pregnenolone and DHA levels. No alterations were observed in the plasma levels of LH. Both before and after masturbation plasma levels of testosterone were significantly correlated to those of DHT and oestradiol, but not to those of the other steroids studied. On the other hand, cortisol levels were significantly correlated to those of pregnenolone, DHA, androstenedione and oestrone. In the same subjects, the levels of pregnenolone, DHA, androstenedione, testosterone and DHT, androstenedione and oestrone. In the same subjects, the levels of pregnenolone, DHA, androstenedione, testosterone and DHT in seminal plasma were also estimated; they were all significantly correlated to the levels of the corresponding steroid in the systemic blood withdrawn both before and after masturbation. As a practical consequence, the results indicate that whenever both blood and semen are analysed, blood sampling must precede semen collection.",
"title": ""
},
{
"docid": "343ba137056cac30d0d37e17a425d53b",
"text": "This thesis explores fundamental improvements in unsupervised deep learning algorithms. Taking a theoretical perspective on the purpose of unsupervised learning, and choosing learnt approximate inference in a jointly learnt directed generative model as the approach, the main question is how existing implementations of this approach, in particular auto-encoders, could be improved by simultaneously rethinking the way they learn and the way they perform inference. In such network architectures, the availability of two opposing pathways, one for inference and one for generation, allows to exploit the symmetry between them and to let either provide feedback signals to the other. The signals can be used to determine helpful updates for the connection weights from only locally available information, removing the need for the conventional back-propagation path and mitigating the issues associated with it. Moreover, feedback loops can be added to the usual usual feed-forward network to improve inference itself. The reciprocal connectivity between regions in the brain’s neocortex provides inspiration for how the iterative revision and verification of proposed interpretations could result in a fair approximation to optimal Bayesian inference. While extracting and combining underlying ideas from research in deep learning and cortical functioning, this thesis walks through the concepts of generative models, approximate inference, local learning rules, target propagation, recirculation, lateral and biased competition, predictive coding, iterative and amortised inference, and other related topics, in an attempt to build up a complex of insights that could provide direction to future research in unsupervised deep learning methods.",
"title": ""
},
{
"docid": "4fa7f7f723c2f2eee4c0e2c294273c74",
"text": "Tracking human vital signs of breathing and heart rates during sleep is important as it can help to assess the general physical health of a person and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., Polysomnography (PSG)) are limited to clinic usage. Recent radio frequency (RF) based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this work, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system re-uses existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heart beats. Our system thus has the potential to be widely deployed and perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both time and frequency domain to estimate breathing and heart rates, and it works well when either individual or two persons are in bed. Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings, and achieve comparable or even better performance comparing to traditional and existing approaches, which is a strong indication of providing non-invasive, continuous fine-grained vital signs monitoring without any additional cost.",
"title": ""
},
{
"docid": "06e53c86f6517dcaa2538f9920b362a5",
"text": "In a network topology for forwarding packets various routing protocols are being used. Routers maintain a routing table for successful delivery of the packets from the source node to the correct destined node. The extent of information stored by a router about the network depends on the algorithm it follows. Most of the popular routing algorithms used are RIP, OSPF, IGRP and EIGRP. Here in this paper we are analyzing the performance of these very algorithms on the basis of the cost of delivery, amount of overhead on each router, number of updates needed, failure recovery, delay encountered and resultant throughput of the system. We are trying to find out which protocol suits the best for the network and through a thorough analysis we have tried to find the pros and cons of each protocol.",
"title": ""
},
{
"docid": "131c163caef9ab345eada4b2d423aa9d",
"text": "Text pre-processing of Arabic Language is a challenge and crucial stage in Text Categorization (TC) particularly and Text Mining (TM) generally. Stemming algorithms can be employed in Arabic text preprocessing to reduces words to their stems/or root. Arabic stemming algorithms can be ranked, according to three category, as root-based approach (ex. Khoja); stem-based approach (ex. Larkey); and statistical approach (ex. N-Garm). However, no stemming of this language is perfect: The existing stemmers have a small efficiency. In this paper, in order to improve the accuracy of stemming and therefore the accuracy of our proposed TC system, an efficient hybrid method is proposed for stemming Arabic text. The effectiveness of the aforementioned four methods was evaluated and compared in term of the F-measure of the Naïve Bayesian classifier and the Support Vector Machine classifier used in our TC system. The proposed stemming algorithm was found to supersede the other stemming ones: The obtained results illustrate that using the proposed stemmer enhances greatly the performance of Arabic Text Categorization.",
"title": ""
},
{
"docid": "6da5c042639f9103210df726b7b0c7e6",
"text": "Cloud computing model is treated as prominent emerging technology in current business. It provides flexible infrastructure and easy way of handling a bulky data necessary to run organization successfully. What it's all about is on demand service with tiniest financial risk and quality service with least cost. In this paper, we are going to deliberate the concept of cloud computing and its varioustypes. We also figure out some advantages offered by cloud along with some flaws.",
"title": ""
}
] |
scidocsrr
|
8b32e805dfafb6b5b4e05b2a75ebf781
|
MPI CyberMotion Simulator: Implementation of a Novel Motion Simulator to Investigate Multisensory Path Integration in Three Dimensions
|
[
{
"docid": "fafbcccd49d324ea45dbe4c341d4c7d9",
"text": "This paper discusses the technical issues that were required to adapt a KUKA Robocoaster for use as a real-time motion simulator. Within this context, the paper addresses the physical modifications and the software control structure that were needed to have a flexible and safe experimental setup. It also addresses the delays and transfer function of the system. The paper is divided into two sections. The first section describes the control and safety structures of the MPI Motion Simulator. The second section shows measurements of latencies and frequency responses of the motion simulator. The results show that the frequency responses of the MPI Motion Simulator compare favorably with high-end Stewart Platforms, and therefore demonstrate the suitability of robot-based motion simulators for flight simulation.",
"title": ""
}
] |
[
{
"docid": "ede1cfd85dbb2aaa6451128c222d99a2",
"text": "Crowdsourcing is a crowd-based outsourcing, where a requester (task owner) can outsource tasks to workers (public crowd). Recently, mobile crowdsourcing, which can leverage workers' data from smartphones for data aggregation and analysis, has attracted much attention. However, when the data volume is getting large, it becomes a difficult problem for a requester to aggregate and analyze the incoming data, especially when the requester is an ordinary smartphone user or a start-up company with limited storage and computation resources. Besides, workers are concerned about their identity and data privacy. To tackle these issues, we introduce a three-party architecture for mobile crowdsourcing, where the cloud is implemented between workers and requesters to ease the storage and computation burden of the resource-limited requester. Identity privacy and data privacy are also achieved. With our scheme, a requester is able to verify the correctness of computation results from the cloud. We also provide several aggregated statistics in our work, together with efficient data update methods. Extensive simulation shows both the feasibility and efficiency of our proposed solution.",
"title": ""
},
{
"docid": "be1ac4321c710c325ed4ad5dae927b6c",
"text": "Current work at NASA's Johnson Space Center is focusing on the identification and design of novel robotic archetypes to fill roles complimentary to current space robots during in-space assembly and maintenance tasks. Tendril, NASA's latest robot designed for minimally invasive inspection, is one system born of this effort. Inspired by the biology of snakes, tentacles, and climbing plants, the Tendril robot is a long slender manipulator that can extend deep into crevasses and under thermal blankets to inspect areas largely inaccessible by conventional means. The design of the Tendril, with its multiple bending segments and 1 cm diameter, also serves as an initial step in exploring the whole body control known to continuum robots coupled with the small scale and dexterity found in medical and commercial minimally invasive devices. An overview of Tendril's design is presented along with preliminary results from testing that seeks to improve Tendril's performance through an iterative design process",
"title": ""
},
{
"docid": "f5f1b6e660b5010eb3d2ca60734511ca",
"text": "Arabic is the official language of hundreds of millions of people in twenty Middle East and northern African countries, and is the religious language of all Muslims of various ethnicities around the world. Surprisingly little has been done in the field of computerised language and lexical resources. It is therefore motivating to develop an Arabic (WordNet) lexical resource that discovers the richness of Arabic as described in Elkateb (2005). This paper describes our approach towards building a lexical resource in Standard Arabic. Arabic WordNet (AWN) will be based on the design and contents of the universally accepted Princeton WordNet (PWN) and will be mappable straightforwardly onto PWN 2.0 and EuroWordNet (EWN), enabling translation on the lexical level to English and dozens of other languages. Several tools specific to this task will be developed. AWN will be a linguistic resource with a deep formal semantic foundation. Besides the standard wordnet representation of senses, word meanings are defined with a machine understandable semantics in first order logic. The basis for this semantics is the Suggested Upper Merged Ontology (SUMO) and its associated domain ontologies. We will greatly extend the ontology and its set of mappings to provide formal terms and definitions equivalent to each synset.",
"title": ""
},
{
"docid": "018d05daa52fb79c17519f29f31026d7",
"text": "The aim of this paper is to review conceptual and empirical literature on the concept of distributed leadership (DL) in order to identify its origins, key arguments and areas for further work. Consideration is given to the similarities and differences between DL and related concepts, including ‘shared’, ‘collective’, ‘collaborative’, ‘emergent’, ‘co-’ and ‘democratic’ leadership. Findings indicate that, while there are some common theoretical bases, the relative usage of these concepts varies over time, between countries and between sectors. In particular, DL is a notion that has seen a rapid growth in interest since the year 2000, but research remains largely restricted to the field of school education and of proportionally more interest to UK than US-based academics. Several scholars are increasingly going to great lengths to indicate that, in order to be ‘distributed’, leadership need not necessarily be widely ‘shared’ or ‘democratic’ and, in order to be effective, there is a need to balance different ‘hybrid configurations’ of practice. The paper highlights a number of areas for further attention, including three factors relating to the context of much work on DL (power and influence; organizational boundaries and context; and ethics and diversity), and three methodological and developmental challenges (ontology; research methods; and leadership development, reward and recognition). It is concluded that descriptive and normative perspectives which dominate the literature should be supplemented by more critical accounts which recognize the rhetorical and discursive significance of DL in (re)constructing leader– follower identities, mobilizing collective engagement and challenging or reinforcing traditional forms of organization.",
"title": ""
},
{
"docid": "f3f441c2cf1224746c0bfbb6ce02706d",
"text": "This paper addresses the task of finegrained opinion extraction – the identification of opinion-related entities: the opinion expressions, the opinion holders, and the targets of the opinions, and the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction.",
"title": ""
},
{
"docid": "2a2bc3ccb5217c16f278011cbe7dcf2a",
"text": "The problem of Big Data in cyber security (i.e., too much network data to analyze) compounds itself every day. Our approach is based on a fundamental characteristic of Big Data: an overwhelming majority of the network traffic in a traditionally secured enterprise (i.e., using defense-in-depth) is non-malicious. Therefore, one way of eliminating the Big Data problem in cyber security is to ignore the overwhelming majority of an enterprise's non-malicious network traffic and focus only on the smaller amounts of suspicious or malicious network traffic. Our approach uses simple clustering along with a dataset enriched with known malicious domains (i.e., anchors) to accurately and quickly filter out the non-suspicious network traffic. Our algorithm has demonstrated the predictive ability to accurately filter out approximately 97% (depending on the algorithm used) of the non-malicious data in millions of Domain Name Service (DNS) queries in minutes and identify the small percentage of unseen suspicious network traffic. We demonstrate that the resulting network traffic can be analyzed with traditional reputation systems, blacklists, or in-house threat tracking sources (we used virustotal.com) to identify harmful domains that are being accessed from within the enterprise network. Specifically, our results show that the method can reduce a dataset of 400k query-answer domains (with complete malicious domain ground truth) down to only 3% containing 99% of all malicious domains. Further, we demonstrate that this capability scales to 10 million query-answer pairs, which it can reduce by 97% in less than an hour.",
"title": ""
},
{
"docid": "8841e54fab263088b55b175a0c148b19",
"text": "In their seminal work Active Learning: Creating Excitement in the Classroom, compiled in 1991 for the Association for the Study of Higher Education and the ERIC Clearinghouse on Higher Education, Bonwell and Eison defined strategies that promote active learning as “instructional activities involving students in doing things and thinking about what they are doing” (Bonwell and Eison, 1991). Approaches that promote active learning focus more on developing students’ skills than on transmitting information and require that students do something—read, discuss, write—that requires higher-order thinking. They also tend to place some emphasis on students’ explorations of their own attitudes and values. This definition is broad, and Bonwell and Eison explicitly recognize that a range of activities can fall within it. They suggest a spectrum of activities to promote active learning, ranging from very simple (e.g., pausing lecture to allow students to clarify and organize their ideas by discussing with neighbors) to more complex (e.g., using case studies as a focal point for decision-making). In their book Scientific Teaching, Handelsman, Miller and Pfund also note that the line between active learning and formative assessment is blurry and hard to define; after all, teaching that promotes students’ active learning asks students to do or produce something, which then can serve to help assess understanding (2007).",
"title": ""
},
{
"docid": "3481067aa5e7e10095f4cdb782e061b4",
"text": "We empirically explored the roles and scope of knowledge management systems in organizations. Building on a knowledgebased view of the firm, we hypothesized and empirically tested our belief that more integration is needed between technologies intended to support knowledge and those supporting business operations. Findings from a Delphi study and in-depth interviews illustrated this and led us to suggest a revised approach to developing organizational knowledge management systems. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3394eb51b71e5def4e4637963da347ab",
"text": "In this paper we present a model of e-learning suitable for teacher training sessions. The main purpose of our work is to define the components of the educational system which influences the successful adoption of e-learning in the field of education. We also present the factors of the readiness of e-learning mentioned in the literature available and classifies them into the 3 major categories that constitute the components of every organization and consequently that of education. Finally, we present an implementation model of e-learning through the use of virtual private networks, which lends an added value to the realization of e-learning.",
"title": ""
},
{
"docid": "4595c3661cdf3b38df430de48792af91",
"text": "Dynamic Difficulty Adjustment (DDA) consists in an alternative to the static game balancing performed in game design. DDA is done during execution, tracking the player's performance and adjusting the game to present proper challenges to the player. This approach seems appropriate to increase the player entertainment, since it provides balanced challenges, avoiding boredom or frustration during the gameplay. This paper presents a mechanism to perform the dynamic difficulty adjustment during a game match. The idea is to dynamically change the game AI, adapting it to the player skills. We implemented three different AIs to match player behaviors: beginner, regular and experienced in the game Defense of the Ancient (DotA), a modification (MOD) of the game Warcraft III. We performed a series of experiments and, after comparing all results, the presented mechanism was able to keep up with the player's abilities on 85% of all experiments. The remaining 15% failed to suit the player's need because the adjustment did not occur on the right moment.",
"title": ""
},
{
"docid": "001b3155f0d67fd153173648cd483ac2",
"text": "A new approach to the problem of multimodality medical image registration is proposed, using a basic concept from information theory, mutual information (MI), or relative entropy, as a new matching criterion. The method presented in this paper applies MI to measure the statistical dependence or information redundancy between the image intensities of corresponding voxels in both images, which is assumed to be maximal if the images are geometrically aligned. Maximization of MI is a very general and powerful criterion, because no assumptions are made regarding the nature of this dependence and no limiting constraints are imposed on the image content of the modalities involved. The accuracy of the MI criterion is validated for rigid body registration of computed tomography (CT), magnetic resonance (MR), and photon emission tomography (PET) images by comparison with the stereotactic registration solution, while robustness is evaluated with respect to implementation issues, such as interpolation and optimization, and image content, including partial overlap and image degradation. Our results demonstrate that subvoxel accuracy with respect to the stereotactic reference solution can be achieved completely automatically and without any prior segmentation, feature extraction, or other preprocessing steps which makes this method very well suited for clinical applications.",
"title": ""
},
{
"docid": "0028061d8bd57be4aaf6a01995b8c3bb",
"text": "Steganography is the art of concealing the existence of information within seemingly harmless carriers. It is a method similar to covert channels, spread spectrum communication and invisible inks which adds another step in security. A message in cipher text may arouse suspicion while an invisible message will not. A digital image is a flexible medium used to carry a secret message because the slight modification of a cover image is hard to distinguish by human eyes. In this paper, we propose a revised version of information hiding scheme using Sudoku puzzle. The original work was proposed by Chang et al. in 2008, and their work was inspired by Zhang and Wang's method and Sudoku solutions. Chang et al. successfully used Sudoku solutions to guide cover pixels to modify pixel values so that secret messages can be embedded. Our proposed method is a modification of Chang et al’s method. Here a 27 X 27 Reference matrix is used instead of 256 X 256 reference matrix as proposed in the previous method. The earlier version is for a grayscale image but the proposed method is for a colored image.",
"title": ""
},
{
"docid": "ace30c4ad4a74f1ba526b4868e47b5c5",
"text": "China and India are home to two of the world's largest populations, and both populations are aging rapidly. Our data compare health status, risk factors, and chronic diseases among people age forty-five and older in China and India. By 2030, 65.6 percent of the Chinese and 45.4 percent of the Indian health burden are projected to be borne by older adults, a population with high levels of noncommunicable diseases. Smoking (26 percent in both China and India) and inadequate physical activity (10 percent and 17.7 percent, respectively) are highly prevalent. Health policy and interventions informed by appropriate data will be needed to avert this burden.",
"title": ""
},
{
"docid": "b1a384176d320576ec8bc398474f5e68",
"text": "Concept mapping (a mixed qualitative–quantitative methodology) was used to describe and understand the psychosocial experiences of adults with confirmed and self-identified dyslexia. Using innovative processes of art and photography, Phase 1 of the study included 15 adults who participated in focus groups and in-depth interviews and were asked to elucidate their experiences with dyslexia. On index cards, 75 statements and experiences with dyslexia were recorded. The second phase of the study included 39 participants who sorted these statements into self-defined categories and rated each statement to reflect their personal experiences to produce a visual representation, or concept map, of their experience. The final concept map generated nine distinct cluster themes: Organization Skills for Success; Finding Success; A Good Support System Makes the Difference; On Being Overwhelmed; Emotional Downside; Why Can’t They See It?; Pain, Hurt, and Embarrassment From Past to Present; Fear of Disclosure; and Moving Forward. Implications of these findings are discussed.",
"title": ""
},
{
"docid": "51c7350e13aa4a65fc99b24fd8f0e8b9",
"text": "Fog computing is deemed as a highly virtualized paradigm that can enable computing at the Internet of Things devices, residing in the edge of the network, for the purpose of delivering services and applications more efficiently and effectively. Since fog computing originates from and is a non-trivial extension of cloud computing, it inherits many security and privacy challenges of cloud computing, causing the extensive concerns in the research community. To enable authentic and confidential communications among a group of fog nodes, in this paper, we propose an efficient key exchange protocol based on ciphertext-policy attribute-based encryption (CP-ABE) to establish secure communications among the participants. To achieve confidentiality, authentication, verifiability, and access control, we combine CP-ABE and digital signature techniques. We analyze the efficiency of our protocol in terms of security and performance. We also implement our protocol and compare it with the certificate-based scheme to illustrate its feasibility.",
"title": ""
},
{
"docid": "a5e3e238932cd4bfb8b26da579e1ec9b",
"text": "A broadband, high efficiency push-pull power amplifier is presented between 0.5 GHz and 1.5 GHz. Coaxial cable transmission line baluns are utilised to transform the impedance environment of the transistors down to 25 Ω, greatly simplifying the matching, whilst still providing a 50 Ω environment to interface with other components. Using packaged GaN HEMT transistors, typical output powers of 45 dBm and efficiencies of 44% to 75% have been measured across a 3:1 bandwidth. The small signal input match is less than −10 dB and small signal gain is greater than 10 dB across the entire band.",
"title": ""
},
{
"docid": "80d9439987b7eac8cf021be7dc533ec9",
"text": "While previous studies have investigated the determinants and consequences of online trust, online distrust has seldom been studied. Assuming that the positive antecedents of online trust are necessarily negative antecedents of online distrust or that positive consequences of online trust are necessarily negatively affected by online distrust is inappropriate. This study examines the different antecedents of online trust and distrust in relation to consumer and website characteristics. Moreover, this study further examines whether online trust and distrust asymmetrically affect behaviors with different risk levels. A model is developed and tested using a survey of 1,153 online consumers. LISREL was employed to test the proposed model. Overall, different consumer and website characteristics influence online trust and distrust, and online trust engenders different behavioral outcomes to online distrust. The authors also discuss the theoretical and managerial implications of the study findings.",
"title": ""
},
{
"docid": "33390e96d05644da201db3edb3ad7338",
"text": "This paper addresses the difficult problem of finding an optimal neural architecture design for a given image classification task. We propose a method that aggregates two main results of the previous state-of-the-art in neural architecture search. These are, appealing to the strong sampling efficiency of a search scheme based on sequential modelbased optimization (SMBO) [15], and increasing training efficiency by sharing weights among sampled architectures [18]. Sequential search has previously demonstrated its capabilities to find state-of-the-art neural architectures for image classification. However, its computational cost remains high, even unreachable under modest computational settings. Affording SMBO with weight-sharing alleviates this problem. On the other hand, progressive search with SMBO is inherently greedy, as it leverages a learned surrogate function to predict the validation error of neural architectures. This prediction is directly used to rank the sampled neural architectures. We propose to attenuate the greediness of the original SMBO method by relaxing the role of the surrogate function so it predicts architecture sampling probability instead. We demonstrate with experiments on the CIFAR-10 dataset that our method, denominated Efficient progressive neural architecture search (EPNAS), leads to increased search efficiency, while retaining competitiveness of found architectures.",
"title": ""
},
{
"docid": "b45aae55cc4e7bdb13463eff7aaf6c60",
"text": "Text retrieval systems typically produce a ranking of documents and let a user decide how far down that ranking to go. In contrast, programs that filter text streams, software that categorizes documents, agents which alert users, and many other IR systems must make decisions without human input or supervision. It is important to define what constitutes good effectiveness for these autonomous systems, tune the systems to achieve the highest possible effectiveness, and estimate how the effectiveness changes as new data is processed. We show how to do this for binary text classification systems, emphasizing that different goals for the system lead to different optimal behaviors. Optimizing and estimating effectiveness is greatly aided if classifiers that explicitly estimate the probability of class membership are used.",
"title": ""
},
{
"docid": "7c11bd23338b6261f44319198fcdc082",
"text": "Zooplankton are quite significant to the ocean ecosystem for stabilizing balance of the ecosystem and keeping the earth running normally. Considering the significance of zooplantkon, research about zooplankton has caught more and more attentions. And zooplankton recognition has shown great potential for science studies and mearsuring applications. However, manual recognition on zooplankton is labour-intensive and time-consuming, and requires professional knowledge and experiences, which can not scale to large-scale studies. Deep learning approach has achieved remarkable performance in a number of object recognition benchmarks, often achieveing the current best performance on detection or classification tasks and the method demonstrates very promising and plausible results in many applications. In this paper, we explore a deep learning architecture: ZooplanktoNet to classify zoolankton automatically and effectively. The deep network is characterized by capturing more general and representative features than previous predefined feature extraction algorithms in challenging classification. Also, we incorporate some data augmentation to aim at reducing the overfitting for lacking of zooplankton images. And we decide the zooplankton class according to the highest score in the final predictions of ZooplanktoNet. Experimental results demonstrate that ZooplanktoNet can solve the problem effectively with accuracy of 93.7% in zooplankton classification.",
"title": ""
}
] |
scidocsrr
|
1d3abb3ca2bc9848cec02d1ffd0ec890
|
Distributed denial of service (DDoS) resilience in cloud: Review and conceptual cloud DDoS mitigation framework
|
[
{
"docid": "28899946726bc1e665298f09ea9e654d",
"text": "This paper presents a simple and robust mechanism, called change-point monitoring (CPM), to detect denial of service (DoS) attacks. The core of CPM is based on the inherent network protocol behavior and is an instance of the sequential change point detection. To make the detection mechanism insensitive to sites and traffic patterns, a nonparametric cumulative sum (CUSUM) method is applied, thus making the detection mechanism robust, more generally applicable, and its deployment much easier. CPM does not require per-flow state information and only introduces a few variables to record the protocol behaviors. The statelessness and low computation overhead of CPM make itself immune to any flooding attacks. As a case study, the efficacy of CPM is evaluated by detecting a SYN flooding attack - the most common DoS attack. The evaluation results show that CPM has short detection latency and high detection accuracy",
"title": ""
},
{
"docid": "f6db2eaa4877a482b18ca14a5bc0524d",
"text": "Safety and reliability are important in the cloud computing environment. This is especially true today as distributed denial-of-service (DDoS) attacks constitute one of the largest threats faced by Internet users and cloud computing services. DDoS attacks target the resources of these services, lowering their ability to provide optimum usage of the network infrastructure. Due to the nature of cloud computing, the methodologies for preventing or stopping DDoS attacks are quite different compared to those used in traditional networks. In this paper, we investigate the effect of DDoS attacks on cloud resources and recommend practical defense mechanisms against different types of DDoS attacks in the cloud environment.",
"title": ""
}
] |
[
{
"docid": "46360fec3d7fa0adbe08bb4b5bb05847",
"text": "Previous approaches to action recognition with deep features tend to process video frames only within a small temporal region, and do not model long-range dynamic information explicitly. However, such information is important for the accurate recognition of actions, especially for the discrimination of complex activities that share sub-actions, and when dealing with untrimmed videos. Here, we propose a representation, VLAD for Deep Dynamics (VLAD3), that accounts for different levels of video dynamics. It captures short-term dynamics with deep convolutional neural network features, relying on linear dynamic systems (LDS) to model medium-range dynamics. To account for long-range inhomogeneous dynamics, a VLAD descriptor is derived for the LDS and pooled over the whole video, to arrive at the final VLAD3 representation. An extensive evaluation was performed on Olympic Sports, UCF101 and THUMOS15, where the use of the VLAD3 representation leads to state-of-the-art results.",
"title": ""
},
{
"docid": "06499372aac4f329e1b96512587ac37d",
"text": "This study focuses on the task of multipassage reading comprehension (RC) where an answer is provided in natural language. Current mainstream approaches treat RC by extracting the answer span from the provided passages and cannot generate an abstractive summary from the given question and passages. Moreover, they cannot utilize and control different styles of answers, such as concise phrases and well-formed sentences, within a model. In this study, we propose a style-controllable Multi-source Abstractive Summarization model for QUEstion answering, called Masque. The model is an end-toend deep neural network that can generate answers conditioned on a given style. Experiments with MS MARCO 2.1 show that our model achieved state-of-the-art performance on two tasks with different answer styles.",
"title": ""
},
{
"docid": "919483807937c5aed6f4529b0db29540",
"text": "Tabular data is an abundant source of information on the Web, but remains mostly isolated from the latter’s interconnections since tables lack links and computer-accessible descriptions of their structure. In other words, the schemas of these tables — attribute names, values, data types, etc. — are not explicitly stored as table metadata. Consequently, the structure that these tables contain is not accessible to the crawlers that power search engines and thus not accessible to user search queries. We address this lack of structure with a new method for leveraging the principles of table construction in order to extract table schemas. Discovering the schema by which a table is constructed is achieved by harnessing the similarities and differences of nearby table rows through the use of a novel set of features and a feature processing scheme. The schemas of these data tables are determined using a classification technique based on conditional random fields in combination with a novel feature encoding method called logarithmic binning, which is specifically designed for the data table extraction task. Our method provides considerable improvement over the wellknown WebTables schema extraction method. In contrast with previous work that focuses on extracting individual relations, our method excels at correctly interpreting full tables, thereby being capable of handling general tables such as those found in spreadsheets, instead of being restricted to HTML tables as is the case with the WebTables method. We also extract additional schema characteristics, such as row groupings, which are important for supporting information retrieval tasks on tabular data.",
"title": ""
},
{
"docid": "37936de50a1d3fa8612a465b6644c282",
"text": "Nature uses a limited, conservative set of amino acids to synthesize proteins. The ability to genetically encode an expanded set of building blocks with new chemical and physical properties is transforming the study, manipulation and evolution of proteins, and is enabling diverse applications, including approaches to probe, image and control protein function, and to precisely engineer therapeutics. Underpinning this transformation are strategies to engineer and rewire translation. Emerging strategies aim to reprogram the genetic code so that noncanonical biopolymers can be synthesized and evolved, and to test the limits of our ability to engineer the translational machinery and systematically recode genomes.",
"title": ""
},
{
"docid": "8404b6b5abcbb631398898e81beabea1",
"text": "As a result of agricultural intensification, more food is produced today than needed to feed the entire world population and at prices that have never been so low. Yet despite this success and the impact of globalization and increasing world trade in agriculture, there remain large, persistent and, in some cases, worsening spatial differences in the ability of societies to both feed themselves and protect the long-term productive capacity of their natural resources. This paper explores these differences and develops a countryxfarming systems typology for exploring the linkages between human needs, agriculture and the environment, and for assessing options for addressing future food security, land use and ecosystem service challenges facing different societies around the world.",
"title": ""
},
{
"docid": "af6544e8f0ed8ea0dbda5d843ed628dc",
"text": "This paper presents an image segmentation method for the manual identification of squamous epithelium from cervical cancer images. The present study was utilizing the various feature extraction techniques including texture feature, triangle feature and profile based correlation features. Generated features are used to classifying the squamous epithelium into normal, Cervical Intraepithelial Neoplasia (CIN1), CIN2 and CIN3. The results are used to classify the images into: 1) normal, 2) pre-cancer. The final system will take as input a biopsy image of the cervix containing the epithelium layer and provide the classification using our approach, to assist the pathologist in cervical cancer diagnosis. Keywords— Histology, Cervical Intraepithelial Neoplasia, Human Papillomavirus.",
"title": ""
},
{
"docid": "5b1214a8ede20c32b3fdb296e1382a0f",
"text": "Neural networks (NNs) have been adopted in a wide range of application domains, such as image classification, speech recognition, object detection, and computer vision. However, training NNs – especially deep neural networks (DNNs) – can be energy and time consuming, because of frequent data movement between processor and memory. Furthermore, training involves massive fine-grained operations with various computation and memory access characteristics. Exploiting high parallelism with such diverse operations is challenging. To address these challenges, we propose a software/hardware co-design of heterogeneous processing-in-memory (PIM) system. Our hardware design incorporates hundreds of fix-function arithmetic units and ARM-based programmable cores on the logic layer of a 3D die-stacked memory to form a heterogeneous PIM architecture attached to CPU. Our software design offers a programming model and a runtime system that program, offload, and schedule various NN training operations across compute resources provided by CPU and heterogeneous PIM. By extending the OpenCL programming model and employing a hardware heterogeneity-aware runtime system, we enable high program portability and easy program maintenance across various heterogeneous hardware, optimize system energy efficiency, and improve hardware utilization.",
"title": ""
},
{
"docid": "8e0ec02b22243b4afb04a276712ff6cf",
"text": "1 Morphology with or without Affixes The last few years have seen the emergence of several clearly articulated alternative approaches to morphology. One such approach rests on the notion that only stems of the so-called lexical categories (N, V, A) are morpheme \"pieces\" in the traditional sense—connections between (bundles of) meaning (features) and (bundles of) sound (features). What look like affixes on this view are merely the by-product of morphophonological rules called word formation rules (WFRs) that are sensitive to features associated with the lexical categories, called lexemes. Such an amorphous or affixless theory, adumbrated by Beard (1966) and Aronoff (1976), has been articulated most notably by Anderson (1992) and in major new studies by Aronoff (1992) and Beard (1991). In contrast, Lieber (1992) has refined the traditional notion that affixes as well as lexical stems are \"mor-pheme\" pieces whose lexical entries relate phonological form with meaning and function. For Lieber and other \"lexicalists\" (see, e.g., Jensen 1990), the combining of lexical items creates the words that operate in the syntax. In this paper we describe and defend a third theory of morphology , Distributed Morphology, 1 which combines features of the affixless and the lexicalist alternatives. With Anderson, Beard, and Aronoff, we endorse the separation of the terminal elements involved in the syntax from the phonological realization of these elements. With Lieber and the lexicalists, on the other hand, we take the phonological realization of the terminal elements in the syntax to be governed by lexical (Vocabulary) entries that relate bundles of morphosyntactic features to bundles of pho-nological features. We have called our approach Distributed Morphology (hereafter DM) to highlight the fact that the machinery of what traditionally has been called morphology is not concentrated in a single component of the gram",
"title": ""
},
{
"docid": "ba314edceb1b8ac00f94ad0037bd5b8e",
"text": "AMS subject classifications: primary 62G10 secondary 62H20 Keywords: dCor dCov Multivariate independence Distance covariance Distance correlation High dimension a b s t r a c t Distance correlation is extended to the problem of testing the independence of random vectors in high dimension. Distance correlation characterizes independence and determines a test of multivariate independence for random vectors in arbitrary dimension. In this work, a modified distance correlation statistic is proposed, such that under independence the distribution of a transformation of the statistic converges to Student t, as dimension tends to infinity. Thus we obtain a distance correlation t-test for independence of random vectors in arbitrarily high dimension, applicable under standard conditions on the coordinates that ensure the validity of certain limit theorems. This new test is based on an unbiased es-timator of distance covariance, and the resulting t-test is unbiased for every sample size greater than three and all significance levels. The transformed statistic is approximately normal under independence for sample size greater than nine, providing an informative sample coefficient that is easily interpretable for high dimensional data. 1. Introduction Many applications in genomics, medicine, engineering, etc. require analysis of high dimensional data. Time series data can also be viewed as high dimensional data. Objects can be represented by their characteristics or features as vectors p. In this work, we consider the extension of distance correlation to the problem of testing independence of random vectors in arbitrarily high, not necessarily equal dimensions, so the dimension p of the feature space of a random vector is typically large. measure all types of dependence between random vectors in arbitrary, not necessarily equal dimensions. (See Section 2 for definitions.) Distance correlation takes values in [0, 1] and is equal to zero if and only if independence holds. It is more general than the classical Pearson product moment correlation, providing a scalar measure of multivariate independence that characterizes independence of random vectors. The distance covariance test of independence is consistent against all dependent alternatives with finite second moments. In practice, however, researchers are often interested in interpreting the numerical value of distance correlation, without a formal test. For example, given an array of distance correlation statistics, what can one learn about the strength of dependence relations from the dCor statistics without a formal test? This is in fact, a difficult question, but a solution is finally available for a large class of problems. The …",
"title": ""
},
{
"docid": "6b1dc94c4c70e1c78ea32a760b634387",
"text": "3d reconstruction from a single image is inherently an ambiguous problem. Yet when we look at a picture, we can often infer 3d information about the scene. Humans perform single-image 3d reconstructions by using a variety of singleimage depth cues, for example, by recognizing objects and surfaces, and reasoning about how these surfaces are connected to each other. In this paper, we focus on the problem of automatic 3d reconstruction of indoor scenes, specifically ones (sometimes called “Manhattan worlds”) that consist mainly of orthogonal planes. We use a Markov random field (MRF) model to identify the different planes and edges in the scene, as well as their orientations. Then, an iterative optimization algorithm is applied to infer the most probable position of all the planes, and thereby obtain a 3d reconstruction. Our approach is fully automatic—given an input image, no human intervention is necessary to obtain an approximate 3d reconstruction.",
"title": ""
},
{
"docid": "1b7e958b7505129a150da0e186e5d022",
"text": "The study of complex adaptive systems, from cells to societies, is a study of the interplay among processes operating at diverse scales of space, time and organizational complexity. The key to such a study is an understanding of the interrelationships between microscopic processes and macroscopic patterns, and the evolutionary forces that shape systems. In particular, for ecosystems and socioeconomic systems, much interest is focused on broad scale features such as diversity and resiliency, while evolution operates most powerfully at the level of individual agents. Understanding the evolution and development of complex adaptive systems thus involves understanding how cooperation, coalitions and networks of interaction emerge from individual behaviors and feed back to influence those behaviors. In this paper, some of the mathematical challenges are discussed.",
"title": ""
},
{
"docid": "825267f8485bd5d5beefef55cecf770d",
"text": "Mobile phones are rapidly becoming small-size general purpose computers, so-called smartphones. However, applications and data stored on mobile phones are less protected from unauthorized access than on most desktop and mobile computers. This paper presents a survey on users' security needs, awareness and concerns in the context of mobile phones. It also evaluates acceptance and perceived protection of existing and novel authentication methods. The responses from 465 participants reveal that users are interested in increased security and data protection. The current protection by using PIN (Personal Identification Number) is perceived as neither adequate nor convenient in all cases. The sensitivity of data stored on the devices varies depending on the data type and the context of use, asking for the need for another level of protection. According to these findings, a two-level security model for mobile phones is proposed. The model provides differential data and service protection by utilizing existing capabilities of a mobile phone for authenticating users.",
"title": ""
},
{
"docid": "64122833d6fa0347f71a9abff385d569",
"text": "We present a brief history and overview of statistical methods in frame-semantic parsing – the automatic analysis of text using the theory of frame semantics. We discuss how the FrameNet lexicon and frameannotated datasets have been used by statistical NLP researchers to build usable, state-of-the-art systems. We also focus on future directions in frame-semantic parsing research, and discuss NLP applications that could benefit from this line of work. 1 Frame-Semantic Parsing Frame-semantic parsing has been considered as the task of automatically finding semantically salient targets in text, disambiguating their semantic frame representing an event and scenario in discourse, and annotating arguments consisting of words or phrases in text with various frame elements (or roles). The FrameNet lexicon (Baker et al., 1998), an ontology inspired by the theory of frame semantics (Fillmore, 1982), serves as a repository of semantic frames and their roles. Figure 1 depicts a sentence with three evoked frames for the targets “million”, “created” and “pushed” with FrameNet frames and roles. Automatic analysis of text using framesemantic structures can be traced back to the pioneering work of Gildea and Jurafsky (2002). Although their experimental setup relied on a primitive version of FrameNet and only made use of “exemplars” or example usages of semantic frames (containing one target per sentence) as opposed to a “corpus” of sentences, it resulted in a flurry of work in the area of automatic semantic role labeling (Màrquez et al., 2008). However, the focus of semantic role labeling (SRL) research has mostly been on PropBank (Palmer et al., 2005) conventions, where verbal targets could evoke a “sense” frame, which is not shared across targets, making the frame disambiguation setup different from the representation in FrameNet. Furthermore, it is fair to say that early research on PropBank focused primarily on argument structure prediction, and the interaction between frame and argument structure analysis has mostly been unaddressed (Màrquez et al., 2008). There are exceptions, where the verb frame has been taken into account during SRL (Meza-Ruiz and Riedel, 2009; Watanabe et al., 2010). Moreoever, the CoNLL 2008 and 2009 shared tasks also include the verb and noun frame identification task in their evaluations, although the overall goal was to predict semantic dependencies based on PropBank, and not full argument spans (Surdeanu et al., 2008; Hajič",
"title": ""
},
{
"docid": "9978f33847a09c651ccce68c3b88287f",
"text": "We propose a method for discovering the dependency relationships between the topics of documents shared in social networks using the latent social interactions, attempting to answer the question: given a seemingly new topic, from where does this topic evolve? In particular, we seek to discover the pair-wise probabilistic dependency in topics of documents which associate social actors from a latent social network, where these documents are being shared. By viewing the evolution of topics as a Markov chain, we estimate a Markov transition matrix of topics by leveraging social interactions and topic semantics. Metastable states in a Markov chain are applied to the clustering of topics. Applied to the CiteSeer dataset, a collection of documents in academia, we show the trends of research topics, how research topics are related and which are stable. We also show how certain social actors, authors, impact these topics and propose new ways for evaluating author impact.",
"title": ""
},
{
"docid": "d60deca88b46171ad940b9ee8964dc77",
"text": "Established in 1987, the EuroQol Group initially comprised a network of international, multilingual and multidisciplinary researchers from seven centres in Finland, the Netherlands, Norway, Sweden and the UK. Nowadays, the Group comprises researchers from Canada, Denmark, Germany, Greece, Japan, New Zealand, Slovenia, Spain, the USA and Zimbabwe. The process of shared development and local experimentation resulted in EQ-5D, a generic measure of health status that provides a simple descriptive profile and a single index value that can be used in the clinical and economic evaluation of health care and in population health surveys. Currently, EQ-5D is being widely used in different countries by clinical researchers in a variety of clinical areas. EQ-5D is also being used by eight out of the first 10 of the top 50 pharmaceutical companies listed in the annual report of Pharma Business (November/December 1999). Furthermore, EQ-5D is one of the handful of measures recommended for use in cost-effectiveness analyses by the Washington Panel on Cost Effectiveness in Health and Medicine. EQ-5D has now been translated into most major languages with the EuroQol Group closely monitoring the process.",
"title": ""
},
{
"docid": "e64d177c2898aee78fbe0f06ef61c373",
"text": "For both humans and robots, tactile sensing is important for interaction with the environment: it is the core sensing used for exploration and manipulation of objects. In this paper, we present a novel tactile-array sensor based on flexible piezoresistive rubber. We describe the design of the sensor and data acquisition system.We evaluate the sensitivity and robustness of the sensor, and show that it is consistent over time with little relaxation. Furthermore, the sensor has the benefit of being flexible, having a high resolution, it is easy to mount, and simple to manufacture. We demonstrate the use of the sensor in an active object-classification system. A robotic gripper with two sensors mounted on its fingers performs a palpation procedure on a set of objects. By squeezing an object, the robot actively explores the material properties, and the system acquires tactile information corresponding to the resulting pressure. Based on a k nearest neighbor classifier and using dynamic time warping to calculate the distance between different time series, the system is able to successfully classify objects. Our sensor demonstrates similar classification performance to the Weiss Robotics tactile sensor, while having additional benefits. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4fe6f1a5b2591600300c0db3c48341d9",
"text": "Satisficing, as an approach to decision-making under uncertainty, aims at achieving solutions that satisfy the problem’s constraints as well as possible. Mathematical optimization problems that are related to this form of decision-making include the P-model of Charnes and Cooper (1963), where satisficing is the objective, as well as chance-constrained and robust optimization problems, where satisficing is articulated in the constraints. In this paper, we first propose the R-model, where satisficing is the objective, and where the problem consists in finding the most “robust” solution, feasible in the problem’s constraints when uncertain outcomes arise over a maximally sized uncertainty set. We then study the key features of satisficing decision making that are associated with these problems and provide the complete functional characterization of a satisficing decision criterion. As a consequence, we are able to provide the most general framework of a satisficing model, termed the S-model, which seeks to maximize a satisficing decision criterion in its objective, and the corresponding satisficing-constrained optimization problem that generalizes robust optimization and chance-constrained optimization problems. Next, we focus on a tractable probabilistic S-model, termed the T-model whose objective is a lower bound of the P-model. We show that when probability densities of the uncertainties are log-concave, the T-model can admit a tractable concave objective function. In the case of discrete probability distributions, the T-model is a linear mixed integer program of moderate dimensions. We also show how the T-model can be extended to multi-stage decision-making and present the conditions under which the problem is computationally tractable. Our computational experiments on a stochastic maximum coverage problem strongly suggest that the T-model solutions can be highly effective, thus allaying misconceptions of having to pay a high price for the satisficing models in terms of solution conservativeness.",
"title": ""
},
{
"docid": "f11aff32623c92c07af2bab89a3e7f6d",
"text": "(1995). Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Distributed memory and the representation of general and specific information. On the interaction of selective attention and lexical knowledge: a connectionist account of neglect dyslexia. Implicit and explicit memory in amnesia: some explanations and predictions of the tracelink model. On language and connectionism: analysis of a parallel distributed processing model of language acquisition. From rote learning to system building: acquiring verb morphology in children and connectionist nets. From simple associations to systematic reasoning: a connectionist representation of rules, variables and dynamic bindings using temporal synchrony. Constituent attachment and thematic role assignment in sentence processing: influences of content-based expectations. Symbolic cognitive models are theories of human cognition that take the form of working computer programs. A cogni-tive model is intended to be an explanation of how some aspect of cognition is accomplished by a set of primitive computational processes. A model performs a specific cog-nitive task or class of tasks and produces behavior that constitutes a set of predictions that can be compared to data from human performance. Task domains that have received considerable attention include problem solving, language comprehension, memory tasks, and human-device interaction. The scientific questions cognitive modeling seeks to answer belong to cognitive psychology, and the computational techniques are often drawn from artificial intelligence. Cognitive modeling differs from other forms of",
"title": ""
},
{
"docid": "0d95c132ff0dcdb146ed433987c426cf",
"text": "A smart connected car in conjunction with the Internet of Things (IoT) is an emerging topic. The fundamental concept of the smart connected car is connectivity, and such connectivity can be provided by three aspects, such as Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), and Vehicle-to-Everything (V2X). To meet the aspects of V2V and V2I connectivity, we developed modules in accordance with international standards with respect to On-Board Diagnostics II (OBDII) and 4G Long Term Evolution (4G-LTE) to obtain and transmit vehicle information. We also developed software to visually check information provided by our modules. Information related to a user’s driving, which is transmitted to a cloud-based Distributed File System (DFS), was then analyzed for the purpose of big data analysis to provide information on driving habits to users. Yet, since this work is an ongoing research project, we focus on proposing an idea of system architecture and design in terms of big data analysis. Therefore, our contributions through this work are as follows: (1) Develop modules based on Controller Area Network (CAN) bus, OBDII, and 4G-LTE; (2) Develop software to check vehicle information on a PC; (3) Implement a database related to vehicle diagnostic codes; (4) Propose system architecture and design for big data analysis.",
"title": ""
},
{
"docid": "0774820345f37dd1ae474fc4da1a3a86",
"text": "Several diseases and disorders are treatable with therapeutic proteins, but some of these products may induce an immune response, especially when administered as multiple doses over prolonged periods. Antibodies are created by classical immune reactions or by the breakdown of immune tolerance; the latter is characteristic of human homologue products. Many factors influence the immunogenicity of proteins, including structural features (sequence variation and glycosylation), storage conditions (denaturation, or aggregation caused by oxidation), contaminants or impurities in the preparation, dose and length of treatment, as well as the route of administration, appropriate formulation and the genetic characteristics of patients. The clinical manifestations of antibodies directed against a given protein may include loss of efficacy, neutralization of the natural counterpart and general immune system effects (including allergy, anaphylaxis or serum sickness). An upsurge in the incidence of antibody-mediated pure red cell aplasia (PRCA) among patients taking one particular formulation of recombinant human erythropoietin (epoetin-alpha, marketed as Eprex(R)/Erypo(R); Johnson & Johnson) in Europe caused widespread concern. The PRCA upsurge coincided with removal of human serum albumin from epoetin-alpha in 1998 and its replacement with glycine and polysorbate 80. Although the immunogenic potential of this particular product may have been enhanced by the way the product was stored, handled and administered, it should be noted that the subcutaneous route of administration does not confer immunogenicity per se. The possible role of micelle (polysorbate 80 plus epoetin-alpha) formation in the PRCA upsurge with Eprex is currently being investigated.",
"title": ""
}
] |
scidocsrr
|
8b6281d80e857bb19d097a8fc92ed388
|
Resource management for Infrastructure as a Service (IaaS) in cloud computing: A survey
|
[
{
"docid": "3bda49f976e4b2de51c0852a9afc12ab",
"text": "Cloud platforms offer resource utilization as on demand service, which lays the foundation for applications to scale during runtime. However, just-in-time scalability is not achieved by simply deploying applications to cloud platforms. Existing approaches require developers to rewrite their applications to leverage the on-demand resource utilization, thus bind applications to specific cloud infrastructure. In this paper, profiles are used to capture experts’ knowledge of scaling different types of applications. The profile-based approach automates the deployment and scaling of applications in cloud. Just-in-time scalability is achieved without binding to specific cloud infrastructure. A real case is used to demonstrate the process and feasibility of this profile-based approach.",
"title": ""
}
] |
[
{
"docid": "c591881de09c709ae2679cacafe24008",
"text": "This paper discusses a technique to estimate the position of a sniper using a spatial microphone array placed on elevated platforms. The shooter location is obtained from the exact location of the microphone array, from topographic information of the area and from an estimated direction of arrival (DoA) of the acoustic wave related to the explosion in the gun barrel, which is known as muzzle blast. The estimation of the DOA is based on the time differences the sound wavefront arrives at each pair of microphones, employing a technique known as Generalized Cross Correlation (GCC) with phase transform. The main idea behind the localization procedure used herein is that, based on the DoA, the acoustical path of the muzzle blast (from the weapon to the microphone) can be marked as a straight line on a terrain profile obtained from an accurate digital map, allowing the estimation of the shooter location whenever the microphone array is located on an dominant position. In addition, a new approach to improve the DoA estimation from a cognitive selection of microphones is introduced. In this technique, the microphones selected must form a consistent (sum of delays equal to zero) fundamental loop. The results obtained after processing muzzle blast gunshot signals recorded in a typical scenario, show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "de11d8c566fcb365bc1be6566f2574e7",
"text": "Helplessness in social situations was conceptualized as the perceived inability to surmount rejection, as revealed by causal attributions for rejection. Although current research on children's social adjustment emphasizes differences in social skills between popular and unpopular children or behavioral intervention as an aid for withdrawn children, the present study explores responses to rejection across popularity levels. The results show that individual differences in attributions for rejection are related to disruption of goal-directed behavior following rejection. As predicted, the most severe disruption of attempts to gain social approval (withdrawal and perseveration) was associated with the tendency to emphasize personal incompetence as the cause of rejection, regardless of popularity level. The findings suggest that cognitive mediators of overt social behavior and ability to solve problems when faced with difficulties need to be considered in the study of children's social relations.",
"title": ""
},
{
"docid": "49abf545b4af09c680c92fd52e1cbc92",
"text": "We propose a new olfactory display system that can generate an odor distribution on a two-dimensional display screen. The proposed system has four fans on the four corners of the screen. The airflows that are generated by these fans collide multiple times to create an airflow that is directed towards the user from a certain position on the screen. By introducing odor vapor into the airflows, the odor distribution is as if an odor source had been placed onto the screen. The generated odor distribution leads the user to perceive the odor as emanating from a specific region of the screen. The position of this virtual odor source can be shifted to an arbitrary position on the screen by adjusting the balance of the airflows from the four fans. Most users do not immediately notice the odor presentation mechanism of the proposed olfactory display system because the airflow and perceived odor come from the display screen rather than the fans. The airflow velocity can even be set below the threshold for airflow sensation, such that the odor alone is perceived by the user. We present experimental results that show the airflow field and odor distribution that are generated by the proposed system. We also report sensory test results to show how the generated odor distribution is perceived by the user and the issues that must be considered in odor presentation.",
"title": ""
},
{
"docid": "483c3e0bd9406baef7040cdc3399442d",
"text": "Composite resins have been shown to be susceptible to discolouration on exposure to oral environment over a period of time. Discolouration of composite resins can be broadly classified as intrinsic or extrinsic. Intrinsic discolouration involves physico-chemical alteration within the material, while extrinsic stains are a result of surface discolouration by extrinsic compounds. Although the effects of various substances on the colour stability of composite resins have been extensively investigated, little has been published on the methods of removing the composite resins staining. The purpose of this paper is to provide a brief literature review on the colour stability of composite resins and clinical approaches in the stain removal.",
"title": ""
},
{
"docid": "629deef69cda09fa256fb76a2bed41b6",
"text": "Learning a good representation of text is key to many recommendation applications. Examples include news recommendation where texts to be recommended are constantly published everyday. However, most existing recommendation techniques, such as matrix factorization based methods, mainly rely on interaction histories to learn representations of items. While latent factors of items can be learned eectively from user interaction data, in many cases, such data is not available, especially for newly emerged items. In this work, we aim to address the problem of personalized recommendation for completely new items with text information available. We cast the problem as a personalized text ranking problem and propose a general framework that combines text embedding with personalized recommendation. Users and textual content are embedded into latent feature space. e text embedding function can be learned end-to-end by predicting user interactions with items. To alleviate sparsity in interaction data, and leverage large amount of text data with lile or no user interactions, we further propose a joint text embedding model that incorporates unsupervised text embedding with a combination module. Experimental results show that our model can signicantly improve the eectiveness of recommendation systems on real-world datasets.",
"title": ""
},
{
"docid": "9c106d71e5c40c3338cf4acd1e142621",
"text": "Pomegranate peels were studied for the effect of gamma irradiation on microbial decontamination along with its effect on total phenolic content and in vitro antioxidant activity. Gamma irradiation was applied at various dose levels (5.0, 10.0, 15.0 and 25.0 kGy) on pomegranate peel powder. Both the values of total phenolic content and in vitro antioxidant activity were positively correlated and showed a significant increase (p < 0.05) for 10.0 kGy irradiated dose level immediately after irradiation and 60 days of post irradiation storage. At 5.0 kGy and above dose level, gamma irradiation has reduced microbial count of pomegranate peel powder to nil. Post irradiation storage studies also showed that, the irradiated peel powder was microbiologically safe even after 90 days of storage period.",
"title": ""
},
{
"docid": "61359ded391acaaaab0d4b9a0d851b8c",
"text": "A laparoscopic Heller myotomy with partial fundoplication is considered today in most centers in the United States and abroad the treatment of choice for patients with esophageal achalasia. Even though the operation has initially a very high success rate, dysphagia eventually recurs in some patients. In these cases, it is important to perform a careful work-up to identify the cause of the failure and to design a tailored treatment plan by either endoscopic means or revisional surgery. The best results are obtained by a team approach, in Centers where radiologists, gastroenterologists, and surgeons have experience in the diagnosis and treatment of this disease.",
"title": ""
},
{
"docid": "bac9584a31e42129fb7a5fe2640f5725",
"text": "During the last few years, continuous progresses in wireless communications have opened new research fields in computer networking, aimed at extending data networks connectivity to environments where wired solutions are impracticable. Among these, vehicular communication is attracting growing attention from both academia and industry, owing to the amount and importance of the related applications, ranging from road safety to traffic control and up to mobile entertainment. Vehicular Ad-hoc Networks (VANETs) are self-organized networks built up from moving vehicles, and are part of the broader class of Mobile Ad-hoc Networks (MANETs). Owing to their peculiar characteristics, VANETs require the definition of specific networking techniques, whose feasibility and performance are usually tested by means of simulation. One of the main challenges posed by VANETs simulations is the faithful characterization of vehicular mobility at both the macroscopic and microscopic levels, leading to realistic non-uniform distributions of cars and velocity, and unique connectivity dynamics. However, freely distributed tools which are commonly used for academic studies only consider limited vehicular mobility issues, while they pay little or no attention to vehicular traffic generation and its interaction with its motion constraints counterpart. Such a simplistic approach can easily raise doubts on the confidence of derived VANETs simulation results. In this paper we present VanetMobiSim, a freely available generator of realistic vehicular movement traces for networks simulators. The traces generated by VanetMobiSim are validated first by illustrating how the interaction between featured motion constraints and traffic generator models is able to reproduce typical phenomena of vehicular traffic. Then, the traces are formally validated against those obtained by TSIS-CORSIM, a benchmark traffic simulator in transportation research. This makes VanetMobiSim one of the few vehicular mobility simulator fully validated and freely available to the vehicular networks research community.",
"title": ""
},
{
"docid": "eb64f11d3795bd2e97eb6d440169a3f0",
"text": "Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337:a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others' positive experiences constitutes a positive experience for people.",
"title": ""
},
{
"docid": "c229a2ebe7ce4d8088b1decf596053c7",
"text": "We study the infinitely many-armed bandit problem with budget constraints, where the number of arms can be infinite and much larger than the number of possible experiments. The player aims at maximizing his/her total expected reward under a budget constraint B for the cost of pulling arms. We introduce a weak stochastic assumption on the ratio of expected-reward to expected-cost of a newly pulled arm which characterizes its probability of being a near-optimal arm. We propose an algorithm named RCB-I to this new problem, in which the player first randomly picks K arms, whose order is sub-linear in terms of B, and then runs the algorithm for the finite-arm setting on the selected arms. Theoretical analysis shows that this simple algorithm enjoys a sub-linear regret in term of the budget B. We also provide a lower bound of any algorithm under Bernoulli setting. The regret bound of RCB-I matches the lower bound up to a logarithmic factor. We further extend this algorithm to the any-budget setting (i.e., the budget is unknown in advance) and conduct corresponding theoretical analysis.",
"title": ""
},
{
"docid": "99f66f4ff6a8548a4cbdac39d5f54cc4",
"text": "Dissolution tests that can predict the in vivo performance of drug products are usually called biorelevant dissolution tests. Biorelevant dissolution testing can be used to guide formulation development, to identify food effects on the dissolution and bioavailability of orally administered drugs, and to identify solubility limitations and stability issues. To develop a biorelevant dissolution test for oral dosage forms, the physiological conditions in the gastrointestinal (GI) tract that can affect drug dissolution are taken into consideration according to the properties of the drug and dosage form. A variety of biorelevant methods in terms of media and hydrodynamics to simulate the contents and the conditions of the GI tract are presented. The ability of biorelevant dissolution methods to predict in vivo performance and generate successful in vitro–in vivo correlations (IVIVC) for oral formulations are also discussed through several studies.",
"title": ""
},
{
"docid": "ac9fb08fd12fc776138b2735cd370118",
"text": "In this paper we study 3D convolutional networks for video understanding tasks. Our starting point is the stateof-the-art I3D model of [3], which “inflates” all the 2D filters of the Inception architecture to 3D. We first consider “deflating” the I3D model at various levels to understand the role of 3D convolutions. Interestingly, we found that 3D convolutions at the top layers of the network contribute more than 3D convolutions at the bottom layers, while also being computationally more efficient. This indicates that I3D is better at capturing high-level temporal patterns than low-level motion signals. We also consider replacing 3D convolutions with spatiotemporal-separable 3D convolutions (i.e., replacing convolution using a kt×k×k filter with 1× k× k followed by kt× 1× 1 filters); we show that such a model, which we call S3D, is 1.5x more computationally efficient (in terms of FLOPS) than I3D, and achieves better accuracy. Finally, we explore spatiotemporal feature gating on top of S3D. The resulting model, which we call S3D-G, outperforms the state-of-the-art I3D model by 3.5% accuracy on Kinetics and reduces the FLOPS by 34%. It also achieves a new state-of-the-art performance when transferred to other action classification (UCF-101 and HMDB51) and detection (UCF-101 and JHMDB) datasets.",
"title": ""
},
{
"docid": "70bc203f48e4de04266b06b5bc9c9145",
"text": "The effective propagation of pixel labels through the spatial and temporal domains is vital to many computer vision and multimedia problems, yet little attention have been paid to the temporal/video domain propagation in the past. Previous video label propagation algorithms largely avoided the use of dense optical flow estimation due to their computational costs and inaccuracies, and relied heavily on complex (and slower) appearance models. We show in this paper the limitations of pure motion and appearance based propagation methods alone, especially the fact that their performances vary on different type of videos. We propose a probabilistic framework that estimates the reliability of the sources and automatically adjusts the weights between them. Our experiments show that the “dragging effect” of pure optical-flow-based methods are effectively avoided, while the problems of pure appearance-based methods such the large intra-class variance is also effectively handled.",
"title": ""
},
{
"docid": "4845233571c0572570445f4e3ca4ebc2",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. You may purchase this article from the Ask*IEEE Document Delivery Service at http://www.ieee.org/services/askieee/",
"title": ""
},
{
"docid": "80ef53f4488df5a998e88da050c24b1b",
"text": "We present Crayon, a library and runtime system that reduces display power dissipation by acceptably approximating displayed images via shape and color transforms. Crayon can be inserted between an application and the display to optimize dynamically generated images before they appear on the screen. It can also be applied offline to optimize stored images before they are retrieved and displayed. Crayon exploits three fundamental properties: the acceptability of small changes in shape and color, the fact that the power dissipation of OLED displays and DLP pico-projectors is different for different colors, and the relatively small energy cost of computation in comparison to display energy usage.\n We implement and evaluate Crayon in three contexts: a hardware platform with detailed power measurement facilities and an OLED display, an Android tablet, and a set of cross-platform tools. Our results show that Crayon's color transforms can reduce display power dissipation by over 66% while producing images that remain visually acceptable to users. The measured whole-system power reduction is approximately 50%. We quantify the acceptability of Crayon's shape and color transforms with a user study involving over 400 participants and over 21,000 image evaluations.",
"title": ""
},
{
"docid": "16851eb1d58b5ddd8287eaf06f453209",
"text": "This paper shows a quick review of cloud computing technology along with its deployment and service models. We have focused in the security issues of cloud computing and we have listed most of the solutions which are available to solve these issues. Moreover, we have listed the most popular threats which been recognized by cloud security alliance (CSA). Also there are some other threats has been mentioned in this paper. Finally the privacy issues has been explained.",
"title": ""
},
{
"docid": "ce18f78a9285a68016e7d793122d3079",
"text": "Civic technology, or civic tech, encompasses a rich body of work, inside and outside HCI, around how we shape technology for, and in turn how technology shapes, how we govern, organize, serve, and identify matters of concern for communities. This study builds on previous work by investigating how civic leaders in a large US city conceptualize civic tech, in particular, how they approach the intersection of data, design and civics. We encountered a range of overlapping voices, from providers, to connectors, to volunteers of civic services and resources. Through this account, we identified different conceptions and expectation of data, design and civics, as well as several shared issues around pressing problems and strategic aspirations. Reflecting on this set of issues produced guiding questions, in particular about the current and possible roles for design, to advance civic tech.",
"title": ""
},
{
"docid": "9bbf9422ae450a17e0c46d14acf3a3e3",
"text": "This short paper outlines how polynomial chaos theory (PCT) can be utilized for manipulator dynamic analysis and controller design in a 4-DOF selective compliance assembly robot-arm-type manipulator with variation in both the link masses and payload. It includes a simple linear control algorithm into the formulation to show the capability of the PCT framework.",
"title": ""
},
{
"docid": "90ea878961c91e6c2b5e10f737798350",
"text": "Recent advances in Representation Learning and Adversarial Training seem to succeed in removing unwanted features from the learned representation. We show that demographic information of authors is encoded in—and can be recovered from—the intermediate representations learned by text-based neural classifiers. The implication is that decisions of classifiers trained on textual data are not agnostic to—and likely condition on—demographic attributes. When attempting to remove such demographic information using adversarial training, we find that while the adversarial component achieves chance-level development-set accuracy during training, a post-hoc classifier, trained on the encoded sentences from the first part, still manages to reach substantially higher classification accuracies on the same data. This behavior is consistent across several tasks, demographic properties and datasets. We explore several techniques to improve the effectiveness of the adversarial component. Our main conclusion is a cautionary one: do not rely on the adversarial training to achieve invariant representation to sensitive features.",
"title": ""
},
{
"docid": "b5b26158a44457bb5e30eb26428d5cb7",
"text": "In this paper we propose the utterance-level Permutation Invariant Training (uPIT) technique. uPIT is a practically applicable, end-to-end, deep learning based solution for speaker independent multi-talker speech separation. Specifically, uPIT extends the recently proposed Permutation Invariant Training (PIT) technique with an utterance-level cost function, hence eliminating the need for solving an additional permutation problem during inference, which is otherwise required by frame-level PIT. We achieve this using Recurrent Neural Networks (RNNs) that, during training, minimize the utterance-level separation error, hence forcing separated frames belonging to the same speaker to be aligned to the same output stream. In practice, this allows RNNs, trained with uPIT, to separate multi-talker mixed speech without any prior knowledge of signal duration, number of speakers, speaker identity or gender. We evaluated uPIT on the WSJ0 and Danish twoand three-talker mixed-speech separation tasks and found that uPIT outperforms techniques based on Non-negative Matrix Factorization (NMF) and Computational Auditory Scene Analysis (CASA), and compares favorably with Deep Clustering (DPCL) and the Deep Attractor Network (DANet). Furthermore, we found that models trained with uPIT generalize well to unseen speakers and languages. Finally, we found that a single model, trained with uPIT, can handle both two-speaker, and three-speaker speech mixtures.",
"title": ""
}
] |
scidocsrr
|
25a916349fd801bcddee8adf300ce97d
|
Swallow swarm optimization algorithm: a new method to optimization
|
[
{
"docid": "aeebcc70000e6ceed99d2e033d35c65e",
"text": "This paper presents glowworm swarm optimization (GSO), a novel algorithm for the simultaneous computation of multiple optima of multimodal functions. The algorithm shares a few features with some better known swarm intelligence based optimization algorithms, such as ant colony optimization and particle swarm optimization, but with several significant differences. The agents in GSO are thought of as glowworms that carry a luminescence quantity called luciferin along with them. The glowworms encode the fitness of their current locations, evaluated using the objective function, into a luciferin value that they broadcast to their neighbors. The glowworm identifies its neighbors and computes its movements by exploiting an adaptive neighborhood, which is bounded above by its sensor range. Each glowworm selects, using a probabilistic mechanism, a neighbor that has a luciferin value higher than its own and moves toward it. These movements—based only on local information and selective neighbor interactions—enable the swarm of glowworms to partition into disjoint subgroups that converge on multiple optima of a given multimodal function. We provide some theoretical results related to the luciferin update mechanism in order to prove the bounded nature and convergence of luciferin levels of the glowworms. Experimental results demonstrate the efficacy of the proposed glowworm based algorithm in capturing multiple optima of a series of standard multimodal test functions and more complex ones, such as stair-case and multiple-plateau functions. We also report the results of tests in higher dimensional spaces with a large number of peaks. We address the parameter selection problem by conducting experiments to show that only two parameters need to be selected by the user. Finally, we provide some comparisons of GSO with PSO and an experimental comparison with Niche-PSO, a PSO variant that is designed for the simultaneous computation of multiple optima.",
"title": ""
},
{
"docid": "555ad116b9b285051084423e2807a0ba",
"text": "The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known by the authors. '",
"title": ""
},
{
"docid": "3bf954a23ea3e7d5326a7b89635f966a",
"text": "The particle swarm optimizer (PSO) is a stochastic, population-based optimization technique that can be applied to a wide range of problems, including neural network training. This paper presents a variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm. This is achieved by using multiple swarms to optimize different components of the solution vector cooperatively. Application of the new PSO algorithm on several benchmark optimization problems shows a marked improvement in performance over the traditional PSO.",
"title": ""
}
] |
[
{
"docid": "e5ad17a5e431c8027ae58337615a60bd",
"text": "In this paper, we focus on learning structure-aware document representations from data without recourse to a discourse parser or additional annotations. Drawing inspiration from recent efforts to empower neural networks with a structural bias (Cheng et al., 2016; Kim et al., 2017), we propose a model that can encode a document while automatically inducing rich structural dependencies. Specifically, we embed a differentiable non-projective parsing algorithm into a neural model and use attention mechanisms to incorporate the structural biases. Experimental evaluations across different tasks and datasets show that the proposed model achieves state-of-the-art results on document modeling tasks while inducing intermediate structures which are both interpretable and meaningful.",
"title": ""
},
{
"docid": "45b5072faafa8a26cfe320bd5faedbcd",
"text": "METIS-II was an EU-FET MT project running from October 2004 to September 2007, which aimed at translating free text input without resorting to parallel corpora. The idea was to use “basic” linguistic tools and representations and to link them with patterns and statistics from the monolingual target-language corpus. The METIS-II project has four partners, translating from their “home” languages Greek, Dutch, German, and Spanish into English. The paper outlines the basic ideas of the project, their implementation, the resources used, and the results obtained. It also gives examples of how METIS-II has continued beyond its lifetime and the original scope of the project. On the basis of the results and experiences obtained, we believe that the approach is promising and offers the potential for development in various directions.",
"title": ""
},
{
"docid": "3092ba8df6080445f15382235ed63985",
"text": "The introduction of new technologies into vehicles has been imposing new forms of interaction, being a challenge to drivers but also to HMI research. The multiplicity of on-board systems in the market has been changing the driving task, being the consequences of such interaction a concern especially to older drivers. Several studies have been conducted to report the natural functional declines of older drivers and the way they cope with additional sources of information and additional tasks in specific moments. However, the evolution of these equipments, their frequent presence in the automotive market and also the increased acceptability and familiarization of older drivers with such technologies, compel researchers to consider other aspects of these interactions: from adaptation to the long term effects of using any in-vehicle technologies.",
"title": ""
},
{
"docid": "6a74c2d26f5125237929031cf1ccf204",
"text": "Harnessing crowds can be a powerful mechanism for increasing innovation. However, current approaches to crowd innovation rely on large numbers of contributors generating ideas independently in an unstructured way. We introduce a new approach called distributed analogical idea generation, which aims to make idea generation more effective and less reliant on chance. Drawing from the literature in cognitive science on analogy and schema induction, our approach decomposes the creative process in a structured way amenable to using crowds. In three experiments we show that distributed analogical idea generation leads to better ideas than example-based approaches, and investigate the conditions under which crowds generate good schemas and ideas. Our results have implications for improving creativity and building systems for distributed crowd innovation.",
"title": ""
},
{
"docid": "eed4d069544649b2c80634bdacbda372",
"text": "Data mining tools become important in finance and accounting. Their classification and prediction abilities enable them to be used for the purposes of bankruptcy prediction, going concern status and financial distress prediction, management fraud detection, credit risk estimation, and corporate performance prediction. This study aims to provide a state-of-the-art review of the relative literature and to indicate relevant research opportunities.",
"title": ""
},
{
"docid": "28899946726bc1e665298f09ea9e654d",
"text": "This paper presents a simple and robust mechanism, called change-point monitoring (CPM), to detect denial of service (DoS) attacks. The core of CPM is based on the inherent network protocol behavior and is an instance of the sequential change point detection. To make the detection mechanism insensitive to sites and traffic patterns, a nonparametric cumulative sum (CUSUM) method is applied, thus making the detection mechanism robust, more generally applicable, and its deployment much easier. CPM does not require per-flow state information and only introduces a few variables to record the protocol behaviors. The statelessness and low computation overhead of CPM make itself immune to any flooding attacks. As a case study, the efficacy of CPM is evaluated by detecting a SYN flooding attack - the most common DoS attack. The evaluation results show that CPM has short detection latency and high detection accuracy",
"title": ""
},
{
"docid": "cff0b5c06b322c887aed9620afeac668",
"text": "In addition to providing substantial performance enhancements, future 5G networks will also change the mobile network ecosystem. Building on the network slicing concept, 5G allows to “slice” the network infrastructure into separate logical networks that may be operated independently and targeted at specific services. This opens the market to new players: the infrastructure provider, which is the owner of the infrastructure, and the tenants, which may acquire a network slice from the infrastructure provider to deliver a specific service to their customers. In this new context, we need new algorithms for the allocation of network resources that consider these new players. In this paper, we address this issue by designing an algorithm for the admission and allocation of network slices requests that (i) maximises the infrastructure provider's revenue and (ii) ensures that the service guarantees provided to tenants are satisfied. Our key contributions include: (i) an analytical model for the admissibility region of a network slicing-capable 5G Network, (ii) the analysis of the system (modelled as a Semi-Markov Decision Process) and the optimisation of the infrastructure provider's revenue, and (iii) the design of an adaptive algorithm (based on Q-learning) that achieves close to optimal performance.",
"title": ""
},
{
"docid": "ba4260598a634bcfdfb7423182c4c8b6",
"text": "A wide range of computational methods and tools for data analysis are available. In this study we took advantage of those available technological advancements to develop prediction models for the prediction of a Type-2 Diabetic Patient. We aim to investigate how the diabetes incidents are affected by patients’ characteristics and measurements. Efficient predictive modeling is required for medical researchers and practitioners. This study proposes Hybrid Prediction Model (HPM) which uses Simple K-means clustering algorithm aimed at validating chosen class label of given data (incorrectly classified instances are removed, i.e. pattern extracted from original data) and subsequently applying the classification algorithm to the result set. C4.5 algorithm is used to build the final classifier model by using the k-fold cross-validation method. The Pima Indians diabetes data was obtained from the University of California at Irvine (UCI) machine learning repository datasets. A wide range of different classification methods have been applied previously by various researchers in order to find the best performing algorithm on this dataset. The accuracies achieved have been in the range of 59.4–84.05%. However the proposed HPM obtained a classification accuracy of 92.38%. In order to evaluate the performance of the proposed method, sensitivity and specificity performance measures that are used commonly in medical classification studies were used. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fea34b4a4b0b2dcdacdc57dce66f31de",
"text": "Deep neural networks have become the state-ofart methods in many fields of machine learning recently. Still, there is no easy way how to choose a network architecture which can significantly influence the network performance. This work is a step towards an automatic architecture design. We propose an algorithm for an optimization of a network architecture based on evolution strategies. The al gorithm is inspired by and designed directly for the Keras library [3] which is one of the most common implementations of deep neural networks. The proposed algorithm is tested on MNIST data set and the prediction of air pollution based on sensor measurements, and it is compared to several fixed architectures and support vector regression.",
"title": ""
},
{
"docid": "0a2795008a60a8b3f9c3a4a6834de30f",
"text": "Infection, as a common postoperative complication of orthopedic surgery, is the main reason leading to implant failure. Silver nanoparticles (AgNPs) are considered as a promising antibacterial agent and always used to modify orthopedic implants to prevent infection. To optimize the implants in a reasonable manner, it is critical for us to know the specific antibacterial mechanism, which is still unclear. In this review, we analyzed the potential antibacterial mechanisms of AgNPs, and the influences of AgNPs on osteogenic-related cells, including cellular adhesion, proliferation, and differentiation, were also discussed. In addition, methods to enhance biocompatibility of AgNPs as well as advanced implants modifications technologies were also summarized.",
"title": ""
},
{
"docid": "f32e8f005d277652fe691216e96e7fd8",
"text": "PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup O(log N) sampling instead of O(N) enabling the practical generation of 512× 512 images. We evaluate the model on class-conditional image generation, text-toimage synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.",
"title": ""
},
{
"docid": "69d826aa8309678cf04e2870c23a99dd",
"text": "Contemporary analyses of cell metabolism have called out three metabolites: ATP, NADH, and acetyl-CoA, as sentinel molecules whose accumulation represent much of the purpose of the catabolic arms of metabolism and then drive many anabolic pathways. Such analyses largely leave out how and why ATP, NADH, and acetyl-CoA (Figure 1 ) at the molecular level play such central roles. Yet, without those insights into why cells accumulate them and how the enabling properties of these key metabolites power much of cell metabolism, the underlying molecular logic remains mysterious. Four other metabolites, S-adenosylmethionine, carbamoyl phosphate, UDP-glucose, and Δ2-isopentenyl-PP play similar roles in using group transfer chemistry to drive otherwise unfavorable biosynthetic equilibria. This review provides the underlying chemical logic to remind how these seven key molecules function as mobile packets of cellular currencies for phosphoryl transfers (ATP), acyl transfers (acetyl-CoA, carbamoyl-P), methyl transfers (SAM), prenyl transfers (IPP), glucosyl transfers (UDP-glucose), and electron and ADP-ribosyl transfers (NAD(P)H/NAD(P)+) to drive metabolic transformations in and across most primary pathways. The eighth key metabolite is molecular oxygen (O2), thermodynamically activated for reduction by one electron path, leaving it kinetically stable to the vast majority of organic cellular metabolites.",
"title": ""
},
{
"docid": "5e376e42186e894ca78e8d1c50d33911",
"text": "We consider a family of chaotic skew tent maps. The skew tent map is a two-parameter, piecewise-linear, weakly-unimodal, map of the interval Fa;b. We show that Fa;b is Markov for a dense set of parameters in the chaotic region, and we exactly ®nd the probability density function (pdf), for any of these maps. It is well known (Boyarsky A, G ora P. Laws of chaos: invariant measures and dynamical systems in one dimension. Boston: Birkhauser, 1997), that when a sequence of transformations has a uniform limit F, and the corresponding sequence of invariant pdfs has a weak limit, then that invariant pdf must be F invariant. However, we show in the case of a family of skew tent maps that not only does a suitable sequence of convergent sequence exist, but they can be constructed entirely within the family of skew tent maps. Furthermore, such a sequence can be found amongst the set of Markov transformations, for which pdfs are easily and exactly calculated. We then apply these results to exactly integrate Lyapunov exponents. Ó 2000 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4542d7d6f8109dcc9ade9e8fc44918bb",
"text": "This paper proposes a subject transfer framework for EEG classification. It aims to improve the classification performance when the training set of the target subject (namely user) is small owing to the need to reduce the calibration session. Our framework pursues improvement not only at the feature extraction stage, but also at the classification stage. At the feature extraction stage, we first obtain a candidate filter set for each subject through a previously proposed feature extraction method. Then, we design different criterions to learn two sparse subsets of the candidate filter set, which are called the robust filter bank and adaptive filter bank, respectively. Given robust and adaptive filter banks, at the classification step, we learn classifiers corresponding to these filter banks and employ a two-level ensemble strategy to dynamically and locally combine their outcomes to reach a single decision output. The proposed framework, as validated by experimental results, can achieve positive knowledge transfer for improving the performance of EEG classification.",
"title": ""
},
{
"docid": "0ec0af632612fbbc9b4dba1aa8843590",
"text": "The diversity in web object types and their resource requirements contributes to the unpredictability of web service provisioning. In this paper, an eÆcient admission control algorithm, PACERS, is proposed to provide di erent levels of services based on the server workload characteristics. Service quality is ensured by periodical allocation of system resources based on the estimation of request rate and service requirements of prioritized tasks. Admission of lower priority tasks is restricted during high load periods to prevent denial-of-services to high priority tasks. A doublequeue structure is implemented to reduce the e ects of estimation inaccuracy and to utilize the spare capacity of the server, thus increasing the system throughput. Response delays of the high priority tasks are bounded by the length of the prediction period. Theoretical analysis and experimental study show that the PACERS algorithm provides desirable throughput and bounded response delay to the prioritized tasks, without any signi cant impact on the aggregate throughput of the system under various workload.",
"title": ""
},
{
"docid": "91eecde9d0e3b67d7af0194782923ead",
"text": "The burden of entry into mobile crowdsensing (MCS) is prohibitively high for human-subject researchers who lack a technical orientation. As a result, the benefits of MCS remain beyond the reach of research communities (e.g., psychologists) whose expertise in the study of human behavior might advance applications and understanding of MCS systems. This paper presents Sensus, a new MCS system for human-subject studies that bridges the gap between human-subject researchers and MCS methods. Sensus alleviates technical burdens with on-device, GUI-based design of sensing plans, simple and efficient distribution of sensing plans to study participants, and uniform participant experience across iOS and Android devices. Sensing plans support many hardware and software sensors, automatic deployment of sensor-triggered surveys, and double-blind assignment of participants within randomized controlled trials. Sensus offers these features to study designers without requiring knowledge of markup and programming languages. We demonstrate the feasibility of using Sensus within two human-subject studies, one in psychology and one in engineering. Feedback from non-technical users indicates that Sensus is an effective and low-burden system for MCS-based data collection and analysis.",
"title": ""
},
{
"docid": "763b8982d13b0637a17347b2c557f1f8",
"text": "This paper describes an application of Case-Based Reasonin g to the problem of reducing the number of final-line fraud investigation s i the credit approval process. The performance of a suite of algorithms whi ch are applied in combination to determine a diagnosis from a set of retriev ed cases is reported. An adaptive diagnosis algorithm combining several neighbourhoodbased and probabilistic algorithms was found to have the bes t performance, and these results indicate that an adaptive solution can pro vide fraud filtering and case ordering functions for reducing the number of fin al-li e fraud investigations necessary.",
"title": ""
},
{
"docid": "18e77bde932964655ba7df73b02a3048",
"text": "In this paper, we propose a mathematical framework to jointly model related activities with both motion and context information for activity recognition and anomaly detection. This is motivated from observations that activities related in space and time rarely occur independently and can serve as context for each other. The spatial and temporal distribution of different activities provides useful cues for the understanding of these activities. We denote the activities occurring with high frequencies in the database as normal activities. Given training data which contains labeled normal activities, our model aims to automatically capture frequent motion and context patterns for each activity class, as well as each pair of classes, from sets of predefined patterns during the learning process. Then, the learned model is used to generate globally optimum labels for activities in the testing videos. We show how to learn the model parameters via an unconstrained convex optimization problem and how to predict the correct labels for a testing instance consisting of multiple activities. The learned model and generated labels are used to detect anomalies whose motion and context patterns deviate from the learned patterns. We show promising results on the VIRAT Ground Dataset that demonstrates the benefit of joint modeling and recognition of activities in a wide-area scene and the effectiveness of the proposed method in anomaly detection.",
"title": ""
},
{
"docid": "8efc308fe9730aca44975ecfb0fa7581",
"text": "We give a survey of the developments in the theory of Backward Stochastic Differential Equations during the last 20 years, including the solutions’ existence and uniqueness, comparison theorem, nonlinear Feynman-Kac formula, g-expectation and many other important results in BSDE theory and their applications to dynamic pricing and hedging in an incomplete financial market. We also present our new framework of nonlinear expectation and its applications to financial risk measures under uncertainty of probability distributions. The generalized form of law of large numbers and central limit theorem under sublinear expectation shows that the limit distribution is a sublinear Gnormal distribution. A new type of Brownian motion, G-Brownian motion, is constructed which is a continuous stochastic process with independent and stationary increments under a sublinear expectation (or a nonlinear expectation). The corresponding robust version of Itô’s calculus turns out to be a basic tool for problems of risk measures in finance and, more general, for decision theory under uncertainty. We also discuss a type of “fully nonlinear” BSDE under nonlinear expectation. Mathematics Subject Classification (2010). 60H, 60E, 62C, 62D, 35J, 35K",
"title": ""
},
{
"docid": "0a58e34f272e6aa33ca49e54888056a3",
"text": "INTRODUCTION Isolated rupture of the distal semitendinosus is rare, and as a result, there is paucity of evidence over the best method of managing the injury, that is, surgical or nonsurgical, particularly in light of the fact that the tendon is routinely harvested for anterior cruciate ligament (ACL) reconstruction. We present the cases of 2 elite sprinters with isolated ruptures of the distal semitendinosus who were managed nonoperatively, and we take the opportunity to look at the literature surrounding the management of this injury.",
"title": ""
}
] |
scidocsrr
|
2664988ead9822e0f1793602dba8a63d
|
Fiber Optic Sensors in Structural Health Monitoring
|
[
{
"docid": "c060f75acd562c535ad655f82fa1163b",
"text": "can be found at: Structural Health Monitoring Additional services and information for http://shm.sagepub.com/cgi/alerts Email Alerts: http://shm.sagepub.com/subscriptions Subscriptions: http://www.sagepub.com/journalsReprints.nav Reprints: http://www.sagepub.com/journalsPermissions.nav Permissions: http://shm.sagepub.com/cgi/content/refs/2/3/257 SAGE Journals Online and HighWire Press platforms): (this article cites 14 articles hosted on the Citations",
"title": ""
}
] |
[
{
"docid": "c57b8d5a7e9a52fcdc796d3a145c1cb5",
"text": "This paper presents a robust location-aware activity recognition approach for establishing ambient intelligence applications in a smart home. With observations from a variety of multimodal and unobtrusive wireless sensors seamlessly integrated into ambient-intelligence compliant objects (AICOs), the approach infers a single resident's interleaved activities by utilizing a generalized and enhanced Bayesian Network fusion engine with inputs from a set of the most informative features. These features are collected by ranking their usefulness in estimating activities of interest. Additionally, each feature reckons its corresponding reliability to control its contribution in cases of possible device failure, therefore making the system more tolerant to inevitable device failure or interference commonly encountered in a wireless sensor network, and thus improving overall robustness. This work is part of an interdisciplinary Attentive Home pilot project with the goal of fulfilling real human needs by utilizing context-aware attentive services. We have also created a novel application called ldquoActivity Maprdquo to graphically display ambient-intelligence-related contextual information gathered from both humans and the environment in a more convenient and user-accessible way. All experiments were conducted in an instrumented living lab and their results demonstrate the effectiveness of the system.",
"title": ""
},
{
"docid": "273153d0cf32162acb48ed989fa6d713",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "b868623565254556b289777737b23585",
"text": "Playing videogames is now a major leisure pursuit, yet research in the area is comparatively sparse. Previous correlational evidence suggests that subjective time loss occurs during playing videogames. This study examined experiences of time loss among a relatively large group of gamers (n = 280). Quantitative and qualitative data were collected through an online survey. Results showed that time loss occurred irrespective of gender, age, or frequency of play, but was associated with particular structural characteristics of games such as their complexity, the presence of multi-levels, missions and/or high scores, multiplayer interactions, and plot. Results also demonstrated that time loss could have both positive and negative outcomes for players. Positive aspects of time loss included helping players to relax and temporarily escape from reality. Negative aspects included the sacrificing of other things in their lives, guilty feelings about wasted time, and social conflict. It is concluded that for many gamers, losing track of time is a positive experience and is one of the main reasons for playing videogames.",
"title": ""
},
{
"docid": "3adf8510887ff9e5c7a270e16dcdec9a",
"text": "This paper analyzes the Sampled Value (SV) Process Bus concept that was recently introduced by the IEC 61850-9-2 standard. This standard proposes that the Current and Voltage Transformer (CT, PT) outputs that are presently hard wired to various devices (relays, meters, IED, and SCADA) be digitized at the source and then communicated to those devices using an Ethernet-Based Local Area Network (LAN). The approach is especially interesting for modern optical CT/PT devices that possess high quality information about the primary voltage/current waveforms, but are often forced to degrade output signal accuracy in order to meet traditional analog interface requirements (5 A/120 V). While very promising, the SV-based process bus brings along a distinct set of issues regarding the overall reliability of the new Ethernet communications-based protection and control system. This paper looks at the Merging Unit Concept, analyzes the protection system reliability in the process bus environment, and proposes an alternate approach that can be used to successfully deploy this technology. Multiple scenarios used with the associated equipment configurations are compared. Additional issues that need to be addressed by various standards bodies and interoperability challenges posed by the SV process bus LAN on real-time monitoring and control applications (substation HMI, SCADA, engineering access) are also identified.",
"title": ""
},
{
"docid": "637e65b2b3fddd9b00ea2eebe65bbdfb",
"text": "BACKGROUND\nSurface electromyography (sEMG) signals have been used in numerous studies for the classification of hand gestures and movements and successfully implemented in the position control of different prosthetic hands for amputees. sEMG could also potentially be used for controlling wearable devices which could assist persons with reduced muscle mass, such as those suffering from sarcopenia. While using sEMG for position control, estimation of the intended torque of the user could also provide sufficient information for an effective force control of the hand prosthesis or assistive device. This paper presents the use of pattern recognition to estimate the torque applied by a human wrist and its real-time implementation to control a novel two degree of freedom wrist exoskeleton prototype (WEP), which was specifically developed for this work.\n\n\nMETHODS\nBoth sEMG data from four muscles of the forearm and wrist torque were collected from eight volunteers by using a custom-made testing rig. The features that were extracted from the sEMG signals included root mean square (rms) EMG amplitude, autoregressive (AR) model coefficients and waveform length. Support Vector Machines (SVM) was employed to extract classes of different force intensity from the sEMG signals. After assessing the off-line performance of the used classification technique, the WEP was used to validate in real-time the proposed classification scheme.\n\n\nRESULTS\nThe data gathered from the volunteers were divided into two sets, one with nineteen classes and the second with thirteen classes. Each set of data was further divided into training and testing data. It was observed that the average testing accuracy in the case of nineteen classes was about 88% whereas the average accuracy in the case of thirteen classes reached about 96%. Classification and control algorithm implemented in the WEP was executed in less than 125 ms.\n\n\nCONCLUSIONS\nThe results of this study showed that classification of EMG signals by separating different levels of torque is possible for wrist motion and the use of only four EMG channels is suitable. The study also showed that SVM classification technique is suitable for real-time classification of sEMG signals and can be effectively implemented for controlling an exoskeleton device for assisting the wrist.",
"title": ""
},
{
"docid": "088f4245f749feaf0cc88d9f374e17bf",
"text": "Trajectory classification, i.e., model construction for predicting the class labels of moving objects based on their trajectories and other features, has many important, real-world applications. A number of methods have been reported in the literature, but due to using the shapes of whole trajectories for classification, they have limited classification capability when discriminative features appear at parts of trajectories or are not relevant to the shapes of trajectories. These situations are often observed in long trajectories spreading over large geographic areas. Since an essential task for effective classification is generating discriminative features, a feature generation framework TraClass for trajectory data is proposed in this paper, which generates a hierarchy of features by partitioning trajectories and exploring two types of clustering: (1) region-based and (2) trajectory-based. The former captures the higher-level region-based features without using movement patterns, whereas the latter captures the lower-level trajectory-based features using movement patterns. The proposed framework overcomes the limitations of the previous studies because trajectory partitioning makes discriminative parts of trajectories identifiable, and the two types of clustering collaborate to find features of both regions and sub-trajectories. Experimental results demonstrate that TraClass generates high-quality features and achieves high classification accuracy from real trajectory data.",
"title": ""
},
{
"docid": "737231466c50ac647f247b60852026e2",
"text": "The proliferation of wearable devices, e.g., smartwatches and activity trackers, with embedded sensors has already shown its great potential on monitoring and inferring human daily activities. This paper reveals a serious security breach of wearable devices in the context of divulging secret information (i.e., key entries) while people are accessing key-based security systems. Existing methods of obtaining such secret information rely on installations of dedicated hardware (e.g., video camera or fake keypad), or training with labeled data from body sensors, which restrict use cases in practical adversary scenarios. In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user’s fine-grained hand movements, which enable attackers to reproduce the trajectories of the user’s hand and further to recover the secret key entries. In particular, our system confirms the possibility of using embedded sensors in wearable devices, i.e., accelerometers, gyroscopes, and magnetometers, to derive the moving distance of the user’s hand between consecutive key entries regardless of the pose of the hand. Our Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence. Extensive experiments are conducted with over 7,000 key entry traces collected from 20 adults for key-based security systems (i.e., ATM keypads and regular keyboards) through testing on different kinds of wearables. Results demonstrate that such a technique can achieve 80 percent accuracy with only one try and more than 90 percent accuracy with three tries. Moreover, the performance of our system is consistently good even under low sampling rate and when inferring long PIN sequences. To the best of our knowledge, this is the first technique that reveals personal PINs leveraging wearable devices without the need for labeled training data and contextual information.",
"title": ""
},
{
"docid": "7319ef7763ac2e79e946d29e7dba623a",
"text": "Computer system security is one of the most popular and the fastest evolving Information Technology (IT) areas. Protection of information access, availability and data integrity represents the basic security characteristics desired on information sources. Any disruption of these properties would result in system intrusion and the related security risk. Advanced decoy based technology called Honeypot has a huge potential for the security community and can achieve several goals of other security technologies, which makes it almost universal. Paper is devoted to sophisticated hybrid Honeypot with autonomous feature that allows to, based on the collected system parameters, adapt to the system of deployment. By its presence Honeypot attracts attacker by simulating vulnerabilities and poor security. After initiation of interaction Honeypot will record all attacker activities and after data analysis allows improving security in computer systems.",
"title": ""
},
{
"docid": "9d34171c2fcc8e36b2fb907fe63fc08d",
"text": "A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to light conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed, rather a simple commercial webcam working in visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees , comparable with the vast majority of existing remote gaze trackers.",
"title": ""
},
{
"docid": "ec989c3afdfebd6fe50dcb2205ac3ea3",
"text": "Recently, result diversification has attracted a lot of attention as a means to improve the quality of results retrieved by user queries. In this article, we introduce a novel definition of diversity called DisC diversity. Given a tuning parameter r, which we call radius, we consider two items to be similar if their distance is smaller than or equal to r. A DisC diverse subset of a result contains items such that each item in the result is represented by a similar item in the diverse subset and the items in the diverse subset are dissimilar to each other. We show that locating a minimum DisC diverse subset is an NP-hard problem and provide algorithms for its approximation. We extend our definition to the multiple radii case, where each item is associated with a different radius based on its importance, relevance, or other factors. We also propose adapting DisC diverse subsets to a different degree of diversification by adjusting r, that is, increasing the radius (or zooming-out) and decreasing the radius (or zooming-in). We present efficient implementations of our algorithms based on the M-tree, a spatial index structure, and experimentally evaluate their performance.",
"title": ""
},
{
"docid": "307d9742739cbd2ade98c3d3c5d25887",
"text": "In this paper, we present a smart US imaging system (SMUS) based on an android-OS smartphone, which can provide maximally optimized efficacy in terms of weight and size in point-of-care diagnostic applications. The proposed SMUS consists of the smartphone (Galaxy S5 LTE-A, Samsung., Korea) and a 16-channel probe system. The probe system contains analog and digital front-ends, which conducts beamforming and mid-processing procedures. Otherwise, the smartphone performs the back-end processing including envelope detection, log compression, 2D image filtering, digital scan conversion, and image display with custom-made graphical user interface (GUI). Note that the probe system and smartphone are interconnected by the USB 3.0 protocol. As a result, the developed SMUS can provide real-time B-mode image with the sufficient frame rate (i.e., 58 fps), battery run-time for point-of-care diagnosis (i.e., 54 min), and 35.0°C of transducer surface temperature during B-mode imaging, which satisfies the temperature standards for the safety and effectiveness of medical electrical equipment, IEC 60601-1 (i.e., 43°C).",
"title": ""
},
{
"docid": "600203272eace7a02d6f4cbdc591e0b9",
"text": "Algebraic manipulation covers branches of software, particularly list processing, mathematics, notably logic and number theory, and applications largely in physics. The lectures will deal with all of these to a varying extent. The mathematical content will be kept to a minimum.",
"title": ""
},
{
"docid": "a134fe9ffdf7d99593ad9cdfd109b89d",
"text": "A hybrid particle swarm optimization (PSO) for the job shop problem (JSP) is proposed in this paper. In previous research, PSO particles search solutions in a continuous solution space. Since the solution space of the JSP is discrete, we modified the particle position representation, particle movement, and particle velocity to better suit PSO for the JSP. We modified the particle position based on preference list-based representation, particle movement based on swap operator, and particle velocity based on the tabu list concept in our algorithm. Giffler and Thompson’s heuristic is used to decode a particle position into a schedule. Furthermore, we applied tabu search to improve the solution quality. The computational results show that the modified PSO performs better than the original design, and that the hybrid PSO is better than other traditional metaheuristics. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "90709f620b27196fdc7fc380e3757518",
"text": "The bag-of-visual-words (BoVW) method with construction of a single dictionary of visual words has been used previously for a variety of classification tasks in medical imaging, including the diagnosis of liver lesions. In this paper, we describe a novel method for automated diagnosis of liver lesions in portal-phase computed tomography (CT) images that improves over single-dictionary BoVW methods by using an image patch representation of the interior and boundary regions of the lesions. Our approach captures characteristics of the lesion margin and of the lesion interior by creating two separate dictionaries for the margin and the interior regions of lesions (“dual dictionaries” of visual words). Based on these dictionaries, visual word histograms are generated for each region of interest within the lesion and its margin. For validation of our approach, we used two datasets from two different institutions, containing CT images of 194 liver lesions (61 cysts, 80 metastasis, and 53 hemangiomas). The final diagnosis of each lesion was established by radiologists. The classification accuracy for the images from the two institutions was 99% and 88%, respectively, and 93% for a combined dataset. Our new BoVW approach that uses dual dictionaries shows promising results. We believe the benefits of our approach may generalize to other application domains within radiology.",
"title": ""
},
{
"docid": "4ea7fba21969fcdd2de9b4e918583af8",
"text": "Due to the explosion in the size of the WWW[1,4,5] it becomes essential to make the crawling process parallel. In this paper we present an architecture for a parallel crawler that consists of multiple crawling processes called as C-procs which can run on network of workstations. The proposed crawler is scalable, is resilient against system crashes and other event. The aim of this architecture is to efficiently and effectively crawl the current set of publically indexable web pages so that we can maximize the download rate while minimizing the overhead from parallelization",
"title": ""
},
{
"docid": "89d736c68d2befba66a0b7d876e52502",
"text": "The optical properties of human skin, subcutaneous adipose tissue and human mucosa were measured in the wavelength range 400–2000 nm. The measurements were carried out using a commercially available spectrophotometer with an integrating sphere. The inverse adding–doubling method was used to determine the absorption and reduced scattering coefficients from the measurements.",
"title": ""
},
{
"docid": "89f0034e6ba61fde368087773dc2f922",
"text": "The importance of reflection and reflective practice are frequently noted in the literature; indeed, reflective capacity is regarded by many as an essential characteristic for professional competence. Educators assert that the emergence of reflective practice is part of a change that acknowledges the need for students to act and to think professionally as an integral part of learning throughout their courses of study, integrating theory and practice from the outset. Activities to promote reflection are now being incorporated into undergraduate, postgraduate and continuing medical education, and across a variety of health professions. The evidence to support and inform these curricular interventions and innovations remains largely theoretical. Further, the literature is dispersed across several fields, and it is unclear which approaches may have efficacy or impact. We, therefore, designed a literature review to evaluate the existing evidence about reflection and reflective practice and their utility in health professional education. Our aim was to understand the key variables influencing this educational process, identify gaps in the evidence, and to explore any implications for educational practice and research.",
"title": ""
},
{
"docid": "78008ed707f701189c7f8f995c1cdc9b",
"text": "In this paper, we present a high performance and fast object detection method based on a fully convolutional network (FCN) for advanced driver assistance systems (ADAS). Object detection methods based on deep learning have high performance but they require high computational complexity. Even if a method works on the high-performance graphics processing unit (GPU) hardware platform, it is hard to guarantee real-time processing. General object detectors based on deep learning try to localize too many classes of objects in various dynamic environments. The proposed detection method based on FCN improves detection performance and maintains real-time processing in road environments through various schemes related to the limitation of object class type, data augmentation, network architecture, and multi-ratio default boxes. Our experimental results show that the proposed method outperforms a previous method both in terms of performance and speed.",
"title": ""
},
{
"docid": "fb3018d852c2a7baf96fb4fb1233b5e5",
"text": "The term twin spotting refers to phenotypes characterized by the spatial and temporal co-occurrence of two (or more) different nevi arranged in variable cutaneous patterns, and can be associated with extra-cutaneous anomalies. Several examples of twin spotting have been described in humans including nevus vascularis mixtus, cutis tricolor, lesions of overgrowth, and deficient growth in Proteus and Elattoproteus syndromes, epidermolytic hyperkeratosis of Brocq, and the so-called phacomatoses pigmentovascularis and pigmentokeratotica. We report on a 28-year-old man and a 15-year-old girl, who presented with a previously unrecognized association of paired cutaneous vascular nevi of the telangiectaticus and anemicus types (naevus vascularis mixtus) distributed in a mosaic pattern on the face (in both patients) and over the entire body (in the man) and a complex brain malformation (in both patients) consisting of cerebral hemiatrophy, hypoplasia of the cerebral vessels and homolateral hypertrophy of the skull and sinuses (known as Dyke-Davidoff-Masson malformation). Both patients had facial asymmetry and the young man had facial dysmorphism, seizures with EEG anomalies, hemiplegia, insulin-dependent diabetes mellitus (IDDM), autoimmune thyroiditis, a large hepatic cavernous vascular malformation, and left Legg-Calvé-Perthes disease (LCPD) [LCPD-like presentation]. Array-CGH analysis and mutation analysis of the RASA1 gene were normal in both patients.",
"title": ""
},
{
"docid": "1e852e116c11a6c7fb1067313b1ffaa3",
"text": "Article history: Received 20 February 2013 Received in revised form 30 July 2013 Accepted 11 September 2013 Available online 21 September 2013",
"title": ""
}
] |
scidocsrr
|
d0450cfa8490bea879de9dda5291c08d
|
Explaining Explanations in AI
|
[
{
"docid": "c3e037cb49fb639217142437ed3e8e04",
"text": "Machine learning models are now used extensively for decision making in diverse applications, but for non-experts they are essentially black boxes. While there has been some work on the explanation of classifications, these are targeted at the expert user. For the non-expert, a better model is one of justification not detailing how the model made its decision, but justifying it to the human user on his or her terms. In this paper we introduce the idea of a justification narrative: a simple model-agnostic mapping of the essential values underlying a classification to a semantic space. We present a package that automatically produces these narratives and realizes them visually or textually.",
"title": ""
}
] |
[
{
"docid": "470093535d4128efa9839905ab2904a5",
"text": "Photovolatic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when variations in the insolation and temperature occur. It overcomes the problem of mismatch between the solar arrays and the given load. A simple method of tracking the maximum power points (MPP’s) and forcing the system to operate close to these points is presented. The principle of energy conservation is used to derive the largeand small-signal model and transfer function. By using the proposed model, the drawbacks of the state-space-averaging method can be overcome. The TI320C25 digital signal processor (DSP) was used to implement the proposed MPPT controller, which controls the dc/dc converter in the photovoltaic system. Simulations and experimental results show excellent performance.",
"title": ""
},
{
"docid": "6cbf17613305d715474a4a6dd351d304",
"text": "Cloud computing data centers are becoming increasingly popular for the provisioning of computing resources. In the past, most of the research works focused on the effective use of the computational and storage resources by employing the Virtualization technology. Network automation and virtualization of data center LAN and WAN were not the primary focus. Recently, a key emerging trend in Cloud computing is that the core systems infrastructure, including compute resources, storage and networking, is increasingly becoming Software-Defined. In particular, instead of being limited by the physical infrastructure, applications and platforms will be able to specify their fine-grained needs, thus precisely defining the virtual environment in which they wish to run. Software-Defined Networking (SDN) plays an important role in paving the way for effectively virtualizing and managing the network resources in an on demand manner. Still, many research challenges remain: how to achieve network Quality of Service (QoS), optimal load balancing, scalability, and security. Hence, it is the main objective of this article to survey the current research work and describes the ongoing efforts to address these challenging issues.",
"title": ""
},
{
"docid": "c6a25dc466e4a22351359f17bd29916c",
"text": "We consider practical methods for adding constraints to the K-Means clustering algorithm in order to avoid local solutions with empty clusters or clusters having very few points. We often observe this phenomena when applying K-Means to datasets where the number of dimensions is n 10 and the number of desired clusters is k 20. We propose explicitly adding k constraints to the underlying clustering optimization problem requiring that each cluster have at least a minimum number of points in it. We then investigate the resulting cluster assignment step. Preliminary numerical tests on real datasets indicate the constrained approach is less prone to poor local solutions, producing a better summary of the underlying data. Contrained K-Means Clustering 1",
"title": ""
},
{
"docid": "68a826dad7fd3da0afc234bb04505d8a",
"text": "The use of deep syntactic information such as typed dependencies has been shown to be very effective in Information Extraction. Despite this potential, the process of manually creating rule-based information extractors that operate on dependency trees is not intuitive for persons without an extensive NLP background. In this system demonstration, we present a tool and a workflow designed to enable initiate users to interactively explore the effect and expressivity of creating Information Extraction rules over dependency trees. We introduce the proposed five step workflow for creating information extractors, the graph query based rule language, as well as the core features of the PROPMINER tool.",
"title": ""
},
{
"docid": "968555bbada2d930b97d8bb982580535",
"text": "With the recent developments in three-dimensional (3-D) scanner technologies and photogrammetric techniques, it is now possible to acquire and create accurate models of historical and archaeological sites. In this way, unrestricted access to these sites, which is highly desirable from both a research and a cultural perspective, is provided. Through the process of virtualisation, numerous virtual collections are created. These collections must be archives, indexed and visualised over a very long period of time in order to be able to monitor and restore them as required. However, the intrinsic complexities and tremendous importance of ensuring long-term preservation and access to these collections have been widely overlooked. This neglect may lead to the creation of a so-called “Digital Rosetta Stone”, where models become obsolete and the data cannot be interpreted or virtualised. This paper presents a framework for the long-term preservation of 3-D culture heritage data as well as the application thereof in monitoring, restoration and virtual access. The interplay between raw data and model is considered as well as the importance of calibration. Suitable archiving and indexing techniques are described and the issue of visualisation over a very long period of time is addressed. An approach to experimentation though detachment, migration and emulation is presented.",
"title": ""
},
{
"docid": "d6cf367f29ed1c58fb8fd0b7edf69458",
"text": "Diabetes mellitus is a chronic disease that leads to complications including heart disease, stroke, kidney failure, blindness and nerve damage. Type 2 diabetes, characterized by target-tissue resistance to insulin, is epidemic in industrialized societies and is strongly associated with obesity; however, the mechanism by which increased adiposity causes insulin resistance is unclear. Here we show that adipocytes secrete a unique signalling molecule, which we have named resistin (for resistance to insulin). Circulating resistin levels are decreased by the anti-diabetic drug rosiglitazone, and increased in diet-induced and genetic forms of obesity. Administration of anti-resistin antibody improves blood sugar and insulin action in mice with diet-induced obesity. Moreover, treatment of normal mice with recombinant resistin impairs glucose tolerance and insulin action. Insulin-stimulated glucose uptake by adipocytes is enhanced by neutralization of resistin and is reduced by resistin treatment. Resistin is thus a hormone that potentially links obesity to diabetes.",
"title": ""
},
{
"docid": "1d2b45d990059df15c4fb3c76c67c39d",
"text": "Wireless networks with their ubiquitous applications have become an indispensable part of our daily lives. Wireless networks demand more and more spectral resources to support the ever increasing numbers of users. According to network engineers, the current spectrum crunch can be addressed with the introduction of cognitive radio networks (CRNs). In half-duplex (HD) CRNs, the secondary users (SUs) can either only sense the spectrum or transmit at a given time. This HD operation limits the SU throughput, because the SUs cannot transmit during the spectrum sensing. However, with the advances in self-interference suppression (SIS), full-duplex (FD) CRNs allow for simultaneous spectrum sensing and transmission on a given channel. This FD operation increases the throughput and reduces collisions as compared with HD-CRNs. In this paper, we present a comprehensive survey of FD-CRN communications. We cover the supporting network architectures and the various transmit and receive antenna designs. We classify the different SIS approaches in FD-CRNs. We survey the spectrum sensing approaches and security requirements for FD-CRNs. We also survey major advances in FD medium access control protocols as well as open issues, challenges, and future research directions to support the FD operation in CRNs.",
"title": ""
},
{
"docid": "c3c47c2e0c091916c8b2f4a0ca988f2f",
"text": "Four experiments demonstrated implicit self-esteem compensation (ISEC) in response to threats involving gender identity (Experiment 1), implicit racism (Experiment 2), and social rejection (Experiments 3-4). Under conditions in which people might be expected to suffer a blow to self-worth, they instead showed high scores on 2 implicit self-esteem measures. There was no comparable effect on explicit self-esteem. However, ISEC was eliminated following self-affirmation (Experiment 3). Furthermore, threat manipulations increased automatic intergroup bias, but ISEC mediated these relationships (Experiments 2-3). Thus, a process that serves as damage control for the self may have negative social consequences. Finally, pretest anxiety mediated the relationship between threat and ISEC (Experiment 3), whereas ISEC negatively predicted anxiety among high-threat participants (Experiment 4), suggesting that ISEC may function to regulate anxiety. The implications of these findings for automatic emotion regulation, intergroup bias, and implicit self-esteem measures are discussed.",
"title": ""
},
{
"docid": "f740b1c21be29da5717d9f8cc6d52ce4",
"text": "The goal of image stitching is to create natural-looking mosaics free of artifacts that may occur due to relative camera motion, illumination changes, and optical aberrations. In this paper, we propose a novel stitching method, that uses a smooth stitching field over the entire target image, while accounting for all the local transformation variations. Computing the warp is fully automated and uses a combination of local homography and global similarity transformations, both of which are estimated with respect to the target. We mitigate the perspective distortion in the non-overlapping regions by linearizing the homography and slowly changing it to the global similarity. The proposed method is easily generalized to multiple images, and allows one to automatically obtain the best perspective in the panorama. It is also more robust to parameter selection, and hence more automated compared with state-of-the-art methods. The benefits of the proposed approach are demonstrated using a variety of challenging cases.",
"title": ""
},
{
"docid": "fa7c81c8d3d6574f1f1c905ad136f0ee",
"text": "The goal of this track is to study and evaluate the performance of 3D object retrieval algorithms using RGB-D data. This is inspired from the practical need to pair an object acquired from a consumer-grade depth camera to CAD models available in public datasets on the Internet. To support the study, we propose ObjectNN, a new dataset with well segmented and annotated RGB-D objects from SceneNN [HPN∗16] and CAD models from ShapeNet [CFG∗15]. The evaluation results show that the RGB-D to CAD retrieval problem, while being challenging to solve due to partial and noisy 3D reconstruction, can be addressed to a good extent using deep learning techniques, particularly, convolutional neural networks trained by multi-view and 3D geometry. The best method in this track scores 82% in accuracy.",
"title": ""
},
{
"docid": "e19b68314e61f96dea0d7d98f80ca19b",
"text": "With growing interest in adversarial machine learning, it is important for practitioners and users of machine learning to understand how their models may be attacked. We present a web-based visualization tool, ADVERSARIALPLAYGROUND, to demonstrate the efficacy of common adversarial methods against a convolutional neural network. ADVERSARIAL-PLAYGROUND provides users an efficient and effective experience in exploring algorithms for generating adversarial examples — samples crafted by an adversary to fool a machine learning system. To enable fast and accurate responses to users, our webapp employs two key features: (1) We split the visualization and evasive sample generation duties between client and server while minimizing the transferred data. (2) We introduce a variant of the Jacobian Saliency Map Approach that is faster and yet maintains a comparable evasion rate 1.",
"title": ""
},
{
"docid": "9a7d21701b0c45bfe9d0ba7928266f50",
"text": "Increase in demand of electricity for entire applications in any country, need to produce consistently with advanced protection system. Many special protection systems are available based on volume of power distributed and often the load changes without prediction required an advanced and special communication based systems to control the electrical parameters of the generation. Most of the existing systems are reliable on various applications but not perfect for electrical applications. Electrical environment will have lots of disturbance in nature, Due to natural disasters like storms, cyclones or heavy rains transmission and distribution lines may lead to damage. The electrical wire may cut and fall on ground, this leads to very harmful for human beings and may become fatal. So, a rigid, reliable and robust communications like GSM technology instead of many communication techniques used earlier. This enhances speed of communication with distance independenncy. This technology saves human life from this electrical danger by providing the fault detection and automatically stops the electricity to the damaged line and also conveys the message to the electricity board to clear the fault. An Embedded based hardware design is developed and must acquire data from electrical sensing system. A powerful GSM networking is designed to send data from a network to other network. Any change in parameters of transmission is sensed to protect the entire transmission and distribution.",
"title": ""
},
{
"docid": "42296e5b73efaa854d0768cd867b485f",
"text": "A dedicated wake-up receiver may be used in wireless sensor nodes to control duty cycle and reduce network latency. However, its power dissipation must be extremely low to minimize the power consumption of the overall link. This paper describes the design of a 2 GHz receiver using a novel ldquouncertain-IFrdquo architecture, which combines MEMS-based high-Q filtering and a free-running CMOS ring oscillator as the RF LO. The receiver prototype, implemented in 90 nm CMOS technology, achieves a sensitivity of -72 dBm at 100 kbps (10-3 bit error rate) while consuming just 52 muW from the 0.5 V supply.",
"title": ""
},
{
"docid": "9d80272f499057c714ff6dee9fba3b7e",
"text": "Classifying Web Queries by User Intent aims to identify the type of information need behind the queries. In this paper we use a set of features extracted only from the terms including in the query, without any external or additional information. We automatically extracted the features proposed from two different corpora, then implemented machine learning algorithms to validate the accuracy of the classification, and evaluate the results. We analyze the distribution of the features in the queries per class, present the classification results obtained and draw some conclusions about the feature query distribution.",
"title": ""
},
{
"docid": "ce098e1e022235a2c322a231bff8da6c",
"text": "In recent years, due to the development of three-dimensional scanning technology, the opportunities for real objects to be three-dimensionally measured, taken into the PC as point cloud data, and used for various contents are increasing. However, the point cloud data obtained by three-dimensional scanning has many problems such as data loss due to occlusion or the material of the object to be measured, and occurrence of noise. Therefore, it is necessary to edit the point cloud data obtained by scanning. Particularly, since the point cloud data obtained by scanning contains many data missing, it takes much time to fill holes. Therefore, we propose a method to automatically filling hole obtained by three-dimensional scanning. In our method, a surface is generated from a point in the vicinity of a hole, and a hole region is filled by generating a point sequence on the surface. This method is suitable for processing to fill a large number of holes because point sequence interpolation can be performed automatically for hole regions without requiring user input.",
"title": ""
},
{
"docid": "ed3a859e2cea465a6d34c556fec860d9",
"text": "Multi-word expressions constitute a significant portion of the lexicon of every natural language, and handling them correctly is mandatory for various NLP applications. Yet such entities are notoriously hard to define, and are consequently missing from standard lexicons and dictionaries. Multi-word expressions exhibit idiosyncratic behavior on various levels: orthographic, morphological, syntactic and semantic. In this work we take advantage of the morphological and syntactic idiosyncrasy of Hebrew noun compounds and employ it to extract such expressions from text corpora. We show that relying on linguistic information dramatically improves the accuracy of compound extraction, reducing over one third of the errors compared with the best baseline.",
"title": ""
},
{
"docid": "36cf94eb997bc6a4308618779e28e835",
"text": "Retrieving finer grained text units such as passages or sentences as answers for non-factoid Web queries is becoming increasingly important for applications such as mobile Web search. In this work, we introduce the answer sentence retrieval task for non-factoid Web queries, and investigate how this task can be effectively solved under a learning to rank framework. We design two types of features, namely semantic and context features, beyond traditional text matching features. We compare learning to rank methods with multiple baseline methods including query likelihood and the state-of-the-art convolutional neural network based method, using an answer-annotated version of the TREC GOV2 collection. Results show that features used previously to retrieve topical sentences and factoid answer sentences are not sufficient for retrieving answer sentences for non-factoid queries, but with semantic and context features, we can significantly outperform the baseline methods.",
"title": ""
},
{
"docid": "e6d5f3c9a58afcceae99ff522d6dfa81",
"text": "Strategic information systems planning (SISP) is a key concern facing top business and information systems executives. Observers have suggested that both too little and too much SISP can prove ineffective. Hypotheses examine the expected relationship between comprehensiveness and effectiveness in five SISP planning phases. They predict a nonlinear, inverted-U relationship thus suggesting the existence of an optimal level of comprehensiveness. A survey collected data from 161 US information systems executives. After an extensive validation of the constructs, the statistical analysis supported the hypothesis in a Strategy Implementation Planning phase, but not in terms of the other four SISP phases. Managers may benefit from the knowledge that both too much and too little implementation planning may hinder SISP success. Future researchers should investigate why the hypothesis was supported for that phase, but not the others. q 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3eb8a99236905f59af8a32e281189925",
"text": "F2FS is a Linux file system designed to perform well on modern flash storage devices. The file system builds on append-only logging and its key design decisions were made with the characteristics of flash storage in mind. This paper describes the main design ideas, data structures, algorithms and the resulting performance of F2FS. Experimental results highlight the desirable performance of F2FS; on a state-of-the-art mobile system, it outperforms EXT4 under synthetic workloads by up to 3.1× (iozone) and 2× (SQLite). It reduces elapsed time of several realistic workloads by up to 40%. On a server system, F2FS is shown to perform better than EXT4 by up to 2.5× (SATA SSD) and 1.8× (PCIe SSD).",
"title": ""
},
{
"docid": "b95e6cc4d0e30e0f14ecc757e583502e",
"text": "Over the last decade, it has become well-established that a captcha’s ability to withstand automated solving lies in the difficulty of segmenting the image into individual characters. The standard approach to solving captchas automatically has been a sequential process wherein a segmentation algorithm splits the image into segments that contain individual characters, followed by a character recognition step that uses machine learning. While this approach has been effective against particular captcha schemes, its generality is limited by the segmentation step, which is hand-crafted to defeat the distortion at hand. No general algorithm is known for the character collapsing anti-segmentation technique used by most prominent real world captcha schemes. This paper introduces a novel approach to solving captchas in a single step that uses machine learning to attack the segmentation and the recognition problems simultaneously. Performing both operations jointly allows our algorithm to exploit information and context that is not available when they are done sequentially. At the same time, it removes the need for any hand-crafted component, making our approach generalize to new captcha schemes where the previous approach can not. We were able to solve all the real world captcha schemes we evaluated accurately enough to consider the scheme insecure in practice, including Yahoo (5.33%) and ReCaptcha (33.34%), without any adjustments to the algorithm or its parameters. Our success against the Baidu (38.68%) and CNN (51.09%) schemes that use occluding lines as well as character collapsing leads us to believe that our approach is able to defeat occluding lines in an equally general manner. The effectiveness and universality of our results suggests that combining segmentation and recognition is the next evolution of catpcha solving, and that it supersedes the sequential approach used in earlier works. More generally, our approach raises questions about how to develop sufficiently secure captchas in the future.",
"title": ""
}
] |
scidocsrr
|
34e8745eba4e1f18ac09cef4b01bc1fc
|
Semantic analysis of song lyrics
|
[
{
"docid": "69d3c943755734903b9266ca2bd2fad1",
"text": "This paper describes experiments in Machine Learning for text classification using a new representation of text based on WordNet hypernyms. Six binary classification tasks of varying diff iculty are defined, and the Ripper system is used to produce discrimination rules for each task using the new hypernym density representation. Rules are also produced with the commonly used bag-of-words representation, incorporating no knowledge from WordNet. Experiments show that for some of the more diff icult tasks the hypernym density representation leads to significantly more accurate and more comprehensible rules.",
"title": ""
}
] |
[
{
"docid": "26699915946647c1c582c1a0ab63b963",
"text": "In computer vision problems such as pair matching, only binary information ‘same’ or ‘different’ label for pairs of images is given during training. This is in contrast to classification problems, where the category labels of training images are provided. We propose a unified discriminative dictionary learning approach for both pair matching and multiclass classification tasks. More specifically, we introduce a new discriminative term called ‘pairwise sparse code error’ for the discriminativeness in sparse representation of pairs of signals, and then combine it with the classification error for discriminativeness in classifier construction to form a unified objective function. The solution to the new objective function is achieved by employing the efficient feature-sign search algorithm. The learned dictionary encourages feature points from a similar pair (or the same class) to have similar sparse codes. We validate the effectiveness of our approach through a series of experiments on face verification and recognition problems.",
"title": ""
},
{
"docid": "7e8b111ca6998c62fbf9d1e2956d0bcb",
"text": "Opinion mining is one of the most challenging tasks of the field of information retrieval. Research community has been publishing a number of articles on this topic but a significant increase in interest has been observed during the past decade especially after the launch of several online social networks. In this paper, we provide a very detailed overview of the related work of opinion mining. Following features of our review make it stand unique among the works of similar kind: (1) it presents a very different perspective of the opinion mining field by discussing the work on different granularity levels (like word, sentences, and document levels) which is very unique and much required, (2) discussion of the related work in terms of challenges of the field of opinion mining, (3) document level discussion of the related work gives an overview of opinion mining task in blogosphere, one of most popular online social network, and (4) highlights the importance of online social networks for opinion mining task and other related sub-tasks.",
"title": ""
},
{
"docid": "91c0bd1c3faabc260277c407b7c6af59",
"text": "In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene rather than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution in this paper is introducing a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top 3 Convolutional Neural Networks (CNN) models in the feature extraction competitions and test their performance for autonomous driving. The experimental results showed that GoogLeNet performs the best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, and we refer to our algorithm as GoogLenet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle as compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles.",
"title": ""
},
{
"docid": "ef9947c8f478d6274fcbcf8c9e300806",
"text": "The introduction in 1998 of multi-detector row computed tomography (CT) by the major CT vendors was a milestone with regard to increased scan speed, improved z-axis spatial resolution, and better utilization of the available x-ray power. In this review, the general technical principles of multi-detector row CT are reviewed as they apply to the established four- and eight-section systems, the most recent 16-section scanners, and future generations of multi-detector row CT systems. Clinical examples are used to demonstrate both the potential and the limitations of the different scanner types. When necessary, standard single-section CT is referred to as a common basis and starting point for further developments. Another focus is the increasingly important topic of patient radiation exposure, successful dose management, and strategies for dose reduction. Finally, the evolutionary steps from traditional single-section spiral image-reconstruction algorithms to the most recent approaches toward multisection spiral reconstruction are traced.",
"title": ""
},
{
"docid": "5a50c1fcaf41d26e47fad3b55e2c488e",
"text": "This paper presents an early stage exploratory analysis of communities engaging in alternative narratives about crisis events on social media. Over 11,000 user accounts on Twitter engaged in conversations questioning the mainstream narrative of the Paris Attacks and Umpqua Community College Shootings in Autumn 2015. We analyze the social network of communication, examining the composition of clusters of accounts with shared audiences. We describe some of the common traits within communities and note the prevalence of automated accounts. Our results shed light on the nature of the communities espousing alternative narratives and factors in the spread of the content.",
"title": ""
},
{
"docid": "f6ddb7fd8a4a06d8a0e58b02085b9481",
"text": "We explore approximate policy iteration (API), replacing t he usual costfunction learning step with a learning step in policy space. We give policy-language biases that enable solution of very large relational Markov decision processes (MDPs) that no previous techniqu e can solve. In particular, we induce high-quality domain-specific plan ners for classical planning domains (both deterministic and stochastic variants) by solving such domains as extremely large MDPs.",
"title": ""
},
{
"docid": "5b6bf9ee0fed37b20d4b3607717d2f77",
"text": "In order to understand the organization of the cerebral cortex, it is necessary to create a map or parcellation of cortical areas. Reconstructions of the cortical surface created from structural MRI scans, are frequently used in neuroimaging as a common coordinate space for representing multimodal neuroimaging data. These meshes are used to investigate healthy brain organization as well as abnormalities in neurological and psychiatric conditions. We frame cerebral cortex parcellation as a mesh segmentation task, and address it by taking advantage of recent advances in generalizing convolutions to the graph domain. In particular, we propose to assess graph convolutional networks and graph attention networks, which, in contrast to previous mesh parcellation models, exploit the underlying structure of the data to make predictions. We show experimentally on the Human Connectome Project dataset that the proposed graph convolutional models outperform current state-ofthe-art and baselines, highlighting the potential and applicability of these methods to tackle neuroimaging challenges, paving the road towards a better characterization of brain diseases.",
"title": ""
},
{
"docid": "66d6f514c6bce09110780a1130b64dfe",
"text": "Today, with more competiveness of industries, markets, and working atmosphere in productive and service organizations what is very important for maintaining clients present, for attracting new clients and as a result increasing growth of success in organizations is having a suitable relation with clients. Bank is among organizations which are not an exception. Especially, at the moment according to increasing rate of banks` privatization, it can be argued that significance of attracting clients for banks is more than every time. The article tries to investigate effect of CRM on marketing performance in banking industry. The research method is applied and survey and descriptive. Statistical community of the research is 5 branches from Mellat Banks across Khoramabad Province and their clients. There are 45 personnel in this branch and according to Morgan Table the sample size was 40 people. Clients example was considered according to collected information, one questionnaire was designed for bank organization and another one was prepared for banks` clients in which reliability and validity are approved. The research result indicates that CRM is ineffective on marketing performance.",
"title": ""
},
{
"docid": "90ecdad8743f134fb07489cee9ce15ef",
"text": "As one of the most successful fast food chain in the world, throughout the development of McDonald’s, we could easily identify many successful business strategy implementations. In this paper, I will discuss some critical business strategies, which linked to the company’s structure and external environment. This paper is organized as follows: In the first section, I will give brief introduction to the success of McDonald’s. In the second section, I will analyze some particular strategies used by McDonald’s and how these strategies are suitable to their business structure. I will then analyze why McDonald’s choose these strategies in response to the changing external environment. Finally, I will summarize the approaches used by McDonald’s to achieve their strategic goals.",
"title": ""
},
{
"docid": "70313633b2694adbaea3e82b30b1ca51",
"text": "The Global Assessment Scale (GAS) is a rating scale for evaluating the overall functioning of a subject during a specified time period on a continuum from psychological or psychiatric sickness to health. In five studies encompassing the range of population to which measures of overall severity of illness are likely to be applied, the GAS was found to have good reliability. GAS ratings were found to have a greater sensitivity to change over time than did other ratings of overall severity or specific symptom dimensions. Former inpatients in the community with a GAS rating below 40 had a higher probability of readmission to the hospital than did patients with higher GAS scores. The relative simplicity, reliability, and validity of the GAS suggests that it would be useful in a wide variety of clinical and research settings.",
"title": ""
},
{
"docid": "214be33e744fc211174a8164e26e2f36",
"text": "On-chip communication remains as a key research issue at the gates of the manycore era. In response to this, novel interconnect technologies have opened the door to new Network-on-Chip (NoC) solutions towards greater scalability and architectural flexibility. Particularly, wireless on-chip communication has garnered considerable attention due to its inherent broadcast capabilities, low latency, and system-level simplicity. This work presents OrthoNoC, a wired-wireless architecture that differs from existing proposals in that both network planes are decoupled and driven by traffic steering policies enforced at the network interfaces. With these and other design decisions, OrthoNoC seeks to emphasize the ordered broadcast advantage offered by the wireless technology. The performance and cost of OrthoNoC are first explored using synthetic traffic, showing substantial improvements with respect to other wired-wireless designs with a similar number of antennas. Then, the applicability of OrthoNoC in the multiprocessor scenario is demonstrated through the evaluation of a simple architecture that implements fast synchronization via ordered broadcast transmissions. Simulations reveal significant execution time speedups and communication energy savings for 64-threaded benchmarks, proving that the value of OrthoNoC goes beyond simply improving the performance of the on-chip interconnect.",
"title": ""
},
{
"docid": "5a3f65509a2acd678563cd495fe287de",
"text": "Auditory menus have the potential to make devices that use visual menus accessible to a wide range of users. Visually impaired users could especially benefit from the auditory feedback received during menu navigation. However, auditory menus are a relatively new concept, and there are very few guidelines that describe how to design them. This paper details how visual menu concepts may be applied to auditory menus in order to help develop design guidelines. Specifically, this set of studies examined possible ways of designing an auditory scrollbar for an auditory menu. The following different auditory scrollbar designs were evaluated: single-tone, double-tone, alphabetical grouping, and proportional grouping. Three different evaluations were conducted to determine the best design. The first two evaluations were conducted with sighted users, and the last evaluation was conducted with visually impaired users. The results suggest that pitch polarity does not matter, and proportional grouping is the best of the auditory scrollbar designs evaluated here.",
"title": ""
},
{
"docid": "d836f8b9c13ba744f39daa5887bed52e",
"text": "Cerebral palsy is the most common cause of childhood-onset, lifelong physical disability in most countries, affecting about 1 in 500 neonates with an estimated prevalence of 17 million people worldwide. Cerebral palsy is not a disease entity in the traditional sense but a clinical description of children who share features of a non-progressive brain injury or lesion acquired during the antenatal, perinatal or early postnatal period. The clinical manifestations of cerebral palsy vary greatly in the type of movement disorder, the degree of functional ability and limitation and the affected parts of the body. There is currently no cure, but progress is being made in both the prevention and the amelioration of the brain injury. For example, administration of magnesium sulfate during premature labour and cooling of high-risk infants can reduce the rate and severity of cerebral palsy. Although the disorder affects individuals throughout their lifetime, most cerebral palsy research efforts and management strategies currently focus on the needs of children. Clinical management of children with cerebral palsy is directed towards maximizing function and participation in activities and minimizing the effects of the factors that can make the condition worse, such as epilepsy, feeding challenges, hip dislocation and scoliosis. These management strategies include enhancing neurological function during early development; managing medical co-morbidities, weakness and hypertonia; using rehabilitation technologies to enhance motor function; and preventing secondary musculoskeletal problems. Meeting the needs of people with cerebral palsy in resource-poor settings is particularly challenging.",
"title": ""
},
{
"docid": "4bfb3823edf6dece64ebf5cee80368e0",
"text": "As ontology development becomes a more ubiquitous and collaborative process, the developers face the problem of maintaining versions of ontologies akin to maintaining versions of software code in large software projects. Versioning systems for software code provide mechanisms for tracking versions, checking out versions for editing, comparing different versions, and so on. We can directly reuse many of these mechanisms for ontology versioning. However, version comparison for code is based on comparing text files--an approach that does not work for comparing ontologies. Two ontologies can be identical but have different text representation. We have developed the PROMPTDIFF algorithm, which integrates different heuristic matchers for comparing ontology versions. We combine these matchers in a fixed-point manner, using the results of one matcher as an input for others until the matchers produce no more changes. The current implementation includes ten matchers but the approach is easily extendable to an arbitrary number of matchers. Our evaluation showed that PROMPTDIFF correctly identified 96% of the matches in ontology versions from large projects.",
"title": ""
},
{
"docid": "20d96905880332d6ef5a33b4dd0d8827",
"text": "In spite of the fact that equal opportunities for men and women have been a priority in many countries, enormous gender differences prevail in most competitive high-ranking positions. We conduct a series of controlled experiments to investigate whether women might react differently than men to competitive incentive schemes commonly used in job evaluation and promotion. We observe no significant gender difference in mean performance when participants are paid proportional to their performance. But in the competitive environment with mixed gender groups we observe a significant gender difference: the mean performance of men has a large and significant, that of women is unchanged. This gap is not due to gender differences in risk aversion. We then run the same test with homogeneous groups, to investigate whether women under-perform only when competing against men. Women do indeed increase their performance and gender differences in mean performance are now insignificant. These results may be due to lower skill of women, or more likely to the fact that women dislike competition, or alternatively that they feel less competent than their male competitors, which depresses their performance in mixed tournaments. Our last experiment provides support for this hypothesis.",
"title": ""
},
{
"docid": "204ecea0d8b6c572cd1a5d20b5e267a9",
"text": "Nowadays it is very common for people to write online reviews of products they have purchased. These reviews are a very important source of information for the potential customers before deciding to purchase a product. Consequently, websites containing customer reviews are becoming targets of opinion spam. -- undeserving positive or negative reviews; reviews that reviewers never use the product, but is written with an agenda in mind. This paper aims to detect spam reviews by users. Characteristics of the review will be identified based on previous research, plus a new feature -- rating consistency check. The goal is to devise a tool to evaluate the product reviews and detect product review spams. The approach is based on multiple criteria: checking unusual review vs. rating patterns, links or advertisements, detecting questions and comparative reviews. We tested our system on a couple of sets of data and find that we are able to detect these factors effectively.",
"title": ""
},
{
"docid": "f7797c2392419c0bd46908e86bcab61b",
"text": "Data in the maritime domain is growing at an unprecedented rate, e.g., terabytes of oceanographic data are collected every month, and petabytes of data are already publicly available. Big data from heterogeneous sources such as sensors, buoys, vessels, and satellites could potentially fuel a large number of interesting applications for environmental protection, security, fault prediction, shipping routes optimization, and energy production. However, because of several challenges related to big data and the high heterogeneity of the data sources, such applications are still underdeveloped and fragmented. In this paper, we analyze challenges and requirements related to big maritime data applications and propose a scalable data management solution. A big data architecture meeting these requirements is described, and examples of its implementation in concrete scenarios are provided. The related data value chain and use cases in the context of a European project, BigDataOcean, are also described.",
"title": ""
},
{
"docid": "a0d4e1038ac7309260d984f4e39d5c91",
"text": "Modeling plays a central role in design automation of embedded processors. It is necessary to develop a specification language that can model complex processors at a higher level of abstraction and enable automatic analysis and generation of efficient tools and prototypes. The language should be powerful enough to capture high-level description of the processor architectures. On the other hand, the language should be simple enough to allow correlation of the information between the specification and the architecture manual.",
"title": ""
},
{
"docid": "d593c18bf87daa906f83d5ff718bdfd0",
"text": "Information and communications technologies (ICTs) have enabled the rise of so-called “Collaborative Consumption” (CC): the peer-to-peer-based activity of obtaining, giving, or sharing the access to goods and services, coordinated through community-based online services. CC has been expected to alleviate societal problems such as hyper-consumption, pollution, and poverty by lowering the cost of economic coordination within communities. However, beyond anecdotal evidence, there is a dearth of understanding why people participate in CC. Therefore, in this article we investigate people’s motivations to participate in CC. The study employs survey data (N = 168) gathered from people registered onto a CC site. The results show that participation in CC is motivated by many factors such as its sustainability, enjoyment of the activity as well as economic gains. An interesting detail in the result is that sustainability is not directly associated with participation unless it is at the same time also associated with positive attitudes towards CC. This suggests that sustainability might only be an important factor for those people for whom ecological consumption is important. Furthermore, the results suggest that in CC an attitudebehavior gap might exist; people perceive the activity positively and say good things about it, but this good attitude does not necessary translate into action. Introduction",
"title": ""
},
{
"docid": "7297a6317a3fc515d2d46943a2792c69",
"text": "The present work elaborates the process design methodology for the evaluation of the distillation systems based on the economic, exergetic and environmental point of view, the greenhouse gas (GHG) emissions. The methodology proposes the Heat Integrated Pressure Swing Distillation Sequence (HiPSDS) is economic and reduces the GHG emissions than the conventional Extractive Distillation Sequence (EDS) and the Pressure Swing Distillation Sequence (PSDS) for the case study of isobutyl alcohol and isobutyl acetate with the solvents for EDS and with low pressure variations for PSDS and HiPSDS. The study demonstrates that the exergy analysis can predict the results of the economic and environmental evaluation associated with the process design.",
"title": ""
}
] |
scidocsrr
|
3eb3069d22c40da7757423612c47231a
|
Improved Crowbar Control Strategy of DFIG Based Wind Turbines for Grid Fault Ride-Through
|
[
{
"docid": "859c1b7269c2a297478ca73f521b2ea2",
"text": "This paper analyzes the ability of a doubly fed induction generator (DFIG) in a wind turbine to ride through a grid fault and the limitations to its performance. The fundamental difficulty for the DFIG in ride-through is the electromotive force (EMF) induced in the machine rotor during the fault, which depends on the dc and negative sequence components in the stator-flux linkage and the rotor speed. The investigation develops a control method to increase the probability of successful grid fault ride-through, given the current and voltage capabilities of the rotor-side converter. A time-domain computer simulation model is developed and laboratory experiments are conducted to verify the model and a control method is proposed. Case studies are then performed on a representatively sized system to define the feasibility regions of successful ride-through for different types of grid faults",
"title": ""
},
{
"docid": "8066246656f6a9a3060e42efae3b197f",
"text": "The paper describes the engineering and design of a doubly fed induction generator (DFIG), using back-to-back PWM voltage-source converters in the rotor circuit. A vector-control scheme for the supply-side PWM converter results in independent control of active and reactive power drawn from the supply, while ensuring sinusoidal supply currents. Vector control of the rotor-connected converter provides for wide speed-range operation; the vector scheme is embedded in control loops which enable optimal speed tracking for maximum energy capture from the wind. An experimental rig, which represents a 1.5 kW variable speed wind-energy generation system is described, and experimental results are given that illustrate the excellent performance characteristics of the system. The paper considers a grid-connected system; a further paper will describe a stand-alone system.",
"title": ""
}
] |
[
{
"docid": "531ac7d6500373005bae464c49715288",
"text": "We have used acceleration sensors to monitor the heart motion during surgery. A three-axis accelerometer was made from two commercially available two-axis sensors, and was used to measure the heart motion in anesthetized pigs. The heart moves due to both respiration and heart beating. The heart beating was isolated from respiration by high-pass filtering at 1.0 Hz, and heart wall velocity and position were calculated by numerically integrating the filtered acceleration traces. The resulting curves reproduced the heart motion in great detail, noise was hardly visible. Events that occurred during the measurements, e.g. arrhythmias and fibrillation, were recognized in the curves, and confirmed by comparison with synchronously recorded ECG data. We conclude that acceleration sensors are able to measure heart motion with good resolution, and that such measurements can reveal patterns that may be an indication of heart circulation failure.",
"title": ""
},
{
"docid": "196ddcefb2c3fcb6edd5e8d108f7e219",
"text": "This paper may be considered as a practical reference for those who wish to add (now sufficiently matured) Agent Based modeling to their analysis toolkit and may or may not have some System Dynamics or Discrete Event modeling background. We focus on systems that contain large numbers of active objects (people, business units, animals, vehicles, or even things like projects, stocks, products, etc. that have timing, event ordering or other kind of individual behavior associated with them). We compare the three major paradigms in simulation modeling: System Dynamics, Discrete Event and Agent Based Modeling with respect to how they approach such systems. We show in detail how an Agent Based model can be built from an existing System Dynamics or a Discrete Event model and then show how easily it can be further enhanced to capture much more complicated behavior, dependencies and interactions thus providing for deeper insight in the system being modeled. Commonly understood examples are used throughout the paper; all models are specified in the visual language supported by AnyLogic tool. We view and present Agent Based modeling not as a substitution to older modeling paradigms but as a useful add-on that can be efficiently combined with System Dynamics and Discrete Event modeling. Several multi-paradigm model architectures are suggested.",
"title": ""
},
{
"docid": "0dc3c4e628053e8f7c32c0074a2d1a59",
"text": "Understanding inter-character relationships is fundamental for understanding character intentions and goals in a narrative. This paper addresses unsupervised modeling of relationships between characters. We model relationships as dynamic phenomenon, represented as evolving sequences of latent states empirically learned from data. Unlike most previous work our approach is completely unsupervised. This enables data-driven inference of inter-character relationship types beyond simple sentiment polarities, by incorporating lexical and semantic representations, and leveraging large quantities of raw text. We present three models based on rich sets of linguistic features that capture various cues about relationships. We compare these models with existing techniques and also demonstrate that relationship categories learned by our model are semantically coherent.",
"title": ""
},
{
"docid": "7b952e7c5d8dc00e2c2f4cfd9e58fbcb",
"text": "As heterogeneous networks (HetNets) emerge as one of the most promising developments toward realizing the target specifications of Long Term Evolution (LTE) and LTE-Advanced (LTE-A) networks, radio resource management (RRM) research for such networks has, in recent times, been intensively pursued. Clearly, recent research mainly concentrates on the aspect of interference mitigation. Other RRM aspects, such as radio resource utilization, fairness, complexity, and QoS, have not been given much attention. In this paper, we aim to provide an overview of the key challenges arising from HetNets and highlight their importance. Subsequently, we present a comprehensive survey of the RRM schemes that have been studied in recent years for LTE/LTE-A HetNets, with a particular focus on those for femtocells and relay nodes. Furthermore, we classify these RRM schemes according to their underlying approaches. In addition, these RRM schemes are qualitatively analyzed and compared to each other. We also identify a number of potential research directions for future RRM development. Finally, we discuss the lack of current RRM research and the importance of multi-objective RRM studies.",
"title": ""
},
{
"docid": "0762f2778f3d9f7da10b8c51b2ff7ff5",
"text": "We propose a real-time, robust to outliers and accurate solution to the Perspective-n-Point (PnP) problem. The main advantages of our solution are twofold: first, it in- tegrates the outlier rejection within the pose estimation pipeline with a negligible computational overhead, and sec- ond, its scalability to arbitrarily large number of correspon- dences. Given a set of 3D-to-2D matches, we formulate pose estimation problem as a low-rank homogeneous sys- tem where the solution lies on its 1D null space. Outlier correspondences are those rows of the linear system which perturb the null space and are progressively detected by projecting them on an iteratively estimated solution of the null space. Since our outlier removal process is based on an algebraic criterion which does not require computing the full-pose and reprojecting back all 3D points on the image plane at each step, we achieve speed gains of more than 100× compared to RANSAC strategies. An extensive exper- imental evaluation will show that our solution yields accu- rate results in situations with up to 50% of outliers, and can process more than 1000 correspondences in less than 5ms.",
"title": ""
},
{
"docid": "c9ca8d6f38c44bde6983e401a967c399",
"text": "The validation and verification of cognitive skills of highly automated vehicles is an important milestone for legal and public acceptance of advanced driver assistance systems (ADAS). In this paper, we present an innovative data-driven method in order to create critical traffic situations from recorded sensor data. This concept is completely contrary to previous approaches using parametrizable simulation models. We demonstrate our concept at the example of parametrizing lane change maneuvers: Firstly, the road layout is automatically derived from observed vehicle trajectories. The road layout is then used in order to detect vehicle maneuvers, which is shown exemplarily on lane change maneuvers. Then, the maneuvers are parametrized using data operators in order to create critical traffic scenarios. Finally, we demonstrate our concept using LIDAR-captured traffic situations on urban and highway scenes, creating critical scenarios out of safely recorded data.",
"title": ""
},
{
"docid": "ca7c4d8118d199301904ae6af7100eb9",
"text": "In den drei vorangegangenen Mitteilungen wurden die Geschichte, verfahrensbedingten Ersparnisse, Platteneigenschaften des kontinuierlichen Pressens und, als aktuelles Verfahren, das Küsters press-Verfahren behandelt und ein Kostenvergleich mit dem Takt-Preßverfahren angestellt. In der vorliegenden, vorläufig abschließenden Mitteilung werden die ebenfalls aktuellen kontinuierlichen Preßverfahren von Bison und Siempelkamp vorgestellt und die Kosten miteinander verglichen, soweit dies derzeit möglich ist. Da jedoch außerhalb dieser verfahrenstragenden Firmen noch keine dieser Anlagen in Betrieb ist, sollen Erfahrungswerte aus der Betriebspraxis, soweit sie mit diesen Verfahren gesammelt werden können, in einer späteren Veröffentlichung vorgelegt werden. In three preceding papers the history, process-related savings, board properties and the socalled Küsters press process were described and the costs of this pressing method compared to those of the step-pressing process. The present paper reports on processes by Messrs. Bison and Siempelkamp, with a comparison of respective costs as far as they were available. As neither plants are working outside the Bison or Siempelkamp factories yet, data accumulating during actual manufacturing with these processes will be published at a later date.",
"title": ""
},
{
"docid": "65a4709f62c084cdd07fe54d834b8eaf",
"text": "Although in the era of third generation (3G) mobile networks technical hurdles are minor, the continuing failure of mobile payments (m-payments) withstands the endorsement by customers and service providers. A major reason is the uncommonly high interdependency of technical, human and market factors which have to be regarded and orchestrated cohesively to solve the problem. In this paper, we apply Business Model Ontology in order to develop an m-payment business model framework based on the results of a precedent multi case study analysis of 27 m-payment procedures. The framework is depicted with a system of morphological boxes and the interrelations between the associated characteristics. Representing any m-payment business model along with its market setting and influencing decisions as instantiations, the resulting framework enables researchers and practitioners for comprehensive analysis of existing and future models and provides a helpful tool for m-payment business model engineering.",
"title": ""
},
{
"docid": "73e4fed83bf8b1f473768ce15d6a6a86",
"text": "Improving science, technology, engineering, and mathematics (STEM) education, especially for traditionally disadvantaged groups, is widely recognized as pivotal to the U.S.'s long-term economic growth and security. In this article, we review and discuss current research on STEM education in the U.S., drawing on recent research in sociology and related fields. The reviewed literature shows that different social factors affect the two major components of STEM education attainment: (1) attainment of education in general, and (2) attainment of STEM education relative to non-STEM education conditional on educational attainment. Cognitive and social psychological characteristics matter for both major components, as do structural influences at the neighborhood, school, and broader cultural levels. However, while commonly used measures of socioeconomic status (SES) predict the attainment of general education, social psychological factors are more important influences on participation and achievement in STEM versus non-STEM education. Domestically, disparities by family SES, race, and gender persist in STEM education. Internationally, American students lag behind those in some countries with less economic resources. Explanations for group disparities within the U.S. and the mediocre international ranking of US student performance require more research, a task that is best accomplished through interdisciplinary approaches.",
"title": ""
},
{
"docid": "c6eb01a11e88dd686a47ca594b424350",
"text": "Automatic fake news detection is an important, yet very challenging topic. Traditional methods using lexical features have only very limited success. This paper proposes a novel method to incorporate speaker profiles into an attention based LSTM model for fake news detection. Speaker profiles contribute to the model in two ways. One is to include them in the attention model. The other includes them as additional input data. By adding speaker profiles such as party affiliation, speaker title, location and credit history, our model outperforms the state-of-the-art method by 14.5% in accuracy using a benchmark fake news detection dataset. This proves that speaker profiles provide valuable information to validate the credibility of news articles.",
"title": ""
},
{
"docid": "9f68df51d0d47b539a6c42207536d012",
"text": "Schizophrenia-spectrum risk alleles may persist in the population, despite their reproductive costs in individuals with schizophrenia, through the possible creativity benefits of mild schizotypy in non-psychotic relatives. To assess this creativity-benefit model, we measured creativity (using 6 verbal and 8 drawing tasks), schizotypy, Big Five personality traits, and general intelligence in 225 University of New Mexico students. Multiple regression analyses showed that openness and intelligence, but not schizotypy, predicted reliable observer ratings of verbal and drawing creativity. Thus, the 'madness-creativity' link seems mediated by the personality trait of openness, and standard creativity-benefit models seem unlikely to explain schizophrenia's evolutionary persistence.",
"title": ""
},
{
"docid": "aae3e8f023b90bc2050d7c38a3857cc5",
"text": "Severe, chronic growth retardation of cattle early in life reduces growth potential, resulting in smaller animals at any given age. Capacity for long-term compensatory growth diminishes as the age of onset of nutritional restriction resulting in prolonged growth retardation declines. Hence, more extreme intrauterine growth retardation can result in slower growth throughout postnatal life. However, within the limits of beef production systems, neither severely restricted growth in utero nor from birth to weaning influences efficiency of nutrient utilisation later in life. Retail yield from cattle severely restricted in growth during pregnancy or from birth to weaning is reduced compared with cattle well grown early in life, when compared at the same age later in life. However, retail yield and carcass composition of low- and high-birth-weight calves are similar at the same carcass weight. At equivalent carcass weights, cattle grown slowly from birth to weaning have carcasses of similar or leaner composition than those grown rapidly. However, if high energy, concentrate feed is provided following severe growth restriction from birth to weaning, then at equivalent weights post-weaning the slowly-grown, small weaners may be fatter than their well-grown counterparts. Restricted prenatal and pre-weaning nutrition and growth do not adversely affect measures of beef quality. Similarly, bovine myofibre characteristics are little affected in the long term by growth in utero or from birth to weaning. Interactions were not evident between prenatal and pre-weaning growth for subsequent growth, efficiency, carcass, yield and beef-quality characteristics, within our pasture-based production systems. Furthermore, interactions between genotype and nutrition early in life, studied using offspring of Piedmontese and Wagyu sired cattle, were not evident for any growth, efficiency, carcass, yield and beef-quality parameters. We propose that within pasture-based production systems for beef cattle, the plasticity of the carcass tissues, particularly of muscle, allows animals that are growth-retarded early in life to attain normal composition at equivalent weights in the long term, albeit at older ages. However, the quality of nutrition during recovery from early life growth retardation may be important in determining the subsequent composition of young, light-weight cattle relative to their heavier counterparts. Finally, it should be emphasised that long-term consequences of more specific and/or acute environmental influences during specific stages of embryonic, foetal and neonatal calf development remain to be determined. This need for further research extends to consequences of nutrition and growth early in life for reproductive capacity.",
"title": ""
},
{
"docid": "9d30cfbc7d254882e92cad01f5bd17c7",
"text": "Data from culture studies have revealed that Enterococcus faecalis is occasionally isolated from primary endodontic infections but frequently recovered from treatment failures. This molecular study was undertaken to investigate the prevalence of E. faecalis in endodontic infections and to determine whether this species is associated with particular forms of periradicular diseases. Samples were taken from cases of untreated teeth with asymptomatic chronic periradicular lesions, acute apical periodontitis, or acute periradicular abscesses, and from root-filled teeth associated with asymptomatic chronic periradicular lesions. DNA was extracted from the samples, and a 16S rDNA-based nested polymerase chain reaction assay was used to identify E. faecalis. This species occurred in seven of 21 root canals associated with asymptomatic chronic periradicular lesions, in one of 10 root canals associated with acute apical periodontitis, and in one of 19 pus samples aspirated from acute periradicular abscesses. Statistical analysis showed that E. faecalis was significantly more associated with asymptomatic cases than with symptomatic ones. E. faecalis was detected in 20 of 30 cases of persistent endodontic infections associated with root-filled teeth. When comparing the frequencies of this species in 30 cases of persistent infections with 50 cases of primary infections, statistical analysis demonstrated that E. faecalis was strongly associated with persistent infections. The average odds of detecting E. faecalis in cases of persistent infections associated with treatment failure were 9.1. The results of this study indicated that E. faecalis is significantly more associated with asymptomatic cases of primary endodontic infections than with symptomatic ones. Furthermore, E. faecalis was much more likely to be found in cases of failed endodontic therapy than in primary infections.",
"title": ""
},
{
"docid": "3db4d7a83afbbadbafe3d1c4fddf51a0",
"text": "A Successive approximation analog to digital converter (ADC) for data acquisition using fully CMOS high speed self-biased comparator circuit is discussed in this paper. ASIC finds greater demand when area and speed optimization are major concern and here the entire optimized design is done in CADENCE virtuoso EDA tool in 180nm technology. Towerjazz semiconductor foundry is the base for layout design and GDSII extraction. Comparison of different DAC architecture and the precise architecture with minimum DNL and INL are chosen for the design procedure. This paper describes the design of a fully customized 9 bit SAR ADC with input voltage ranging from 0 to 2.5V and sampling frequency 16.67 KHz. Hspice simulators is used for the simulations. Keywords— SAR ADC, Comparator, CADENCE, CMOS, DAC. [1] INTRODUCTION With the development of sensors, portable devices and high speed computing systems, comparable growth is seen in the optimization of Analog to digital converters (ADC) to assist in the technology growth. All the natural signals are analog and the present digital world require the signal in digital format for storing, processing and transmitting and thereby ADC becomes an integral part of almost all electronic devices 8 . This leads to the need for power, area and speed optimized design of ADCs. There are different ADC architectures like Flash ADC, SAR ADC, sigma-delta ADC etc., with each having its own pros and cons. The designer selects the desired architecture according to the requirements 1 . Flash ADC is the fasted ADC structure where the output is obtained in a single cycle but requires a large number of resistors and comparators for the design. For an N bit 2 flash ADC 2 N resistors and 2 N-1 comparators are required consuming large amount of area and power. Modifications are done on flash ADC to form pipelined flash ADC where the number of components can be reduced but the power consumption cannot be further reduced beyond a level. Sigma-delta ADC or integrating type of ADC is used when the resolution required is very high. This is the slowest architecture compared to other architectures. Design of sigma-delta requires analog design of integrator circuit making its design complex. SAR ADC architecture gives the output in N cycles for an N-bit ADC. SAR ADC being one of the pioneer ADC architecture is been commonly used due to its good trade-off between area, power and speed, which is the required criteria for CMOS deep submicron circuits. SAR ADC consists of a Track and Hold (TH) circuit, comparator, DAC and a SAR register and control logic. Figure 1 shows the block diagram of a SAR ADC. This paper is organized into six sections. Section II describes the analog design of TH and comparator. Section III compares the DAC architecture. Section IV explains the SAR logic. Section V gives the simulation results and section VI is the conclusion. Fig 1 Block Diagram of SAR ADC [2] ANALOG DESIGN OF TH AND COMPARATOR A. Track and Hold In general, Sample and hold circuit or Track and Hold contain a switch and a capacitor. In the tracking mode, when the sampling signal (strobe pulse) is high and the switch is connected, it tracks the analog input signal 3 . Then, it holds the value when the sampling signal turns to low in the hold mode. In this case, sample and hold provides a constant voltage at the input of the ADC during conversion 7 . 
Figure 2 shows a simple Track and hold Vol 05, Article 11492, November 2014 International Journal of VLSI and Embedded Systems-IJVES http://ijves.com ISSN: 2249 – 6556 2010-2014 – IJVES Indexing in Process EMBASE, EmCARE, Electronics & Communication Abstracts, SCIRUS, SPARC, GOOGLE Database, EBSCO, NewJour, Worldcat, DOAJ, and other major databases etc., 1392 circuit with a NMOS transistor as switch. The capacitance value is selected as 100pF and aspect ratio of the transistor as 28 based on the design steps. Fig 2 Track and hold circuit B. Latched comparator Comparator with high resolution and high speed is the desired design criteria and here dynamic latched comparator topology and self-biased open loop comparator topology are studied and implemented. From the comparison results, the best topology considering speed and better resolution is selected. Figure 3 shows a latched comparator. Static latch consumes static power which is not attractive for low power applications. A major disadvantage of latch is low resolution. Fig 3Latched comparator C. Self-biased open loop comparator A self-biased open loop comparator is a differential input high gain amplifier with an output stage. A currentmirror acts as the load for the differential pair and converts the double ended circuit to a single ended. Since precise gain is not required for comparator circuit, no compensation techniques are required 4 . Figure 4 shows a self-biased open loop comparator. Schematic of the circuit implementation and simulation result shows that selfbiased open loop comparator has better speed of operation compared to latched comparator. The simulation results are tabulated below in table 1. Thought there are two capacitors in open loop comparator resulting in more power consumption, speed of operation and resolution is better compared to latched comparator. So open loop comparator circuit is selected for the design advancement. Both the comparator design is done based of a specific output current and slew rate. Fig 4 Self-biased open loop comparator Conversion time No of transistors Resolution Power Latched comparator 426.6ns 11 4mv 80nw Self-biased open loop comparator 712.7ns 10 15mv 58nw Table 1 Comparator simulation results [3] DAC ARCHITECTURE A. R-2R DAC The digital data bits are entered through the input lines (d0 to d(N-1)) which is to be converted to an equivalent analog voltage (Vout) using R/2R resistor network 5 . The R/2R network is built by a set of resistors of two Vol 05, Article 11492, November 2014 International Journal of VLSI and Embedded Systems-IJVES http://ijves.com ISSN: 2249 – 6556 2010-2014 – IJVES Indexing in Process EMBASE, EmCARE, Electronics & Communication Abstracts, SCIRUS, SPARC, GOOGLE Database, EBSCO, NewJour, Worldcat, DOAJ, and other major databases etc., 1393 values, with values of one sets being twice of the other. Here for simulation purpose 1K and 2K resistors are used, there by resulting R/2R ratio. Accuracy or precision of DAC depends on the values of resistors chosen, higher precision can be obtained with an exact match of the R/2R ratio. B. C-2C DAC The schematic diagram of 3bit C2C ladder is shown in figure 4.3 which is similar to that of the R2R type. The capacitor value selected as 20 fF and 40 fF for C and 2C respectively such that the impedance value of C is twice that of 2C. C. Charge scaling DAC The voltage division principle is same as that of C-2C 6 . The value of unit capacitance is selected as 20fF for the simulation purpose. 
In order to obtain precision between the capacitance value parallel combinations of unit capacitance is implemented for the binary weighted value. Compared to C-2C the capacitance area is considerably large. DAC type Integral Non-Linearity INL Differential Non-Linearity DNL Offset",
"title": ""
},
{
"docid": "a117e006785ab63ef391d882a097593f",
"text": "An increasing interest in understanding human perception in social media has led to the study of the processes of personality self-presentation and impression formation based on user profiles and text blogs. However, despite the popularity of online video, we do not know of any attempt to study personality impressions that go beyond the use of text and still photos. In this paper, we analyze one facet of YouTube as a repository of brief behavioral slices in the form of personal conversational vlogs, which are a unique medium for selfpresentation and interpersonal perception. We investigate the use of nonverbal cues as descriptors of vloggers’ behavior and find significant associations between automatically extracted nonverbal cues for several personality judgments. As one notable result, audio and visual cues together can be used to predict 34% of the variance of the Extraversion trait of the Big Five model. In addition, we explore the associations between vloggers’ personality scores and the level of social attention that their videos received in YouTube. Our study is conducted on a dataset of 442 YouTube vlogs and 2,210 annotations collected using Amazon’s Mechanical Turk.",
"title": ""
},
{
"docid": "a3868d4a31883fc789a3f207cbd32285",
"text": "Randomised controlled trials (RCTs) are the most effective approach to causal discovery, but in many circumstances it is impossible to conduct RCTs. Therefore, observational studies based on passively observed data are widely accepted as an alternative to RCTs. However, in observational studies, prior knowledge is required to generate the hypotheses about the cause-effect relationships to be tested, and hence they can only be applied to problems with available domain knowledge and a handful of variables. In practice, many datasets are of high dimensionality, which leaves observational studies out of the opportunities for causal discovery from such a wealth of data sources. In another direction, many efficient data mining methods have been developed to identify associations among variables in large datasets. The problem is that causal relationships imply associations, but the reverse is not always true. However, we can see the synergy between the two paradigms here. Specifically, association rule mining can be used to deal with the high-dimensionality problem, whereas observational studies can be utilised to eliminate noncausal associations. In this article, we propose the concept of causal rules (CRs) and develop an algorithm for mining CRs in large datasets. We use the idea of retrospective cohort studies to detect CRs based on the results of association rule mining. Experiments with both synthetic and real-world datasets have demonstrated the effectiveness and efficiency of CR mining. In comparison with the commonly used causal discovery methods, the proposed approach generally is faster and has better or competitive performance in finding correct or sensible causes. It is also capable of finding a cause consisting of multiple variables—a feature that other causal discovery methods do not possess.",
"title": ""
},
{
"docid": "30c75f37a7798b57a90376e88bb19270",
"text": "We develop methods for performing smoothing computations in general state-space models. The methods rely on a particle representation of the filtering distributions, and their evolution through time using sequential importance sampling and resampling ideas. In particular, novel techniques are presented for generation of sample realizations of historical state sequences. This is carried out in a forwardfiltering backward-smoothing procedure which can be viewed as the non-linear, non-Gaussian counterpart of standard Kalman filter-based simulation smoothers in the linear Gaussian case. Convergence in the mean-squared error sense of the smoothed trajectories is proved, showing the validity of our proposed method. The methods are tested in a substantial application for the processing of speech signals represented by a time-varying autoregression and parameterised in terms of timevarying partial correlation coefficients, comparing the results of our algorithm with those from a simple smoother based upon the filtered trajectories.",
"title": ""
},
{
"docid": "d05dd1185643ced774fa0f4a1fbfe2cb",
"text": "This paper explores the use of Support Vector Machines (SVMs) for learning text classifiers from examples. It analyzes the particular properties of learning with text data and identifies why SVMs arc appropriate for this task. Empirical results support the theoretical findings. SVMs achieve substantial improvements over the currently best performing methods and behave robustly over a variety of different learning tasks. Furthermore, they are fully automatic, eliminating the need for manual parameter tuning. 1 I n t r o d u c t i o n With the rapid growth of online information, text categorization has become one of the key techniques for handling and organizing text data. Text categorization techniques are used to classify news stories, to find interesting information on the WWW, and to guide a user's search through hypertext. Since building text classifiers by hand is difficult and time-consuming, it is advantageous to learn classifiers from examples. In this paper I will explore and identify the benefits of Support Vector Machines (SVMs) for text categorization. SVMs are a new learning method introduced by V. Vapnik et al. [9] [1]. They are well-founded in terms of computational learning theory and very open to theoretical understanding and analysis. After reviewing the standard feature vector representation of text, I will identify the particular properties of text in this representation in section 4. I will argue that SVMs are very well suited for learning in this setting. The empirical results in section 5 will support this claim. Compared to state-of-the-art methods, SVMs show substantial performance gains. Moreover, in contrast to conventional text classification methods SVMs will prove to be very robust, eliminating the need for expensive parameter tuning. 2 T e x t C a t e g o r i z a t i o n The goal of text categorization is the classification of documents into a fixed number of predefined categories. Each document can be in multiple, exactly one, or no category at all. Using machine learning, the objective is to learn classifiers",
"title": ""
},
{
"docid": "3fe2cb22ac6aa37d8f9d16dea97649c5",
"text": "The term biosensors encompasses devices that have the potential to quantify physiological, immunological and behavioural responses of livestock and multiple animal species. Novel biosensing methodologies offer highly specialised monitoring devices for the specific measurement of individual and multiple parameters covering an animal's physiology as well as monitoring of an animal's environment. These devices are not only highly specific and sensitive for the parameters being analysed, but they are also reliable and easy to use, and can accelerate the monitoring process. Novel biosensors in livestock management provide significant benefits and applications in disease detection and isolation, health monitoring and detection of reproductive cycles, as well as monitoring physiological wellbeing of the animal via analysis of the animal's environment. With the development of integrated systems and the Internet of Things, the continuously monitoring devices are expected to become affordable. The data generated from integrated livestock monitoring is anticipated to assist farmers and the agricultural industry to improve animal productivity in the future. The data is expected to reduce the impact of the livestock industry on the environment, while at the same time driving the new wave towards the improvements of viable farming techniques. This review focusses on the emerging technological advancements in monitoring of livestock health for detailed, precise information on productivity, as well as physiology and well-being. Biosensors will contribute to the 4th revolution in agriculture by incorporating innovative technologies into cost-effective diagnostic methods that can mitigate the potentially catastrophic effects of infectious outbreaks in farmed animals.",
"title": ""
},
{
"docid": "217dfc849cea5e0d80555790362af2e7",
"text": "Research examining online political forums has until now been overwhelmingly guided by two broad perspectives: (1) a deliberative conception of democratic communication and (2) a diverse collection of incommensurable multi-sphere approaches. While these literatures have contributed many insightful observations, their disadvantages have left many interesting communicative dynamics largely unexplored. This article seeks to introduce a new framework for evaluating online political forums (based on the work of Jürgen Habermas and Lincoln Dahlberg) that addresses the shortcomings of prior approaches by identifying three distinct, overlapping models of democracy that forums may manifest: the liberal, the communitarian and the deliberative democratic. For each model, a set of definitional variables drawn from the broader online forum literature is documented and discussed.",
"title": ""
}
] |
scidocsrr
|
9d87aa198ac5ee23a3d3882c0e40f2ff
|
The Dark Side of Micro-Task Marketplaces: Characterizing Fiverr and Automatically Detecting Crowdturfing
|
[
{
"docid": "b5dcb1496143f31526b3bd07b1045add",
"text": "Crowdturfing has recently been identified as a sinister counterpart to the enormous positive opportunities of crowdsourcing. Crowdturfers leverage human-powered crowdsourcing platforms to spread malicious URLs in social media, form “astroturf” campaigns, and manipulate search engines, ultimately degrading the quality of online information and threatening the usefulness of these systems. In this paper we present a framework for “pulling back the curtain” on crowdturfers to reveal their underlying ecosystem. Concretely, we analyze the types of malicious tasks and the properties of requesters and workers in crowdsourcing sites such as Microworkers.com, ShortTask.com and Rapidworkers.com, and link these tasks (and their associated workers) on crowdsourcing sites to social media, by monitoring the activities of social media participants. Based on this linkage, we identify the relationship structure connecting these workers in social media, which can reveal the implicit power structure of crowdturfers identified on crowdsourcing sites. We identify three classes of crowdturfers – professional workers, casual workers, and middlemen – and we develop statistical user models to automatically differentiate these workers and regular social media users.",
"title": ""
}
] |
[
{
"docid": "df15ea13d3bbcb7e9c5658670d37c6b1",
"text": "We present a new time integration method featuring excellent stability and energy conservation properties, making it particularly suitable for real-time physics. The commonly used backward Euler method is stable but introduces artificial damping. Methods such as implicit midpoint do not suffer from artificial damping but are unstable in many common simulation scenarios. We propose an algorithm that blends between the implicit midpoint and forward/backward Euler integrators such that the resulting simulation is stable while introducing only minimal artificial damping. We achieve this by tracking the total energy of the simulated system, taking into account energy-changing events: damping and forcing. To facilitate real-time simulations, we propose a local/global solver, similar to Projective Dynamics, as an alternative to Newton’s method. Compared to the original Projective Dynamics, which is derived from backward Euler, our final method introduces much less numerical damping at the cost of minimal computing overhead. Stability guarantees of our method are derived from the stability of backward Euler, whose stability is a widely accepted empirical fact. However, to our knowledge, theoretical guarantees have so far only been proven for linear ODEs. We provide preliminary theoretical results proving the stability of backward Euler also for certain cases of nonlinear potential functions.",
"title": ""
},
{
"docid": "6ebb0bccba167e4b093e7832621e3e23",
"text": "Bump-less Cu/adhesive hybrid bonding is a promising technology for 2.5D/3D integration. The remaining issues of this technology include high Cu–Cu bonding temperature, long thermal-compression time (low throughput), and large thermal stress. In this paper, we investigate a Cu-first hybrid bonding process in hydrogen(H)-containing formic acid (HCOOH) vapor ambient, lowering the bonding temperature to 180 °C and shortening the thermal-compression time to 600 s. We find that the H-containing HCOOH vapor pre-bonding treatment is effective for Cu surface activation and friendly to adhesives at treatment temperature of 160–200 °C. The effects of surface activation (temperature and time) on Cu–Cu bonding and cyclo-olefin polymer (COP) adhesive bonding are studied by shear tests, fracture surface observations, and interfacial observations. Cu/adhesive hybrid bonding was successfully demonstrated at a bonding temperature of 180 °C with post-bonding adhesive curing at 200 °C.",
"title": ""
},
{
"docid": "c4b5c4c94faa6e77486a95457cdf502f",
"text": "In this paper, we implement an optical fiber communication system as an end-to-end deep neural network, including the complete chain of transmitter, channel model, and receiver. This approach enables the optimization of the transceiver in a single end-to-end process. We illustrate the benefits of this method by applying it to intensity modulation/direct detection (IM/DD) systems and show that we can achieve bit error rates below the 6.7% hard-decision forward error correction (HD-FEC) threshold. We model all componentry of the transmitter and receiver, as well as the fiber channel, and apply deep learning to find transmitter and receiver configurations minimizing the symbol error rate. We propose and verify in simulations a training method that yields robust and flexible transceivers that allow—without reconfiguration—reliable transmission over a large range of link dispersions. The results from end-to-end deep learning are successfully verified for the first time in an experiment. In particular, we achieve information rates of 42 Gb/s below the HD-FEC threshold at distances beyond 40 km. We find that our results outperform conventional IM/DD solutions based on two- and four-level pulse amplitude modulation with feedforward equalization at the receiver. Our study is the first step toward end-to-end deep learning based optimization of optical fiber communication systems.",
"title": ""
},
{
"docid": "22c643e0a13c3510f0099ac61282fcfb",
"text": "We propose and study a novel panoptic segmentation (PS) task. Panoptic segmentation unifies the typically distinct tasks of semantic segmentation (assign a class label to each pixel) and instance segmentation (detect and segment each object instance). The proposed task requires generating a coherent scene segmentation that is rich and complete, an important step toward real-world vision systems. While early work in computer vision addressed related image/scene parsing tasks, these are not currently popular, possibly due to lack of appropriate metrics or associated recognition challenges. To address this, we first propose a novel panoptic quality (PQ) metric that captures performance for all classes (stuff and things) in an interpretable and unified manner. Using the proposed metric, we perform a rigorous study of both human and machine performance for PS on three existing datasets, revealing interesting insights about the task. Second, we are working to introduce panoptic segmentation tracks at upcoming recognition challenges. The aim of our work is to revive the interest of the community in a more unified view of image segmentation.",
"title": ""
},
{
"docid": "076be6f579e3b2b8889cd97f781c98e9",
"text": "To gain an in-depth understanding of the behaviour of a malware, reverse engineers have to disassemble the malware, analyze the resulting assembly code, and then archive the commented assembly code in a malware repository for future reference. In this paper, we have developed an assembly code clone detection system called BinClone to identify the code clone fragments from a collection of malware binaries with the following major contributions. First, we introduce two deterministic clone detection methods with the goals of improving the recall rate and facilitating malware analysis. Second, our methods allow malware analysts to discover both exact and inexact clones at different token normalization levels. Third, we evaluate our proposed clone detection methods on real-life malware binaries. To the best of our knowledge, this is the first work that studies the problem of assembly code clone detection for malware analysis.",
"title": ""
},
{
"docid": "d29ca3ca682433a9ea6172622d12316c",
"text": "The phenomenon of a phantom limb is a common experience after a limb has been amputated or its sensory roots have been destroyed. A complete break of the spinal cord also often leads to a phantom body below the level of the break. Furthermore, a phantom of the breast, the penis, or of other innervated body parts is reported after surgical removal of the structure. A substantial number of children who are born without a limb feel a phantom of the missing part, suggesting that the neural network, or 'neuromatrix', that subserves body sensation has a genetically determined substrate that is modified by sensory experience.",
"title": ""
},
{
"docid": "abbe1bca2f31ad7b3d5761b03cebafa5",
"text": "Research Article Gimun Kim Konyang University gmkim@konyang.ac.kr Bongsik Shin San Diego State University bshin@mail.sdsu.edu Kyung Kyu Kim Yonsei University kyu.kim@yonsei.ac.kr Ho Geun Lee Yonsei University h.lee@yonsei.ac.kr More and more publications are highlighting the value of IT in affecting business processes. Recognizing firmlevel dynamic capabilities as key to improved firm performance, our work examines and empirically tests the influencing relationships among IT capabilities (IT personnel expertise, IT infrastructure flexibility, and IT management capabilities), process-oriented dynamic capabilities, and financial performance. Processoriented dynamic capabilities are defined as a firm’s ability to change (improve, adapt, or reconfigure) a business process better than the competition in terms of integrating activities, reducing cost, and capitalizing on business intelligence/learning. They encompass a broad category of changes in the firm’s processes, ranging from continual adjustments and improvements to radical one-time alterations. Although the majority of changes may be incremental, a firm’s capacity for timely changes also implies its readiness to execute radical alterations when the need arises. Grounded on the theoretical position, we propose a research model and gather a survey data set through a rigorous process that retains research validity. From the analysis of the survey data, we find an important route of causality, as follows: IT personnel expertise IT management capabilities IT infrastructure flexibility process-oriented dynamic capabilities financial performance. Based on this finding, we discuss the main contributions of our study in terms of the strategic role of IT in enhancing firm performance.",
"title": ""
},
{
"docid": "09c5da2fbf8a160ba27221ff0c5417ac",
"text": " The burst fracture of the spine was first described by Holdsworth in 1963 and redefined by Denis in 1983 as being a fracture of the anterior and middle columns of the spine with or without an associated posterior column fracture. This injury has received much attention in the literature as regards its radiological diagnosis and also its clinical managment. The purpose of this article is to review the way that imaging has been used both to diagnose the injury and to guide management. Current concepts of the stability of this fracture are presented and our experience in the use of magnetic resonance imaging in deciding treatment options is discussed.",
"title": ""
},
{
"docid": "c46edb8a67c10ba5819a5eeeb0e62905",
"text": "One of the most challenging projects in information systems is extracting information from unstructured texts, including medical document classification. I am developing a classification algorithm that classifies a medical document by analyzing its content and categorizing it under predefined topics from the Medical Subject Headings (MeSH). I collected a corpus of 50 full-text journal articles (N=50) from MEDLINE, which were already indexed by experts based on MeSH. Using natural language processing (NLP), my algorithm classifies the collected articles under MeSH subject headings. I evaluated the algorithm's outcome by measuring its precision and recall of resulting subject headings from the algorithm, comparing results to the actual documents' subject headings. The algorithm classified the articles correctly under 45% to 60% of the actual subject headings and got 40% to 53% of the total subject headings correct. This holds promising solutions for the global health arena to index and classify medical documents expeditiously.",
"title": ""
},
{
"docid": "a78c5f726ac3306528b5094b2e8e871c",
"text": "Despite widespread agreement that multi-method assessments are optimal in personality research, the literature is dominated by a single method: self-reports. This pattern seems to be based, at least in part, on widely held preconceptions about the costs of non-self-report methods, such as informant methods. Researchers seem to believe that informant methods are: (a) time-consuming, (b) expensive, (c) ineffective (i.e., informants will not cooperate), and (d) particularly vulnerable to faking or invalid responses. This article evaluates the validity of these preconceptions in light of recent advances in Internet technology, and proposes some strategies for making informant methods more effective. Drawing on data from three separate studies, I demonstrate that, using these strategies, informant reports can be collected with minimal effort and few monetary costs. In addition, informants are generally very willing to cooperate (e.g., response rates of 76–95%) and provide valid data (in terms of strong consensus and self-other agreement). Informant reports represent a mostly untapped resource that researchers can use to improve the validity of personality assessments and to address new questions that cannot be examined with self-reports alone. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "c635f2ad65cd74c137910661aeb0ab3d",
"text": "Scholarly research on the topic of leadership has witnessed a dramatic increase over the last decade, resulting in the development of diverse leadership theories. To take stock of established and developing theories since the beginning of the new millennium, we conducted an extensive qualitative review of leadership theory across 10 top-tier academic publishing outlets that included The Leadership Quarterly, Administrative Science Quarterly, American Psychologist, Journal of Management, Academy of Management Journal, Academy of Management Review, Journal of Applied Psychology, Organizational Behavior and Human Decision Processes, Organizational Science, and Personnel Psychology. We then combined two existing frameworks (Gardner, Lowe, Moss, Mahoney, & Cogliser, 2010; Lord & Dinh, 2012) to provide a processoriented framework that emphasizes both forms of emergence and levels of analysis as a means to integrate diverse leadership theories. We then describe the implications of the findings for future leadership research and theory.",
"title": ""
},
{
"docid": "6e8d30f3eaaf6c88dddb203c7b703a92",
"text": "searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggesstions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA, 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any oenalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.",
"title": ""
},
{
"docid": "a267fadc2875fc16b69635d4592b03ae",
"text": "We investigated neural correlates of human visual orienting using event-related functional magnetic resonance imaging (fMRI). When subjects voluntarily directed attention to a peripheral location, we recorded robust and sustained signals uniquely from the intraparietal sulcus (IPs) and superior frontal cortex (near the frontal eye field, FEF). In the ventral IPs and FEF only, the blood oxygen level dependent signal was modulated by the direction of attention. The IPs and FEF also maintained the most sustained level of activation during a 7-sec delay, when subjects maintained attention at the peripheral cued location (working memory). Therefore, the IPs and FEF form a dorsal network that controls the endogenous allocation and maintenance of visuospatial attention. A separate right hemisphere network was activated by the detection of targets at unattended locations. Activation was largely independent of the target's location (visual field). This network included among other regions the right temporo-parietal junction and the inferior frontal gyrus. We propose that this cortical network is important for reorienting to sensory events.",
"title": ""
},
{
"docid": "85965736f2d215fb9d7d7351160cc1e9",
"text": "In Robotics, especially in this era of autonomous driving, mapping is one key ability of a robot to be able to navigate through an environment, localize on it and analyze its traversability. To allow for real-time execution on constrained hardware, the map usually estimated by feature-based or semidense SLAM algorithms is a sparse point cloud; a richer and more complete representation of the environment is desirable. Existing dense mapping algorithms require extensive use of GPU computing and they hardly scale to large environments; incremental algorithms from sparse points still represent an effective solution when light computational effort is needed and big sequences have to be processed in real-time. In this paper we improved and extended the state of the art incremental manifold mesh algorithm proposed in [1] and extended in [2]. While these algorithms do not achieve real-time and they embed points from SLAM or Structure from Motion only when their position is fixed, in this paper we propose the first incremental algorithm able to reconstruct a manifold mesh in real-time through single core CPU processing which is also able to modify the mesh according to 3D points updates from the underlying SLAM algorithm. We tested our algorithm against two state of the art incremental mesh mapping systems on the KITTI dataset, and we showed that, while accuracy is comparable, our approach is able to reach real-time performances thanks to an order of magnitude speed-up.",
"title": ""
},
{
"docid": "77ece03721c0bf08484e64b405523e04",
"text": "Video content providers put stringent requirements on the quality assessment methods realized on their services. They need to be accurate, real-time, adaptable to new content, and scalable as the video set grows. In this letter, we introduce a novel automated and computationally efficient video assessment method. It enables accurate real-time (online) analysis of delivered quality in an adaptable and scalable manner. Offline deep unsupervised learning processes are employed at the server side and inexpensive no-reference measurements at the client side. This provides both real-time assessment and performance comparable to the full reference counterpart, while maintaining its no-reference characteristics. We tested our approach on the LIMP Video Quality Database (an extensive packet loss impaired video set) obtaining a correlation between <inline-formula><tex-math notation=\"LaTeX\">$78\\%$</tex-math> </inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$91\\%$</tex-math></inline-formula> to the FR benchmark (the video quality metric). Due to its unsupervised learning essence, our method is flexible and dynamically adaptable to new content and scalable with the number of videos.",
"title": ""
},
{
"docid": "907883af0e81f4157e81facd4ff4344c",
"text": "This work presents a low-power low-cost CDR design for RapidIO SerDes. The design is based on phase interpolator, which is controlled by a synthesized standard cell digital block. Half-rate architecture is adopted to lessen the problems in routing high speed clocks and reduce power. An improved half-rate bang-bang phase detector is presented to assure the stability of the system. Moreover, the paper proposes a simplified control scheme for the phase interpolator to further reduce power and cost. The CDR takes an area of less than 0.05mm2, and post simulation shows that the CDR has a RMS jitter of UIpp/32 (11.4ps@3.125GBaud) and consumes 9.5mW at 3.125GBaud.",
"title": ""
},
{
"docid": "72a283eda92eb25404536308d8909999",
"text": "This paper presents a 128.7nW analog front-end amplifier and Gm-C filter for biomedical sensing applications, specifically for Electroencephalogram (EEG) use. The proposed neural amplifier has a supply voltage of 1.8V, consumes a total current of 71.59nA, for a total dissipated power of 128nW and has a gain of 40dB. Also, a 3th order Butterworth Low Pass Gm-C Filter with a 14.7nS transconductor is designed and presented. The filter has a pass band suitable for use in EEG (1-100Hz). The amplifier and filter utilize current sources without resistance which provide 56nA and (1.154nA ×5) respectively. The proposed amplifier occupies and area of 0.26mm2 in 0.3μm TSMC process.",
"title": ""
},
{
"docid": "cfae06b1dc6faf1fca6617c722c146a3",
"text": "State-of-the-art approaches for the previous emotion recognition in the wild challenges are usually built on prevailing Convolutional Neural Networks (CNNs). Although there is clear evidence that CNNs with increased depth or width can usually bring improved predication accuracy, existing top approaches provide supervision only at the output feature layer, resulting in the insufficient training of deep CNN models. In this paper, we present a new learning method named Supervised Scoring Ensemble (SSE) for advancing this challenge with deep CNNs. We first extend the idea of recent deep supervision to deal with emotion recognition problem. Benefiting from adding supervision not only to deep layers but also to intermediate layers and shallow layers, the training of deep CNNs can be well eased. Second, we present a new fusion structure in which class-wise scoring activations at diverse complementary feature layers are concatenated and further used as the inputs for second-level supervision, acting as a deep feature ensemble within a single CNN architecture. We show our proposed learning method brings large accuracy gains over diverse backbone networks consistently. On this year's audio-video based emotion recognition task, the average recognition rate of our best submission is 60.34%, forming a new envelop over all existing records.",
"title": ""
},
{
"docid": "94b8aeb8454b05a7916daf0f0b57ee8b",
"text": "Accumulating evidence suggests that neuroinflammation affecting microglia plays an important role in the etiology of schizophrenia, and appropriate control of microglial activation may be a promising therapeutic strategy for schizophrenia. Minocycline, a second-generation tetracycline that inhibits microglial activation, has been shown to have a neuroprotective effect in various models of neurodegenerative disease, including anti-inflammatory, antioxidant, and antiapoptotic properties, and an ability to modulate glutamate-induced excitotoxicity. Given that these mechanisms overlap with neuropathologic pathways, minocycline may have a potential role in the adjuvant treatment of schizophrenia, and improve its negative symptoms. Here, we review the relevant studies of minocycline, ranging from preclinical research to human clinical trials.",
"title": ""
},
{
"docid": "19eed07d00b48a0dbb70127bab446cc2",
"text": "In addition to compatibility with VLSI technology, sigma-delta converters provide high level of reliability and functionality and reduced chip cost. Those characteristics are commonly required in the today wireless communication environment. The objective of this paper is to simulate and analyze the sigma-delta technology which proposed for the implementation in the low-digital-bandwidth voice communication. The results of simulation show the superior performance of the converter compared to the performance of more conventional implementations, such as the delta converters. Particularly, this paper is focused on simulation and comparison between sigma-delta and delta converters in terms of varying signal to noise ratio, distortion ratio and sampling structure.",
"title": ""
}
] |
scidocsrr
|
2fc805c64562df9daf1344e2c4a8883d
|
In the Eye of the Beholder: A Survey of Models for Eyes and Gaze
|
[
{
"docid": "12d625fe60790761ff604ab8aa70c790",
"text": "We describe a system designed to monitor the gaze of a user working naturally at a computer workstation. The system consists of three cameras situated between the keyboard and the monitor. Free head movements are allowed within a three-dimensional volume approximately 40 centimeters in diameter. Two fixed, wide-field \"face\" cameras equipped with active-illumination systems enable rapid localization of the subject's pupils. A third steerable \"eye\" camera has a relatively narrow field of view, and acquires the images of the eyes which are used for gaze estimation. Unlike previous approaches which construct an explicit three-dimensional representation of the subject's head and eye, we derive mappings for steering control and gaze estimation using a procedure we call implicit calibration. Implicit calibration is performed by collecting a \"training set\" of parameters and associated measurements, and solving for a set of coefficients relating the measurements back to the parameters of interest. Preliminary data on three subjects indicate an median gaze estimation error of ap-proximately 0.8 degree.",
"title": ""
},
{
"docid": "953f2efa434f29ceecc191201ebd77d7",
"text": "This paper presents a novel design for a non-contact eye detection and gaze tracking device. It uses two cameras to maintain real-time tracking of a person s eye in the presence of head motion. Image analysis techniques are used to obtain accurate locations of the pupil and corneal reflections. All the computations are performed in software and the device only requires simple, compact optics and electronics attached to the user s computer. Three methods of estimating the user s point of gaze on a computer monitor are evaluated. The camera motion system is capable of tracking the user s eye in real-time (9 fps) in the presence of natural head movements as fast as 100 /s horizontally and 77 /s vertically. Experiments using synthetic images have shown its ability to track the location of the eye in an image to within 0.758 pixels horizontally and 0.492 pixels vertically. The system has also been tested with users with different eye colors and shapes, different ambient lighting conditions and the use of eyeglasses. A gaze accuracy of 2.9 was observed. 2004 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "0666baa7be39ef1887c7f8ce04aaa957",
"text": "BACKGROUND\nEnsuring health worker job satisfaction and motivation are important if health workers are to be retained and effectively deliver health services in many developing countries, whether they work in the public or private sector. The objectives of the paper are to identify important aspects of health worker satisfaction and motivation in two Indian states working in public and private sectors.\n\n\nMETHODS\nCross-sectional surveys of 1916 public and private sector health workers in Andhra Pradesh and Uttar Pradesh, India, were conducted using a standardized instrument to identify health workers' satisfaction with key work factors related to motivation. Ratings were compared with how important health workers consider these factors.\n\n\nRESULTS\nThere was high variability in the ratings for areas of satisfaction and motivation across the different practice settings, but there were also commonalities. Four groups of factors were identified, with those relating to job content and work environment viewed as the most important characteristics of the ideal job, and rated higher than a good income. In both states, public sector health workers rated \"good employment benefits\" as significantly more important than private sector workers, as well as a \"superior who recognizes work\". There were large differences in whether these factors were considered present on the job, particularly between public and private sector health workers in Uttar Pradesh, where the public sector fared consistently lower (P < 0.01). Discordance between what motivational factors health workers considered important and their perceptions of actual presence of these factors were also highest in Uttar Pradesh in the public sector, where all 17 items had greater discordance for public sector workers than for workers in the private sector (P < 0.001).\n\n\nCONCLUSION\nThere are common areas of health worker motivation that should be considered by managers and policy makers, particularly the importance of non-financial motivators such as working environment and skill development opportunities. But managers also need to focus on the importance of locally assessing conditions and managing incentives to ensure health workers are motivated in their work.",
"title": ""
},
{
"docid": "54776bdc9f7a9b18289d4901a8db5d7a",
"text": "The goal of this research was to determine the effect of different doses of galactooligosaccharide (GOS) on the fecal microbiota of healthy adults, with a focus on bifidobacteria. The study was designed as a single-blinded study, with eighteen subjects consuming GOS-containing chocolate chews at four increasing dosage levels; 0, 2.5, 5.0, and 10.0g. Subjects consumed each dose for 3 weeks, with a two-week baseline period preceding the study and a two-week washout period at the end. Fecal samples were collected weekly and analyzed by cultural and molecular methods. Cultural methods were used for bifidobacteria, Bacteroides, enterobacteria, enterococci, lactobacilli, and total anaerobes; culture-independent methods included denaturing gradient gel electrophoresis (DGGE) and quantitative real-time PCR (qRT-PCR) using Bifidobacterium-specific primers. All three methods revealed an increase in bifidobacteria populations, as the GOS dosage increased to 5 or 10g. Enumeration of bifidobacteria by qRT-PCR showed a high inter-subject variation in bifidogenic effect and indicated a subset of 9 GOS responders among the eighteen subjects. There were no differences, however, in the initial levels of bifidobacteria between the responding individuals and the non-responding individuals. Collectively, this study showed that a high purity GOS, administered in a confection product at doses of 5g or higher, was bifidogenic, while a dose of 2.5g showed no significant effect. However, the results also showed that even when GOS was administered for many weeks and at high doses, there were still some individuals for which a bifidogenic response did not occur.",
"title": ""
},
{
"docid": "9e9dd203746a1bd4024632abeb80fb0a",
"text": "Translating data from linked data sources to the vocabulary that is expected by a linked data application requires a large number of mappings and can require a lot of structural transformations as well as complex property value transformations. The R2R mapping language is a language based on SPARQL for publishing expressive mappings on the web. However, the specification of R2R mappings is not an easy task. This paper therefore proposes the use of mapping patterns to semi-automatically generate R2R mappings between RDF vocabularies. In this paper, we first specify a mapping language with a high level of abstraction to transform data from a source ontology to a target ontology vocabulary. Second, we introduce the proposed mapping patterns. Finally, we present a method to semi-automatically generate R2R mappings using the mapping",
"title": ""
},
{
"docid": "a97838c0a9290bb3bf6fbbbac0a25f5e",
"text": "The collaborative filtering (CF) using known user ratings of items has proved to be effective for predicting user preferences in item selection. This thrivi ng subfield of machine learning became popular in the late 1990s with the spread of online services t hat use recommender systems, such as Amazon, Yahoo! Music, and Netflix. CF approaches are usually designed to work on very large data sets. Therefore the scalability of the methods is cruci al. In this work, we propose various scalable solutions that are validated against the Netflix Pr ize data set, currently the largest publicly available collection. First, we propose various matrix fac torization (MF) based techniques. Second, a neighbor correction method for MF is outlined, which alloy s the global perspective of MF and the localized property of neighbor based approaches efficie ntly. In the experimentation section, we first report on some implementation issues, and we suggest on how parameter optimization can be performed efficiently for MFs. We then show that the propos ed calable approaches compare favorably with existing ones in terms of prediction accurac y nd/or required training time. Finally, we report on some experiments performed on MovieLens and Jes ter data sets.",
"title": ""
},
{
"docid": "e1da6ca2b27ef6dfcdad1db9def49ce2",
"text": "The first stage of every knowledge base question answering approach is to link entities in the input question. We investigate entity linking in the context of a question answering task and present a jointly optimized neural architecture for entity mention detection and entity disambiguation that models the surrounding context on different levels of granularity. We use the Wikidata knowledge base and available question answering datasets to create benchmarks for entity linking on question answering data. Our approach outperforms the previous state-of-the-art system on this data, resulting in an average 8% improvement of the final score. We further demonstrate that our model delivers a strong performance across different entity categories.",
"title": ""
},
{
"docid": "9581c692787cfef1ce2916100add4c1e",
"text": "Diabetes related eye disease is growing as a major health concern worldwide. Diabetic retinopathy is an infirmity due to higher level of glucose in the retinal capillaries, resulting in cloudy vision and blindness eventually. With regular screening, pathology can be detected in the instigating stage and if intervened with in time medication could prevent further deterioration. This paper develops an automated diagnosis system to recognize retinal blood vessels, and pathologies, such as exudates and microaneurysms together with certain texture properties using image processing techniques. These anatomical and texture features are then fed into a multiclass support vector machine (SVM) for classifying it into normal, mild, moderate, severe and proliferative categories. Advantages include, it processes quickly a large collection of fundus images obtained from mass screening which lessens cost and increases efficiency for ophthalmologists. Our method was evaluated on two publicly available databases and got encouraging results with a state of the art in this area.",
"title": ""
},
{
"docid": "872946be0c4897dc33bc1276593ee7a4",
"text": "BACKGROUND\nMusic therapy is a therapeutic method that uses musical interaction as a means of communication and expression. The aim of the therapy is to help people with serious mental disorders to develop relationships and to address issues they may not be able to using words alone.\n\n\nOBJECTIVES\nTo review the effects of music therapy, or music therapy added to standard care, compared with 'placebo' therapy, standard care or no treatment for people with serious mental disorders such as schizophrenia.\n\n\nSEARCH METHODS\nWe searched the Cochrane Schizophrenia Group Trials Register (December 2010) and supplemented this by contacting relevant study authors, handsearching of music therapy journals and manual searches of reference lists.\n\n\nSELECTION CRITERIA\nAll randomised controlled trials (RCTs) that compared music therapy with standard care, placebo therapy, or no treatment.\n\n\nDATA COLLECTION AND ANALYSIS\nStudies were reliably selected, quality assessed and data extracted. We excluded data where more than 30% of participants in any group were lost to follow-up. We synthesised non-skewed continuous endpoint data from valid scales using a standardised mean difference (SMD). If statistical heterogeneity was found, we examined treatment 'dosage' and treatment approach as possible sources of heterogeneity.\n\n\nMAIN RESULTS\nWe included eight studies (total 483 participants). These examined effects of music therapy over the short- to medium-term (one to four months), with treatment 'dosage' varying from seven to 78 sessions. Music therapy added to standard care was superior to standard care for global state (medium-term, 1 RCT, n = 72, RR 0.10 95% CI 0.03 to 0.31, NNT 2 95% CI 1.2 to 2.2). Continuous data identified good effects on negative symptoms (4 RCTs, n = 240, SMD average endpoint Scale for the Assessment of Negative Symptoms (SANS) -0.74 95% CI -1.00 to -0.47); general mental state (1 RCT, n = 69, SMD average endpoint Positive and Negative Symptoms Scale (PANSS) -0.36 95% CI -0.85 to 0.12; 2 RCTs, n=100, SMD average endpoint Brief Psychiatric Rating Scale (BPRS) -0.73 95% CI -1.16 to -0.31); depression (2 RCTs, n = 90, SMD average endpoint Self-Rating Depression Scale (SDS) -0.63 95% CI -1.06 to -0.21; 1 RCT, n = 30, SMD average endpoint Hamilton Depression Scale (Ham-D) -0.52 95% CI -1.25 to -0.21 ); and anxiety (1 RCT, n = 60, SMD average endpoint SAS -0.61 95% CI -1.13 to -0.09). Positive effects were also found for social functioning (1 RCT, n = 70, SMD average endpoint Social Disability Schedule for Inpatients (SDSI) score -0.78 95% CI -1.27 to -0.28). Furthermore, some aspects of cognitive functioning and behaviour seem to develop positively through music therapy. Effects, however, were inconsistent across studies and depended on the number of music therapy sessions as well as the quality of the music therapy provided.\n\n\nAUTHORS' CONCLUSIONS\nMusic therapy as an addition to standard care helps people with schizophrenia to improve their global state, mental state (including negative symptoms) and social functioning if a sufficient number of music therapy sessions are provided by qualified music therapists. Further research should especially address the long-term effects of music therapy, dose-response relationships, as well as the relevance of outcomes measures in relation to music therapy.",
"title": ""
},
{
"docid": "8bda505118b1731e778b41203520b3b8",
"text": "Image search and retrieval systems depend heavily on availability of descriptive textual annotations with images, to match them with textual queries of users. In most cases, such systems have to rely on users to provide tags or keywords with images. Users may add insufficient or noisy tags. A system to automatically generate descriptive tags for images can be extremely helpful for search and retrieval systems. Automatic image annotation has been explored widely in both image and text processing research communities. In this paper, we present a novel approach to tackle this problem by incorporating contextual information provided by scene analysis of image. Image can be represented by features which indicate type of scene shown in the image, instead of representing individual objects or local characteristics of that image. We have used such features to provide context in the process of predicting tags for images.",
"title": ""
},
{
"docid": "576819d44c53e29e495fe594ce624f17",
"text": "This paper proposes a new off line error compensation model by taking into accounting of geometric and cutting force induced errors in a 3-axis CNC milling machine. Geometric error of a 3-axis milling machine composes of 21 components, which can be measured by laser interferometer within the working volume. Geometric error estimation determined by back-propagation neural network is proposed and used separately in the geometric error compensation model. Likewise, cutting force induced error estimation by back-propagation neural network determined based on a flat end mill behavior observation is proposed and used separately in the cutting force induced error compensation model. Various experiments over a wide range of cutting conditions are carried out to investigate cutting force and machine error relation. Finally, the combination of geometric and cutting force induced errors is modeled by the combined back-propagation neural network. This unique model is used to compensate both geometric and cutting force induced errors simultaneously by a single model. Experimental tests have been carried out in order to validate the performance of geometric and cutting force induced errors compensation model. # 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "225e7b608d06d218144853b900d40fd1",
"text": "Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2% - 10% using a single model. Codes and models are available at https://github.com/ZYYSzj/Selective-Joint-Fine-tuning.",
"title": ""
},
{
"docid": "e30db40102a2d84a150c220250fa4d36",
"text": "A voltage reference circuit operating with all transistors biased in weak inversion, providing a mean reference voltage of 257.5 mV, has been fabricated in 0.18 m CMOS technology. The reference voltage can be approximated by the difference of transistor threshold voltages at room temperature. Accurate subthreshold design allows the circuit to work at room temperature with supply voltages down to 0.45 V and an average current consumption of 5.8 nA. Measurements performed over a set of 40 samples showed an average temperature coefficient of 165 ppm/ C with a standard deviation of 100 ppm/ C, in a temperature range from 0 to 125°C. The mean line sensitivity is ≈0.44%/V, for supply voltages ranging from 0.45 to 1.8 V. The power supply rejection ratio measured at 30 Hz and simulated at 10 MHz is lower than -40 dB and -12 dB, respectively. The active area of the circuit is ≈0.043mm2.",
"title": ""
},
{
"docid": "114880188f559f42f818ddfc0753c169",
"text": "Geometric active contours have many advantages over parametric active contours, such as computational simplicity and the ability to change the curve topology during deformation. While many of the capabilities of the older parametric active contours have been reproduced in geometric active contours, the relationship between the two has not always been clear. We develop a precise relationship between the two which includes spatially-varying coefficients, both tension and rigidity, and non-conservative external forces. The result is a very general geometric active contour formulation for which the intuitive design principles of parametric active contours can be applied. We demonstrate several novel applications in a series of simulations.",
"title": ""
},
{
"docid": "a7bc0af9b764021d1f325b1edfbfd700",
"text": "BACKGROUND\nIn the treatment of schizophrenia, changing antipsychotics is common when one treatment is suboptimally effective, but the relative effectiveness of drugs used in this strategy is unknown. This randomized, double-blind study compared olanzapine, quetiapine, risperidone, and ziprasidone in patients who had just discontinued a different atypical antipsychotic.\n\n\nMETHOD\nSubjects with schizophrenia (N=444) who had discontinued the atypical antipsychotic randomly assigned during phase 1 of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) investigation were randomly reassigned to double-blind treatment with a different antipsychotic (olanzapine, 7.5-30 mg/day [N=66]; quetiapine, 200-800 mg/day [N=63]; risperidone, 1.5-6.0 mg/day [N=69]; or ziprasidone, 40-160 mg/day [N=135]). The primary aim was to determine if there were differences between these four treatments in effectiveness measured by time until discontinuation for any reason.\n\n\nRESULTS\nThe time to treatment discontinuation was longer for patients treated with risperidone (median: 7.0 months) and olanzapine (6.3 months) than with quetiapine (4.0 months) and ziprasidone (2.8 months). Among patients who discontinued their previous antipsychotic because of inefficacy (N=184), olanzapine was more effective than quetiapine and ziprasidone, and risperidone was more effective than quetiapine. There were no significant differences between antipsychotics among those who discontinued their previous treatment because of intolerability (N=168).\n\n\nCONCLUSIONS\nAmong this group of patients with chronic schizophrenia who had just discontinued treatment with an atypical antipsychotic, risperidone and olanzapine were more effective than quetiapine and ziprasidone as reflected by longer time until discontinuation for any reason.",
"title": ""
},
{
"docid": "1dde34893bbfb2c08e2dd59f98836a2b",
"text": "Standards such as OIF CEI-25G, CEI-28G and 32G-FC require transceivers operating at high data rates over imperfect channels. Equalizers are used to cancel the inter-symbol interference (ISI) caused by frequency-dependent channel losses such as skin effect and dielectric loss. The primary objective of an equalizer is to compensate for high-frequency loss, which often exceeds 30dB at fs/2. However, due to the skin effect in a PCB stripline, which starts at 10MHz or lower, we also need to compensate for a small amount of loss at low frequency (e.g., 500MHz). Figure 2.1.1 shows simulated responses of a backplane channel (42.6dB loss at fs/2 for 32Gb/s) with conventional high-frequency equalizers only (4-tap feed-forward equalizer (FFE), 1st-order continuous-time linear equalizer (CTLE) with a dominant pole at fs/4, and 1-tap DFE) and with additional low-frequency equalization. Conventional equalizers cannot compensate for the small amount of low-frequency loss because the slope of the low-frequency loss is too gentle (<;3dB/dec). The FFE and CTLE do not have a pole in the low frequency region and hence have only a steep slope of 20dB/dec above their zero. The DFE cancels only short-term ISI. Effects of such low-frequency loss have often been overlooked or neglected, because 1) the loss is small (2 to 3dB), 2) when plotted using the linear frequency axis which is commonly used to show frequency dependence of skin effect and dielectric loss, the low-frequency loss is degenerated at DC and hardly visible (Fig. 2.1.1a), and 3) the long ISI tail of the channel pulse response seems well cancelled at first glance by conventional equalizers only (Fig. 2.1.1b). However, the uncompensated low-frequency loss causes non-negligible long-term residual ISI, because the integral of the residual ISI magnitude keeps going up for several hundred UI. As shown by the eye diagrams in the inset of Fig. 2.1.1(b), the residual long-term ISI results in 0.42UI data-dependent Jitter (DDJ) that is difficult to reduce further by enhancing FFE/CTLE/DFE, but can be reduced to 0.21UI by adding a low-frequency equalizer (LFEQ). Savoj et al. also recently reported long-tail cancellation [2].",
"title": ""
},
{
"docid": "a962df86c47b97280a272fb4a62c4f47",
"text": "Following an approach introduced by Lagnado and Osher (1997), we study Tikhonov regularization applied to an inverse problem important in mathematical finance, that of calibrating, in a generalized Black–Scholes model, a local volatility function from observed vanilla option prices. We first establish W 1,2 p estimates for the Black–Scholes and Dupire equations with measurable ingredients. Applying general results available in the theory of Tikhonov regularization for ill-posed nonlinear inverse problems, we then prove the stability of this approach, its convergence towards a minimum norm solution of the calibration problem (which we assume to exist), and discuss convergence rates issues.",
"title": ""
},
{
"docid": "e44636035306e122bf50115552516f53",
"text": "Texts and dialogues often express information indirectly. For instance, speakers’ answers to yes/no questions do not always straightforwardly convey a ‘yes’ or ‘no’ answer. The intended reply is clear in some cases (Was it good? It was great!) but uncertain in others (Was it acceptable? It was unprecedented.). In this paper, we present methods for interpreting the answers to questions like these which involve scalar modifiers. We show how to ground scalar modifier meaning based on data collected from the Web. We learn scales between modifiers and infer the extent to which a given answer conveys ‘yes’ or ‘no’. To evaluate the methods, we collected examples of question–answer pairs involving scalar modifiers from CNN transcripts and the Dialog Act corpus and use response distributions from Mechanical Turk workers to assess the degree to which each answer conveys ‘yes’ or ‘no’. Our experimental results closely match the Turkers’ response data, demonstrating that meanings can be learned from Web data and that such meanings can drive pragmatic inference.",
"title": ""
},
{
"docid": "1272563e64ca327aba1be96f2e045c30",
"text": "Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient.",
"title": ""
},
{
"docid": "494d013d52282b9c6667024188c38542",
"text": "Digital Image processing( DIP ) is a theme of awesome significance basically for any task, either for essential varieties of photograph indicators or complex mechanical frameworks utilizing assumed vision. In this paperbasics of the image processing in LabVIEW have been described in brief. It involves capturing the image of an object that is to be analysed and compares it with the reference image template of the object by pattern matching algorithm. The co-ordinates of the image is also be identified by tracking of object on the screen. A basic pattern matching algorithm is modified to snap and track the image on real-time basis. Keywords— LabVIEW, IMAQ, Pattern matching, Realtime tracking, .",
"title": ""
},
{
"docid": "df609125f353505fed31eee302ac1742",
"text": "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].",
"title": ""
},
{
"docid": "cd48c6b722f8e88f0dc514fcb6a0d890",
"text": "Multi-tier data-intensive applications are widely deployed in virtualized data centers for high scalability and reliability. As the response time is vital for user satisfaction, this requires achieving good performance at each tier of the applications in order to minimize the overall latency. However, in such virtualized environments, each tier (e.g., application, database, web) is likely to be hosted by different virtual machines (VMs) on multiple physical servers, where a guest VM is unaware of changes outside its domain, and the hypervisor also does not know the configuration and runtime status of a guest VM. As a result, isolated virtualization domains lend themselves to performance unpredictability and variance. In this paper, we propose IOrchestra, a holistic collaborative virtualization framework, which bridges the semantic gaps of I/O stacks and system information across multiple VMs, improves virtual I/O performance through collaboration from guest domains, and increases resource utilization in data centers. We present several case studies to demonstrate that IOrchestra is able to address numerous drawbacks of the current practice and improve the I/O latency of various distributed cloud applications by up to 31%.",
"title": ""
}
] |
scidocsrr
|
41d227c4dc354201db798ac4daf82255
|
DexLego: Reassembleable Bytecode Extraction for Aiding Static Analysis
|
[
{
"docid": "e8b5fcac441c46e46b67ffbdd4b043e6",
"text": "We present DroidSafe, a static information flow analysis tool that reports potential leaks of sensitive information in Android applications. DroidSafe combines a comprehensive, accurate, and precise model of the Android runtime with static analysis design decisions that enable the DroidSafe analyses to scale to analyze this model. This combination is enabled by accurate analysis stubs, a technique that enables the effective analysis of code whose complete semantics lies outside the scope of Java, and by a combination of analyses that together can statically resolve communication targets identified by dynamically constructed values such as strings and class designators. Our experimental results demonstrate that 1) DroidSafe achieves unprecedented precision and accuracy for Android information flow analysis (as measured on a standard previously published set of benchmark applications) and 2) DroidSafe detects all malicious information flow leaks inserted into 24 real-world Android applications by three independent, hostile Red-Team organizations. The previous state-of-the art analysis, in contrast, detects less than 10% of these malicious flows.",
"title": ""
}
] |
[
{
"docid": "635438f0937666b5f07de348b30b13c1",
"text": "Management of the horseshoe crab, Limulus polyphemus, is currently surrounded by controversy. The species is considered a multiple-use resource, as it plays an important role as bait in a commercial fishery, as a source of an important biomedical product, as an important food source for multiple species of migratory shorebirds, as well as in several other minor, but important, uses. Concern has arisen that horseshoe crabs may be declining in number. However, traditional management historically data have not been kept for this species. In this review we discuss the general biology, ecology, and life history of the horseshoe crab. We discuss the role the horseshoe crab plays in the commercial fishery, in the biomedical industry, as well as for the shorebirds. We examine the economic impact the horseshoe crab has in the mid-Atlantic region and review the current developments of alternatives to the horseshoe crab resource. We discuss the management of horseshoe crabs by including a description of the Atlantic States Marine Fisheries Commission (ASMFC) and its management process. An account of the history of horseshoe crab management is included, as well as recent and current regulations and restrictions.",
"title": ""
},
{
"docid": "9c8de86aca580cdeae69ca5335fd6e85",
"text": "Neural language models (NLMs) are generative, and they model the distribution of grammatical sentences. Trained on huge corpus, NLMs are pushing the limit of modeling accuracy. Besides, they have also been applied to supervised learning tasks that decode text, e.g., automatic speech recognition (ASR). By re-scoring the n-best list, NLM can select grammatically more correct candidate among the list, and significantly reduce word/char error rate. However, the generative nature of NLM may not guarantee a discrimination between “good” and “bad” (in a task-specific sense) sentences, resulting in suboptimal performance. This work proposes an approach to adapt a generative NLM to a discriminative one. Different from the commonly used maximum likelihood objective, the proposed method aims at enlarging the margin between the “good” and “bad” sentences. It is trained end-to-end and can be widely applied to tasks that involve the re-scoring of the decoded text. Significant gains are observed in both ASR and statistical machine translation (SMT) tasks.",
"title": ""
},
{
"docid": "c0ef14f81d45adcfff18a59f6ae563a0",
"text": "Identifying a person by his or her voice is an important human trait most take for granted in natural human-to-human interaction/communication. Speaking to someone over the telephone usually begins by identifying who is speaking and, at least in cases of familiar speakers, a subjective verification by the listener that the identity is correct and the conversation can proceed. Automatic speaker-recognition systems have emerged as an important means of verifying identity in many e-commerce applications as well as in general business interactions, forensics, and law enforcement. Human experts trained in forensic speaker recognition can perform this task even better by examining a set of acoustic, prosodic, and linguistic characteristics of speech in a general approach referred to as structured listening. Techniques in forensic speaker recognition have been developed for many years by forensic speech scientists and linguists to help reduce any potential bias or preconceived understanding as to the validity of an unknown audio sample and a reference template from a potential suspect. Experienced researchers in signal processing and machine learning continue to develop automatic algorithms to effectively perform speaker recognition?with ever-improving performance?to the point where automatic systems start to perform on par with human listeners. In this article, we review the literature on speaker recognition by machines and humans, with an emphasis on prominent speaker-modeling techniques that have emerged in the last decade for automatic systems. We discuss different aspects of automatic systems, including voice-activity detection (VAD), features, speaker models, standard evaluation data sets, and performance metrics. Human speaker recognition is discussed in two parts?the first part involves forensic speaker-recognition methods, and the second illustrates how a na?ve listener performs this task from a neuroscience perspective. We conclude this review with a comparative study of human versus machine speaker recognition and attempt to point out strengths and weaknesses of each.",
"title": ""
},
{
"docid": "6ed26bfb94b03c262fe6173a5baaf8f7",
"text": "The main goal of a persuasion dialogue is to persuade, but agents may have a number of additional goals concerning the dialogue duration, how much and what information is shared or how aggressive the agent is. Several criteria have been proposed in the literature covering different aspects of what may matter to an agent, but it is not clear how to combine these criteria that are often incommensurable and partial. This paper is inspired by multi-attribute decision theory and considers argument selection as decision-making where multiple criteria matter. A meta-level argumentation system is proposed to argue about what argument an agent should select in a given persuasion dialogue. The criteria and sub-criteria that matter to an agent are structured hierarchically into a value tree and meta-level argument schemes are formalized that use a value tree to justify what argument the agent should select. In this way, incommensurable and partial criteria can be combined.",
"title": ""
},
{
"docid": "efa066fc7ed815cc43a40c9c327b2cb3",
"text": "Induction surface hardening of parts with non-uniform cylindrical shape requires a multi-frequency process in order to obtain a uniform surface hardened depth. This paper presents an induction heating high power supply constituted of an only inverter circuit and a specially designed output resonant circuit. The whole circuit supplies both medium and high frequency power signals to the heating inductor simultaneously",
"title": ""
},
{
"docid": "46849f5c975551b401bccae27edd9d81",
"text": "Many ideas of High Performance Computing are applicable to Big Data problems. The more so now, that hybrid, GPU computing gains traction in mainstream computing applications. This work discusses the differences between the High Performance Computing software stack and the Big Data software stack and then focuses on two popular computing workloads, the Alternating Least Squares algorithm and the Singular Value Decomposition, and shows how their performance can be maximized using hybrid computing techniques.",
"title": ""
},
{
"docid": "3a35170197fb05c59609fb0aa8344bcb",
"text": "Stevioside, an ent-kaurene type of diterpenoid glycoside, is a natural sweetener extracted from leaves of Stevia rebaudiana (Bertoni) Bertoni. Stevioside and a few related compounds are regarded as the most common active principles of the plant. Such phytochemicals have not only been established as non-caloric sweeteners, but reported to exhibit some other pharmacological activities also. In this article, natural distribution of stevioside and related compounds, their structural features, plausible biosynthetic pathways along with an insight into the structure-sweetness relationship are presented. Besides, the pharmacokinetics, wide-range of pharmacological potentials, safety evaluation and clinical trials of these ent-kaurene glycosides are revisited.",
"title": ""
},
{
"docid": "cb24afb02b1f07a15661e8a643690b75",
"text": "Whereas the use of Enterprise Social Networks (ESN) is a pervasive topic in research and practice, both parties are still struggling to come to a better understanding of the role and impact of ESN in and on knowledge-intensive corporate work. As a part of this phenomenon, employees who communicate their knowledge in ESN helping other users to do their daily work play a decisive role. We need to come to a better understanding of the role and behaviour of such value adding users. This is a prerequisite, for example, for understanding knowledge support hubs or for enabling more effective internal information and knowledge sharing. Against this background, we investigate the structural characteristics of value adding users in ESN using qualitative text analysis and Social Network Analysis. Based on a large scale dataset of a global consulting company using the ESN Yammer.com we analyse the social relationships of value adding users. We confirm their significant position and draw conclusions for research and practice.",
"title": ""
},
{
"docid": "4302215930f9478ed5421fc4268cc0f1",
"text": "This study examined the links between childhood obesity, activity participation and television and video game use in a nationally representative sample of children (N = 2831) ages 1-12 using age-normed body mass index (BMI) ratings. Results indicated that while television use was not related to children's weight status, video game use was. Children with higher weight status played moderate amounts of electronic games, while children with lower weight status played either very little or a lot of electronic games. Interaction analyses revealed that this curvilinear relationship applied to children under age 8 and that girls, but not boys, with higher weight status played more video games. Children ages 9-12 with lower weight status used the computer (non-game) for moderate amounts of time, while those with higher weight status used the computer either very little or a lot. This was also true for the relationship between print use and weight status for children of all ages. Results also indicated that children with higher weight status spent more time in sedentary activities than those with lower weight status.",
"title": ""
},
{
"docid": "05eaf278ed39cd6a8522f812589388c6",
"text": "Several recent software systems have been designed to obtain novel annotation of cross-referencing text fragments and Wikipedia pages. Tagme is state of the art in this setting and can accurately manage short textual fragments (such as snippets of search engine results, tweets, news, or blogs) on the fly.",
"title": ""
},
{
"docid": "e461ba6f2a569fd93094a7ad8643cbb7",
"text": "Sequence Generator Projection Constraint Conjunction 1 scheme(612,34,18,34,1) id alldifferent*18 2 scheme(612,34,18,2,2) id alldifferent*153 3 scheme(612,34,18,1,18) id alldifferent*34 4 scheme(612,34,18,1,18) absolute value symmetric alldifferent([1..18])*34 5 scheme(612,34,18,17,1) absolute value alldifferent*36 6 repart(612,34,18,34,9) id sum ctr(0)*306 7 repart(612,34,18,34,9) id twin*1 8 repart(612,34,18,34,9) id elements([i,-i ])*1 9 first(9,[1,3,5,7,9,11,13,15,17]) id strictly increasing*1 10 vector(612) id global cardinality([-18.. -1-17,0-0,1..18-17])*1 11 repart(612,34,18,34,9) id sum powers5 ctr(0)*306 12 repart(612,34,18,34,9) id sum cubes ctr(0)*306 13 repart(612,34,18,34,3) sign global cardinality([-1-3,0-0,1-3])*102 14 scheme(612,34,18,34,1) sign global cardinality([-1-17,0-0,1-17])*18 15 repart(612,34,18,17,9) sign global cardinality([-1-2,0-0,1-2])*153 16 repart(612,34,18,2,9) sign global cardinality([-1-17,0-0,1-17])*18 17 scheme(612,34,18,1,18) sign global cardinality([-1-9,0-0,1-9])*34 18 repart(612,34,18,34,9) sign sum ctr(0)*306 19 repart(612,34,18,34,9) sign twin*1 20 repart(612,34,18,34,9) absolute value twin*1 21 repart(612,34,18,34,9) sign elements([i,-i ])*1 22 scheme(612,34,18,34,1) sign among seq(3,[-1])*18 23 repart(612,34,18,34,9) absolute value elements([i,i ])*1 24 first(9,[1,3,5,7,9,11,13,15,17]) absolute value strictly increasing*1 25 first(6,[1,4,7,10,13,16]) absolute value strictly increasing*1 26 scheme(612,34,18,34,1) absolute value nvalue(17)*18 Selected Example Results",
"title": ""
},
{
"docid": "3ddac782fd9797771505a4a46b849b45",
"text": "A number of studies have found that mortality rates are positively correlated with income inequality across the cities and states of the US. We argue that this correlation is confounded by the effects of racial composition. Across states and Metropolitan Statistical Areas (MSAs), the fraction of the population that is black is positively correlated with average white incomes, and negatively correlated with average black incomes. Between-group income inequality is therefore higher where the fraction black is higher, as is income inequality in general. Conditional on the fraction black, neither city nor state mortality rates are correlated with income inequality. Mortality rates are higher where the fraction black is higher, not only because of the mechanical effect of higher black mortality rates and lower black incomes, but because white mortality rates are higher in places where the fraction black is higher. This result is present within census regions, and for all age groups and both sexes (except for boys aged 1-9). It is robust to conditioning on income, education, and (in the MSA results) on state fixed effects. Although it remains unclear why white mortality is related to racial composition, the mechanism working through trust that is often proposed to explain the effects of inequality on health is also consistent with the evidence on racial composition and mortality.",
"title": ""
},
{
"docid": "52ebf28afd8ae56816fb81c19e8890b6",
"text": "In this paper we aim to model the relationship between the text of a political blog post and the comment volume—that is, the total amount of response—that a post will receive. We seek to accurately identify which posts will attract a high-volume response, and also to gain insight about the community of readers and their interests. We design and evaluate variations on a latentvariable topic model that links text to comment volume. Introduction What makes a blog post noteworthy? One measure of the popularity or breadth of interest of a blog post is the extent to which readers of the blog are inspired to leave comments on the post. In this paper, we study the relationship between the text contents of a blog post and the volume of response it will receive from blog readers. Modeling this relationship has the potential to reveal the interests of a blog’s readership community to its authors, readers, advertisers, and scientists studying the blogosphere, but it may also be useful in improving technologies for blog search, recommendation, summarization, and so on. There are many ways to define “popularity” in blogging. In this study, we focus exclusively on the aggregate volume of comments. Commenting is an important activity in the political blogosphere, giving a blog site the potential to become a discussion forum. For a given blog post, we treat comment volume as a target output variable, and use generative probabilistic models to learn from past data the relationship between a blog post’s text contents and its comment volume. While many clues might be useful in predicting comment volume (e.g., the post’s author, the time the post appears, the length of the post, etc.) here we focus solely on the text contents of the post. We first describe the data and experimental framework, including a simple baseline. We then explore how latentvariable topic models can be used to make better predictions about comment volume. These models reveal that part of the variation in comment volume can be explained by the topic of the blog post, and elucidate the relative degrees to which readers find each topic comment-worthy. ∗The authors acknowledge research support from HP Labs and helpful comments from the reviewers and Jacob Eisenstein. Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Predicting Comment Volume Our goal is to predict some measure of the volume of comments on a new blog post.1 Volume might be measured as the number of words in the comment section, the number of comments, the number of distinct users who leave comments, or a variety of other ways. Any of these can be affected by uninteresting factors—the time of day the post appears, a side conversation, a surge in spammer activity—but these quantities are easily measured. In research on blog data, comments are often ignored, and it is easy to see why: comments are very noisy, full of non-standard grammar and spelling, usually unedited, often cryptic and uninformative, at least to those outside the blog’s community. A few studies have focused on information in comments. Mishe and Glance (2006) showed the value of comments in characterizing the social repercussions of a post, including popularity and controversy. Their largescale user study correlated popularity and comment activity. Yano et al. 
(2009) sought to predict which members of blog’s community would leave comments, and in some cases used the text contents of the comments themselves to discover topics related to both words and user comment behavior. This work is similar, but we seek to predict the aggregate behavior of the blog post’s readers: given a new blog post, how much will the community comment on it?",
"title": ""
},
{
"docid": "24ecf1119592cc5496dc4994d463eabe",
"text": "To improve data availability and resilience MapReduce frameworks use file systems that replicate data uniformly. However, analysis of job logs from a large production cluster shows wide disparity in data popularity. Machines and racks storing popular content become bottlenecks; thereby increasing the completion times of jobs accessing this data even when there are machines with spare cycles in the cluster. To address this problem, we present Scarlett, a system that replicates blocks based on their popularity. By accurately predicting file popularity and working within hard bounds on additional storage, Scarlett causes minimal interference to running jobs. Trace driven simulations and experiments in two popular MapReduce frameworks (Hadoop, Dryad) show that Scarlett effectively alleviates hotspots and can speed up jobs by 20.2%.",
"title": ""
},
{
"docid": "c503fa0aac706ea16136de7ead1a63f3",
"text": "The functions of dance and music in human evolution are a mystery. Current research on the evolution of music has mainly focused on its melodic attribute which would have evolved alongside (proto-)language. Instead, we propose an alternative conceptual framework which focuses on the co-evolution of rhythm and dance (R&D) as intertwined aspects of a multimodal phenomenon characterized by the unity of action and perception. Reviewing the current literature from this viewpoint we propose the hypothesis that R&D have co-evolved long before other musical attributes and (proto-)language. Our view is supported by increasing experimental evidence particularly in infants and children: beat is perceived and anticipated already by newborns and rhythm perception depends on body movement. Infants and toddlers spontaneously move to a rhythm irrespective of their cultural background. The impulse to dance may have been prepared by the susceptibility of infants to be soothed by rocking. Conceivable evolutionary functions of R&D include sexual attraction and transmission of mating signals. Social functions include bonding, synchronization of many individuals, appeasement of hostile individuals, and pre- and extra-verbal communication enabling embodied individual and collective memorizing. In many cultures R&D are used for entering trance, a base for shamanism and early religions. Individual benefits of R&D include improvement of body coordination, as well as painkilling, anti-depressive, and anti-boredom effects. Rhythm most likely paved the way for human speech as supported by studies confirming the overlaps between cognitive and neural resources recruited for language and rhythm. In addition, dance encompasses visual and gestural communication. In future studies attention should be paid to which attribute of music is focused on and that the close mutual relation between R&D is taken into account. The possible evolutionary functions of dance deserve more attention.",
"title": ""
},
{
"docid": "29dcdc7c19515caad04c6fb58e7de4ea",
"text": "The standard way to procedurally generate random terrain for video games and other applications is to post-process the output of a fast noise generator such as Perlin noise. Tuning the post-processing to achieve particular types of terrain requires game designers to be reasonably well-trained in mathematics. A well-known variant of Perlin noise called value noise is used in a process accessible to designers trained in geography to generate geotypical terrain based on elevation statistics drawn from widely available sources such as the United States Geographical Service. A step-by-step process for downloading and creating terrain from realworld USGS elevation data is described, and an implementation in C++ is given.",
"title": ""
},
{
"docid": "c62cc1b0a9c1c4cadede943b4cbd8050",
"text": "The problem of parsing has been studied extensively for various formal grammars. Given an input string and a grammar, the parsing problem is to check if the input string belongs to the language generated by the grammar. A closely related problem of great importance is one where the input are a string I and a grammar G and the task is to produce a string I ′ that belongs to the language generated by G and the ‘distance’ between I and I ′ is the smallest (from among all the strings in the language). Specifically, if I is in the language generated by G, then the output should be I. Any parser that solves this version of the problem is called an error correcting parser. In 1972 Aho and Peterson presented a cubic time error correcting parser for context free grammars. Since then this asymptotic time bound has not been improved under the (standard) assumption that the grammar size is a constant. In this paper we present an error correcting parser for context free grammars that runs in O(T (n)) time, where n is the length of the input string and T (n) is the time needed to compute the tropical product of two n× n matrices. In this paper we also present an n M -approximation algorithm for the language edit distance problem that has a run time of O(Mnω), where O(nω) is the time taken to multiply two n× n matrices. To the best of our knowledge, no approximation algorithms have been proposed for error correcting parsing for general context free grammars.",
"title": ""
},
{
"docid": "cdaba8e8d86ca072607880eb5408e441",
"text": "The bridged T-coil, often simply called the T-coil, is a circuit topology that extends the bandwidth by a greater factor than does inductive peaking. Many high-speed amplifiers, line drivers, and input/output (I/O) interfaces in today's wireline systems incorporate on-chip T-coils to deal with parasitic capacitances. In this article, we introduce and analyze the basic structure and study its applications.",
"title": ""
},
{
"docid": "ac5193fbbe22010c8ef697112cbe663b",
"text": "In this paper, we describe a novel method for discovering and incorporating higher level map structure in a real-time visual simultaneous localization and mapping (SLAM) system. Previous approaches use sparse maps populated by isolated features such as 3-D points or edgelets. Although this facilitates efficient localization, it yields very limited scene representation and ignores the inherent redundancy among features resulting from physical structure in the scene. In this paper, higher level structure, in the form of lines and surfaces, is discovered concurrently with SLAM operation, and then, incorporated into the map in a rigorous manner, attempting to maintain important cross-covariance information and allow consistent update of the feature parameters. This is achieved by using a bottom-up process, in which subsets of low-level features are ldquofolded inrdquo to a parameterization of an associated higher level feature, thus collapsing the state space as well as building structure into the map. We demonstrate and analyze the effects of the approach for the cases of line and plane discovery, both in simulation and within a real-time system operating with a handheld camera in an office environment.",
"title": ""
}
] |
scidocsrr
|
395b4bd7f36acea6080847e61ccbd40b
|
CRISPR-Cas systems in bacteria and archaea: versatile small RNAs for adaptive defense and regulation.
|
[
{
"docid": "21042ce5670109dd548e43ca46cacbfd",
"text": "The CRISPR/Cas adaptive immune system provides resistance against phages and plasmids in Archaea and Bacteria. CRISPR loci integrate short DNA sequences from invading genetic elements that provide small RNA-mediated interference in subsequent exposure to matching nucleic acids. In Streptococcus thermophilus, it was previously shown that the CRISPR1/Cas system can provide adaptive immunity against phages and plasmids by integrating novel spacers following exposure to these foreign genetic elements that subsequently direct the specific cleavage of invasive homologous DNA sequences. Here, we show that the S. thermophilus CRISPR3/Cas system can be transferred into Escherichia coli and provide heterologous protection against plasmid transformation and phage infection. We show that interference is sequence-specific, and that mutations in the vicinity or within the proto-spacer adjacent motif (PAM) allow plasmids to escape CRISPR-encoded immunity. We also establish that cas9 is the sole cas gene necessary for CRISPR-encoded interference. Furthermore, mutation analysis revealed that interference relies on the Cas9 McrA/HNH- and RuvC/RNaseH-motifs. Altogether, our results show that active CRISPR/Cas systems can be transferred across distant genera and provide heterologous interference against invasive nucleic acids. This can be leveraged to develop strains more robust against phage attack, and safer organisms less likely to uptake and disseminate plasmid-encoded undesirable genetic elements.",
"title": ""
},
{
"docid": "50f369f80405f7142e557c7f6bc405c8",
"text": "Microbes rely on diverse defense mechanisms that allow them to withstand viral predation and exposure to invading nucleic acid. In many Bacteria and most Archaea, clustered regularly interspaced short palindromic repeats (CRISPR) form peculiar genetic loci, which provide acquired immunity against viruses and plasmids by targeting nucleic acid in a sequence-specific manner. These hypervariable loci take up genetic material from invasive elements and build up inheritable DNA-encoded immunity over time. Conversely, viruses have devised mutational escape strategies that allow them to circumvent the CRISPR/Cas system, albeit at a cost. CRISPR features may be exploited for typing purposes, epidemiological studies, host-virus ecological surveys, building specific immunity against undesirable genetic elements, and enhancing viral resistance in domesticated microbes.",
"title": ""
}
] |
[
{
"docid": "8442bf64a1c89bbddb6ffb8001b1381e",
"text": "In this paper we present a scalable hardware architecture to implement large-scale convolutional neural networks and state-of-the-art multi-layered artificial vision systems. This system is fully digital and is a modular vision engine with the goal of performing real-time detection, recognition and segmentation of mega-pixel images. We present a performance comparison between a software, FPGA and ASIC implementation that shows a speed up in custom hardware implementations.",
"title": ""
},
{
"docid": "15a802e659141d98415bc06932179aab",
"text": "The key length used for a cryptographic protocol determines the highest security it can offer. If the key is found or ‘broken’, the security is undermined. Thus, key lengths must be chosen in accordance with the desired security. In practice, key lengths are mostly determined by standards, legacy system compatibility issues, and vendors. From a theoretical point of view selecting key lengths is more involved. Understanding the relation between security and key lengths and the impact of anticipated and unexpected cryptanalytic progress, requires insight into the design of the cryptographic methods and the mathematics involved in the attempts at breaking them. In this chapter practical and theoretical aspects of key size selection are discussed.",
"title": ""
},
{
"docid": "1ba4e36597e7beaf6591185c1c799afd",
"text": "A disruptive technology fundamentally transforming the way that computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience of resources, as services via the Internet. Because cloud provides a finite pool of virtualized on-demand resources, optimally scheduling them has become an essential and rewarding topic, where a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents taxonomy at two levels of scheduling cloud resources. It then paints a landscape of the scheduling problem and solutions. According to the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated and invited, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration with the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.",
"title": ""
},
{
"docid": "bc1218f0b3dd3772154b9bd43d2dcd65",
"text": "Online information has become important data source to analyze the public opinion and behavior, which is significant for social management and business decision. Web crawler systems target at automatically download and parse web pages to extract expected online information. However, as the rapid increasing of web pages and the heterogeneous page structures, the performance and the rules of parsing have become two serious challenges to web crawler systems. In this paper, we propose a distributed and generic web crawler system (DGWC), in which spiders are scheduled to parallel access and parse web pages to improve performance, utilized a shared and memory based database. Furthermore, we package the spider program and the dependencies in a container called Docker to make the system easily horizontal scaling. Last but not the least, a statistics-based approach is proposed to extract the main text using supervised-learning classifier instead of parsing the page structures. Experimental results on real-world data validate the efficiency and effectiveness of DGWC.",
"title": ""
},
{
"docid": "642375cacf55a769f1e5130de8d80419",
"text": "Given learning samples from a raster data set, spatial decision tree learning aims to find a decision tree classifier that minimizes classification errors as well as salt-and-pepper noise. The problem has important societal applications such as land cover classification for natural resource management. However, the problem is challenging due to the fact that learning samples show spatial autocorrelation in class labels, instead of being independently identically distributed. Related work relies on local tests (i.e., testing feature information of a location) and cannot adequately model the spatial autocorrelation effect, resulting in salt-and-pepper noise. In contrast, we recently proposed a focal-test-based spatial decision tree (FTSDT), in which the tree traversal direction of a sample is based on both local and focal (neighborhood) information. Preliminary results showed that FTSDT reduces classification errors and salt-and-pepper noise. This paper extends our recent work by introducing a new focal test approach with adaptive neighborhoods that avoids over-smoothing in wedge-shaped areas. We also conduct computational refinement on the FTSDT training algorithm by reusing focal values across candidate thresholds. Theoretical analysis shows that the refined training algorithm is correct and more scalable. Experiment results on real world data sets show that new FTSDT with adaptive neighborhoods improves classification accuracy, and that our computational refinement significantly reduces training time.",
"title": ""
},
{
"docid": "113a875106929be7bf4ea590c6bd3cc2",
"text": "Reconstruction of shapes and appearances of thin film objects can be applied to many fields such as industrial inspection, biological analysis, and archaeologic research. However, it comes with many challenging issues because the appearances of thin film can change dramatically depending on view and light directions. The appearance is deeply dependent on not only the shapes but also the optical parameters of thin film. In this paper, we propose a novel method to estimate shapes and film thickness. First, we narrow down candidates of zenith angle by degree of polarization and determine it by the intensity of thin film which increases monotonically along the zenith angle. Second, we determine azimuth angle from occluding boundaries. Finally, we estimate the film thickness by comparing a look-up table of color along the thickness and zenith angle with captured images. We experimentally evaluated the accuracy of estimated shapes and appearances and found that our proposed method is effective.",
"title": ""
},
{
"docid": "aadd1d3e22b767a12b395902b1b0c6ca",
"text": "Long-term situation prediction plays a crucial role for intelligent vehicles. A major challenge still to overcome is the prediction of complex downtown scenarios with multiple road users, e.g., pedestrians, bikes, and motor vehicles, interacting with each other. This contribution tackles this challenge by combining a Bayesian filtering technique for environment representation, and machine learning as long-term predictor. More specifically, a dynamic occupancy grid map is utilized as input to a deep convolutional neural network. This yields the advantage of using spatially distributed velocity estimates from a single time step for prediction, rather than a raw data sequence, alleviating common problems dealing with input time series of multiple sensors. Furthermore, convolutional neural networks have the inherent characteristic of using context information, enabling the implicit modeling of road user interaction. Pixel-wise balancing is applied in the loss function counteracting the extreme imbalance between static and dynamic cells. One of the major advantages is the unsupervised learning character due to fully automatic label generation. The presented algorithm is trained and evaluated on multiple hours of recorded sensor data and compared to Monte-Carlo simulation. Experiments show the ability to model complex interactions.",
"title": ""
},
{
"docid": "2a2db7ff8bb353143ca2bb9ad8ec2d7d",
"text": "A revision of the genus Leptoplana Ehrenberg, 1831 in the Mediterranean basin is undertaken. This revision deals with the distribution and validity of the species of Leptoplana known for the area. The Mediterranean sub-species polyclad, Leptoplana tremellaris forma mediterranea Bock, 1913 is elevated to the specific level. Leptoplana mediterranea comb. nov. is redescribed from the Lake of Tunis, Tunisia. This flatworm is distinguished from Leptoplana tremellaris mainly by having a prostatic vesicle provided with a long diverticulum attached ventrally to the seminal vesicle, a genital pit closer to the male pore than to the female one and a twelve-eyed hatching juvenile instead of the four-eyed juvenile of L. tremellaris. The direct development in L. mediterranea is described at 15 °C.",
"title": ""
},
{
"docid": "b011b5e9ed5c96a59399603f4200b158",
"text": "The word list memory test from the Consortium to establish a registry for Alzheimer's disease (CERAD) neuropsychological battery (Morris et al. 1989) was administered to 230 psychiatric outpatients. Performance of a selected, age-matched psychiatric group and normal controls was compared using an ANCOVA design with education as a covariate. Results indicated that controls performed better than psychiatric patients on most learning and recall indices. The exception to this was the savings index that has been found to be sensitive to the effects of progressive dementias. The current data are compared and integrated with published CERAD data for Alzheimer's disease patients. The CERAD list memory test is recommended as a brief, efficient, and sensitive memory measure that can be used with a range of difficult patients.",
"title": ""
},
{
"docid": "119dd2c7eb5533ece82cff7987f21dba",
"text": "Despite the word's common usage by gamers and reviewers alike, it is still not clear what immersion means. This paper explores immersion further by investigating whether immersion can be defined quantitatively, describing three experiments in total. The first experiment investigated participants' abilities to switch from an immersive to a non-immersive task. The second experiment investigated whether there were changes in participants' eye movements during an immersive task. The third experiment investigated the effect of an externally imposed pace of interaction on immersion and affective measures (state-anxiety, positive affect, negative affect). Overall the findings suggest that immersion can be measured subjectively (through questionnaires) as well as objectively (task completion time, eye movements). Furthermore, immersion is not only viewed as a positive experience: negative emotions and uneasiness (i.e. anxiety) also run high.",
"title": ""
},
{
"docid": "4e14e9cb95ed8bc3b352e3e1119b53e1",
"text": "We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet [1], while its categorywise accuracy is only 8% less. We evaluated ESPNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all the current efficient CNN networks such as MobileNet [16], ShuffleNet [17], and ENet [20] on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and edge device, respectively.",
"title": ""
},
{
"docid": "32b2cd6b63c6fc4de5b086772ef9d319",
"text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highlyconnected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.",
"title": ""
},
{
"docid": "47d2ebd3794647708d41c6b3d604e796",
"text": "Most stream data classification algorithms apply the supervised learning strategy which requires massive labeled data. Such approaches are impractical since labeled data are usually hard to obtain in reality. In this paper, we build a clustering feature decision tree model, CFDT, from data streams having both unlabeled and a small number of labeled examples. CFDT applies a micro-clustering algorithm that scans the data only once to provide the statistical summaries of the data for incremental decision tree induction. Micro-clusters also serve as classifiers in tree leaves to improve classification accuracy and reinforce the any-time property. Our experiments on synthetic and real-world datasets show that CFDT is highly scalable for data streams while generating high classification accuracy with high speed.",
"title": ""
},
{
"docid": "8eace30c00d9b118635dc8a2e383f36b",
"text": "Wafer Level Packaging (WLP) has the highest potential for future single chip packages because the WLP is intrinsically a chip size package. The package is completed directly on the wafer then singulated by dicing for the assembly. All packaging and testing operations of the dice are replaced by whole wafer fabrication and wafer level testing. Therefore, it becomes more cost-effective with decreasing die size or increasing wafer size. However, due to the intrinsic mismatch of the coefficient of thermal expansion (CTE) between silicon chip and plastic PCB material, solder ball reliability subject to temperature cycling becomes the weakest point of the technology. In this paper some fundamental principles in designing WLP structure to achieve the robust reliability are demonstrated through a comprehensive study of a variety of WLP technologies. The first principle is the 'structural flexibility' principle. The more flexible a WLP structure is, the less the stresses that are applied on the solder balls will be. Ball on polymer WLP, Cu post WLP, polymer core solder balls are such examples to achieve better flexibility of overall WLP structure. The second principle is the 'local enhancement' at the interface region of solder balls where fatigue failures occur. Polymer collar WLP, and increasing solder opening size are examples to reduce the local stress level. In this paper, the reliability improvements are discussed through various existing and tested WLP technologies at silicon level and ball level, respectively. The fan-out wafer level packaging is introduced, which is expected to extend the standard WLP to the next stage with unlimited potential applications in future.",
"title": ""
},
{
"docid": "dfb95120d19a363a27d162b598cdcf26",
"text": "Light field imaging has emerged as a technology allowing to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene integrating the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high-dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.",
"title": ""
},
{
"docid": "c26caff761092bc5b6af9f1c66986715",
"text": "The mechanisms used by DNN accelerators to leverage datareuse and perform data staging are known as dataflow, and they directly impact the performance and energy efficiency of DNN accelerator designs. Co-optimizing the accelerator microarchitecture and its internal dataflow is crucial for accelerator designers, but there is a severe lack of tools and methodologies to help them explore the co-optimization design space. In this work, we first introduce a set of datacentric directives to concisely specify DNN dataflows in a compiler-friendly form. Next, we present an analytical model, MAESTRO, that estimates various cost-benefit tradeoffs of a dataflow including execution time and energy efficiency for a DNN model and hardware configuration. Finally, we demonstrate the use of MAESTRO to drive a hardware design space exploration (DSE) engine. The DSE engine searched 480M designs and identified 2.5M valid designs at an average rate of 0.17M designs per second, and also identified throughputand energy-optimized designs among this set.",
"title": ""
},
{
"docid": "54663fcef476f15e2b5261766a19375b",
"text": "In this study, performances of classification techniques were compared in order to predict the presence of coronary artery disease (CAD). A retrospective analysis was performed in 1245 subjects (865 presence of CAD and 380 absence of CAD). We compared performances of logistic regression (LR), classification and regression tree (CART), multi-layer perceptron (MLP), radial basis function (RBF), and self-organizing feature maps (SOFM). Predictor variables were age, sex, family history of CAD, smoking status, diabetes mellitus, systemic hypertension, hypercholesterolemia, and body mass index (BMI). Performances of classification techniques were compared using ROC curve, Hierarchical Cluster Analysis (HCA), and Multidimensional Scaling (MDS). Areas under the ROC curves are 0.783, 0.753, 0.745, 0.721, and 0.675, respectively for MLP, LR, CART, RBF, and SOFM. MLP was found the best technique to predict presence of CAD in this data set, given its good classificatory performance. MLP, CART, LR, and RBF performed better than SOFM in predicting CAD in according to HCA and MDS. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0bc7de3f7ac06aa080ec590bdaf4c3b3",
"text": "This paper demonstrates that US prestige-press coverage of global warming from 1988 to 2002 has contributed to a significant divergence of popular discourse from scientific discourse. This failed discursive translation results from an accumulation of tactical media responses and practices guided by widely accepted journalistic norms. Through content analysis of US prestige press— meaning the New York Times, the Washington Post, the Los Angeles Times, and the Wall Street Journal—this paper focuses on the norm of balanced reporting, and shows that the prestige press’s adherence to balance actually leads to biased coverage of both anthropogenic contributions to global warming and resultant action. r 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3eb76b15fa11704c0a6f3fc64f880aa8",
"text": "The emergence of environmental problems and the increased awareness towards green purchase behaviour have received many responses by the stakeholders’ worldwide like from the government bodies, researchers, businesses, consumers and so on. Government’s bodies, for example, have responded by developing and introducing their own environmentally-linked policies to be implemented in their countries, which are intended to conserve and preserve the environment. Researchers, on the other hand, are continuously conducting extensive studies and publishing their findings on the issues to inform the public, while businesses that promote the selling of green products (or environmentally friendly products) in the marketplace have been increasing in number. Segments of green consumers have been observed to emerge and grow in size worldwide including Malaysia. This may be due to the increased number of green products introduced to consumers in the marketplace. Moreover, scholars from Malaysia also argued that, this trend is experiencing tremendous growth. Although there are responses from these stakeholders, especially consumers, who have had a positive impact on the environment, the trend of the green purchase behaviour by Malaysian consumers remains unobserved. Therefore, the authors aim to answer the questions concerning whether a trend can be observed in the green purchase behaviour of Malaysian consumers. The ability to observe the green purchase behaviour trend is useful, particularly for marketers and businesses that are selling or intending to sell green products within the country.",
"title": ""
}
] |
scidocsrr
|
50cc4ba570f611f44d939c9fe7b1e1c7
|
Deep Air Learning: Interpolation, Prediction, and Feature Analysis of Fine-Grained Air Quality
|
[
{
"docid": "11b953431233ef988f11c5139a4eb484",
"text": "In this paper, we forecast the reading of an air quality monitoring station over the next 48 hours, using a data-driven method that considers current meteorological data, weather forecasts, and air quality data of the station and that of other stations within a few hundred kilometers. Our predictive model is comprised of four major components: 1) a linear regression-based temporal predictor to model the local factors of air quality, 2) a neural network-based spatial predictor to model global factors, 3) a dynamic aggregator combining the predictions of the spatial and temporal predictors according to meteorological data, and 4) an inflection predictor to capture sudden changes in air quality. We evaluate our model with data from 43 cities in China, surpassing the results of multiple baseline methods. We have deployed a system with the Chinese Ministry of Environmental Protection, providing 48-hour fine-grained air quality forecasts for four major Chinese cities every hour. The forecast function is also enabled on Microsoft Bing Map and MS cloud platform Azure. Our technology is general and can be applied globally for other cities.",
"title": ""
}
] |
[
{
"docid": "285fd0cdd988df78ac172640509b2cd3",
"text": "Self-assembly in swarm robotics is essential for a group of robots in achieving a common goal that is not possible to achieve by a single robot. Self-assembly also provides several advantages to swarm robotics. Some of these include versatility, scalability, re-configurability, cost-effectiveness, extended reliability, and capability for emergent phenomena. This work investigates the effect of self-assembly in evolutionary swarm robotics. Because of the lack of research literature within this paradigm, there are few comparisons of the different implementations of self-assembly mechanisms. This paper reports the influence of connection port configuration on evolutionary self-assembling swarm robots. The port configuration consists of the number and the relative positioning of the connection ports on each of the robot. Experimental results suggest that configuration of the connection ports can significantly impact the emergence of selfassembly in evolutionary swarm robotics.",
"title": ""
},
{
"docid": "868e569737ba8c3bbf2867ded20dd5f5",
"text": "Microstepping drive for stepper motor is a well-known technique to improve stepper motor performance. Normally microstepping is carried at fixed pulse width planned. But this method is suitable only when motors are driven at fixed pre-defined speeds for non-real time applications. The problem discussed in this paper considers an application where motors need to track a given reference position profile in real time whose speed is varying at every interval of time. In such scenario fixed speed microstep drive does not meet the high pointing requirements and leads to poor system performance. For such applications, an innovative Frequency Modulation (FM) based microstep drive algorithm is developed which meets high degree of motor pointing even at higher output angular rates. As per this scheme, pulse width and number of steps motor supposed to move are derived from the reference position itself in a given amount of time and motor is actuated with corresponding number of steps in real time. FM based microstep drive is implemented with PWM based chopper current controller in an optimized way without compromising on the motor torque margins. The experimental results obtained are very encouraging. The paper presents the design, analysis, simulation and experimental results of FM based microstep drive.",
"title": ""
},
{
"docid": "05e3d07db8f5ecf3e446a28217878b56",
"text": "In this paper, we investigate the topic of gender identification for short length, multi-genre, content-free e-mails. We introduce for the first time (to our knowledge), psycholinguistic and gender-linked cues for this problem, along with traditional stylometric features. Decision tree and Support Vector Machines learning algorithms are used to identify the gender of the author of a given e-mail. The experiment results show that our approach is promising with an average accuracy of 82.2%.",
"title": ""
},
{
"docid": "7df3fe3ffffaac2fb6137fdc440eb9f4",
"text": "The amount of information in medical publications continues to increase at a tremendous rate. Systematic reviews help to process this growing body of information. They are fundamental tools for evidence-based medicine. In this paper, we show that automatic text classification can be useful in building systematic reviews for medical topics to speed up the reviewing process. We propose a per-question classification method that uses an ensemble of classifiers that exploit the particular protocol of a systematic review. We also show that when integrating the classifier in the human workflow of building a review the per-question method is superior to the global method. We test several evaluation measures on a real dataset.",
"title": ""
},
{
"docid": "fa0c62b91643a45a5eff7c1b1fa918f1",
"text": "This paper presents outdoor field experimental results to clarify the 4x4 MIMO throughput performance from applying multi-point transmission in the 15 GHz frequency band in the downlink of 5G cellular radio access system. The experimental results in large-cell scenario shows that up to 30 % throughput gain compared to non-multi-point transmission is achieved although the difference for the RSRP of two TPs is over 10 dB, so that the improvement for the antenna correlation is achievable and important aspect for the multi-point transmission in the 15 GHz frequency band as well as the improvement of the RSRP. Furthermore in small-cell scenario, the throughput gain of 70% and over 5 Gbps are achieved applying multi-point transmission in the condition of two different MIMO streams transmission from a single TP as distributed MIMO instead of four MIMO streams transmission from a single TP.",
"title": ""
},
{
"docid": "6c007825e5dc398911d3d8b77f954dc2",
"text": "Although the effects of climate warming on the chemical and physical properties of lakes have been documented, biotic and ecosystem-scale responses to climate change have been only estimated or predicted by manipulations and models. Here we present evidence that climate warming is diminishing productivity in Lake Tanganyika, East Africa. This lake has historically supported a highly productive pelagic fishery that currently provides 25–40% of the animal protein supply for the populations of the surrounding countries. In parallel with regional warming patterns since the beginning of the twentieth century, a rise in surface-water temperature has increased the stability of the water column. A regional decrease in wind velocity has contributed to reduced mixing, decreasing deep-water nutrient upwelling and entrainment into surface waters. Carbon isotope records in sediment cores suggest that primary productivity may have decreased by about 20%, implying a roughly 30% decrease in fish yields. Our study provides evidence that the impact of regional effects of global climate change on aquatic ecosystem functions and services can be larger than that of local anthropogenic activity or overfishing.",
"title": ""
},
{
"docid": "e27da58188be54b71187d3489fa6b4e7",
"text": "In a prospective-longitudinal study of a representative birth cohort, we tested why stressful experiences lead to depression in some people but not in others. A functional polymorphism in the promoter region of the serotonin transporter (5-HT T) gene was found to moderate the influence of stressful life events on depression. Individuals with one or two copies of the short allele of the 5-HT T promoter polymorphism exhibited more depressive symptoms, diagnosable depression, and suicidality in relation to stressful life events than individuals homozygous for the long allele. This epidemiological study thus provides evidence of a gene-by-environment interaction, in which an individual's response to environmental insults is moderated by his or her genetic makeup.",
"title": ""
},
{
"docid": "a9cde1abf51c8d2b12d6ce2a59d9f784",
"text": "This article argues that now is an opportune time to draw together the accumulated bodies of knowledge in developmental and cultural psychology in order to build a new vision for developmental psychology scholarship that bridges universal and cultural perspectives. Such bridging requires rethinking (a) the entity of developmental psychological analysis, (b) the scope and meaning of developmental psychology concepts, and (c) the nature of theoretical frameworks. This rethinking will render developmental psychology more broadly valid across cultures and more applicable to local cultural conditions. This is imperative in an increasingly global world where diverse peoples interact more than ever. Although the present focus is on rethinking developmental psychology, conclusions about the implications of bridging universal and cultural perspectives may be of interest in other fields and disciplines addressing psychological thought and behavior. KEYWORDS—bridging; culture; development; globalization; psychology; theory I would like to thank my colleagues who took part in the Bridging Project (in alphabetical order): Jeffrey Arnett, Oscar A. Baldelomar, Xinyin Chen, Patricio Cumsille, William Damon, Ranjana Dutta, Constance Flanagan, Jacqueline J. Goodnow, Michelle Leichtman, Jin Li, M. Loreto Martı́nez, Jayanthi Mistry, A. Bame Nsamenang, Jean S. Phinney, Fred Rothbaum, Alice Schlegel, Richard Shweder, T. S. Saraswathi, Jaan Valsiner, and Yan Z. Wang. I am also grateful for financial support from the Society for Research in Child Development, the Köhler Foundation, and the Department of Psychology at Clark University. Correspondence concerning this article should be addressed to Lene Arnett Jensen, Department of Psychology, Clark University, 950 Main St., Worcester, MA 01610; e-mail: ljensen@clarku.edu. a 2011 The Author Child Development Perspectives a 2011 The Society for Research in Child Development DOI: 10.1111/j.1750-8606.2011.00213.x Volume 6, Number 1, Since its inception about a century ago, developmental psychology has had the mission of describing, explaining, and predicting characteristics and processes of human development, as well as applying its knowledge to improve the lives of children and families. Over time, a large and important body of knowledge has accumulated. One legitimate question about this knowledge is how much of it is valid across cultures. Even in today’s globalized world, when it has become easier than ever to cross borders and access information from around the world, it is striking how restricted mainstream developmental psychology remains. For example, a recent analysis examined the nationalities of the samples included in research published in American Psychological Association journals from 2003 to 2007 (Arnett, 2008). For Developmental Psychology, Journal of Educational Psychology, and Journal of Family Psychology, a mere 5%–9% were from Africa, Asia, Latin America, or the Middle East. The vast majority of samples was from the United States (64%–81%), followed by other English-speaking countries (8%–19%) and Europe (8%–13%). Moreover, during the more extended period from 1988 to 2007—when globalization proceeded with hyperspeed—these numbers remained flat. As Super (2010) has observed, ‘‘We are still struggling to get ‘the child’ out of the confines of North America and Europe’’ (p. 1). If the intent of developmental psychology were to provide theories and findings with a primarily American scope, then these numbers might not give rise to questions. 
However, typically that is not the intent, and researchers often assume that theories, methods, and findings apply around the world, without corresponding evidence. If ‘‘the child’’ at the center of developmental psychology is American, how confident can we be in the universal validity of developmental psychology knowledge? A TIMELY OPPORTUNITY Rather than seeing this question in a negative light, it can serve as an opportunity to reframe and expand the mission of developmental psychology. Although research with culturally diverse",
"title": ""
},
{
"docid": "2fbd1ba2f656e3c32839032754992974",
"text": "We consider a basic cache network, in which a single server is connected to multiple users via a shared bottleneck link. The server has a database of files (content). Each user has an isolated memory that can be used to cache content in a prefetching phase. In a following delivery phase, each user requests a file from the database, and the server needs to deliver users’ demands as efficiently as possible by taking into account their cache contents. We focus on an important and commonly used class of prefetching schemes, where the caches are filled with uncoded data. We provide the exact characterization of the rate-memory tradeoff for this problem, by deriving both the minimum average rate (for a uniform file popularity) and the minimum peak rate required on the bottleneck link for a given cache size available at each user. In particular, we propose a novel caching scheme, which strictly improves the state of the art by exploiting commonality among user demands. We then demonstrate the exact optimality of our proposed scheme through a matching converse, by dividing the set of all demands into types, and showing that the placement phase in the proposed caching scheme is universally optimal for all types. Using these techniques, we also fully characterize the rate-memory tradeoff for a decentralized setting, in which users fill out their cache content without any coordination.",
"title": ""
},
{
"docid": "7440101e3a6ff726c5c7a40f83d25816",
"text": "The polar format algorithm (PFA) for spotlight synthetic aperture radar (SAR) is based on a linear approximation for the differential range to a scatterer. We derive a second-order Taylor series approximation of the differential range. We provide a simple and concise derivation of both the far-field linear approximation of the differential range, which forms the basis of the PFA, and the corresponding approximation limits based on the second-order terms of the approximation.",
"title": ""
},
{
"docid": "355d4250c2091c4325903096dd5a2b61",
"text": "It has been realized that resilience as a concept involves several contradictory definitions, both for instance resilience as agile adjustment and as robust resistance to situations. Our analysis of resilience concepts and models suggest that beyond simplistic definitions, it is possible to draw up a systemic resilience model (SyRes) that maintains these opposing characteristics without contradiction. We outline six functions in a systemic model, drawing primarily on resilience engineering, and disaster response: anticipation, monitoring, response, recovery, learning, and self-monitoring. The model consists of four areas: Event-based constraints, Functional Dependencies, Adaptive Capacity and Strategy. The paper describes dependencies between constraints, functions and strategies. We argue that models such as SyRes should be useful both for envisioning new resilience methods and metrics, as well as for engineering and evaluating resilient systems.",
"title": ""
},
{
"docid": "5bde44a162fa6259ece485b4319b56a4",
"text": "3D reconstruction from single view images is an ill-posed problem. Inferring the hidden regions from self-occluded images is both challenging and ambiguous. We propose a two-pronged approach to address these issues. To better incorporate the data prior and generate meaningful reconstructions, we propose 3D-LMNet, a latent embedding matching approach for 3D reconstruction. We first train a 3D point cloud auto-encoder and then learn a mapping from the 2D image to the corresponding learnt embedding. To tackle the issue of uncertainty in the reconstruction, we predict multiple reconstructions that are consistent with the input view. This is achieved by learning a probablistic latent space with a novel view-specific ‘diversity loss’. Thorough quantitative and qualitative analysis is performed to highlight the significance of the proposed approach. We outperform state-of-the-art approaches on the task of single-view 3D reconstruction on both real and synthetic datasets while generating multiple plausible reconstructions, demonstrating the generalizability and utility of our approach.",
"title": ""
},
{
"docid": "0ad47e79e9bea44a76029e1f24f0a16c",
"text": "The requirements for OLTP database systems are becoming ever more demanding. New OLTP applications require high degrees of scalability with controlled transaction latencies in in-memory databases. Deployments of these applications require low-level control of database system overhead and program-to-data affinity to maximize resource utilization in modern machines. Unfortunately, current solutions fail to meet these requirements. First, existing database solutions fail to expose a high-level programming abstraction in which latency of transactions can be reasoned about by application developers. Second, these solutions limit infrastructure engineers in exercising low-level control on the deployment of the system on a target infrastructure, further impacting performance. In this paper, we propose a relational actor programming model for in-memory databases. Conceptually, relational actors, or reactors for short, are application-defined, isolated logical actors encapsulating relations that process function calls asynchronously. Reactors ease reasoning about correctness by guaranteeing serializability of application-level function calls. In contrast to classic transactional models, however, reactors allow developers to take advantage of intra-transaction parallelism to reduce latency and improve performance. Moreover, reactors enable a new degree of flexibility in database deployment. We present REACTDB, a novel system design exposing reactors that allows for flexible virtualization of database architecture between the extremes of shared-nothing and shared-everything without changes to application code. Our experiments with REACTDB illustrate performance predictability, multi-core scalability, and low overhead in OLTP benchmarks.",
"title": ""
},
{
"docid": "6519ae37d66b3e5524318adc5070223e",
"text": "Powering cellular networks with renewable energy sources via energy harvesting (EH) have recently been proposed as a promising solution for green networking. However, with intermittent and random energy arrivals, it is challenging to provide satisfactory quality of service (QoS) in EH networks. To enjoy the greenness brought by EH while overcoming the instability of the renewable energy sources, hybrid energy supply (HES) networks that are powered by both EH and the electric grid have emerged as a new paradigm for green communications. In this paper, we will propose new design methodologies for HES green cellular networks with the help of Lyapunov optimization techniques. The network service cost, which addresses both the grid energy consumption and achievable QoS, is adopted as the performance metric, and it is optimized via base station assignment and power control (BAPC). Our main contribution is a low-complexity online algorithm to minimize the long-term average network service cost, namely, the Lyapunov optimization-based BAPC (LBAPC) algorithm. One main advantage of this algorithm is that the decisions depend only on the instantaneous side information without requiring distribution information of channels and EH processes. To determine the network operation, we only need to solve a deterministic per-time slot problem, for which an efficient inner-outer optimization algorithm is proposed. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Finally, sample simulation results are presented to verify the theoretical analysis as well as validate the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "a6c772380f45f9905e31c42b0680d36d",
"text": "Current neofunctionalist views of emotion underscore the biologically adaptive and psychologically constructive contributions of emotion to organized behavior, but little is known of the development of the emotional regulatory processes by which this is fostered. Emotional regulation refers to the extrinsic and intrinsic processes responsible for monitoring, evaluating, and modifying emotional reactions. This review provides a developmental outline of emotional regulation and its relation to emotional development throughout the life-span. The biological foundations of emotional self-regulation and individual differences in regulatory tendencies are summarized. Extrinsic influences on the early regulation of a child's emotion and their long-term significance are then discussed, including a parent's direct intervention strategies, selective reinforcement and modeling processes, affective induction, and the caregiver's ecological control of opportunity for heightened emotion and its management. Intrinsic contributors to the growth of emotional self-regulatory capacities include the emergence of language and cognitive skills, the child's growing emotional and self-understanding (and cognized strategies of emotional self-control), and the emergence of a \"theory of personal emotion\" in adolescence.",
"title": ""
},
{
"docid": "78786193b4f7521b05f43997218f6778",
"text": "The design and fabrication of an Ultra broadband square quad-ridge polarizer is discussed here. The principal advantages of this topology relay on both the instantaneous bandwidth and the axial ratio improvement. Experimental measurements exhibit very good agreement with the predicted results given by Mode Matching techniques. The structure provides an extremely flat axial ratio (AR< 0.4dB) and good return losses >25dB at both square ports over the extended Ku band (= 60%). Moreover, yield analysis and scaling properties demonstrate the robustness of this design against fabrication tolerances.",
"title": ""
},
{
"docid": "742fef70793920d2b96c0877a2a7f371",
"text": "Cloud computing is an emerging technology and it allows users to pay as you need and has the high performance. Cloud computing is a heterogeneous system as well and it holds large amount of application data. In the process of scheduling some intensive data or computing an intensive application, it is acknowledged that optimizing the transferring and processing time is crucial to an application program. In this paper in order to minimize the cost of the processing we formulate a model for task scheduling and propose a particle swarm optimization (PSO) algorithm which is based on small position value rule. By virtue of comparing PSO algorithm with the PSO algorithm embedded in crossover and mutation and in the local research, the experiment results show the PSO algorithm not only converges faster but also runs faster than the other two algorithms in a large scale. The experiment results prove that the PSO algorithm is more suitable to cloud computing.",
"title": ""
},
{
"docid": "90d9360a3e769311a8d7611d8c8845d9",
"text": "We introduce a learning-based approach to detect repeatable keypoints under drastic imaging changes of weather and lighting conditions to which state-of-the-art keypoint detectors are surprisingly sensitive. We first identify good keypoint candidates in multiple training images taken from the same viewpoint. We then train a regressor to predict a score map whose maxima are those points so that they can be found by simple non-maximum suppression. As there are no standard datasets to test the influence of these kinds of changes, we created our own, which we will make publicly available. We will show that our method significantly outperforms the state-of-the-art methods in such challenging conditions, while still achieving state-of-the-art performance on untrained standard datasets.",
"title": ""
},
{
"docid": "983f7cf81342632af41dabf416c0b286",
"text": "Color histograms are widely used for content-based image retrieval. Their advantages are efficiency, and insensitivity to small changes in camera viewpoint. However, a histogram is a coarse characterization of an image, and so images with very different appearances can have similar histograms. We describe a technique for comparing images called histogram refinement, which imposes additional constraints on histogram based matching. Histogram refinement splits the pixels in a given bucket into several classes, based upon some local property. Within a given bucket, only pixels in the same class are compared. We describe a split histogram called a color coherence vector (CCV), which partitions each histogram bucket based on spatial coherence. CCV’s can be computed at over 5 images per second on a standard workstation. A database with 15,000 images can be queried using CCV’s in under 2 seconds. We demonstrate that histogram refinement can be used to distinguish images whose color histograms are indistinguishable.",
"title": ""
}
] |
scidocsrr
|
212c7d38fcad3718dadf573c518e5a90
|
Leveraging SDN to conserve energy in WSN-An analysis
|
[
{
"docid": "5a83cb0ef928b6cae6ce1e0b21d47f60",
"text": "Software defined networking, characterized by a clear separation of the control and data planes, is being adopted as a novel paradigm for wired networking. With SDN, network operators can run their infrastructure more efficiently, supporting faster deployment of new services while enabling key features such as virtualization. In this article, we adopt an SDN-like approach applied to wireless mobile networks that will not only benefit from the same features as in the wired case, but will also leverage on the distinct features of mobile deployments to push improvements even further. We illustrate with a number of representative use cases the benefits of the adoption of the proposed architecture, which is detailed in terms of modules, interfaces, and high-level signaling. We also review the ongoing standardization efforts, and discuss the potential advantages and weaknesses, and the need for a coordinated approach.",
"title": ""
}
] |
[
{
"docid": "007a42bdf781074a2d00d792d32df312",
"text": "This paper presents a new approach for road lane classification using an onboard camera. Initially, lane boundaries are detected using a linear-parabolic lane model, and an automatic on-the-fly camera calibration procedure is applied. Then, an adaptive smoothing scheme is applied to reduce noise while keeping close edges separated, and pairs of local maxima-minima of the gradient are used as cues to identify lane markings. Finally, a Bayesian classifier based on mixtures of Gaussians is applied to classify the lane markings present at each frame of a video sequence as dashed, solid, dashed solid, solid dashed, or double solid. Experimental results indicate an overall accuracy of over 96% using a variety of video sequences acquired with different devices and resolutions.",
"title": ""
},
{
"docid": "522938687849ccc9da8310ac9d6bbf9e",
"text": "Machine learning models, especially Deep Neural Networks, are vulnerable to adversarial examples—malicious inputs crafted by adding small noises to real examples, but fool the models. Adversarial examples transfer from one model to another, enabling black-box attacks to real-world applications. In this paper, we propose a strong attack algorithm named momentum iterative fast gradient sign method (MI-FGSM) to discover adversarial examples. MI-FGSM is an extension of iterative fast gradient sign method (I-FGSM) but improves the transferability significantly. Besides, we study how to attack an ensemble of models efficiently. Experiments demonstrate the effectiveness of the proposed algorithm. We hope that MI-FGSM can serve as a benchmark attack algorithm for evaluating the robustness of various models and defense methods.",
"title": ""
},
{
"docid": "7437f0c8549cb8f73f352f8043a80d19",
"text": "Graphene is considered as one of leading candidates for gas sensor applications in the Internet of Things owing to its unique properties such as high sensitivity to gas adsorption, transparency, and flexibility. We present self-activated operation of all graphene gas sensors with high transparency and flexibility. The all-graphene gas sensors which consist of graphene for both sensor electrodes and active sensing area exhibit highly sensitive, selective, and reversible responses to NO2 without external heating. The sensors show reliable operation under high humidity conditions and bending strain. In addition to these remarkable device performances, the significantly facile fabrication process enlarges the potential of the all-graphene gas sensors for use in the Internet of Things and wearable electronics.",
"title": ""
},
{
"docid": "38d2d91747cd211dcc3cdf403d250f9f",
"text": "In this paper we prove that in any non-trivial real analytic family of quasiquadratic maps, almost any map is either regular (i.e., it has an attracting cycle) or stochastic (i.e., it has an absolutely continuous invariant measure). To this end we show that the space of analytic maps is foliated by codimension-one analytic submanifolds, “hybrid classes”. This allows us to transfer the regular or stochastic property of the quadratic family to any non-trivial real analytic family.",
"title": ""
},
{
"docid": "4e4f653da064c9fc2096a5f334662ca8",
"text": "Face images appearing in multimedia applications, e.g., social networks and digital entertainment, usually exhibit dramatic pose, illumination, and expression variations, resulting in considerable performance degradation for traditional face recognition algorithms. This paper proposes a comprehensive deep learning framework to jointly learn face representation using multimodal information. The proposed deep learning structure is composed of a set of elaborately designed convolutional neural networks (CNNs) and a three-layer stacked auto-encoder (SAE). The set of CNNs extracts complementary facial features from multimodal data. Then, the extracted features are concatenated to form a high-dimensional feature vector, whose dimension is compressed by SAE. All of the CNNs are trained using a subset of 9,000 subjects from the publicly available CASIA-WebFace database, which ensures the reproducibility of this work. Using the proposed single CNN architecture and limited training data, 98.43% verification rate is achieved on the LFW database. Benefitting from the complementary information contained in multimodal data, our small ensemble system achieves higher than 99.0% recognition rate on LFW using publicly available training set.",
"title": ""
},
{
"docid": "95af5413e04341770887a74faa7c8405",
"text": "Two experiments investigate the effects of language comprehension on affordances. Participants read a sentence composed by either an observation or an action verb (Look at/Grasp) followed by an object name. They had to decide whether the visual object following the sentence was the same as the one mentioned in the sentence. Objects graspable with either a precision or a power grip were presented in an orientation affording action (canonical) or not. Action sentences were faster than observation sentences, and power grip objects were faster than precision grip objects. Moreover, faster RTs were obtained when orientation afforded action. Results indicate that the simulation activated during language comprehension leads to the formation of a \"motor prototype\" of the object. This motor prototype encodes information on temporary/canonical and stable affordances (e.g., orientation, size), which can be possibly referred to different cognitive and neural systems (dorsal, ventral systems).",
"title": ""
},
{
"docid": "4b3e7c1682b9e039e26702105fd0cc63",
"text": "Recent research has shown that voltage scaling is a very effective technique for low-power design. This paper describes a voltage scaling technique to minimize the power consumption of a combinational circuit. First, the converter-free multiple-voltage (CFMV) structures are proposed, including the p-type, the n-type, and the two-way CFMV structures. The CFMV structures make use of multiple supply voltages and do not require level converters. In contrast, previous works employing multiple supply voltages need level converters to prevent static currents, which may result in large power consumption. In addition, the CFMV structures group the gates with the same supply voltage in a cluster to reduce the complexity of placement and routing for the subsequent physical layout stage. Next, we formulated the problem and proposed an efficient heuristic algorithm to solve it. The heuristic algorithm has been implemented in C and experiments were performed on the ISCAS85 circuits to demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "39cb45c62b83a40f8ea42cb872a7aa59",
"text": "Levy flights are employed in a lattice model of contaminant migration by bioturbation, the reworking of sediment by benthic organisms. The model couples burrowing, foraging, and conveyor-belt feeding with molecular diffusion. The model correctly predicts a square-root dependence on bioturbation rates over a wide range of biomass densities. The model is used to predict the effect of bioturbation on the redistribution of contaminants in laboratory microcosms containing pyrene-inoculated sediments and the tubificid oligochaete Limnodrilus hoffmeisteri. The model predicts the dynamic flux from the sediment and in-bed concentration profiles that are consistent with observations. The sensitivity of flux and concentration profiles to the specific mechanisms of bioturbation are explored with the model. The flux of pyrene to the overlying water was largely controlled by the simulated foraging activities.",
"title": ""
},
{
"docid": "f2a72dcfaddde40a82b1182ecc945a29",
"text": "Cybersickness is an undesirable side effect, which occurs when people experience in a virtual environment (VE). A number of studies have tried to find ways to control cybersickness and some of them have demonstrated that presenting rest frames in VE might be one promising method to reduce cybersickness. This study investigates the effects of rest frames on cybersickness and the cybersickness-related brain activities. Participants (n=22) were exposed to a roller coaster simulator in a VE, both under a rest frame condition and a nonrest frame condition, in counter-balanced order while undergoing EEG recordings. Participants who experienced less cybersickness in the rest frame condition showed characteristic oscillations in different EEG frequency bands compared to those of in the nonrest frame condition. Based on the level of cybersickness and oscillatory EEG changes, we suggest that rest frames may reduce or delay the onset of cybersickness by alleviating users' attention or perception load.",
"title": ""
},
{
"docid": "151fd47f87944978edfafb121b655ad8",
"text": "We introduce a pair of tools, Rasa NLU and Rasa Core, which are open source python libraries for building conversational software. Their purpose is to make machine-learning based dialogue management and language understanding accessible to non-specialist software developers. In terms of design philosophy, we aim for ease of use, and bootstrapping from minimal (or no) initial training data. Both packages are extensively documented and ship with a comprehensive suite of tests. The code is available at https://github.com/RasaHQ/",
"title": ""
},
{
"docid": "89fed81d7d846086bdd284be422288cc",
"text": "A considerable number of organizations continually face difficulties bringing strategy to execution, and suffer from a lack of structure and transparency in corporate strategic management. Yet, enterprise architecture as a fundamental exercise to achieve a structured description of the enterprise and its relationships appears far from being adopted in the strategic management arena. To move the adoption process along, this paper develops a comprehensive business architecture framework that assimilates and extends prior research and applies the framework to selected scenarios in corporate strategic management. This paper also presents the approach in practice, based on a qualitative appraisal of interviews with strategic directors across different industries. With its integrated conceptual guideline for using enterprise architecture to facilitate corporate strategic management and the insights gained from the interviews, this paper not only delves more deeply into the research but also offers advice for both researchers and practitioners.",
"title": ""
},
{
"docid": "876814467f8dee3ae50e25d85b851029",
"text": "Our work presents a low-cost temperature acquisition, for incubator system, based on the Arduino hardware platform; both the hardware and software components are detailed, together with experimental evaluation. This system was designed to facilitate the process of identification and control of a temperature of premature infant incubator. The experimental evaluation revealed that this system is not only capable of temperature signal acquisition, for incubator purposes, but it can also be used as a generic platform for other biomedical applications, greatly extending its applicability. In this paper we describe the proposed platform, with special emphasis on the design principles and functionality. System identification results based on least squares algorithm (RLS) to find the ARMAX input-output mathematical model. We opted for the GPC structure for control temperature. The results of implementation in real time on the neonatal incubator were presented and interpreted.",
"title": ""
},
{
"docid": "d180c5d4a3d65664ab8535fb926a5a3b",
"text": "In living systems, it is crucial to study the exchange of entropy that plays a fundamental role in the understanding of irreversible chemical reactions. However, there are not yet works able to describe in a systematic way the rate of entropy production associated to irreversible processes. Hence, here we develop a theoretical model to compute the rate of entropy in the minimum living system. In particular, we apply the model to the most interesting and relevant case of metabolic network, the glucose catabolism in normal and cancer cells. We show, (i) the rate of internal entropy is mainly due to irreversible chemical reactions, and (ii) the rate of external entropy is mostly correlated to the heat flow towards the intercellular environment. The future applications of our model could be of fundamental importance for a more complete understanding of self-renewal and physiopatologic processes and could potentially be a support for cancer detection.",
"title": ""
},
{
"docid": "dd4322e25b26b501cf60f9b42a7aa575",
"text": "a r t i c l e i n f o In organizations today, the risk of poor information quality is becoming increasingly high as larger and more complex information resources are being collected and managed. To mitigate this risk, decision makers assess the quality of the information provided by their IS systems in order to make effective decisions based on it. To do so, they may rely on quality metadata: objective quality measurements tagged by data managers onto the information used by decision makers. Decision makers may also gauge information quality on their own, subjectively and contextually assessing the usefulness of the information for solving the specific task at hand. Although information quality has been defined as fitness for use, models of information quality assessment have thus far tended to ignore the impact of contextual quality on information use and decision outcomes. Contextual assessments can be as important as objective quality indicators because they can affect which information gets used for decision making tasks. This research offers a theoretical model for understanding users' contextual information quality assessment processes. The model is grounded in dual-process theories of human cognition, which enable simultaneous evaluation of both objective and contextual information quality attributes. Findings of an exploratory laboratory experiment suggest that the theoretical model provides an avenue for understanding contextual aspects of information quality assessment in concert with objective ones. The model offers guidance for the design of information environments that can improve performance by integrating both objective and subjective aspect of users' quality assessments. Organizational data is a critical resource that supports business processes and managerial decision making. Advances in information technology have enabled organizations to collect and store more data than ever before. This data is processed in a variety of different and complex ways to generate information that serves as input to organizational decision tasks. As data volumes increase, so does the complexity of managing it and the risks of poor data quality. Poor quality data can be detrimental to system usability and hinder operational performance, leading to flawed decisions [27]. It can also damage organizational reputation, heighten risk exposure, and cause significant capital losses [28]. While international figures are difficult to determine, data quality problems currently cost U.S. businesses over $600 billion annually [1]. Data quality is hence an important area of concern to both practitioners and researchers. Data quality researchers have used the terms \" data quality …",
"title": ""
},
{
"docid": "96ea7f2a0fd0a630df87d22d846d1575",
"text": "BACKGROUND\nRecent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies.\n\n\nRESULTS\nWe analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches.\n\n\nCONCLUSION\nSystems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities including algorithmic, statistical and logic-based tools. For the task of automatic structure-based classification of chemical entities, essential to managing the vast swathes of chemical data being brought online, systems which are capable of hybrid reasoning combining several different approaches are crucial. We provide a thorough review of the available tools and methodologies, and identify areas of open research.",
"title": ""
},
{
"docid": "82c557b21509c30f34ac8d0463a027af",
"text": "Formant frequency data for /l/ in 23 languages/dialects where the consonant may be typically clear or dark show that the two varieties of /l/ are set in contrast mostly in the context of /i/ but also next to /a/, and that a few languages/dialects may exhibit intermediate degrees of darkness in the consonant. F2 for /l/ is higher utterance initially than utterance finally, more so if the lateral is clear than if it is dark; moreover, the initial and final allophones may be characterized as intrinsic (in most languages/dialects) or extrinsic (in several English dialects, Czech and Dutch) depending on whether the position-dependent frequency difference in question is below or above 200/ 300 Hz. The paper also reports a larger degree of vowel coarticulation for clear /l/ than for dark /l/ and in initial than in final position. These results are interpreted in terms of the production mechanisms involved in the realization of the two /l/ varieties in the different positional and vowel context conditions subjected to investigation. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0d1bfd16d091efcce0c2d558bb4da5d8",
"text": "In this paper, we perform a systematic design study of the \"elephant in the room\" facing the VR industry -- is it feasible to enable high-quality VR apps on untethered mobile devices such as smartphones? Our quantitative, performance-driven design study makes two contributions. First, we show that the QoE achievable for high-quality VR applications on today's mobile hardware and wireless networks via local rendering or offloading is about 10X away from the acceptable QoE, yet waiting for future mobile hardware or next-generation wireless networks (e.g. 5G) is unlikely to help, because of power limitation and the higher CPU utilization needed for processing packets under higher data rate. Second, we present Furion, a VR framework that enables high-quality, immersive mobile VR on today's mobile devices and wireless networks. Furion exploits a key insight about the VR workload that foreground interactions and background environment have contrasting predictability and rendering workload, and employs a split renderer architecture running on both the phone and the server. Supplemented with video compression, use of panoramic frames, and parallel decoding on multiple cores on the phone, we demonstrate Furion can support high-quality VR apps on today's smartphones over WiFi, with under 14ms latency and 60 FPS (the phone display refresh rate).",
"title": ""
},
{
"docid": "c07ea25fe12ec56e6bf7df9508a6b494",
"text": "The psychological and anthropological literature on cultural variations in emotions is reviewed. The literature has been interpreted within the framework of a cognitive-process model of emotions. Both cross-cultural differences and similarities were identified in each phase of the emotion process; similarities in 1 phase do not necessarily imply similarities in other phases. Whether cross-cultural differences or similarities are found depends to an important degree on the level of description of the emotional phenomena. Cultural differences in emotions appear to be due to differences in event types or schemas, in culture-specific appraisal propensities, in behavior repertoires, or in regulation processes. Differences in taxonomies of emotion words sometimes reflect true emotion differences like those just mentioned, but they may also just result from differences in which emotion-process phase serves as the basis for categorization.",
"title": ""
},
{
"docid": "79e0dc959169e5a4cd6ee5f26b880768",
"text": "Skin color detection is an important subject in computer vision research. Color segmentation takes a great attention because color is an effective and robust visual cue for characterizing an object from the others. To aim at existing skin color algorithms considering the luminance information not enough, a reliable color modeling approach was proposed. It is based on the fact that color distribution of a single-colored object is not invariant with respect to luminance variations even in the Cb-Cr plane and does not ignore the influence on luminance Y component in YCbCr color space. Firstly, according to statistics of skin color pixels, we take the luminance Y by ascending order, divide the total range of Y into finite number of intervals, collect pixels whose luminance belongs to the same luminance interval, calculate the covariance and the mean value of Cb and Cr with respect to Y, and use the above data to train the BP neural network, then we get the self-adaptive skin color model and design a Gaussian model classifier. The experimental results have indicated that this algorithm can effectively fulfill the skin-color detection for images captured under different environmental condition and the performance of the skin color segmentation is significantly improved.",
"title": ""
},
{
"docid": "9f53016723d5064e3790cd316399e082",
"text": "We investigated the processing effort during visual search and counting tasks using a pupil dilation measure. Search difficulty was manipulated by varying the number of distractors as well as the heterogeneity of the distractors. More difficult visual search resulted in more pupil dilation than did less difficult search. These results confirm a link between effort and increased pupil dilation. The pupil dilated more during the counting task than during target-absent search, even though the displays were identical, and the two tasks were matched for reaction time. The moment-to-moment dilation pattern during search suggests little effort in the early stages, but increasingly more effort towards response, whereas the counting task involved an increased initial effort, which was sustained throughout the trial. These patterns can be interpreted in terms of the differential memory load for item locations in each task. In an additional experiment, increasing the spatial memory requirements of the search evoked a corresponding increase in pupil dilation. These results support the view that search tasks involve some, but limited, memory for item locations, and the effort associated with this memory load increases during the trials. In contrast, counting involves a heavy locational memory component from the start.",
"title": ""
}
] |
scidocsrr
|
9b2903ce1c7110c61c08f7a16435cad5
|
Increasing Evolvability Considered as a Large-Scale Trend in Evolution
|
[
{
"docid": "b4958ecd58d42437cddda89623f55c1f",
"text": "The assumption that acquired characteristics are not inherited is often taken to imply that the adaptations that an organism learns during its lifetime cannot guide the course of evolution. This inference is incorrect (Baldwin, 1896). Learning alters the shape of the search space in which evolution operates and thereby provides good evolutionary paths towards sets of co-adapted alleles. We demonstrate that this effect allows learning organisms to evolve much faster than their nonlearning equivalents, even though the characteristics acquired by the phenotype are not communicated to the genotype.",
"title": ""
},
{
"docid": "844a39889bd671a8b9abe085b2e0a982",
"text": "1 One may wonder, ...] how complex organisms evolve at all. They seem to have so many genes, so many multiple or pleiotropic eeects of any one gene, so many possibilities for lethal mutations in early development, and all sorts of problems due to their long development. Abstract: The problem of complex adaptations is studied in two largely disconnected research traditions: evolutionary biology and evolutionary computer science. This paper summarizes the results from both areas and compares their implications. In evolutionary computer science it was found that the Darwinian process of mutation, recombination and selection is not universally eeective in improving complex systems like computer programs or chip designs. For adaptation to occur, these systems must possess \"evolvability\", i.e. the ability of random variations to sometimes produce improvement. It was found that evolvability critically depends on the way genetic variation maps onto phenotypic variation, an issue known as the representation problem. The genotype-phenotype map determines the variability of characters, which is the propensity to vary. Variability needs to be distinguished from variation, which are the actually realized diierences between individuals. The genotype-phenotype map is the common theme underlying such varied biological phenomena as genetic canalization, developmental constraints, biological versatility , developmental dissociability, morphological integration, and many more. For evolutionary biology the representation problem has important implications: how is it that extant species acquired a genotype-phenotype map which allows improvement by mutation and selection? Is the genotype-phenotype map able to change in evolution? What are the selective forces, if any, that shape the genotype-phenotype map? We propose that the genotype-phenotype map can evolve by two main routes: epistatic mutations, or the creation of new genes. A common result for organismic design is modularity. By modularity we mean a genotype-phenotype map in which there are few pleiotropic eeects among characters serving diierent functions, with pleiotropic eeects falling mainly among characters that are part of a single functional complex. Such a design is expected to improve evolvability by limiting the interference between the adaptation of diierent functions. Several population genetic models are reviewed that are intended to explain the evolutionary origin of a modular design. While our current knowledge is insuucient to assess the plausibil-ity of these models, they form the beginning of a framework for understanding the evolution of the genotype-phenotype map.",
"title": ""
}
] |
[
{
"docid": "5bd483e895de779f8b91ca8537950a2f",
"text": "To evaluate the efficacy of pregabalin in facilitating taper off chronic benzodiazepines, outpatients (N = 106) with a lifetime diagnosis of generalized anxiety disorder (current diagnosis could be subthreshold) who had been treated with a benzodiazepine for 8-52 weeks were stabilized for 2-4 weeks on alprazolam in the range of 1-4 mg/day. Patients were then randomized to 12 weeks of double-blind treatment with either pregabalin 300-600 mg/day or placebo while undergoing a gradual benzodiazepine taper at a rate of 25% per week, followed by a 6-week benzodiazepine-free phase during which they continued double-blind study treatment. Outcome measures included ability to remain benzodiazepine-free (primary) as well as changes in Hamilton Anxiety Rating Scale (HAM)-A and Physician Withdrawal Checklist (PWC). At endpoint, a non-significant higher proportion of patients remained benzodiazepine-free receiving pregabalin compared with placebo (51.4% vs 37.0%). Treatment with pregabalin was associated with significantly greater endpoint reduction in the HAM-A total score versus placebo (-2.5 vs +1.3; p < 0.001), and lower endpoint mean PWC scores (6.5 vs 10.3; p = 0.012). Thirty patients (53%) in the pregabalin group and 19 patients (37%) in the placebo group completed the study, reducing the power to detect a significant difference on the primary outcome. The results on the anxiety and withdrawal severity measures suggest that switching to pregabalin may be a safe and effective method for discontinuing long-term benzodiazepine therapy.",
"title": ""
},
{
"docid": "7fadd4cafa4997c8af947cbdf26f4a43",
"text": "This article presents a meta-analysis of the experimental literature that has examined the effect of performance and mastery achievement goals on intrinsic motivation. Summary analyses provided support for the hypothesis that the pursuit of performance goals has an undermining effect on intrinsic motivation relative to the pursuit of mastery goals. Moderator analyses were conducted in an attempt to explain significant variation in the magnitude and direction of this effect across studies. Results indicated that the undermining effect of performance goals relative to mastery goals was contingent on whether participants received confirming or nonconfirming competence feedback, and on whether the experimental procedures induced a performance-approach or performance-avoidance orientation. These findings provide conceptual clarity to the literature on achievement goals and intrinsic motivation and suggest numerous avenues for subsequent empirical work.",
"title": ""
},
{
"docid": "92312b5d757e31786210f7ce0197e175",
"text": "We present a deterministic incremental algorithm for exactly maintaining the size of a minimum cut with Õ(1) amortized time per edge insertion and O(1) query time. This result partially answers an open question posed by Thorup [Combinatorica 2007]. It also stays in sharp contrast to a polynomial conditional lower-bound for the fully-dynamic weighted minimum cut problem. Our algorithm is obtained by combining a recent sparsification technique of Kawarabayashi and Thorup [STOC 2015] and an exact incremental algorithm of Henzinger [J. of Algorithm 1997]. We also study space-efficient incremental algorithms for the minimum cut problem. Concretely, we show that there exists an O(n logn/ε2) space Monte-Carlo algorithm that can process a stream of edge insertions starting from an empty graph, and with high probability, the algorithm maintains a (1 + ε)-approximation to the minimum cut. The algorithm has Õ(1) amortized update-time and constant query-time. 1998 ACM Subject Classification G.2.2 Graph Theory",
"title": ""
},
{
"docid": "ab7663ef08505e37be080eab491d2607",
"text": "This paper has studied the fatigue and friction of big end bearing on an engine connecting rod by combining the multi-body dynamics and hydrodynamic lubrication model. First, the basic equations and the application on AVL-Excite software platform of multi-body dynamics have been described in detail. Then, introduce the hydrodynamic lubrication model, which is the extended Reynolds equation derived from the Navier-Stokes equation and the equation of continuity. After that, carry out the static calculation of connecting rod assembly. At the same time, multi-body dynamics analysis has been performed and stress history can be obtained by finite element data recovery. Next, execute the fatigue analysis combining the Static stress and dynamic stress, safety factor distribution of connecting rod will be obtained as result. At last, detailed friction analysis of the big-end bearing has been performed. And got a good agreement when contrast the simulation results to the Bearing wear in the experiment.",
"title": ""
},
{
"docid": "cef5778ec1f6b5a0fe7076eb20222c3d",
"text": "VIRTUAL reality, also called virtual environments, is a new interface paradigm that uses computers and human-computer interfaces to create the effect of a three-dimensional world in which the user interacts directly with virtual objects. The term “virtual reality” has received quite a lot of attention in the last few years, so we feel it is important to be clear about its meaning. For the purposes of this article, we adopt the following definition: Virtual reality is the use of computers and human-computer interfaces to create the effect of a three-dimensional world containing interactive objects with a strong sense of threedimensional presence. Steve Bryson Virtual Reality in Scientific Visualization",
"title": ""
},
{
"docid": "eab83df830eba33729dc018d1bd9586a",
"text": "This paper describes a split-mode tuning fork MEMS gyroscope with CMOS readout circuit. The gyroscope achieves 0.008°/<inline-formula> <tex-math notation=\"LaTeX\">$\\surd \\text{h}$ </tex-math></inline-formula> angle random walk (ARW) and 0.08°/h bias instability (BI). The noise and phase requirements of the MEMS sensing element and the readout circuit are analyzed, from which the system-level design guidelines are proposed. The MEMS sensing element is optimized to enhance its mechanical sensitivity with reduced quadrature coupling and thermoelastic damping. Front ends with 5.9-fA/<inline-formula> <tex-math notation=\"LaTeX\">$\\surd $ </tex-math></inline-formula>Hz input-referred current noise floor and less than 0.5° phase delay are achieved to reduce the gyroscope’s ARW and thermal drift. A low flicker noise automatic amplitude control circuit and digitized phase-sensitive demodulation are adopted to improve the gyroscope’s BI. After temperature compensation, the temperature coefficients (TCOs) of the scale factor and the zero-rate output are 27 ppm/°C and 1.7°/h/°C from −40 °C to +60 °C, respectively. The overall power consumption is 8.5 mW under a 3.3-V supply.",
"title": ""
},
{
"docid": "dec89c3035ce2456c23e547252c5824a",
"text": "This is a survey of some of the nice properties of the associahedron (also called Stasheff polytope) from several points of views: topological, geometrical, combinatorial and algebraic.",
"title": ""
},
{
"docid": "347b3cf4156d5538f5436195712ac892",
"text": "Kernel rootkits are among the most insidious threats to computer security to da . By employing various code injection techniques, they are able to maintain an omn ip te t presence in the compromised OS kernels. Existing preventive countermeasures typ icall employ virtualization technology as part of their solutions. However, they are still limited in either ( 1) equiring modifying the OS kernel source code for the protection or (2) leveraging so ftware-based virtualization techniques such as binary translation with a high overhead to implement a Ha rv rd architecture (which is robust to various code injection techniques used by kernel roo tkits). In this paper, we introduce hvmHarvard, a hardware virtualization-based Harvard arch itecture that transparently protects commodity OS kernels from kernel rootkit attacks and significantly re duc s the performance overhead. Our evaluation with a Xen-based prototype shows that it can tr ansparently protect legacy OS kernels with rootkit resistance while introducing < 5% performance overhead.",
"title": ""
},
{
"docid": "c2fc81074ceed3d7c3690a4b23f7624e",
"text": "The diffusion model for 2-choice decisions (R. Ratcliff, 1978) was applied to data from lexical decision experiments in which word frequency, proportion of high- versus low-frequency words, and type of nonword were manipulated. The model gave a good account of all of the dependent variables--accuracy, correct and error response times, and their distributions--and provided a description of how the component processes involved in the lexical decision task were affected by experimental variables. All of the variables investigated affected the rate at which information was accumulated from the stimuli--called drift rate in the model. The different drift rates observed for the various classes of stimuli can all be explained by a 2-dimensional signal-detection representation of stimulus information. The authors discuss how this representation and the diffusion model's decision process might be integrated with current models of lexical access.",
"title": ""
},
{
"docid": "0bdfd3e6f529efdf8aecb3794867b39a",
"text": "Head-tracked 3D displays can provide a compelling 3D effect, but even small inaccuracies in the calibration of the participant's viewpoint to the display can disrupt the 3D illusion. We propose a novel interactive procedure for a participant to easily and accurately calibrate a head-tracked display by visually aligning patterns across a multi-screen display. Head-tracker measurements are then calibrated to these known viewpoints. We conducted a user study to evaluate the effectiveness of different visual patterns and different display shapes. We found that the easiest to align shape was the spherical display and the best calibration pattern was the combination of circles and lines. We performed a quantitative camera-based calibration of a cubic display and found visual calibration outperformed manual tuning and generated viewpoint calibrations accurate to within a degree. Our work removes the usual, burdensome step of manual calibration when using head-tracked displays and paves the way for wider adoption of this inexpensive and effective 3D display technology.",
"title": ""
},
{
"docid": "d3203488fa016c5eb6f2b62bcffa5a1d",
"text": "Chronic copper toxicity was diagnosed in a Jersey herd in the Waikato region of New Zealand following an investigation into the deaths of six cattle from a herd of 250 dry cows. Clinical signs and post-mortem examination results were consistent with a hepatopathy, and high concentrations of copper in liver and blood samples of clinically affected animals confirmed copper toxicity. Liver copper concentrations and serum gamma-glutamyl transferase activities were both raised in a group of healthy animals sampled at random from the affected herd, indicating an ongoing risk to the remaining cattle; these animals all had serum copper concentrations within normal limits. Serum samples and liver biopsies were also collected and assayed for copper from animals within two other dairy herds on the same farm; combined results from all three herds showed poor correlation between serum and liver copper concentrations. To reduce liver copper concentrations the affected herd was drenched with 0.5 g ammonium molybdate and 1 g sodium sulphate per cow for five days, and the herd was given no supplementary feed or mineral supplements. Liver biopsies were repeated 44 days after the initial biopsies (approximately 1 month after the end of the drenching program); these showed a significant 37.3% decrease in liver copper concentrations (P <0.02). Also there were no further deaths after the start of the drenching program. Since there was no control group it is impossible to quantify the effect of the drenching program in this case, and dietary changes were also made that would have depleted liver copper stores. Historical analysis of the diet was difficult due to poor record keeping, but multiple sources of copper contributed to a long term copper over supplementation of the herd; the biggest source of copper was a mineral supplement. The farmer perceived this herd to have problems with copper deficiency prior to the diagnosis of copper toxicity, so this case demonstrates the importance of monitoring herd copper status regularly. Also the poor correlation between liver and serum copper concentrations in the three herds sampled demonstrates the importance of using liver copper concentration to assess herd copper status.",
"title": ""
},
{
"docid": "bd671831032f704a06344bd46ba8f694",
"text": "There has been an increase in the attention paid to the strategic potential of information systems and a new willingness to accept the possibility that information systems can be the source of strategic gains. This belief is reflected in a host of publications, from the popular press to respected journals. Much of this has been supported by a very limited set of prominent and well publicized success stories, principally involving marketing and distribution, financial services, and the airlines. Unfortunately, there has been little attempt at an analysis that abstracts from these experiences to determine factors that determine strategic success. This can be attributed in part to the absence of attention paid to unsuccessful ventures in the use of information technology for competitive advantage. Although this paper relies on the same anecdotes, it augments them with data on a few unsuccessful attempts to exploit information technology and with economic theory where appropriate. General conditions that appear necessary for sustainable competitive advantage are developed.",
"title": ""
},
{
"docid": "242746fd37b45c83d8f4d8a03c1079d3",
"text": "BACKGROUND\nThe use of wheat grass (Triticum aestivum) juice for treatment of various gastrointestinal and other conditions had been suggested by its proponents for more than 30 years, but was never clinically assessed in a controlled trial. A preliminary unpublished pilot study suggested efficacy of wheat grass juice in the treatment of ulcerative colitis (UC).\n\n\nMETHODS\nA randomized, double-blind, placebo-controlled study. One gastroenterology unit in a tertiary hospital and three study coordinating centers in three major cities in Israel. Twenty-three patients diagnosed clinically and sigmoidoscopically with active distal UC were randomly allocated to receive either 100 cc of wheat grass juice, or a matching placebo, daily for 1 month. Efficacy of treatment was assessed by a 4-fold disease activity index that included rectal bleeding and number of bowel movements as determined from patient diary records, a sigmoidoscopic evaluation, and global assessment by a physician.\n\n\nRESULTS\nTwenty-one patients completed the study, and full information was available on 19 of them. Treatment with wheat grass juice was associated with significant reductions in the overall disease activity index (P=0.031) and in the severity of rectal bleeding (P = 0.025). No serious side effects were found. Fresh extract of wheat grass demonstrated a prominent tracing in cyclic voltammetry methodology, presumably corresponding to four groups of compounds that exhibit anti-oxidative properties.\n\n\nCONCLUSION\nWheat grass juice appeared effective and safe as a single or adjuvant treatment of active distal UC.",
"title": ""
},
{
"docid": "322dcd68d7467c477c241bedc28fce11",
"text": "The automobile mathematical model is established on the analysis to the automobile electric power steering system (EPS) structural style and the performance. In order to solve the problem that the most automobile power steering is difficult to determine the PID controller parameter, the article uses the fuzzy neural network PID control in EPS. Through the simulation of PID control and the fuzzy neural network PID control computation, the test result indicated that, fuzzy neural network PID the control EPS system has a better robustness compared to traditional PID the control EPS, can improve EPS effectively the steering characteristic and the automobile changes characteristic well.",
"title": ""
},
{
"docid": "2e2dc51bc059d7d40cdae22e1e36776e",
"text": "In this thesis we present an approach to neural machine translation (NMT) that supports multiple domains in a single model and allows switching between the domains when translating. The core idea is to treat text domains as distinct languages and use multilingual NMT methods to create multi-domain translation systems; we show that this approach results in significant translation quality gains over fine-tuning. We also propose approach of unsupervised domain assignment and explore whether the knowledge of pre-specified text domains is necessary; turns out that it is after all, but also that when it is not known quite high translation quality can be reached, and even higher than with known domains in some cases. Additionally, we explore the possibility of intra-language style adaptation through zero shot translation. We show that this approach is able to style adapt, however, with unresolved text deterioration issues.",
"title": ""
},
{
"docid": "e8b0536f5d749b5f6f5651fe69debbe1",
"text": "Current centralized cloud datacenters provide scalable computation- and storage resources in a virtualized infrastructure and employ a use-based \"pay-as-you-go\" model. But current mobile devices and their resource-hungry applications (e.g., Speech-or face recognition) demand for these resources on the spot, though a mobile device's intrinsic characteristic is its limited availability of resources (e.g., CPU, storage, bandwidth, energy). Thus, mobile cloud computing (MCC) was introduced to overcome these limitations by transparently making accessible the apparently infinite cloud resources to the mobile devices and by allowing mobile applications to (elastically) expand into the cloud. However, MCC often relies on a stable and fast connection to the mobile devices' surrogate in the cloud, which is a rare case in mobile scenarios. Moreover, the increased latency and the limited bandwidth prevent the use of real-time applications like, e.g. Cloud gaming. Instead, mobile edge computing (MEC) or fog computing tries to provide the necessary resources at the logical edge of the network by including infrastructure components to create ad-hoc mobile clouds. However, this approach requires the replication and management of the applications' business logic in an untrusted, unreliable and constantly changing environment. Consequently, this paper presents a novel approach to allow mobile app developers to easily benefit from the features of MEC. In particular, we present a programming model and framework that directly fit the common app developers' mindset to design elastic and scalable edge-based mobile applications.",
"title": ""
},
{
"docid": "be4f91a03afd3a90523366403254aeff",
"text": "Today, it is generally accepted that sprint performance, like endurance performance, can improve considerably with training. Strength training, especially, plays a key role in this process. Sprint performance will be viewed multidimensionally as an initial acceleration phase (0 to 10 m), a phase of maximum running speed (36 to 100 m) and a transition phase in between. Immediately following the start action, the powerful extensions of the hip, knee and ankle joints are the main accelerators of body mass. However, the hamstrings, the m. adductor magnus and the m. gluteus maximus are considered to make the most important contribution in producing the highest levels of speed. Different training methods are proposed to improve the power output of these muscles. Some of them aim for hypertrophy and others for specific adaptations of the nervous system. This includes general (hypertrophy and neuronal activation), velocity specific (speed-strength) and movement specific (sprint associated exercises) strength training. In developing training strategies, the coach has to keep in mind that strength, power and speed are inherently related to one another, because they are all the output of the same functional systems. As heavy resistance training results in a fibre type IIb into fibre type IIa conversion, the coach has to aim for an optimal balance between sprint specific and nonspecific training components. To achieve this they must take into consideration the specific strength training demands of each individual, based on performance capacity in each specific phase of the sprint.",
"title": ""
},
{
"docid": "521db06094753c4e58024dea7e43f738",
"text": "In this paper, we propose a stretchable tactile sensor composed of a pair of silicone rubber channels filled with electro conductive liquid. When a force was applied to this channel, its length and cross-sectional area deforms. By measuring the resistance change of the electro conductive liquid in the channel, its deformation can be measured. The proposed tactile sensor is composed of two parallel channel filled with electro conductive liquid, therefore, by comparing the resistance changes of each channel to the deformation, only the contacting force can be measured independently. Since a liquid is used for the sensing material, the proposed liquid tactile sensor can be easily attached to movable portions as the joints of robots. In the paper, we measured the sensing characteristics of the liquid tactile sensor to the stretch, bend, and contact force. Finally, the efficiency of the sensor was demonstrated by measuring the contact force from 0 to 3.0N by attaching the 20% stretched liquid tactile sensor to curved surfaces with 0.05mm−1 in curvature.",
"title": ""
},
{
"docid": "d5666bfb1fcd82ac89da2cb893ba9fb7",
"text": "Ad-servers have to satisfy many different targeting criteria, and the combination can often result in no feasible solution. We hypothesize that advertisers may be defining these metrics to create a kind of \"proxy target\". We therefore reformulate the standard ad-serving problem to one where we attempt to get as close as possible to the advertiser's multi-dimensional target inclusive of delivery. We use a simple simulation to illustrate the behavior of this algorithm compared to Constraint and Pacing strategies. The system is then deployed in one of the largest video ad-servers in the United States and we show experimental results from live test ads, as well as 6 months of production performance across hundreds of ads. We find that the live ad-server tests match the simulation, and we report significant gains in multi-KPI performance from using the error minimization strategy.",
"title": ""
}
] |
scidocsrr
|
5f73633df9472d368dcdb17566f3c935
|
Research through design as a method for interaction design research in HCI
|
[
{
"docid": "ed4dcf690914d0a16d2017409713ea5f",
"text": "We argue that HCI has emerged as a design-oriented field of research, directed at large towards innovation, design, and construction of new kinds of information and interaction technology. But the understanding of such an attitude to research in terms of philosophical, theoretical, and methodological underpinnings seems however relatively poor within the field. This paper intends to specifically address what design 'is' and how it is related to HCI. First, three candidate accounts from design theory of what design 'is' are introduced; the conservative, the romantic, and the pragmatic. By examining the role of sketching in design, it is found that the designer becomes involved in a necessary dialogue, from which the design problem and its solution are worked out simultaneously as a closely coupled pair. In conclusion, it is proposed that we need to acknowledge, first, the role of design in HCI conduct, and second, the difference between the knowledge-generating Design-oriented Research and the artifact-generating conduct of Research-oriented Design.",
"title": ""
}
] |
[
{
"docid": "eb06c0af1ea9de72f27f995d54590443",
"text": "Random acceleration vibration specifications for subsystems, i.e. instruments, equipment, are most times based on measurement during acoustic noise tests on system level, i.e. a spacecraft and measured by accelerometers, placed in the neighborhood of the interface between spacecraft and subsystem. Tuned finite element models can be used to predict the random acceleration power spectral densities at other locations than available via the power spectral density measurements of the acceleration. The measured and predicted power spectral densities do represent the modal response characteristics of the system and show many peaks and valleys. The equivalent random acceleration vibration test specification is a smoothed, enveloped, peak-clipped version of the measured and predicted power spectral densities of the acceleration spectrum. The original acceleration vibration spectrum can be characterized by a different number response spectra: Shock Response Spectrum (SRS) , Extreme Response Spectrum (ERS), Vibration Response Spectrum (VRS), and Fatigue Damage Spectrum (FDS). An additional method of non-stationary random vibrations is based on the Rayleigh distribution of peaks. The response spectra represent the responses of series of SDOF systems excited at the base by random acceleration, both in time and frequency domain. The synthesis of equivalent random acceleration vibration specifications can be done in a very structured manner and are more suitable than equivalent random acceleration vibration specifications obtained by simple enveloping. In the synthesis process Miles’ equation plays a dominant role to invert the response spectra into equivalent random acceleration vibration spectra. A procedure is proposed to reduce the number of data point in the response spectra curve by dividing the curve in a numbers of fields. The synthesis to an equivalent random acceleration J.J. Wijker, M.H.M. Ellenbroek, and A. de Boer spectrum is performed on a reduced selected set of data points. The recalculated response spectra curve envelops the original response spectra curves. A real life measured random acceleration spectrum (PSD) with quite a number of peaks and valleys is taken to generate, applying response spectra SRS, ERS, VRS, FDS and the Rayleigh distribution of peaks, equivalent random acceleration vibration specifications. Computations are performed both in time and frequency domain. J.J. Wijker, M.H.M. Ellenbroek, and A. de Boer",
"title": ""
},
{
"docid": "5a8d4bfb89468d432b7482062a0cbf2d",
"text": "While “no one size fits all” is a sound philosophy for system designers to follow, it poses multiple challenges for application developers and system administrators. It can be hard for an application developer to pick one system when the needs of her application match the features of multiple “one size” systems. The choice becomes considerably harder when different components of an application fit the features of different “one size” systems. Considerable manual effort goes into creating and tuning such multi-system applications. An application’s data and workload properties may change over time, often in unpredictable and bursty ways. Consequently, the “one size” system that is best for an application can change over time. Adapting to change can be hard when application development is coupled tightly with any individual “one size” system. In this paper, we make the case for developing a new breed of Database Management Systems that we term DBMS. A DBMS contains multiple “one size” systems internally. An application specifies its execution requirements on aspects like performance, availability, consistency, change, and cost to the DBMS declaratively. For all requests (e.g., queries) made by the application, the DBMS will select the execution plan that meets the application’s requirements best. A unique aspect of the execution plan in a DBMS is that the plan includes the selection of one or more “one size” systems. The plan is then deployed and managed automatically on the selected system(s). If application requirements change beyond what was planned for originally by the DBMS, then the application can be reoptimized and redeployed; usually with no additional effort required from the application developer. The DBMS approach has the potential to address the challenges that application developers and system administrators face from the vast and growing number of “one size” systems today. However, this approach poses many research challenges that we discuss in this paper. We are taking the DBMS approach in a platform, called Cyclops, that we are building for continuous query execution. We will use Cyclops throughout the paper to give concrete illustrations of the benefits and challenges of the DBMS approach. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 6 Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.",
"title": ""
},
{
"docid": "d27735fc52e407e4b5e1b3fd7296ff8e",
"text": "The ACL Anthology Network (AAN)1 is a comprehensive manually curated networked database of citations and collaborations in the field of Computational Linguistics. Each citation edge in AAN is associated with one or more citing sentences. A citing sentence is one that appears in a scientific article and contains an explicit reference to another article. In this paper, we shed the light on the usefulness of AAN citing sentences for understanding research trends and summarizing previous discoveries and contributions. We also propose and motivate several different uses and applications of citing sentences.",
"title": ""
},
{
"docid": "fb655a622c2e299b8d7f8b85769575b4",
"text": "With the substantial development of digital technologies in multimedia, network communication and user interfaces, we are seeing an increasing number of applications of these technologies, in particular in the entertainment domain. They include computer gaming, elearning, high-definition and interactive TVs, and virtual environments. The development of these applications typically involves the integration of existing technologies as well as the development of new technologies. This Introduction summarizes latest interactive entertainment technologies and applications, and briefly highlights some potential research directions. It also introduces the seven papers that are accepted to the special issue. Hopefully, this will provide the readers some insights into future research topics in interactive entertainment technologies and applications.",
"title": ""
},
{
"docid": "126b52ab2e2585eabf3345ef7fb39c51",
"text": "We propose a method to build in real-time animated 3D head models using a consumer-grade RGB-D camera. Our framework is the first one to provide simultaneously comprehensive facial motion tracking and a detailed 3D model of the user's head. Anyone's head can be instantly reconstructed and his facial motion captured without requiring any training or pre-scanning. The user starts facing the camera with a neutral expression in the first frame, but is free to move, talk and change his face expression as he wills otherwise. The facial motion is tracked using a blendshape representation while the fine geometric details are captured using a Bump image mapped over the template mesh. We propose an efficient algorithm to grow and refine the 3D model of the head on-the-fly and in real-time. We demonstrate robust and high-fidelity simultaneous facial motion tracking and 3D head modeling results on a wide range of subjects with various head poses and facial expressions. Our proposed method offers interesting possibilities for animation production and 3D video telecommunications.",
"title": ""
},
{
"docid": "065c24bc712f7740b95e0d1a994bfe19",
"text": "David Haussler Computer and Information Sciences University of California Santa Cruz Santa Cruz , CA 95064 We study a particular type of Boltzmann machine with a bipartite graph structure called a harmonium. Our interest is in using such a machine to model a probability distribution on binary input vectors . We analyze the class of probability distributions that can be modeled by such machines. showing that for each n ~ 1 this class includes arbitrarily good appwximations to any distribution on the set of all n-vectors of binary inputs. We then present two learning algorithms for these machines .. The first learning algorithm is the standard gradient ascent heuristic for computing maximum likelihood estimates for the parameters (i.e. weights and thresholds) of the modeL Here we give a closed form for this gradient that is significantly easier to compute than the corresponding gradient for the general Boltzmann machine . The second learning algorithm is a greedy method that creates the hidden units and computes their weights one at a time. This method is a variant of the standard method for projection pursuit density estimation . We give experimental results for these learning methods on synthetic data and natural data from the domain of handwritten digits.",
"title": ""
},
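The harmonium described in the abstract above is what is now usually called a restricted Boltzmann machine. As a rough illustration of the first (gradient-ascent) learning algorithm, the NumPy sketch below uses the common CD-1 approximation to the likelihood gradient rather than the paper's own closed-form expression; the layer sizes, learning rate, and toy data are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.05):
    """One CD-1 gradient step for a binary RBM (harmonium).

    v0 : (batch, n_visible) binary inputs
    W  : (n_visible, n_hidden) weights
    b, c : visible and hidden biases
    """
    # Positive phase: hidden activations given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one step of Gibbs sampling.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Approximate likelihood gradient and parameter update.
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Toy usage with random binary "data" (placeholder for e.g. digit images).
n_visible, n_hidden = 16, 8
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b = np.zeros(n_visible)
c = np.zeros(n_hidden)
data = (rng.random((32, n_visible)) < 0.3).astype(float)
for _ in range(100):
    W, b, c = cd1_update(data, W, b, c)
```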
{
"docid": "3e80b90205de0033a3e22f7914f7fed9",
"text": "-------------------------------------------------------------------ABSTRACT---------------------------------------------------------------------Financial losses due to financial statement frauds (FSF) are increasing day by day in the world. The industry recognizes the problem and is just now starting to act. Although prevention is the best way to reduce frauds, fraudsters are adaptive and will usually find ways to circumvent such measures. Detecting fraud is essential once prevention mechanism has failed. Several data mining algorithms have been developed that allow one to extract relevant knowledge from a large amount of data like fraudulent financial statements to detect FSF. It is an attempt to detect FSF ; We present a generic framework to do our analysis.",
"title": ""
},
{
"docid": "7fed1248efb156c8b2585147e2791ed7",
"text": "In [1], we proposed a graph-based formulation that links and clusters person hypotheses over time by solving a minimum cost subgraph multicut problem. In this paper, we modify and extend [1] in three ways: 1) We introduce a novel local pairwise feature based on local appearance matching that is robust to partial occlusion and camera motion. 2) We perform extensive experiments to compare different pairwise potentials and to analyze the robustness of the tracking formulation. 3) We consider a plain multicut problem and remove outlying clusters from its solution. This allows us to employ an efficient primal feasible optimization algorithm that is not applicable to the subgraph multicut problem of [1]. Unlike the branch-and-cut algorithm used there, this efficient algorithm used here is applicable to long videos and many detections. Together with the novel feature, it eliminates the need for the intermediate tracklet representation of [1]. We demonstrate the effectiveness of our overall approach on the MOT16 benchmark [2], achieving state-of-art performance.",
"title": ""
},
{
"docid": "3cbb932e65cf2150cb32aaf930b45492",
"text": "In software industries, various open source projects utilize the services of Bug Tracking Systems that let users submit software issues or bugs and allow developers to respond to and fix them. The users label the reports as bugs or any other relevant class. This classification helps to decide which team or personnel would be responsible for dealing with an issue. A major problem here is that users tend to wrongly classify the issues, because of which a middleman called a bug triager is required to resolve any misclassifications. This ensures no time is wasted at the developer end. This approach is very time consuming and therefore it has been of great interest to automate the classification process, not only to speed things up, but to lower the amount of errors as well. In the literature, several approaches including machine learning techniques have been proposed to automate text classification. However, there has not been an extensive comparison on the performance of different natural language classifiers in this field. In this paper we compare general natural language data classifying techniques using five different machine learning algorithms: Naive Bayes, kNN, Pegasos, Rocchio and Perceptron. The performance comparison of these algorithms was done on the basis of their apparent error rates. The data-set involved four different projects, Httpclient, Jackrabbit, Lucene and Tomcat5, that used two different Bug Tracking Systems - Bugzilla and Jira. An experimental comparison of pre-processing techniques was also performed.",
"title": ""
},
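As a hedged sketch of the kind of comparison described in the abstract above, the scikit-learn snippet below runs several text classifiers over a handful of made-up issue reports and prints apparent error rates. The reports and labels are hypothetical (the study used Bugzilla and Jira projects), and Pegasos and Rocchio are approximated here by an SGD hinge-loss classifier and a nearest-centroid classifier respectively, which is an assumption rather than the paper's exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.linear_model import SGDClassifier, Perceptron
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical issue reports and labels (the study used Bugzilla/Jira projects).
reports = [
    "NullPointerException when saving settings",
    "Please add dark mode to the preferences dialog",
    "Crash on startup after upgrading to version 2.1",
    "Documentation typo in the installation guide",
    "Memory leak in the HTTP connection pool",
    "Feature request: export results as CSV",
]
labels = ["bug", "other", "bug", "other", "bug", "other"]

classifiers = {
    "NaiveBayes": MultinomialNB(),
    "kNN": KNeighborsClassifier(n_neighbors=3),
    "Pegasos-like (hinge SGD)": SGDClassifier(loss="hinge"),
    "Rocchio-like (nearest centroid)": NearestCentroid(),
    "Perceptron": Perceptron(),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(TfidfVectorizer(stop_words="english"), clf)
    acc = cross_val_score(pipe, reports, labels, cv=3).mean()
    # Apparent error rate is roughly 1 - accuracy on this toy sample.
    print(f"{name:32s} error rate ~ {1 - acc:.2f}")
```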
{
"docid": "f785636331f737d8dc14b6958277553f",
"text": "This paper focuses on subword-based Neural Machine Translation (NMT). We hypothesize that in the NMT model, the appropriate subword units for the following three modules (layers) can differ: (1) the encoder embedding layer, (2) the decoder embedding layer, and (3) the decoder output layer. We find the subword based on Sennrich et al. (2016) has a feature that a large vocabulary is a superset of a small vocabulary and modify the NMT model enables the incorporation of several different subword units in a single embedding layer. We refer these small subword features as hierarchical subword features. To empirically investigate our assumption, we compare the performance of several different subword units and hierarchical subword features for both the encoder and decoder embedding layers. We confirmed that incorporating hierarchical subword features in the encoder consistently improves BLEU scores on the IWSLT evaluation datasets. Title and Abstract in Japanese 階層的部分単語特徴を用いたニューラル機械翻訳 本稿では、部分単語 (subword) を用いたニューラル機械翻訳 (Neural Machine Translation, NMT)に着目する。NMTモデルでは、エンコーダの埋め込み層、デコーダの埋め込み層お よびデコーダの出力層の 3箇所で部分単語が用いられるが、それぞれの層で適切な部分単 語単位は異なるという仮説を立てた。我々は、Sennrich et al. (2016)に基づく部分単語は、 大きな語彙集合が小さい語彙集合を必ず包含するという特徴を利用して、複数の異なる部 分単語列を同時に一つの埋め込み層として扱えるよう NMTモデルを改良する。以降、こ の小さな語彙集合特徴を階層的部分単語特徴と呼ぶ。本仮説を検証するために、様々な部 分単語単位や階層的部分単語特徴をエンコーダ・デコーダの埋め込み層に適用して、その 精度の変化を確認する。IWSLT評価セットを用いた実験により、エンコーダ側で階層的な 部分単語を用いたモデルは BLEUスコアが一貫して向上することが確認できた。",
"title": ""
},
{
"docid": "ab0541d9ec1ea0cf7ad85d685267c142",
"text": "Umbilical catheters have been used in NICUs for drawing blood samples, measuring blood pressure, and administering fluid and medications for more than 25 years. Complications associated with umbilical catheters include thrombosis; embolism; vasospasm; vessel perforation; hemorrhage; infection; gastrointestinal, renal, and limb tissue damage; hepatic necrosis; hydrothorax; cardiac arrhythmias; pericardial effusion and tamponade; and erosion of the atrium and ventricle. A review of the literature provides conflicting accounts of the superiority of high versus low placement of umbilical arterial catheters. This article reviews the current literature regarding use of umbilical catheters in neonates. It also highlights the policy developed for the authors' NICU, a 34-bed tertiary care unit of a children's hospital, and analyzes complications associated with umbilical catheter use for 1 year in that unit.",
"title": ""
},
{
"docid": "c35a4278aa4a084d119238fdd68d9eb6",
"text": "ARM TrustZone, which provides a Trusted Execution Environment (TEE), normally plays a role in keeping security-sensitive resources safe. However, to properly control access to the resources, it is not enough to just isolate them from the Rich Execution Environment (REE). In addition to the isolation, secure communication should be guaranteed between security-critical resources in the TEE and legitimate REE processes that are permitted to use them. Even though there is a TEE security solution — namely, a kernel-integrity monitor — it aims to protect the REE kernel’s static regions, not to secure communication between the REE and TEE. We propose SeCReT to ameliorate this problem. SeCReT is a framework that builds a secure channel between the REE and TEE by enabling REE processes to use session keys in the REE that is regarded as unsafe region. SeCReT provides the session key to a requestor process only when the requestor’s code and control flow integrity are verified. To prevent the key from being exposed to an attacker who already compromised the REE kernel, SeCReT flushes the key from the memory every time the processor switches into kernel mode. In this paper, we present the design and implementation of SeCReT to show how it protects the key in the REE. Our prototype is implemented on Arndale board, which offers a Cortex-A15 dual-core processor with TrustZone as its security extension. We performed a security analysis by using a kernel rootkit and also ran LMBench microbenchmark to evaluate the performance overhead imposed by SeCReT.",
"title": ""
},
{
"docid": "9533193407869250854157e89d2815eb",
"text": "Life events are often described as major forces that are going to shape tomorrow's consumer need, behavior and mood. Thus, the prediction of life events is highly relevant in marketing and sociology. In this paper, we propose a data-driven, real-time method to predict individual life events, using readily available data from smartphones. Our large-scale user study with more than 2000 users shows that our method is able to predict life events with 64.5% higher accuracy, 183.1% better precision and 88.0% higher specificity than a random model on average.",
"title": ""
},
{
"docid": "ba696260b6b5ae71f4558e4c1addeebd",
"text": "Over the last 100 years, many studies have been performed to determine the biochemical and histopathological phenomena that mark the origin of neoplasms. At the end of the last century, the leading paradigm, which is currently well rooted, considered the origin of neoplasms to be a set of genetic and/or epigenetic mutations, stochastic and independent in a single cell, or rather, a stochastic monoclonal pattern. However, in the last 20 years, two important areas of research have underlined numerous limitations and incongruities of this pattern, the hypothesis of the so-called cancer stem cell theory and a revaluation of several alterations in metabolic networks that are typical of the neoplastic cell, the so-called Warburg effect. Even if this specific \"metabolic sign\" has been known for more than 85 years, only in the last few years has it been given more attention; therefore, the so-called Warburg hypothesis has been used in multiple and independent surveys. Based on an accurate analysis of a series of considerations and of biophysical thermodynamic events in the literature, we will demonstrate a homogeneous pattern of the cancer stem cell theory, of the Warburg hypothesis and of the stochastic monoclonal pattern; this pattern could contribute considerably as the first basis of the development of a new uniform theory on the origin of neoplasms. Thus, a new possible epistemological paradigm is represented; this paradigm considers the Warburg effect as a specific \"metabolic sign\" reflecting the stem origin of the neoplastic cell, where, in this specific metabolic order, an essential reason for the genetic instability that is intrinsic to the neoplastic cell is defined.",
"title": ""
},
{
"docid": "fd3faa049df1d2a0b2fe9af6cf0f3e06",
"text": "Wireless Mesh Networks improve their capacities by equipping mesh nodes with multi-radios tuned to non-overlapping channels. Hence the data forwarding between two nodes has multiple selections of links and the bandwidth between the pair of nodes varies dynamically. Under this condition, a mesh node adopts machine learning mechanisms to choose the possible best next hop which has maximum bandwidth when it intends to forward data. In this paper, we present a machine learning based forwarding algorithm to let a forwarding node dynamically select the next hop with highest potential bandwidth capacity to resume communication based on learning algorithm. Key to this strategy is that a node only maintains three past status, and then it is able to learn and predict the potential bandwidth capacities of its links. Then, the node selects the next hop with potential maximal link bandwidth. Moreover, a geometrical based algorithm is developed to let the source node figure out the forwarding region in order to avoid flooding. Simulations demonstrate that our approach significantly speeds up the transmission and outperforms other peer algorithms.",
"title": ""
},
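The abstract above says each node keeps only three past status samples per link and predicts the link's potential bandwidth before choosing a next hop. The sketch below is one plausible, lightweight realization of that idea (a simple trend extrapolation over the last three samples); it is not the paper's actual learning rule, and the node names and bandwidth values are made up.

```python
from collections import deque

class LinkPredictor:
    """Keep the last three bandwidth samples per link and extrapolate the next one."""

    def __init__(self):
        self.history = {}   # neighbour id -> deque of up to 3 samples (Mbps)

    def observe(self, neighbour, bandwidth_mbps):
        self.history.setdefault(neighbour, deque(maxlen=3)).append(bandwidth_mbps)

    def predict(self, neighbour):
        h = list(self.history.get(neighbour, []))
        if not h:
            return 0.0
        if len(h) < 3:
            return h[-1]
        trend = ((h[2] - h[1]) + (h[1] - h[0])) / 2.0   # average recent change
        return max(0.0, h[2] + trend)

    def best_next_hop(self, candidates):
        return max(candidates, key=self.predict)

pred = LinkPredictor()
for bw in (20.0, 24.0, 30.0):
    pred.observe("nodeA", bw)
for bw in (40.0, 35.0, 28.0):
    pred.observe("nodeB", bw)
print(pred.best_next_hop(["nodeA", "nodeB"]))   # "nodeA": a rising trend beats a falling one
```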
{
"docid": "0a1aeee5bf33abd61665c72d1c0b911b",
"text": "Kampo herbal remedies are reported to have a wide range of indications and have attracted attention due to reports suggesting that these remedies are effective when used in disease treatment while maintaining a favourable quality of life. Yokukansan, also known as TJ-54, is composed of seven herbs; Angelica acutiloba, Atractylodes lancea, Bupleurum falcatum, Poria cocos, Glycyrrhiza uralensis, Cnidium officinale and Uncaria rhynchophylla. Yokukansan is used to treat insomnia and irritability as well as screaming attacks, sleep tremors and hypnic myoclonia, and neurological disorders which include dementia and Alzheimer's disease - the focus of this article. It is concluded that Yokukansan is a versatile herbal remedy with a variety of effects on various neurological states, without reported adverse effects. Traditional herbal medicines consist of a combination of constituents which account for the clinical effect seen. Likewise, the benefits of Yokukansan are probably attributable to the preparation as a whole, rather than to individual compounds.",
"title": ""
},
{
"docid": "ecd7da1f742b4c92f3c748fd19098159",
"text": "Abstract. Today, a paradigm shift is being observed in science, where the focus is gradually shifting toward the cloud environments to obtain appropriate, robust and affordable services to deal with Big Data challenges (Sharma et al. 2014, 2015a, 2015b). Cloud computing avoids any need to locally maintain the overly scaled computing infrastructure that include not only dedicated space, but the expensive hardware and software also. In this paper, we study the evolution of as-a-Service modalities, stimulated by cloud computing, and explore the most complete inventory of new members beyond traditional cloud computing stack.",
"title": ""
},
{
"docid": "2e42e1f9478fb2548e39a92c5bacbaab",
"text": "In this paper, we consider a fully automatic makeup recommendation system and propose a novel examples-rules guided deep neural network approach. The framework consists of three stages. First, makeup-related facial traits are classified into structured coding. Second, these facial traits are fed into examples-rules guided deep neural recommendation model which makes use of the pairwise of Before-After images and the makeup artist knowledge jointly. Finally, to visualize the recommended makeup style, an automatic makeup synthesis system is developed as well. To this end, a new Before-After facial makeup database is collected and labeled manually, and the knowledge of makeup artist is modeled by knowledge base system. The performance of this framework is evaluated through extensive experimental analyses. The experiments validate the automatic facial traits classification, the recommendation effectiveness in statistical and perceptual ways and the makeup synthesis accuracy which outperforms the state of the art methods by large margin. It is also worthy to note that the proposed framework is a pioneering fully automatic makeup recommendation systems to our best knowledge.",
"title": ""
},
{
"docid": "33390e96d05644da201db3edb3ad7338",
"text": "This paper addresses the difficult problem of finding an optimal neural architecture design for a given image classification task. We propose a method that aggregates two main results of the previous state-of-the-art in neural architecture search. These are, appealing to the strong sampling efficiency of a search scheme based on sequential modelbased optimization (SMBO) [15], and increasing training efficiency by sharing weights among sampled architectures [18]. Sequential search has previously demonstrated its capabilities to find state-of-the-art neural architectures for image classification. However, its computational cost remains high, even unreachable under modest computational settings. Affording SMBO with weight-sharing alleviates this problem. On the other hand, progressive search with SMBO is inherently greedy, as it leverages a learned surrogate function to predict the validation error of neural architectures. This prediction is directly used to rank the sampled neural architectures. We propose to attenuate the greediness of the original SMBO method by relaxing the role of the surrogate function so it predicts architecture sampling probability instead. We demonstrate with experiments on the CIFAR-10 dataset that our method, denominated Efficient progressive neural architecture search (EPNAS), leads to increased search efficiency, while retaining competitiveness of found architectures.",
"title": ""
},
{
"docid": "ff5f7772a0a578cfe1dd08816af8e2e7",
"text": "Moisture-associated skin damage (MASD) occurs when there is prolonged exposure of the skin to excessive amounts of moisture from incontinence, wound exudate or perspiration. Incontinenceassociated dermatitis (IAD) relates specifically to skin breakdown from faecal and/or urinary incontinence (Beeckman et al, 2009), and has been defined as erythema and oedema of the skin surface, which may be accompanied by bullae with serous exudate, erosion or secondary cutaneous infection (Gray et al, 2012). IAD may also be referred to as a moisture lesion, moisture ulcer, perineal dermatitis or diaper dermatitis (Ousey, 2012). The effects of ageing on the skin are known to affect skin integrity, as is the underdeveloped nature of very young skin; as such, elderly patients and neonates are particularly vulnerable to damage from moisture (Voegeli, 2007). The increase in moisture resulting from episodes of incontinence is exacerbated due to bacterial and enzymatic activity associated with urine and faeces, particularly when both are present, which leads to an increase in skin pH alongside over-hydration of the skin surface. This damages the natural protection of the acid mantle, the skin’s naturally acidic pH, which is an important defence mechanism against external irritants and microorganisms. This damage leads to the breakdown of vulnerable skin and increased susceptibility to secondary infection (Beeckman et al, 2009). It has become well recognised that presence of IAD greatly increases the likelihood of pressure ulcer development, since over-hydrated skin is much more susceptible to damage by extrinsic factors such as pressure, friction and shear as compared with normal skin (Clarke et al, 2010). While it is important to firstly understand that pressure and moisture damage are separate aetiologies and, secondly, be able to recognise the clinical differences in presentation, one of the factors to consider for prevention of pressure ulcers is minimising exposure to moisture/ incontinence. Another important consideration with IAD is the effect on the patient. IAD can be painful and debilitating, and has been associated with reduced quality of life. It can also be time-consuming and expensive to treat, which has an impact on clinical resources and financial implications (Doughty et al, 2012). IAD is known to impact on direct Incontinence-associated dermatitis (IAD) relates to skin breakdown from exposure to urine or faeces, and its management involves implementation of structured skin care regimens that incorporate use of appropriate skin barrier products to protect the skin from exposure to moisture and irritants. Medi Derma-Pro Foam & Spray Cleanser and Medi Derma-Pro Skin Protectant Ointment are recent additions to the Total Barrier ProtectionTM (Medicareplus International) range indicated for management of moderateto-severe IAD and other moisture-associated skin damage. This article discusses a series of case studies and product evaluations performed to determine clinical outcomes and clinician feedback based on use of the Medi Derma-Pro skin barrier products to manage IAD. Results showed improvements to patients’ skin condition following use of Medi Derma-Pro, and the cleanser and skin protectant ointment were considered better than or the same as the most equivalent products on the market.",
"title": ""
}
] |
scidocsrr
|
d3d7552599aac27700019f8d8b3af542
|
The Sharing Economy: Friend or Foe?
|
[
{
"docid": "9b176a25a16b05200341ac54778a8bfc",
"text": "This paper reports on a study of motivations for the use of peer-to-peer or sharing economy services. We interviewed both users and providers of these systems to obtain different perspectives and to determine if providers are matching their system designs to the most important drivers of use. We found that the motivational models implicit in providers' explanations of their systems' designs do not match well with what really seems to motivate users. Providers place great emphasis on idealistic motivations such as creating a better community and increasing sustainability. Users, on the other hand are looking for services that provide what they need whilst increasing value and convenience. We discuss the divergent models of providers and users and offer design implications for peer system providers.",
"title": ""
}
] |
[
{
"docid": "82acc0bf0fc3860255c77af5e45a31a0",
"text": "We propose a mobile food recognition system the poses of which are estimating calorie and nutritious of foods and recording a user's eating habits. Since all the processes on image recognition performed on a smart-phone, the system does not need to send images to a server and runs on an ordinary smartphone in a real-time way. To recognize food items, a user draws bounding boxes by touching the screen first, and then the system starts food item recognition within the indicated bounding boxes. To recognize them more accurately, we segment each food item region by GrubCut, extract a color histogram and SURF-based bag-of-features, and finally classify it into one of the fifty food categories with linear SVM and fast 2 kernel. In addition, the system estimates the direction of food regions where the higher SVM output score is expected to be obtained, show it as an arrow on the screen in order to ask a user to move a smartphone camera. This recognition process is performed repeatedly about once a second. We implemented this system as an Android smartphone application so as to use multiple CPU cores effectively for real-time recognition. In the experiments, we have achieved the 81.55% classification rate for the top 5 category candidates when the ground-truth bounding boxes are given. In addition, we obtained positive evaluation by user study compared to the food recording system without object recognition.",
"title": ""
},
{
"docid": "d131f4f22826a2083d35dfa96bf2206b",
"text": "The ranking of n objects based on pairwise comparisons is a core machine learning problem, arising in recommender systems, ad placement, player ranking, biological applications and others. In many practical situations the true pairwise comparisons cannot be actively measured, but a subset of all n(n−1)/2 comparisons is passively and noisily observed. Optimization algorithms (e.g., the SVM) could be used to predict a ranking with fixed expected Kendall tau distance, while achieving an Ω(n) lower bound on the corresponding sample complexity. However, due to their centralized structure they are difficult to extend to online or distributed settings. In this paper we show that much simpler algorithms can match the same Ω(n) lower bound in expectation. Furthermore, if an average of O(n log(n)) binary comparisons are measured, then one algorithm recovers the true ranking in a uniform sense, while the other predicts the ranking more accurately near the top than the bottom. We discuss extensions to online and distributed ranking, with benefits over traditional alternatives.",
"title": ""
},
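The abstract above argues that much simpler algorithms than an SVM can match the Ω(n) sample-complexity bound. As a hedged illustration of one such simple counting approach (not necessarily the algorithms analyzed in the paper), the sketch below ranks objects by their empirical win rate over noisy pairwise comparisons; the noise level and number of observations are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def borda_rank(n, comparisons):
    """Rank n objects by their empirical win rate over observed pairwise comparisons.

    comparisons: iterable of (winner, loser) index pairs (possibly noisy).
    Returns indices ordered from best to worst.
    """
    wins = np.zeros(n)
    counts = np.zeros(n)
    for w, l in comparisons:
        wins[w] += 1
        counts[w] += 1
        counts[l] += 1
    rates = np.divide(wins, counts, out=np.zeros(n), where=counts > 0)
    return np.argsort(-rates)

# Toy example: true order is 0 > 1 > ... > 9, and outcomes are flipped with probability 0.1.
n = 10
comparisons = []
for _ in range(int(n * np.log(n) * 20)):         # O(n log n) noisy observations
    i, j = rng.choice(n, size=2, replace=False)
    better, worse = (i, j) if i < j else (j, i)   # lower index = better item
    if rng.random() < 0.1:                        # 10% of outcomes are flipped
        better, worse = worse, better
    comparisons.append((better, worse))

print(borda_rank(n, comparisons))   # close to [0, 1, ..., 9], especially near the top
```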
{
"docid": "38db17ce89e1a046d7d37213b59c8163",
"text": "Cardinality estimation has a wide range of applications and is of particular importance in database systems. Various algorithms have been proposed in the past, and the HyperLogLog algorithm is one of them. In this paper, we present a series of improvements to this algorithm that reduce its memory requirements and significantly increase its accuracy for an important range of cardinalities. We have implemented our proposed algorithm for a system at Google and evaluated it empirically, comparing it to the original HyperLogLog algorithm. Like HyperLogLog, our improved algorithm parallelizes perfectly and computes the cardinality estimate in a single pass.",
"title": ""
},
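For readers unfamiliar with the algorithm being improved, the sketch below implements a minimal version of the original HyperLogLog estimator: registers indexed by the first bits of a hash, leading-zero counts, a harmonic-mean combination, and the small-range linear-counting correction. It deliberately omits the bias corrections and sparse representation that the improved algorithm adds; the precision p and the hash choice are illustrative.

```python
import hashlib
import math

class HyperLogLog:
    """Minimal HyperLogLog (original flavour, without the improved bias correction)."""

    def __init__(self, p=12):
        self.p = p                      # 2**p registers
        self.m = 1 << p
        self.registers = [0] * self.m
        # Standard alpha_m constant for m >= 128.
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, item):
        x = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")
        idx = x >> (64 - self.p)                    # first p bits pick the register
        rest = x & ((1 << (64 - self.p)) - 1)       # remaining bits
        # Position of the leftmost 1-bit in the remaining (64 - p) bits.
        rho = (64 - self.p) - rest.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rho)

    def estimate(self):
        z = 1.0 / sum(2.0 ** -r for r in self.registers)
        e = self.alpha * self.m * self.m * z
        # Small-range correction via linear counting, as in the original paper.
        zeros = self.registers.count(0)
        if e <= 2.5 * self.m and zeros:
            e = self.m * math.log(self.m / zeros)
        return e

hll = HyperLogLog()
for i in range(100_000):
    hll.add(f"user-{i}")
print(round(hll.estimate()))   # close to 100000, typically within a few percent
```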
{
"docid": "3b34e09d2b7109c9cbc8249aec3f23c2",
"text": "The purpose of this paper is to explore the concept of brand equity and discuss its different perspectives, we try to review existing literature of brand equity and evaluate various Customer-based brand equity models to provide a collection from well-known databases for further research in this area.",
"title": ""
},
{
"docid": "95fb51b0b6d8a3a88edfc96157233b10",
"text": "Various types of video can be captured with fisheye lenses; their wide field of view is particularly suited to surveillance video. However, fisheye lenses introduce distortion, and this changes as objects in the scene move, making fisheye video difficult to interpret. Current still fisheye image correction methods are either limited to small angles of view, or are strongly content dependent, and therefore unsuitable for processing video streams. We present an efficient and robust scheme for fisheye video correction, which minimizes time-varying distortion and preserves salient content in a coherent manner. Our optimization process is controlled by user annotation, and takes into account a wide set of measures addressing different aspects of natural scene appearance. Each is represented as a quadratic term in an energy minimization problem, leading to a closed-form solution via a sparse linear system. We illustrate our method with a range of examples, demonstrating coherent natural-looking video output. The visual quality of individual frames is comparable to those produced by state-of-the-art methods for fisheye still photograph correction.",
"title": ""
},
{
"docid": "0bfc1507c0cf080a1881e8c34866c227",
"text": "We compare Android and iOS users according to their demographic differences, security and privacy awareness, and reported behavior when installing apps. We present an exploratory study based on an online survey with more than 700 German students and describe directions for further research.",
"title": ""
},
{
"docid": "4bdccdda47aea04c5877587daa0e8118",
"text": "Recognizing text character from natural scene images is a challenging problem due to background interferences and multiple character patterns. Scene Text Character (STC) recognition, which generally includes feature representation to model character structure and multi-class classification to predict label and score of character class, mostly plays a significant role in word-level text recognition. The contribution of this paper is a complete performance evaluation of image-based STC recognition, by comparing different sampling methods, feature descriptors, dictionary sizes, coding and pooling schemes, and SVM kernels. We systematically analyze the impact of each option in the feature representation and classification. The evaluation results on two datasets CHARS74K and ICDAR2003 demonstrate that Histogram of Oriented Gradient (HOG) descriptor, soft-assignment coding, max pooling, and Chi-Square Support Vector Machines (SVM) obtain the best performance among local sampling based feature representations. To improve STC recognition, we apply global sampling feature representation. We generate Global HOG (GHOG) by computing HOG descriptor from global sampling. GHOG enables better character structure modeling and obtains better performance than local sampling based feature representations. The GHOG also outperforms existing methods in the two benchmark datasets.",
"title": ""
},
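As a hedged sketch of the best-performing combination reported above (a global HOG descriptor with a chi-square-kernel SVM), the snippet below uses scikit-image and scikit-learn with random placeholder character patches; the HOG parameters, patch size, and labels are assumptions, not the settings used on CHARS74K or ICDAR2003.

```python
import numpy as np
from skimage.feature import hog
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.svm import SVC

def ghog(image_32x32):
    """Global HOG: one descriptor computed over the whole character patch."""
    return hog(image_32x32, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Placeholder data: random "character" patches and labels standing in for real datasets.
rng = np.random.default_rng(0)
X_img = rng.random((40, 32, 32))
y = rng.integers(0, 4, size=40)

X = np.array([ghog(im) for im in X_img])

# Chi-square kernel SVM via a precomputed Gram matrix (HOG features are non-negative).
K_train = chi2_kernel(X, X, gamma=1.0)
clf = SVC(kernel="precomputed", C=10.0).fit(K_train, y)

# Classify new patches by computing their kernel values against the training set.
X_new = np.array([ghog(im) for im in rng.random((5, 32, 32))])
K_new = chi2_kernel(X_new, X, gamma=1.0)
print(clf.predict(K_new))
```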
{
"docid": "64ba4467dc4495c6828f2322e8f415f2",
"text": "Due to the advancement of microoptoelectromechanical systems and microelectromechanical systems (MEMS) technologies, novel display architectures have emerged. One of the most successful and well-known examples is the Digital Micromirror Device from Texas Instruments, a 2-D array of bistable MEMS mirrors, which function as spatial light modulators for the projection display. This concept of employing an array of modulators is also seen in the grating light valve and the interferometric modulator display, where the modulation mechanism is based on optical diffraction and interference, respectively. Along with this trend comes the laser scanning display, which requires a single scanning device with a large scan angle and a high scan frequency. A special example in this category is the retinal scanning display, which is a head-up wearable module that laser-scans the image directly onto the retina. MEMS technologies are also found in other display-related research, such as stereoscopic (3-D) displays and plastic thin-film displays.",
"title": ""
},
{
"docid": "13ee1c00203fd12486ee84aa4681dc60",
"text": "Mobile crowdsensing has emerged as an efficient sensing paradigm which combines the crowd intelligence and the sensing power of mobile devices, e.g., mobile phones and Internet of Things (IoT) gadgets. This article addresses the contradicting incentives of privacy preservation by crowdsensing users and accuracy maximization and collection of true data by service providers. We firstly define the individual contributions of crowdsensing users based on the accuracy in data analytics achieved by the service provider from buying their data. We then propose a truthful mechanism for achieving high service accuracy while protecting the privacy based on the user preferences. The users are incentivized to provide true data by being paid based on their individual contribution to the overall service accuracy. Moreover, we propose a coalition strategy which allows users to cooperate in providing their data under one identity, increasing their anonymity privacy protection, and sharing the resulting payoff. Finally, we outline important open research directions in mobile and people-centric crowdsensing.",
"title": ""
},
{
"docid": "6eaf0a456206871400280aaa43935712",
"text": "A monitoring study was carried out with the aim to assess the level of toxic metals i.e., lead (Pb), cadmium (Cd), arsenic (As) and mercury (Hg) in different vegetables grown in Sindh province of Pakistan during 2007-2008. Two hundred ten samples of twenty one vegetables were collected from farmers’ field of Sindh and exporters at Karachi. These samples were grouped into four categories viz., leafy, root and tuberous, cucurbits and fruity. The samples in duplicate were digested with nitric and perchloric acid mixture with 3:1 ratio. Cadmium and Pb were analyzed with Graphite Furnace Atomic Absorption Spectrophotometer and As and Hg on Atomic Absorption using Vapor and Hydride Generation Assembly. Average concentration of Cd, Pb, As and Hg in leafy vegetables was found 0.083 μgg -1 , 0.05 μgg -1 , 0.042 μgg -1 and 0.008 μgg -1 respectively, in roots and tuberous vegetables was 0.057 μgg -1 , 0.03 μgg -1 , 0.045 μgg -1 & 0.004 μgg -1 respectively, in cucurbit vegetables was 0.021 μgg -1 , 0.051 μgg -1 , 0.056 μgg -1 and 0.0089 μgg -1 respectively and in fruity vegetables was 0.035 μgg -1 , 0.067 μgg -1 , 0.054 μgg -1 and 0.007 μgg -1 respectively. In leafy vegetables, the concentration of cadmium, lead and mercury were found comparatively higher than other three groups of vegetables. However, concentration of heavy metals found in the samples of all four categories of vegetables, was within the permissible limits and safe to consume.",
"title": ""
},
{
"docid": "ff1477e7937e35711d6beae28c1d1d31",
"text": "The term ‘ontology’ has recently acquired a certain currency within the knowledge engineering community, especially in relation to the ARPA knowledge-sharing initiative (see Gruber (to appear), Mars (ed.) 1994, Guarino 1994, Guarino, Carrara and Giaretta 1994, 1994a). The term is used in a number of different senses, however, not all of them clear or mutually compatible. Here I follow philosophical tradition in conceiving ontology as the science which deals with the nature and the organisation of reality. Ontology thus conceived may be formal, in the sense that it is directed towards formal structures and relations in reality. This formal ontology is contrasted with the various material ontologies (of physics, chemistry, medicine, and so on) which study the nature and organisation of specific sub-regions of reality. Formal structures, for example the structures governing the relation of part to whole, are shared in common by all material domains. Both formal and material ontologies may be pursued with the aid of the machinery of axiomatic theories, and it is axiomatic formal ontology that has proved to be of most interest for the ontology-building purposes of the knowledge engineer.",
"title": ""
},
{
"docid": "c6f8baff2f549aca2f4367bdb6535e7f",
"text": "Process mining techniques attempt to extract non-trivial knowledge and interesting insights from event logs. Process mining provides a welcome extension of the repertoire of business process analysis techniques and has been adopted in various commercial BPM systems (BPM|one, Futura Reflect, ARIS PPM, Fujitsu, etc.). Unfortunately, traditional process discovery algorithms have problems dealing with lessstructured processes. The resulting models are difficult to comprehend or even misleading. Therefore, we propose a new approach based on trace alignment. The goal is to align traces in a way that event logs can be explored easily. Trace alignment can be used in a preprocessing phase where the event log is investigated or filtered and in later phases where detailed questions need to be answered. Hence, it complements existing process mining techniques focusing on discovery and conformance checking.",
"title": ""
},
{
"docid": "4a487825a05b10d94b1837cbe1d7c171",
"text": "INTRODUCTION Time of Flight (TOF) range cameras, besides being used in industrial metrology applications, have also a potential interest in consumer application such as ambient assisted living and gaming. In these fields, the information offered by the sensor can be used to efficiently track the position of objects and people in the camera field of view, thus overcoming many of the problems, which are present when analyzing conventional intensity images. The need of lowering the overall system cost and power consumption, while increasing the sensor resolution, has triggered the exploration of more advanced CMOS technologies to make sensors suitable for these applications. However, migration to new technologies is not straightforward, since the most mature commercial 3D sensors employ dedicated CCD-CMOS technologies, which cannot be translated to new processes without any process modification. In this contribution a comparative overview of three different pixel architectures aimed at TOF 3D imaging, and implemented in the same 0.18-μm CMOS technology, is given and the main advantages and drawbacks of each solution are analyzed.",
"title": ""
},
{
"docid": "e20dbb2dfb6820d27fc1639b8ea1393d",
"text": "A novel high step-up dc-dc converter with coupled-inductor and switched-capacitor techniques is proposed in this paper. The capacitors are charged in parallel and are discharged in series by the coupled inductor, stacking on the output capacitor. Thus, the proposed converter can achieve high step-up voltage gain with appropriate duty ratio. Besides, the voltage spike on the main switch can be clamped. Therefore, low on-state resistance RDS(ON) of the main switch can be adopted to reduce the conduction loss. The efficiency can be improved. The operating principle and steady-state analyses are discussed in detail. Finally, a prototype circuit with 24-V input voltage, 400-V output voltage, and 200-W output power is implemented in the laboratory. Experiment results confirm the analysis and advantages of the proposed converter.",
"title": ""
},
{
"docid": "7cf7b6d0ad251b98956a29ad9192cb63",
"text": "A method for two dimensional position finding of stationary targets whose bearing measurements suffers from indeterminable bias and random noise has been proposed. The algorithm uses convex optimization to minimize an error function which has been calculated based on circular as well as linear loci of error. Taking into account a number of observations, certain modifications have been applied to the initial crude method so as to arrive at a faster, more accurate method. Simulation results of the method illustrate up to 30% increase in accuracy compared with the well-known least square filter.",
"title": ""
},
{
"docid": "81b0bb8a139de9714a811f74c20f5260",
"text": "Scheduling workflow applications in grid environments is a great challenge, because it is an NPcomplete problem. Many heuristic methods have been presented in the literature and most of them deal with a single workflow application at a time. In recent years, several heuristic methods have been proposed to deal with concurrent workflows or online workflows, but they do not work with workflows composed of data-parallel tasks. In this paper, we present an online scheduling approach for multiple mixed-parallel workflows in grid environments. The proposed approach was evaluated with a series of simulation experiments and the results show that the proposed approach delivers good performance and outperforms other methods under various workloads. © 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d7958df069d911c1431c0b7461fb0268",
"text": "Most existing works in visual question answering (VQA) are dedicated to improving the accuracy of predicted answers, while disregarding the explanations. We argue that the explanation for an answer is of the same or even more importance compared with the answer itself, since it makes the question answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), where the models are required to generate an explanation with the predicted answer. We first construct a new dataset, and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We also conduct a user study to validate the quality of the synthesized explanations . We quantitatively show that the additional supervision from explanations can not only produce insightful textual sentences to justify the answers, but also improve the performance of answer prediction. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset.",
"title": ""
},
{
"docid": "78e4395a6bd6b4424813e20633d140b8",
"text": "This paper introduces a high-speed CMOS comparator. The comparator consists of a differential input stage, two regenerative flip-flops, and an S-R latch. No offset cancellation is exploited, which reduces the power consumption as well as the die area and increases the comparison speed. An experimental version of the comparator has been integrated in a standard double-poly double-metal 1.5-pm n-well process with a die area of only 140 x 100 pmz. This circuit, operating under a +2.5/– 2.5-V power supply, performs comparison to a precision of 8 b with a symmetrical input dynamic range of 2.5 V (therefore ~0.5 LSB resolution is equal to ~ 4.9 mV). input stage flip-flops S-R Iat",
"title": ""
},
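The quoted resolution follows directly from the stated range and bit depth; as a quick check of the arithmetic:

```latex
\mathrm{LSB} = \frac{2.5\,\mathrm{V}}{2^{8}} \approx 9.77\,\mathrm{mV},
\qquad
\tfrac{1}{2}\,\mathrm{LSB} \approx 4.9\,\mathrm{mV}
```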
{
"docid": "137610f1a839c9915a77fd8f12c309bf",
"text": "In this paper we give an overview of tasks, models and mixed-signal simulation tools to support design of digitally controlled switching power supplies where the digital controller is implemented in a dedicated FPGA or ASIC. Mixed-signal simulation models of a digitally controlled switching converter based on Matlab/Simulink and HDL/Spice simulation tools are presented. The models are used in the design of a high-frequency digital controller integrated circuit for dc-dc switching converters. Simulation and experimental results are compared.",
"title": ""
},
{
"docid": "546afa65724bedfc854ca1bdba0f8e98",
"text": "We report 12 consecutive cases of vertical scapular osteotomy to correct Sprengel's deformity, performed during a 16-year period, with a mean follow-up of 10.4 years. The mean increase in abduction of the shoulder was 53 degrees . The cosmetic appearance improved by a mean of 1.5 levels on the Cavendish scale. Neither function nor cosmesis deteriorated with time. We recommend the procedure for correction of moderate deformities with a functional deficit.",
"title": ""
}
] |
scidocsrr
|
cde4d43408900c12314f11e802fa9f06
|
Architecture, design and source code comparison of ns-2 and ns-3 network simulators
|
[
{
"docid": "f8d44bd997e8af8d0ad23450790c1fec",
"text": "We report on the design objectives and initial design of a new discrete-event network simulator for the research community. Creating Yet Another Network Simulator (yans, http://yans.inria.fr/yans) is not the sort of prospect network researchers are happy to contemplate, but this effort may be timely given that ns-2 is considering a major revision and is evaluating new simulator cores. We describe why we did not choose to build on existing tools such as ns-2, GTNetS, and OPNET, outline our functional requirements, provide a high-level view of the architecture and core components, and describe a new IEEE 802.11 model provided with yans.",
"title": ""
}
] |
[
{
"docid": "8581de718d41373ee4250a300e675fb4",
"text": "It seems almost impossible to overstate the power of words; they literally have changed and will continue to change the course of world history. Perhaps the greatest tools we can give students for succeeding, not only in their education but more generally in life, is a large, rich vocabulary and the skills for using those words. Our ability to function in today’s complex social and economic worlds is mightily affected by our language skills and word knowledge. In addition to the vital importance of vocabulary for success in life, a large vocabulary is more specifically predictive and reflective of high levels of reading achievement. The Report of the National Reading Panel (2000), for example, concluded, “The importance of vocabulary knowledge has long been recognized in the development of reading skills. As early as 1924, researchers noted that growth in reading power relies on continuous growth in word knowledge” (pp. 4–15). Vocabulary or Vocabularies?",
"title": ""
},
{
"docid": "bd2d864aa8c4871e883a2e1f199160de",
"text": "This paper proposes a framework for describing, comparing and understanding visualization tools that provide awareness of human activities in software development. The framework has several purposes -- it can act as a formative evaluation mechanism for tool designers; as an assessment tool for potential tool users; and as a comparison tool so that tool researchers can compare and understand the differences between various tools and identify potential new research areas. We use this framework to structure a survey of visualization tools for activity awareness in software development. Based on this survey we suggest directions for future research.",
"title": ""
},
{
"docid": "c4912e6187e5e64ec70dd4423f85474a",
"text": "Communication technologies are becoming increasingly diverse in form and functionality, making it important to identify which aspects of these technologies actually improve geographically distributed communication. Our study examines two potentially important aspects of communication technologies which appear in robot-mediated communication - physical embodiment and control of this embodiment. We studied the impact of physical embodiment and control upon interpersonal trust in a controlled laboratory experiment using three different videoconferencing settings: (1) a handheld tablet controlled by a local user, (2) an embodied system controlled by a local user, and (3) an embodied system controlled by a remote user (n = 29 dyads). We found that physical embodiment and control by the local user increased the amount of trust built between partners. These results suggest that both physical embodiment and control of the system influence interpersonal trust in mediated communication and have implications for future system designs.",
"title": ""
},
{
"docid": "39a370de917080095b11a7fce55b2b41",
"text": "We present in this paper a novel framework for the design of a modular and adaptive partial-automation wheelchair. Our design in particular aims to address hurdles to the adoption of partial-automation wheelchairs within general society. In this experimental work, a single assistance module (assisted doorway traversal) is evaluated, with arbitration between multiple goals (from multiple detected doors) and multiple control signals (from an autonomous path planner, and the human user). The experimental work provides the foundation and proofof-concept for the technical components of our proposed modular and adaptive wheelchair robot. The system is evaluated within multiple environmental scenarios and shows good performance.",
"title": ""
},
{
"docid": "7e7b2e8fc47f53d7d7bde48c75b28596",
"text": "We propose in this paper a novel sparse subspace clustering method that regularizes sparse subspace representation by exploiting the structural sharing between tasks and data points via group sparse coding. We derive simple, provably convergent, and computationally efficient algorithms for solving the proposed group formulations. We demonstrate the advantage of the framework on three challenging benchmark datasets ranging from medical record data to image and text clustering and show that they consistently outperforms rival methods.",
"title": ""
},
{
"docid": "bb0364b6c8e0f8a9c41b30b03c308841",
"text": "BACKGROUND\nFinding duplicates is an important phase of systematic review. However, no consensus regarding the methods to find duplicates has been provided. This study aims to describe a pragmatic strategy of combining auto- and hand-searching duplicates in systematic review and to evaluate the prevalence and characteristics of duplicates.\n\n\nMETHODS AND FINDINGS\nLiteratures regarding portal vein thrombosis (PVT) and Budd-Chiari syndrome (BCS) were searched by the PubMed, EMBASE, and Cochrane library databases. Duplicates included one index paper and one or more redundant papers. They were divided into type-I (duplicates among different databases) and type-II (duplicate publications in different journals/issues) duplicates. For type-I duplicates, reference items were further compared between index and redundant papers. Of 10936 papers regarding PVT, 2399 and 1307 were identified as auto- and hand-searched duplicates, respectively. The prevalence of auto- and hand-searched redundant papers was 11.0% (1201/10936) and 6.1% (665/10936), respectively. They included 3431 type-I and 275 type-II duplicates. Of 11403 papers regarding BCS, 3275 and 2064 were identified as auto- and hand-searched duplicates, respectively. The prevalence of auto- and hand-searched redundant papers was 14.4% (1640/11403) and 9.1% (1039/11403), respectively. They included 5053 type-I and 286 type-II duplicates. Most of type-I duplicates were identified by auto-searching method (69.5%, 2385/3431 in PVT literatures; 64.6%, 3263/5053 in BCS literatures). Nearly all type-II duplicates were identified by hand-searching method (94.9%, 261/275 in PVT literatures; 95.8%, 274/286 in BCS literatures). Compared with those identified by auto-searching method, type-I duplicates identified by hand-searching method had a significantly higher prevalence of wrong items (47/2385 versus 498/1046, p<0.0001 in PVT literatures; 30/3263 versus 778/1790, p<0.0001 in BCS literatures). Most of wrong items originated from EMBASE database.\n\n\nCONCLUSION\nGiven the inadequacy of a single strategy of auto-searching method, a combined strategy of auto- and hand-searching methods should be employed to find duplicates in systematic review.",
"title": ""
},
{
"docid": "c02cf08af76c24a71de17ae2f3ac1b00",
"text": "Clustering analysis is one of the most used Machine Learning techniques to discover groups among data objects. Some clustering methods require the number of clusters into which the data is going to be partitioned. There exist several cluster validity indices that help us to approximate the optimal number of clusters of the dataset. However, such indices are not suitable to deal with Big Data due to its size limitation and runtime costs. This paper presents two clustering validity indices that handle large amount of data in low computational time. Our indices are based on redefinitions of traditional indices by simplifying the intra-cluster distance calculation. Two types of tests have been carried out over 28 synthetic datasets to analyze the performance of the proposed indices. First, we test the indices with small and medium size datasets to verify that our indices have a similar effectiveness to the traditional ones. Subsequently, tests on datasets of up to 11 million records and 20 features have been executed to check their efficiency. The results show that both indices can handle Big Data in a very low computational time with an effectiveness similar to the traditional indices using Apache Spark framework.",
"title": ""
},
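The abstract above attributes the speedup to a simplified intra-cluster distance calculation. A common simplification of this kind replaces all pairwise intra-cluster distances with distances to the cluster centroid; the NumPy sketch below illustrates that idea as a simplified, silhouette-style index. It is a generic illustration under that assumption, not the exact indices defined in the paper, and the Spark parallelization is omitted.

```python
import numpy as np

def simplified_silhouette(X, labels):
    """Silhouette-style validity index using centroid distances only.

    Replacing all pairwise intra-cluster distances with the distance to the
    cluster centroid drops the cost from O(n^2) to O(n * k).
    """
    clusters = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in clusters])
    # Distance of every point to every centroid: shape (n, k).
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    own = np.array([np.where(clusters == l)[0][0] for l in labels])
    a = d[np.arange(len(X)), own]                      # distance to own centroid
    d_other = d.copy()
    d_other[np.arange(len(X)), own] = np.inf
    b = d_other.min(axis=1)                            # distance to nearest other centroid
    s = (b - a) / np.maximum(a, b)
    return s.mean()

# Toy usage: two well-separated blobs should score close to 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(5, 0.3, (100, 2))])
labels = np.array([0] * 100 + [1] * 100)
print(round(simplified_silhouette(X, labels), 3))
```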
{
"docid": "83ae128f71bb154177881012dfb6a680",
"text": "Cell imbalance in large battery packs degrades their capacity delivery, especially for cells connected in series where the weakest cell dominates their overall capacity. In this article, we present a case study of exploiting system reconfigurations to mitigate the cell imbalance in battery packs. Specifically, instead of using all the cells in a battery pack to support the load, selectively skipping cells to be discharged may actually enhance the pack’s capacity delivery. Based on this observation, we propose CSR, a Cell Skipping-assisted Reconfiguration algorithm that identifies the system configuration with (near)-optimal capacity delivery. We evaluate CSR using large-scale emulation based on empirically collected discharge traces of 40 lithium-ion cells. CSR achieves close-to-optimal capacity delivery when the cell imbalance in the battery pack is low and improves the capacity delivery by about 20% and up to 1x in the case of a high imbalance.",
"title": ""
},
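To make the reconfiguration idea above concrete, the sketch below uses a deliberately crude model in which a series string delivers only the capacity of its weakest connected cell and at least a minimum number of cells must stay connected (standing in for a voltage requirement); it then exhaustively compares configurations that skip different cells. The real CSR algorithm and battery model are more detailed, and the capacities and constraint here are invented.

```python
from itertools import combinations

def delivered_capacity(cell_capacities_mAh):
    """Toy model: a series string delivers the capacity of its weakest cell."""
    return min(cell_capacities_mAh) if cell_capacities_mAh else 0.0

def best_configuration(cells, min_cells):
    """Exhaustively pick which cells to connect (the others are skipped)."""
    best = (0.0, tuple(range(len(cells))))
    for k in range(min_cells, len(cells) + 1):
        for subset in combinations(range(len(cells)), k):
            cap = delivered_capacity([cells[i] for i in subset])
            if cap > best[0]:
                best = (cap, subset)
    return best

# Imbalanced 6-cell pack; at least 4 cells must stay connected (voltage constraint).
cells = [2900, 2850, 1700, 2950, 2880, 2100]   # mAh
cap, subset = best_configuration(cells, min_cells=4)
print(f"use cells {subset}, delivered capacity ~ {cap} mAh")
# Using all 6 cells would deliver only 1700 mAh; skipping the two weak cells gives 2850 mAh.
```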
{
"docid": "0ce05b9c26df484fc59366762d31465a",
"text": "This paper presents an algorithm that extracts the tempo of a musical excerpt. The proposed system assumes a constant tempo and deals directly with the audio signal. A sliding window is applied to the signal and two feature classes are extracted. The first class is the log-energy of each band of a mel-scale triangular filterbank, a common feature vector used in various MIR applications. For the second class, a novel feature for the tempo induction task is presented; the strengths of the twelve western musical tones at all octaves are calculated for each audio frame, in a similar fashion with Pitch Class Profile. The timeevolving feature vectors are convolved with a bank of resonators, each resonator corresponding to a target tempo. Then the results of each feature class are combined to give the final output. The algorithm was evaluated on the popular ISMIR 2004 Tempo Induction Evaluation Exchange Dataset. Results demonstrate that the superposition of the different types of features enhance the performance of the algorithm, which is in the current state-of-the-art algorithms of the tempo induction task.",
"title": ""
},
{
"docid": "a4788b60b0fc16551f03557483a8a532",
"text": "The rapid growth in the population density in urban cities demands tolerable provision of services and infrastructure. To meet the needs of city inhabitants. Thus, increase in the request for embedded devices, such as sensors, actuators, and smartphones, etc., which is providing a great business potential towards the new era of Internet of Things (IoT); in which all the devices are capable of interconnecting and communicating with each other over the Internet. Therefore, the Internet technologies provide a way towards integrating and sharing a common communication medium. Having such knowledge, in this paper, we propose a combined IoT-based system for smart city development and urban planning using Big Data analytics. We proposed a complete system, which consists of various types of sensors deployment including smart home sensors, vehicular networking, weather and water sensors, smart parking sensors, and surveillance objects, etc. A four-tier architecture is proposed which include 1) Bottom Tier-1: which is responsible for IoT sources, data generations, and collections 2) Intermediate Tier-1: That is responsible for all type of communication between sensors, relays, base stations, the internet, etc. 3) Intermediate Tier 2: it is responsible for data management and processing using Hadoop framework, and 4) Top tier: is responsible for application and usage of the data analysis and results generated. The system implementation consists of various steps that start from data generation and collecting, aggregating, filtration, classification, preprocessing, computing and decision making. The proposed system is implemented using Hadoop with Spark, voltDB, Storm or S4 for real time processing of the IoT data to generate results in order to establish the smart city. For urban planning or city future development, the offline historical data is analyzed on Hadoop using MapReduce programming. IoT datasets generated by smart homes, smart parking weather, pollution, and vehicle data sets are used for analysis and evaluation. Such type of system with full functionalities does not exist. Similarly, the results show that the proposed system is more scalable and efficient than the existing systems. Moreover, the system efficiency is measured in term of throughput and processing time.",
"title": ""
},
{
"docid": "f782af034ef46a15d89637a43ad2849c",
"text": "Introduction: Evidence-based treatment of abdominal hernias involves the use of prosthetic mesh. However, the most commonly used method of treatment of diastasis of the recti involves plication with non-absorbable sutures as part of an abdominoplasty procedure. This case report describes single-port laparoscopic repair of diastasis of recti and umbilical hernia with prosthetic mesh after plication with slowly absorbable sutures combined with abdominoplasty. Technique Description: Our patient is a 36-year-old woman with severe diastasis of the recti, umbilical hernia and an excessive amount of redundant skin after two previous pregnancies and caesarean sections. After raising the upper abdominal flap, a single-port was placed in the left upper quadrant and the ligamenturn teres was divided. The diastasis of the recti and umbilical hernia were plicated under direct vision with continuous and interrupted slowly absorbable sutures before an antiadhesive mesh was placed behind the repair with 6 cm overlap, transfixed in 4 quadrants and tacked in place with non-absorbable tacks in a double-crown technique. The left upper quadrant wound was closed with slowly absorbable sutures. The excess skin was removed and fibrin sealant was sprayed in the subcutaneous space to minimize the risk of serorna formation without using drains. Discussion: Combining single-port laparoscopic repair of diastasis of recti and umbilical hemia repair minimizes inadvertent suturing of abdominal contents during plication, the risks of port site hernias associated with conventional multipart repair and permanently reinforced the midline weakness while achieving “scarless” surgery.",
"title": ""
},
{
"docid": "b2f7826fe74d5bb3be8361aeb6ae41c4",
"text": "Skid steering of 4-wheel-drive electric vehicles has good maneuverability and mobility as a result of the application of differential torque to wheels on opposite sides. For path following, the paper utilizes the techniques of sliding mode control based on extended state observer which not only has robustness against the system dynamics not modeled and uncertain parameter but also reduces the switch gain effectively, so as to obtain a predictable behavior for the instantaneous center of rotation thus preventing excessive skidding. The efficiency of the algorithm is validated on a vehicle model with 14 degree of freedom. The simulation results show that the control law is robust against to the evaluation error of parameter and to the variation of the friction force within the wheel-ground interaction, what's more, it is easy to be carried out in controller.",
"title": ""
},
{
"docid": "c828195cfc88abd598d1825f69932eb0",
"text": "The central purpose of passive signal intercept receivers is to perform automatic categorization of unknown radar signals. Currently, there is an urgent need to develop intelligent classification algorithms for these devices due to emerging complexity of radar waveforms. Especially multifunction radars (MFRs) capable of performing several simultaneous tasks by utilizing complex, dynamically varying scheduled waveforms are a major challenge for automatic pattern classification systems. To assist recognition of complex radar emissions in modern intercept receivers, we have developed a novel method to recognize dynamically varying pulse repetition interval (PRI) modulation patterns emitted by MFRs. We use robust feature extraction and classifier design techniques to assist recognition in unpredictable real-world signal environments. We classify received pulse trains hierarchically which allows unambiguous detection of the subpatterns using a sliding window. Accuracy, robustness and reliability of the technique are demonstrated with extensive simulations using both static and dynamically varying PRI modulation patterns.",
"title": ""
},
{
"docid": "76dd7060fdbf9927495985dd5313896f",
"text": "Many network solutions and overlay networks utilize probabilistic techniques to reduce information processing and networking costs. This survey article presents a number of frequently used and useful probabilistic techniques. Bloom filters and their variants are of prime importance, and they are heavily used in various distributed systems. This has been reflected in recent research and many new algorithms have been proposed for distributed systems that are either directly or indirectly based on Bloom filters. In this survey, we give an overview of the basic and advanced techniques, reviewing over 20 variants and discussing their application in distributed systems, in particular for caching, peer-to-peer systems, routing and forwarding, and measurement data summarization.",
"title": ""
},
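The passage above treats Bloom filters as the core probabilistic building block. As a point of reference, here is a minimal Python sketch of a standard Bloom filter; the double-hashing construction via hashlib and the sizing formulas are a common textbook choice and are not taken from the survey itself.

```python
import hashlib
import math


class BloomFilter:
    """Minimal Bloom filter: k derived hash functions set/check bits in a bit array."""

    def __init__(self, capacity, error_rate=0.01):
        # Standard sizing: m = -n*ln(p)/ln(2)^2 bits, k = (m/n)*ln(2) hash functions.
        self.m = max(1, int(-capacity * math.log(error_rate) / (math.log(2) ** 2)))
        self.k = max(1, int(round((self.m / capacity) * math.log(2))))
        self.bits = bytearray((self.m + 7) // 8)

    def _indexes(self, item):
        # Double hashing: derive k bit positions from two base hashes of the item.
        digest = hashlib.sha256(item.encode("utf-8")).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1  # make the stride odd
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item):
        # May return a false positive, never a false negative.
        return all(self.bits[idx // 8] & (1 << (idx % 8)) for idx in self._indexes(item))


if __name__ == "__main__":
    bf = BloomFilter(capacity=1000, error_rate=0.01)
    bf.add("peer-42")
    print("peer-42" in bf, "peer-99" in bf)  # True, (almost certainly) False
```

The false-positive/no-false-negative trade-off shown in the membership test is exactly what makes the structure attractive for the caching and routing uses the survey discusses.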
{
"docid": "9eece0709b7df087f3ea1afcfa154c64",
"text": "This platform paper introduces a methodology for simulating an autonomous vehicle on open public roads. The paper outlines the technology and protocol needed for running these simulations, and describes an instance where the Real Road Autonomous Driving Simulator (RRADS) was used to evaluate 3 prototypes in a between-participant study design. 35 participants were interviewed at length before and after entering the RRADS. Although our study did not use overt deception---the consent form clearly states that a licensed driver is operating the vehicle---the protocol was designed to support suspension of disbelief. Several participants who did not read the consent form clearly strongly believed that they were interacting with a fully autonomous vehicle.\n The RRADS platform provides a lens onto the attitudes and concerns that people in real-world autonomous vehicles might have, and also points to ways that a protocol deliberately using misdirection can gain ecologically valid reactions from study participants.",
"title": ""
},
{
"docid": "1d390cf436dc5b4ee99b008070c0782d",
"text": "Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that SSL algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and performance can degrade substantially when the unlabeled dataset contains out-ofdistribution examples. To help guide SSL research towards real-world applicability, we make our unified reimplemention and evaluation platform publicly available.2",
"title": ""
},
{
"docid": "024168795536bc141bb07af74486ef78",
"text": "Video-based person re-identification matches video clips of people across non-overlapping cameras. Most existing methods tackle this problem by encoding each video frame in its entirety and computing an aggregate representation across all frames. In practice, people are often partially occluded, which can corrupt the extracted features. Instead, we propose a new spatiotemporal attention model that automatically discovers a diverse set of distinctive body parts. This allows useful information to be extracted from all frames without succumbing to occlusions and misalignments. The network learns multiple spatial attention models and employs a diversity regularization term to ensure multiple models do not discover the same body part. Features extracted from local image regions are organized by spatial attention model and are combined using temporal attention. As a result, the network learns latent representations of the face, torso and other body parts using the best available image patches from the entire video sequence. Extensive evaluations on three datasets show that our framework outperforms the state-of-the-art approaches by large margins on multiple metrics.",
"title": ""
},
{
"docid": "b81bff8f6a00d2ca6d008783eceb4363",
"text": "This paper describes a new guidance law that extends the pursuit guidance law previously developed by Park et. al. Several improvements are presented that allow operation in the real world. A stability analysis accounts for the dynamic response of the bank angle commands which leads to the definition of regions of instability. Another extension accounts for situations where the pursuit aim point is not defined by the previous work. A third extension changes the pursuit distance-to-go from a constant to a constant time-to-go so that the linearized transient response is independent of ground speed. Yet another extension defines a “homing” mode in which the UAV flies to a goal point without a defined path, commonly used as a “return-to-base,” either as a safety measure or as an end-of-mission order. Since there is no constraint that the goal point be stationary, we demonstrate that the new law can be used to follow a moving target whose location is known, such as a mobile ground control station. Simulations with a 6 degree-of-freedom aircraft model demonstrate these features.",
"title": ""
},
{
"docid": "511c90eadbbd4129fdf3ee9e9b2187d3",
"text": "BACKGROUND\nPressure ulcers are associated with substantial health burdens but may be preventable.\n\n\nPURPOSE\nTo review the clinical utility of pressure ulcer risk assessment instruments and the comparative effectiveness of preventive interventions in persons at higher risk.\n\n\nDATA SOURCES\nMEDLINE (1946 through November 2012), CINAHL, the Cochrane Library, grant databases, clinical trial registries, and reference lists.\n\n\nSTUDY SELECTION\nRandomized trials and observational studies on effects of using risk assessment on clinical outcomes and randomized trials of preventive interventions on clinical outcomes.\n\n\nDATA EXTRACTION\nMultiple investigators abstracted and checked study details and quality using predefined criteria.\n\n\nDATA SYNTHESIS\nOne good-quality trial found no evidence that use of a pressure ulcer risk assessment instrument, with or without a protocolized intervention strategy based on assessed risk, reduces risk for incident pressure ulcers compared with less standardized risk assessment based on nurses' clinical judgment. In higher-risk populations, 1 good-quality and 4 fair-quality randomized trials found that more advanced static support surfaces were associated with lower risk for pressure ulcers compared with standard mattresses (relative risk range, 0.20 to 0.60). Evidence on the effectiveness of low-air-loss and alternating-air mattresses was limited, with some trials showing no clear differences from advanced static support surfaces. Evidence on the effectiveness of nutritional supplementation, repositioning, and skin care interventions versus usual care was limited and had methodological shortcomings, precluding strong conclusions.\n\n\nLIMITATION\nOnly English-language articles were included, publication bias could not be formally assessed, and most studies had methodological shortcomings.\n\n\nCONCLUSION\nMore advanced static support surfaces are more effective than standard mattresses for preventing ulcers in higher-risk populations. The effectiveness of formal risk assessment instruments and associated intervention protocols compared with less standardized assessment methods and the effectiveness of other preventive interventions compared with usual care have not been clearly established.",
"title": ""
},
{
"docid": "4aebb6566c8b27c7528cc108bacc2a60",
"text": "OBJECT\nSuperior cluneal nerve (SCN) entrapment neuropathy is a poorly understood clinical entity that can produce low-back pain. The authors report a less-invasive surgical treatment for SCN entrapment neuropathy that can be performed with local anesthesia.\n\n\nMETHODS\nFrom November 2010 through November 2011, the authors performed surgery in 34 patients (age range 18-83 years; mean 64 years) with SCN entrapment neuropathy. The entrapment was unilateral in 13 patients and bilateral in 21. The mean postoperative follow-up period was 10 months (range 6-18 months). After the site was blocked with local anesthesia, the thoracolumbar fascia of the orifice was dissected with microscissors in a distal-to-rostral direction along the SCN to release the entrapped nerve.\n\n\nRESULTS\nwere evaluated according to Japanese Orthopaedic Association (JOA) and Roland-Morris Disability Questionnaire (RMDQ) scores. Results In all 34 patients, the SCN penetrated the orifice of the thoracolumbar fascia and could be released by dissection of the fascia. There were no intraoperative surgery-related complications. For all patients, surgery was effective; JOA and RMDQ scores indicated significant improvement (p < 0.05).\n\n\nCONCLUSIONS\nFor patients with low-back pain, SCN entrapment neuropathy must be considered as a causative factor. Treatment by less-invasive surgery, with local anesthesia, yielded excellent clinical outcomes.",
"title": ""
}
] |
scidocsrr
|
5ebddb18090ade32df43bb60fc8277c7
|
Data Fusion Algorithms for Multiple Inertial Measurement Units
|
[
{
"docid": "c723ff511bc207b490b2f414ec3a3565",
"text": "This paper evaluates the performance of a shoe/foot mounted inertial system for pedestrian navigation application. Two different grades of inertial sensors are used, namely a medium cost tactical grade Honeywell HG1700 inertial measurement unit (IMU) and a low-cost MEMS-based Crista IMU (Cloud Cap Technology). The inertial sensors are used in two different ways for computing the navigation solution. The first method is a conventional integration algorithm where IMU measurements are processed through a set of mechanization equation to compute a six degree-offreedom (DOF) navigation solution. Such a system is referred to as an Inertial Navigation System (INS). The integration of this system with GPS is performed using a tightly coupled integration scheme. Since the sensor is placed on the foot, the designed integrated system exploits the small period for which foot comes to rest at each step (stance-phase of the gait cycle) and uses Zero Velocity Update (ZUPT) to keep the INS errors bounded in the absence of GPS. An algorithm for detecting the stance-phase using the pattern of three-dimensional acceleration is discussed. In the second method, the navigation solutions is computed using the fact that a pedestrian takes one step at a time, and thus positions can be computed by propagating the step-length in the direction of pedestrian motion. This algorithm is termed as pedestrian dead-reckoning (PDR) algorithm. The IMU measurement in this algorithm is used to detect the step, estimate the step-length, and determine the heading for solution propagation. Different algorithms for stridelength estimation and step-detection are discussed in this paper. The PDR system is also integrated with GPS through a tightly coupled integration scheme. The performance of both the systems is evaluated through field tests conducted in challenging GPS environments using both inertial sensors. The specific focus is on the system performance under long GPS outages of duration upto 30 minutes.",
"title": ""
}
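To make the dead-reckoning update in the passage above concrete, the toy sketch below propagates a 2-D position from per-step (step length, heading) pairs, which is the PDR propagation the passage describes. The step data are hypothetical, and the paper's stride-length estimation, step detection, and GPS integration are not reproduced here.

```python
import math


def propagate_pdr(start_east, start_north, steps):
    """Propagate a 2-D position from (step_length_m, heading_rad) pairs.

    Heading is measured clockwise from north, so each step advances the
    position by length*sin(heading) east and length*cos(heading) north.
    """
    east, north = start_east, start_north
    track = [(east, north)]
    for length, heading in steps:
        east += length * math.sin(heading)
        north += length * math.cos(heading)
        track.append((east, north))
    return track


# Hypothetical example: four 0.7 m steps heading roughly north-east.
print(propagate_pdr(0.0, 0.0, [(0.7, math.radians(45))] * 4))
```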
] |
[
{
"docid": "086f9cbed93553ca00b2afeff1cb8508",
"text": "Rapid advance of location acquisition technologies boosts the generation of trajectory data, which track the traces of moving objects. A trajectory is typically represented by a sequence of timestamped geographical locations. A wide spectrum of applications can benefit from the trajectory data mining. Bringing unprecedented opportunities, large-scale trajectory data also pose great challenges. In this paper, we survey various applications of trajectory data mining, e.g., path discovery, location prediction, movement behavior analysis, and so on. Furthermore, this paper reviews an extensive collection of existing trajectory data mining techniques and discusses them in a framework of trajectory data mining. This framework and the survey can be used as a guideline for designing future trajectory data mining solutions.",
"title": ""
},
{
"docid": "b811c82ff944715edc2b7dec382cb529",
"text": "The mobile industry has experienced a dramatic growth; it evolves from analog to digital 2G (GSM), then to high date rate cellular wireless communication such as 3G (WCDMA), and further to packet optimized 3.5G (HSPA) and 4G (LTE and LTE advanced) systems. Today, the main design challenges of mobile phone antenna are the requirements of small size, built-in structure, and multisystems in multibands, including all cellular 2G, 3G, 4G, and other noncellular radio-frequency (RF) bands, and moreover the need for a nice appearance and meeting all standards and requirements such as specific absorption rates (SARs), hearing aid compatibility (HAC), and over the air (OTA). This paper gives an overview of some important antenna designs and progress in mobile phones in the last 15 years, and presents the recent development on new antenna technology for LTE and compact multiple-input-multiple-output (MIMO) terminals.",
"title": ""
},
{
"docid": "e90411a6f1658e00b3b39f2516a7ba4f",
"text": "The Alzheimer's Disease Neuroimaging Initiative (ADNI) is a longitudinal multisite observational study of healthy elders, mild cognitive impairment (MCI), and Alzheimer's disease. Magnetic resonance imaging (MRI), (18F)-fluorodeoxyglucose positron emission tomography (FDG PET), urine serum, and cerebrospinal fluid (CSF) biomarkers, as well as clinical/psychometric assessments are acquired at multiple time points. All data will be cross-linked and made available to the general scientific community. The purpose of this report is to describe the MRI methods employed in ADNI. The ADNI MRI core established specifications that guided protocol development. A major effort was devoted to evaluating 3D T(1)-weighted sequences for morphometric analyses. Several options for this sequence were optimized for the relevant manufacturer platforms and then compared in a reduced-scale clinical trial. The protocol selected for the ADNI study includes: back-to-back 3D magnetization prepared rapid gradient echo (MP-RAGE) scans; B(1)-calibration scans when applicable; and an axial proton density-T(2) dual contrast (i.e., echo) fast spin echo/turbo spin echo (FSE/TSE) for pathology detection. ADNI MRI methods seek to maximize scientific utility while minimizing the burden placed on participants. The approach taken in ADNI to standardization across sites and platforms of the MRI protocol, postacquisition corrections, and phantom-based monitoring of all scanners could be used as a model for other multisite trials.",
"title": ""
},
{
"docid": "78b371e7df39a1ebbad64fdee7303573",
"text": "This state of the art report focuses on glyph-based visualization, a common form of visual design where a data set is depicted by a collection of visual objects referred to as glyphs. Its major strength is that patterns of multivariate data involving more than two attribute dimensions can often be more readily perceived in the context of a spatial relationship, whereas many techniques for spatial data such as direct volume rendering find difficult to depict with multivariate or multi-field data, and many techniques for non-spatial data such as parallel coordinates are less able to convey spatial relationships encoded in the data. This report fills several major gaps in the literature, drawing the link between the fundamental concepts in semiotics and the broad spectrum of glyph-based visualization, reviewing existing design guidelines and implementation techniques, and surveying the use of glyph-based visualization in many applications.",
"title": ""
},
{
"docid": "fc5969e8205ca0e03c0cb6aab2bbb058",
"text": "Oncology acupuncture is a new and emerging field of research. Recent advances from published clinical trials have added evidence to support the use of acupuncture for symptom management in cancer patients. Recent new developments include (1) pain and dysfunction after neck dissection; (2) radiation-induced xerostomia in head and neck cancer; (3) aromatase inhibitor-associated arthralgia in breast cancer; (4) hot flashes in breast cancer and prostate cancer; and (5) chemotherapy-induced neutropenia in ovarian cancer. Some interventions are becoming a non-pharmaceutical option for cancer patients, while others still require further validation and confirmation. Meanwhile, owing to the rapid development of the field and increased demands from cancer patients, safety issues concerning oncology acupuncture practice have become imperative. Patients with cancer may be at higher risk developing adverse reactions from acupuncture. Practical strategies for enhancing safety measures are discussed and recommended.",
"title": ""
},
{
"docid": "2189f0c48e453231bed41574c39f093c",
"text": "Rapid isolation of high-purity microbial genomic DNA is necessary for genome analysis. In this study, the authors compared a one-hour procedure using a microwave with enzymatic and boiling methods of genomic DNA extraction from Gram-negative and Gram-positive bacteria. High DNA concentration and purity were observed for both MRSA and ESBL strains (80.1 and 91.1 μg/ml; OD260/280, 1.82 and 1.70, respectively) when the extraction protocol included microwave pre-heating. DNA quality was further confirmed by PCR detection of mecA and CTX-M. In conclusion, the microwave-based procedure was rapid, efficient, cost-effective, and applicable for both Gram-positive and Gram-negative bacteria.",
"title": ""
},
{
"docid": "ac0119255806976213d61029247b14f1",
"text": "Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. We conducted a controlled experiment to test the effects of display and scenario properties on training effectiveness for a visual scanning task in a simulated urban environment. The experiment varied the levels of field of view and visual complexity during a training phase and then evaluated scanning performance with the simulator's highest levels of fidelity and scene complexity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual complexity significantly affected target detection during training; higher field of view led to better performance and higher visual complexity worsened performance. Additionally, adherence to the prescribed visual scanning strategy during assessment was best when the level of visual complexity during training matched that of the assessment conditions, providing evidence that similar visual complexity was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training-evaluation in a more realistic setting may be necessary.",
"title": ""
},
{
"docid": "e0f29540b1d4fba545dfcb60f112531b",
"text": "We evaluated the hypothesis that postural instability precedes the onset of motion sickness. Subjects standing in a \"moving room\" were exposed to nearly global oscillating optical flow. In the experimental condition, the optical oscillations were a complex sum-of-sines between 0.1 and 0.3 Hz, with an excursion of 1.8 cm. This optical motion was of such low frequency and magnitude that it was sometimes not noticed by subjects. However, in two experiments, exposure to the moving room produced significant increases in scores on a standard motion sickness questionnaire. In addition, approximately half of subjects reported motion sickness. Analysis of postural motion during exposure to the moving room revealed increases in postural sway before the onset of subjective motion sickness symptoms. This confirms a key prediction of the postural instability theory of motion sickness.",
"title": ""
},
{
"docid": "7ecfdf2dc10973d2c345d2372401b7e4",
"text": "Recently developed numerical methods make possible the highaccuracy computation of eigenmodes of the Laplacian for a variety of “drums” in two dimensions. A number of computed examples are presented together with a discussion of their implications concerning bound and continuum states, isospectrality, symmetry and degeneracy, eigenvalue avoidance, resonance, localization, eigenvalue optimization, perturbation of eigenvalues and eigenvectors, and other matters.",
"title": ""
},
{
"docid": "54e541c0a2c8c90862ce5573899aacc7",
"text": "The moving sofa problem, posed by L. Moser in 1966, asks for the planar shape of maximal area that can move around a right-angled corner in a hallway of unit width. It is known that a maximal area shape exists, and that its area is at least 2.2195 . . .—the area of an explicit construction found by Gerver in 1992—and at most 2 √ 2 ≈ 2.82, with the lower bound being conjectured as the true value. We prove a new and improved upper bound of 2.37. The method involves a computer-assisted proof scheme that can be used to rigorously derive further improved upper bounds that converge to the correct value.",
"title": ""
},
{
"docid": "38292a0baef7edc1e91bb2b07082f0e3",
"text": "General value functions (GVFs) are an approach to representing models of an agent’s world as a collection of predictive questions. A GVF is defined by: a policy, a prediction target, and a timescale. Traditionally predictions for a given timescale must be specified by the engineer and each timescale learned independently. Here we present γ-nets, a method for generalizing value function estimation over timescale, allowing a given GVF to be trained and queried for any fixed timescale. The key to our approach is to use timescale as one of the network inputs. The prediction target for any fixed timescale is then available at every timestep and we are free to train on any number of timescales. We present preliminary results on a simple test signal. 1. Value Functions and Timescale Reinforcement learning (RL) studies algorithms in which an agent learns to maximize the amount of reward it receives over its lifetime. A key method in RL is the estimation of value — the expected cumulative sum of discounted future rewards (called the return). In loose terms this tells an agent how good it is to be in a particular state. The agent can then learn a policy — a way of behaving — which maximizes the amount of reward received. Sutton et al. (2011) broadened the use of value estimation by introducing general value functions (GVFs), in which value estimates are made of other sensorimotor signals, not just reward. GVFs can be thought of as representing an agent’s model of itself and its environment as a collection of questions about future sensorimotor returns; a predictive representation of state. A GVF is defined by three elements: 1) the policy, 2) the cumulant (the sensorimotor signal to be Department of Computing Science, University of Alberta, Edmonton, Canada Cogitai, Inc., United States. Correspondence to: Craig Sherstan <sherstan@ualberta.ca>, Patrick M. Pilarski <pilarski@ualberta.ca>. Accepted at the FAIM workshop “Prediction and Generative Modeling in Reinforcement Learning”, Stockholm, Sweden, 2018. Copyright 2018 by the author(s). Transition",
"title": ""
},
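The central idea in the γ-nets passage, feeding the timescale to the value estimator as an extra input so a single learner can be queried for any fixed timescale, can be illustrated with a deliberately small example. The AR(1) signal and the linear TD(0) learner below are stand-ins chosen for brevity; they are not the authors' architecture or test signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 8
w = np.zeros(n_features + 1)  # one extra weight slot for the timescale input
alpha = 0.01


def features(state, gamma):
    # The timescale is appended to the state features, so one weight vector
    # can answer value queries for any fixed gamma.
    return np.append(state, gamma)


state = rng.normal(size=n_features)
for _ in range(20000):
    next_state = 0.9 * state + 0.1 * rng.normal(size=n_features)
    cumulant = next_state[0]                 # signal whose discounted sum we predict
    for gamma in (0.5, 0.9, 0.97):           # train several timescales at once
        x, x_next = features(state, gamma), features(next_state, gamma)
        td_error = cumulant + gamma * (w @ x_next) - w @ x
        w += alpha * td_error * x            # linear TD(0) update
    state = next_state

for gamma in (0.5, 0.9, 0.97):
    print(gamma, w @ features(state, gamma))  # value estimate at a queried timescale
```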
{
"docid": "01d32a4b376c2d6afbb68f53978f9719",
"text": "In this perspective, our goal is to present and elucidate a thus far largely overlooked problem that is arising in scientific publishing, namely the identification and discovery of citation cartels in citation networks. Taking from the well-known definition of a community in the realm of network science, namely that people within a community share significantly more links with each other as they do outside of this community, we propose that citation cartels are defined as groups of authors that cite each other disproportionately more than they do other groups of authors that work on the same subject. Evidently, the identification of citation cartels is somewhat different, although similar to the identification of communities in networks. We systematically expose the problem, provide theoretical examples, and outline an algorithmic guide on how to approach the subject.",
"title": ""
},
{
"docid": "6d594c21ff1632b780b510620484eb62",
"text": "The last several years have seen intensive interest in exploring neural-networkbased models for machine comprehension (MC) and question answering (QA). In this paper, we approach the problems by closely modelling questions in a neural network framework. We first introduce syntactic information to help encode questions. We then view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline.",
"title": ""
},
{
"docid": "359418904acf423cfd7487803a706e2c",
"text": "Computational semantics has long been seen as a field divided between logical and statistical approaches, but this divide is rapidly eroding, with the development of statistical models that learn compositional semantic theories from corpora and databases. This paper presents a simple discriminative learning framework for defining such models and relating them to logical theories. Within this framework, we discuss the task of learning to map utterances to logical forms (semantic parsing) and the task of learning from denotations with logical forms as latent variables. We also consider models that use distributed (e.g., vector) representations rather than logical ones, showing that these can be seen as part of the same overall framework for understanding meaning and structural complexity.",
"title": ""
},
{
"docid": "e48da0cf3a09b0fd80f0c2c01427a931",
"text": "Timely analysis of information in cybersecurity necessitates automated information extraction from unstructured text. Unfortunately, state-of-the-art extraction methods require training data, which is unavailable in the cyber-security domain. To avoid the arduous task of handlabeling data, we develop a very precise method to automatically label text from several data sources by leveraging article-specific structured data and provide public access to corpus annotated with cyber-security entities. We then prototype a maximum entropy model that processes this corpus of auto-labeled text to label new sentences and present results showing the Collins Perceptron outperforms the MLE with LBFGS and OWL-QN optimization for parameter fitting. The main contribution of this paper is an automated technique for creating a training corpus from text related to a database. As a multitude of domains can benefit from automated extraction of domain-specific concepts for which no labeled data is available, we hope our solution is widely applicable.",
"title": ""
},
{
"docid": "8655653e5a4a64518af8da996ac17c25",
"text": "Although a rigorous review of literature is essential for any research endeavor, technical solutions that support systematic literature review approaches are still scarce. Systematic literature searches in particular are often described as complex, error-prone and time-consuming, due to the prevailing lack of adequate technical support. In this study, we therefore aim to learn how to design information systems that effectively facilitate systematic literature searches. Using the design science research paradigm, we develop design principles that intend to increase comprehensiveness, precision, and reproducibility of systematic literature searches. The design principles are derived through multiple design cycles that include the instantiation of the principles in form of a prototype web application and qualitative evaluations. Our design knowledge could serve as a foundation for future research on systematic search systems and support the development of innovative information systems that, eventually, improve the quality and efficiency of systematic literature reviews.",
"title": ""
},
{
"docid": "262302228a88025660c0add90d500518",
"text": "Social network analysis provides meaningful information about behavior of network members that can be used for diverse applications such as classification, link prediction. However, network analysis is computationally expensive because of feature learning for different applications. In recent years, many researches have focused on feature learning methods in social networks. Network embedding represents the network in a lower dimensional representation space with the same properties which presents a compressed representation of the network. In this paper, we introduce a novel algorithm named “CARE” for network embedding that can be used for different types of networks including weighted, directed and complex. Current methods try to preserve local neighborhood information of nodes, whereas the proposed method utilizes local neighborhood and community information of network nodes to cover both local and global structure of social networks. CARE builds customized paths, which are consisted of local and global structure of network nodes, as a basis for network embedding and uses the Skip-gram model to learn representation vector of nodes. Subsequently, stochastic gradient descent is applied to optimize our objective function and learn the final representation of nodes. Our method can be scalable when new nodes are appended to network without information loss. Parallelize generation of customized random walks is also used for speeding up CARE. We evaluate the performance of CARE on multi label classification and link prediction tasks. Experimental results on various networks indicate that the proposed method outperforms others in both Micro and Macro-f1 measures for different size of training data.",
"title": ""
},
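The first stage of the pipeline described above is turning a graph into walk "sentences" for the Skip-gram model. The sketch below generates plain uniform random walks from an adjacency list; CARE's customized, community-aware paths would bias the choice of the next node, which is not reproduced here, and the toy graph is purely illustrative.

```python
import random


def generate_walks(adjacency, walks_per_node=10, walk_length=20, seed=0):
    """Generate fixed-length random walks to use as Skip-gram training sentences.

    adjacency maps each node to a list of neighbours. Walks are uniform over
    neighbours; a community-aware variant would weight this choice instead.
    """
    rng = random.Random(seed)
    walks = []
    nodes = list(adjacency)
    for _ in range(walks_per_node):
        rng.shuffle(nodes)               # vary walk starting order each pass
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbours = adjacency[walk[-1]]
                if not neighbours:       # dead end: stop this walk early
                    break
                walk.append(rng.choice(neighbours))
            walks.append(walk)
    return walks


toy_graph = {"a": ["b", "c"], "b": ["a"], "c": ["a", "d"], "d": ["c"]}
print(generate_walks(toy_graph, walks_per_node=1, walk_length=5)[0])
```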
{
"docid": "73b81ca84f4072188e1a263e9a7ea330",
"text": "The digital workplace is widely acknowledged as an important organizational asset for optimizing knowledge worker productivity. While there is no particular research stream on the digital workplace, scholars have conducted intensive research on related topics. This study aims to summarize the practical implications of the current academic body of knowledge on the digital workplace. For this purpose, a screening of academic-practitioner literature was conducted, followed by a systematic review of academic top journal literature. The screening revealed four main research topics on the digital workplace that are present in academic-practitioner literature: 1) Collaboration, 2) Compliance, 3) Mobility, and 4) Stress and overload. Based on the four topics, this study categorizes practical implications on the digital workplace into 15 concepts. Thereby, it provides two main contributions. First, the study delivers condensed information for practitioners about digital workplace design. Second, the results shed light on the relevance of IS research.",
"title": ""
},
{
"docid": "1a45d5e0ccc4816c0c64c7e25e7be4e3",
"text": "The interpolation of correspondences (EpicFlow) was widely used for optical flow estimation in most-recent works. It has the advantage of edge-preserving and efficiency. However, it is vulnerable to input matching noise, which is inevitable in modern matching techniques. In this paper, we present a Robust Interpolation method of Correspondences (called RicFlow) to overcome the weakness. First, the scene is over-segmented into superpixels to revitalize an early idea of piecewise flow model. Then, each model is estimated robustly from its support neighbors based on a graph constructed on superpixels. We propose a propagation mechanism among the pieces in the estimation of models. The propagation of models is significantly more efficient than the independent estimation of each model, yet retains the accuracy. Extensive experiments on three public datasets demonstrate that RicFlow is more robust than EpicFlow, and it outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "1ea25647642e46410b440f4edf3e9d8d",
"text": "1. Bernard J. Baars: Can physics provide a theory of consciousness? 2. David J. Chalmers: Minds, machines, and mathematics 3. Solomon Feferman: Penrose's Gödelian argument 4. Stanley A. Klein: Is quantum mechanics relevant to understanding consciousness? 5. Tim Maudlin: Between the motion and the act.... 6. John McCarthy: Awareness and understanding in computer programs 7. Daryl McCullough: Can humans escape Gödel? 8. Drew McDermott: [STAR] Penrose is wrong 9. Hans Moravec: Roger Penrose's gravitonic brains",
"title": ""
}
] |
scidocsrr
|
a619c17600882aec1f998d6079f6e5fe
|
Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog
|
[
{
"docid": "5e86e48f73283ac321abee7a9f084bec",
"text": "Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks. One appealing property of such systems is their generality, as excellent performance can be achieved with a unified architecture and without task-specific feature engineering. However, it is unclear if such systems can be used for tasks without large amounts of training data. In this paper we explore the problem of transfer learning for neural sequence taggers, where a source task with plentiful annotations (e.g., POS tagging on Penn Treebank) is used to improve performance on a target task with fewer available annotations (e.g., POS tagging for microblogs). We examine the effects of transfer learning for deep hierarchical recurrent networks across domains, applications, and languages, and show that significant improvement can often be obtained. These improvements lead to improvements over the current state-ofthe-art on several well-studied tasks.1",
"title": ""
},
{
"docid": "330e0c60c4d491b2f824cb4da8467cc4",
"text": "We investigate the usage of convolutional neural networks (CNNs) for the slot filling task in spoken language understanding. We propose a novel CNN architecture for sequence labeling which takes into account the previous context words with preserved order information and pays special attention to the current word with its surrounding context. Moreover, it combines the information from the past and the future words for classification. Our proposed CNN architecture outperforms even the previously best ensembling recurrent neural network model and achieves state-of-the-art results with an F1-score of 95.61% on the ATIS benchmark dataset without using any additional linguistic knowledge and resources.",
"title": ""
}
] |
[
{
"docid": "2518564949f7488a7f01dff74e3b6e2d",
"text": "Although it is commonly believed that women are kinder and more cooperative than men, there is conflicting evidence for this assertion. Current theories of sex differences in social behavior suggest that it may be useful to examine in what situations men and women are likely to differ in cooperation. Here, we derive predictions from both sociocultural and evolutionary perspectives on context-specific sex differences in cooperation, and we conduct a unique meta-analytic study of 272 effect sizes-sampled across 50 years of research-on social dilemmas to examine several potential moderators. The overall average effect size is not statistically different from zero (d = -0.05), suggesting that men and women do not differ in their overall amounts of cooperation. However, the association between sex and cooperation is moderated by several key features of the social context: Male-male interactions are more cooperative than female-female interactions (d = 0.16), yet women cooperate more than men in mixed-sex interactions (d = -0.22). In repeated interactions, men are more cooperative than women. Women were more cooperative than men in larger groups and in more recent studies, but these differences disappeared after statistically controlling for several study characteristics. We discuss these results in the context of both sociocultural and evolutionary theories of sex differences, stress the need for an integrated biosocial approach, and outline directions for future research.",
"title": ""
},
{
"docid": "204d6d3327b4c0977a1ceb0d52cdcce4",
"text": "Contrasting meaning is a basic aspect of semantics. Recent word-embedding models based on distributional semantics hypothesis are known to be weak for modeling lexical contrast. We present in this paper the embedding models that achieve an F-score of 92% on the widely-used, publicly available dataset, the GRE “most contrasting word” questions (Mohammad et al., 2008). This is the highest performance seen so far on this dataset. Surprisingly at the first glance, unlike what was suggested in most previous work, where relatedness statistics learned from corpora is claimed to yield extra gains over lexicon-based models, we obtained our best result relying solely on lexical resources (Roget’s and WordNet)—corpora statistics did not lead to further improvement. However, this should not be simply taken as that distributional statistics is not useful. We examine several basic concerns in modeling contrasting meaning to provide detailed analysis, with the aim to shed some light on the future directions for this basic semantics modeling problem.",
"title": ""
},
{
"docid": "d8683a777be0027f60e2ab8b2291fb92",
"text": "This paper focuses on coordinate update methods, which are useful for solving problems involving large or high-dimensional datasets. They decompose a problem into simple subproblems, where each updates one, or a small block of, variables while fixing others. These methods can deal with linear and nonlinear mappings, smooth and nonsmooth functions, as well as convex and nonconvex problems. In addition, they are easy to parallelize. The great performance of coordinate update methods depends on solving simple subproblems. To derive simple subproblems for several new classes of applications, this paper systematically studies coordinate friendly operators that perform low-cost coordinate updates. Based on the discovered coordinate friendly operators, as well as operator splitting techniques, we obtain new coordinate update algorithms for a variety of problems in machine learning, image processing, as well as sub-areas of optimization. Several problems are treated with coordinate update for the first time in history. The obtained algorithms are scalable to large instances through parallel and even asynchronous computing. We present numerical examples to illustrate how effective these algorithms are.",
"title": ""
},
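As a concrete instance of the coordinate-update pattern surveyed above, the following sketch applies cyclic coordinate minimization to ridge-regularized least squares, where each coordinate subproblem has a cheap closed-form solution. This is a generic textbook example rather than one of the paper's new algorithms, and the problem sizes are arbitrary.

```python
import numpy as np


def coordinate_descent_ridge(A, b, lam=0.1, sweeps=100):
    """Cyclic coordinate minimization of 0.5*||Ax - b||^2 + 0.5*lam*||x||^2.

    Each coordinate update solves its one-dimensional subproblem exactly and
    only touches one column of A, which is the kind of cheap, coordinate
    friendly step the survey studies.
    """
    m, n = A.shape
    x = np.zeros(n)
    residual = b - A @ x
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(sweeps):
        for j in range(n):
            # Inner product with the residual that excludes coordinate j's contribution.
            rho = A[:, j] @ residual + col_sq[j] * x[j]
            new_xj = rho / (col_sq[j] + lam)
            residual += A[:, j] * (x[j] - new_xj)  # keep the residual in sync
            x[j] = new_xj
    return x


rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))
b = rng.normal(size=50)
x_cd = coordinate_descent_ridge(A, b)
x_exact = np.linalg.solve(A.T @ A + 0.1 * np.eye(10), A.T @ b)
print(np.max(np.abs(x_cd - x_exact)))  # should be tiny after enough sweeps
```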
{
"docid": "00b851715df7fe4878f74796df9d8061",
"text": "Low duty-cycle mobile systems can benefit from ultra-low power deep neural network (DNN) accelerators. Analog in-memory computational units are used to store synaptic weights in on-chip non-volatile arrays and perform current-based calculations. In-memory computation entirely eliminates off-chip weight accesses, parallelizes operation, and amortizes readout power costs by reusing currents. The proposed system achieves 900nW measured power, with an estimated energy efficiency of 0.012pJ/MAC in a 130nm SONOS process.",
"title": ""
},
{
"docid": "7e8feb5f8d816a0c0626f6fdc4db7c04",
"text": "In this paper, we analyze if cascade usage of the context encoder with increasing input can improve the results of the inpainting. For this purpose, we train context encoder for 64x64 pixels images in a standard way and use its resized output to fill in the missing input region of the 128x128 context encoder, both in training and evaluation phase. As the result, the inpainting is visibly more plausible. In order to thoroughly verify the results, we introduce normalized squared-distortion, a measure for quantitative inpainting evaluation, and we provide its mathematical explanation. This is the first attempt to formalize the inpainting measure, which is based on the properties of latent feature representation, instead of L2 reconstruction loss.",
"title": ""
},
{
"docid": "c2e0166a7604836cc33836d1ca86e335",
"text": "Owing to the dramatic mobile IP growth, the emerging Internet of Things, and cloud-based applications, wireless networking is witnessing a paradigm shift. By fully exploiting spatial degrees of freedom, massive multiple-input-multiple-output (MIMO) systems promise significant gains in data rates and link reliability. Although the research community has recognized the theoretical benefits of these systems, building the hardware of such complex systems is a challenge in practice. This paper presents a time division duplex (TDD)-based 128-antenna massive MIMO prototype system from theory to reality. First, an analytical signal model is provided to facilitate the setup of a feasible massive MIMO prototype system. Second, a link-level simulation consistent with practical TDDbased massive MIMO systems is conducted to guide and validate the massive MIMO system design. We design and implement the TDDbased 128-antenna massive MIMO prototype system with the guidelines obtained from the link-level simulation. Uplink real-time video transmission and downlink data transmission under the configuration of multiple single-antenna users are achieved. Comparisons with state-of-the-art prototypes demonstrate the advantages of the proposed system in terms of antenna number, bandwidth, latency, and throughput. The proposed system is also equipped with scalability, which makes the system applicable to a wide range of massive scenarios.",
"title": ""
},
{
"docid": "8505afb27c5ef73baeaa53dfe1c337ae",
"text": "The Osprey (Pandion haliaetus) is one of only six bird species with an almost world-wide distribution. We aimed at clarifying its phylogeographic structure and elucidating its taxonomic status (as it is currently separated into four subspecies). We tested six biogeographical scenarios to explain how the species’ distribution and differentiation took place in the past and how such a specialized raptor was able to colonize most of the globe. Using two mitochondrial genes (cyt b and ND2), the Osprey appeared structured into four genetic groups representing quasi non-overlapping geographical regions. The group Indo-Australasia corresponds to the cristatus ssp, as well as the group Europe-Africa to the haliaetus ssp. In the Americas, we found a single lineage for both carolinensis and ridgwayi ssp, whereas in north-east Asia (Siberia and Japan), we discovered a fourth new lineage. The four lineages are well differentiated, contrasting with the low genetic variability observed within each clade. Historical demographic reconstructions suggested that three of the four lineages experienced stable trends or slight demographic increases. Molecular dating estimates the initial split between lineages at about 1.16 Ma ago, in the Early Pleistocene. Our biogeographical inference suggests a pattern of colonization from the American continent towards the Old World. Populations of the Palearctic would represent the last outcomes of this colonization. At a global scale the Osprey complex may be composed of four different Evolutionary Significant Units, which should be treated as specific management units. Our study brought essential genetic clarifications, which have implications for conservation strategies in identifying distinct lineages across which birds should not be artificially moved through exchange/reintroduction schemes.",
"title": ""
},
{
"docid": "d0f9bf7511bcaced02838aa1c2d8785b",
"text": "A folksonomy consists of three basic entities, namely users, tags and resources. This kind of social tagging system is a good way to index information, facilitate searches and navigate resources. The main objective of this paper is to present a novel method to improve the quality of tag recommendation. According to the statistical analysis, we find that the total number of tags used by a user changes over time in a social tagging system. Thus, this paper introduces the concept of user tagging status, namely the growing status, the mature status and the dormant status. Then, the determining user tagging status algorithm is presented considering a user’s current tagging status to be one of the three tagging status at one point. Finally, three corresponding strategies are developed to compute the tag probability distribution based on the statistical language model in order to recommend tags most likely to be used by users. Experimental results show that the proposed method is better than the compared methods at the accuracy of tag recommendation.",
"title": ""
},
{
"docid": "fbe0c6e8cbaf6c419990c1a7093fe2a9",
"text": "Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image mining tool, which has the potential to discover new biomarkers in images.",
"title": ""
},
{
"docid": "c5fa73d74225b29230e33ec2e8bb3a63",
"text": "This paper presents Discriminative Locality Alignment Network (DLANet), a novel manifold-learningbased discriminative learnable feature, for wild scene classification. Based on a convolutional structure, DLANet learns the filters of multiple layers by applying DLA and exploits the block-wise histograms of the binary codes of feature maps to generate the local descriptors. A DLA layer maximizes the margin between the inter-class patches and minimizes the distance of the intra-class patches in the local region. In particular, we construct a two-layer DLANet by stacking two DLA layers and a feature layer. It is followed by a popular framework of scene classification, which combines Locality-constrained Linear Coding–Spatial Pyramid Matching (LLC–SPM) and linear Support Vector Machine (SVM). We evaluate DLANet on NYU Depth V1, Scene-15 and MIT Indoor-67. Experiments show that DLANet performs well on depth image. It outperforms the carefully tuned features, including SIFT and is also competitive to the other reported methods. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2472a20493c3319cdc87057cc3d70278",
"text": "Traffic flow prediction is an essential function of traffic information systems. Conventional approaches, using artificial neural networks with narrow network architecture and poor training samples for supervised learning, have been only partially successful. In this paper, a deep-learning neural-network based on TensorFlow™ is suggested for the prediction traffic flow conditions, using real-time traffic data. Until now, no research has applied the TensorFlow™ deep learning neural network model to the estimation of traffic conditions. The suggested supervised model is trained by a deep learning algorithm, which uses real traffic data aggregated every five minutes. Results demonstrate that the model's accuracy rate is around 99%.",
"title": ""
},
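The passage above does not spell out its network architecture, so the following is only a minimal illustration of the kind of supervised TensorFlow model it describes: a small fully connected regressor mapping the last few five-minute flow aggregates to the next interval's flow. The layer sizes and the synthetic data are placeholders, not details from the paper.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: each row holds the last 12 five-minute flow values,
# and the label approximates the flow in the next interval.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 12)).astype("float32")
y = X[:, -3:].mean(axis=1, keepdims=True)  # toy target correlated with recent flow

# Small fully connected regression network trained with mean squared error.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted flow for the next interval
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print(model.predict(X[:1]))
```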
{
"docid": "e82df2786524c8a427c8aecfc5ab817a",
"text": "This paper presents 2×2 patch array antenna for 2.45 GHz industrial, scientific and medical (ISM) band application. In this design, four array radiating elements interconnected with a transmission line and excited by 50Ω subminiature (SMA). The proposed antenna structure is combined with a reflector in order to investigate the effect of air gap between radiating element and reflector in terms of reflection coefficient (S11) bandwidth and realized gain. The analysis on the effect of air gap has significantly achieved maximum reflection coefficient and realized gain of -16 dB and 19.29 dBi respectively at 2.45 GHz.",
"title": ""
},
{
"docid": "70e96cc632b25adab5afd4941696f456",
"text": "Requirements elicitation techniques are methods used by analysts to determine the needs of customers and users, so that systems can be built with a high probability of satisfying those needs. Analysts with extensive experience seem to be more successful than less experienced analysts in uncovering the user needs. Less experienced analysts often select a technique based on one of two reasons: (a) it is the only one they know, or (b) they think that a technique that worked well last time must surely be appropriate this time. This paper presents the results of in-depth interviews with some of the world's most experienced analysts. These results demonstrate how they select elicitation techniques based on a variety of situational assessments.",
"title": ""
},
{
"docid": "d0e45de6baf9665123a43a21d25c18c2",
"text": "This paper studies the problem of computing optimal journeys in dynamic public transit networks. We introduce a novel algorithmic framework, called Connection Scan Algorithm (CSA), to compute journeys. It organizes data as a single array of connections, which it scans once per query. Despite its simplicity, our algorithm is very versatile. We use it to solve earliest arrival and multi-criteria profile queries. Moreover, we extend it to handle the minimum expected arrival time (MEAT) problem, which incorporates stochastic delays on the vehicles and asks for a set of (alternative) journeys that in its entirety minimizes the user’s expected arrival time at the destination. Our experiments on the dense metropolitan network of London show that CSA computes MEAT queries, our most complex scenario, in 272ms on average.",
"title": ""
},
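A condensed sketch of the earliest-arrival variant of the Connection Scan Algorithm described above is given below: connections sit in a single array sorted by departure time and are scanned once per query. Footpaths, minimum change times, and the profile and MEAT extensions from the passage are omitted, and the tiny timetable is hypothetical.

```python
def csa_earliest_arrival(connections, source, target, departure_time):
    """connections: list of (dep_stop, arr_stop, dep_time, arr_time) tuples,
    pre-sorted by dep_time. Returns the earliest arrival time at target, or None."""
    INF = float("inf")
    earliest = {source: departure_time}
    for dep_stop, arr_stop, dep_time, arr_time in connections:
        # A connection is usable if we can be at its departure stop in time,
        # and useful if it improves the arrival time at its arrival stop.
        if earliest.get(dep_stop, INF) <= dep_time and arr_time < earliest.get(arr_stop, INF):
            earliest[arr_stop] = arr_time
        # Connections are sorted by departure, so once they depart no earlier
        # than the best known arrival at the target, none can improve it.
        if dep_time >= earliest.get(target, INF):
            break
    return earliest.get(target)


# Hypothetical timetable (times in minutes after midnight).
timetable = sorted([
    ("A", "B", 480, 490),
    ("B", "C", 495, 510),
    ("A", "C", 485, 520),
], key=lambda c: c[2])
print(csa_earliest_arrival(timetable, "A", "C", 480))  # 510
```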
{
"docid": "ccecd2617d9db04e1fe2c275643e6662",
"text": "Multi-step temporal-difference (TD) learning, where the update targets contain information from multiple time steps ahead, is one of the most popular forms of TD learning for linear function approximation. The reason is that multi-step methods often yield substantially better performance than their single-step counter-parts, due to a lower bias of the update targets. For non-linear function approximation, however, single-step methods appear to be the norm. Part of the reason could be that on many domains the popular multi-step methods TD(λ) and Sarsa(λ) do not perform well when combined with non-linear function approximation. In particular, they are very susceptible to divergence of value estimates. In this paper, we identify the reason behind this. Furthermore, based on our analysis, we propose a new multi-step TD method for non-linear function approximation that addresses this issue. We confirm the effectiveness of our method using two benchmark tasks with neural networks as function approximation.",
"title": ""
},
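For readers unfamiliar with the update targets the passage refers to, the helper below assembles a standard n-step TD target from a recorded trajectory. This is the textbook construction, not the new multi-step method the paper proposes, and it assumes a continuing (non-terminating) trajectory segment for simplicity.

```python
def n_step_td_target(rewards, values, t, n, gamma):
    """n-step TD target for time t on a continuing trajectory:
    G = R_{t+1} + gamma*R_{t+2} + ... + gamma^{n-1}*R_{t+n} + gamma^n * V(S_{t+n}).

    rewards[k] holds R_{k+1} (the reward following state S_k); values[k] holds
    the current estimate V(S_k). Assumes t + n is still inside the trajectory.
    """
    assert t + n <= len(rewards) and t + n < len(values)
    target = sum(gamma ** k * rewards[t + k] for k in range(n))
    return target + gamma ** n * values[t + n]


# Toy check: constant reward 1 and V = 0 gives the finite geometric sum.
rewards = [1.0] * 10
values = [0.0] * 11
print(n_step_td_target(rewards, values, t=0, n=3, gamma=0.9))  # 1 + 0.9 + 0.81 = 2.71
```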
{
"docid": "60971d26877ef62b816526f13bd76c24",
"text": "Breast cancer is one of the leading causes of cancer death among women worldwide. In clinical routine, automatic breast ultrasound (BUS) image segmentation is very challenging and essential for cancer diagnosis and treatment planning. Many BUS segmentation approaches have been studied in the last two decades, and have been proved to be effective on private datasets. Currently, the advancement of BUS image segmentation seems to meet its bottleneck. The improvement of the performance is increasingly challenging, and only few new approaches were published in the last several years. It is the time to look at the field by reviewing previous approaches comprehensively and to investigate the future directions. In this paper, we study the basic ideas, theories, pros and cons of the approaches, group them into categories, and extensively review each category in depth by discussing the principles, application issues, and advantages/disadvantages. Keyword: breast ultrasound (BUS) images; breast cancer; segmentation; benchmark; early detection; computer-aided diagnosis (CAD)",
"title": ""
},
{
"docid": "0d95c132ff0dcdb146ed433987c426cf",
"text": "A smart connected car in conjunction with the Internet of Things (IoT) is an emerging topic. The fundamental concept of the smart connected car is connectivity, and such connectivity can be provided by three aspects, such as Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), and Vehicle-to-Everything (V2X). To meet the aspects of V2V and V2I connectivity, we developed modules in accordance with international standards with respect to On-Board Diagnostics II (OBDII) and 4G Long Term Evolution (4G-LTE) to obtain and transmit vehicle information. We also developed software to visually check information provided by our modules. Information related to a user’s driving, which is transmitted to a cloud-based Distributed File System (DFS), was then analyzed for the purpose of big data analysis to provide information on driving habits to users. Yet, since this work is an ongoing research project, we focus on proposing an idea of system architecture and design in terms of big data analysis. Therefore, our contributions through this work are as follows: (1) Develop modules based on Controller Area Network (CAN) bus, OBDII, and 4G-LTE; (2) Develop software to check vehicle information on a PC; (3) Implement a database related to vehicle diagnostic codes; (4) Propose system architecture and design for big data analysis.",
"title": ""
},
{
"docid": "0a1bc682d4c2d2c57605702d44160a20",
"text": "This paper introduces an open architecture humanoid robotics platform (OpenHRP for short) on which various building blocks of humanoid robotics can be investigated. OpenHRP is a virtual humanoid robot platform with a compatible humanoid robot, and consists of a simulator of humanoid robots and motion control library for them which can also be applied to a compatible humanoid robot as it is. OpenHRP also has a view simulator of humanoid robots on which humanoid robot vision can be studied. The consistency between the simulator and the robot are enhanced by introducing a new algorithm to simulate repulsive force and torque between contacting objects. OpenHRP is expected to initiate the exploration of humanoid robotics on an open architecture software and hardware, thanks to the unification of the controllers and the examined consistency between the simulator and a real humanoid robot.",
"title": ""
},
{
"docid": "7fd7af08666f3cfad0c2dc975427c7f2",
"text": "Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks which examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is faced with the choice of only one of two desirable properties: the functionality of middleboxes and the privacy of encryption. We propose BlindBox, the first system that simultaneously provides {\\em both} of these properties. The approach of BlindBox is to perform the deep-packet inspection {\\em directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes.\n We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.",
"title": ""
}
] |
scidocsrr
|
ec089bc17fbd305406537f1dbe0ec25d
|
Usability of Forensics Tools: A User Study
|
[
{
"docid": "5420818f35031e07207a9bc9168be3c2",
"text": "DFRWS is dedicated to the sharing of knowledge and ideas about digital forensics research. Ever since it organized the first open workshop devoted to digital forensics in 2001, DFRWS continues to bring academics and practitioners together in an informal environment. As a non-profit, volunteer organization, DFRWS sponsors technical working groups, annual conferences and challenges to help drive the direction of research and development.",
"title": ""
},
{
"docid": "faf0e45405b3c31135a20d7bff6e7a5a",
"text": "Law enforcement is in a perpetual race with criminals in the application of digital technologies, and requires the development of tools to systematically search digital devices for pertinent evidence. Another part of this race, and perhaps more crucial, is the development of a methodology in digital forensics that encompasses the forensic analysis of all genres of digital crime scene investigations. This paper explores the development of the digital forensics process, compares and contrasts four particular forensic methodologies, and finally proposes an abstract model of the digital forensic procedure. This model attempts to address some of the shortcomings of previous methodologies, and provides the following advantages: a consistent and standardized framework for digital forensic tool development; a mechanism for applying the framework to future digital technologies; a generalized methodology that judicial members can use to relate technology to non-technical observers; and, the potential for incorporating nondigital electronic technologies within the abstractionmodel of the digital forensic procedure. This model attempts to address some of the shortcomings of previous methodologies, and provides the following advantages: a consistent and standardized framework for digital forensic tool development; a mechanism for applying the framework to future digital technologies; a generalized methodology that judicial members can use to relate technology to non-technical observers; and, the potential for incorporating nondigital electronic technologies within the abstraction Introduction The digital age can be characterized as the application of computer technology as a tool that enhances traditional methodologies. The incorporation of computer systems as a tool into private, commercial, educational, governmental, and other facets of modern life has improved",
"title": ""
}
] |
[
{
"docid": "2466530b54a99a53ec9ee1d0aa413858",
"text": "Deep neural networks are typically optimized with stochastic gradient descent (SGD). In this work, we propose a novel second-order stochastic optimization algorithm. The algorithm is based on analytic results showing that a non-zero mean of features is harmful for the optimization. We prove convergence of our algorithm in a convex setting. In our experiments we show that our proposed algorithm converges faster than SGD. Further, in contrast to earlier work, our algorithm allows for training models with a factorized structure from scratch. We found this structure to be very useful not only because it accelerates training and decoding, but also because it is a very effective means against overfitting. Combining our proposed optimization algorithm with this model structure, model size can be reduced by a factor of eight and still improvements in recognition error rate are obtained. Additional gains are obtained by improving the Newbob learning rate strategy.",
"title": ""
},
{
"docid": "ced8cc9329777cc01cdb3e91772a29c2",
"text": "Manually annotating clinical document corpora to generate reference standards for Natural Language Processing (NLP) systems or Machine Learning (ML) is a timeconsuming and labor-intensive endeavor. Although a variety of open source annotation tools currently exist, there is a clear opportunity to develop new tools and assess functionalities that introduce efficiencies into the process of generating reference standards. These features include: management of document corpora and batch assignment, integration of machine-assisted verification functions, semi-automated curation of annotated information, and support of machine-assisted pre-annotation. The goals of reducing annotator workload and improving the quality of reference standards are important considerations for development of new tools. An infrastructure is also needed that will support largescale but secure annotation of sensitive clinical data as well as crowdsourcing which has proven successful for a variety of annotation tasks. We introduce the Extensible Human Oracle Suite of Tools (eHOST) http://code.google.com/p/ehost that provides such functionalities that when coupled with server integration offer an end-to-end solution to carry out small or large scale as well as crowd sourced annotation projects.",
"title": ""
},
{
"docid": "d9f812dc626fcfb360401af693042408",
"text": "OBJECTIVES\nTo examine the possible association of skull deformity and the development of the cranial sutures in fetuses with Apert syndrome.\n\n\nMETHODS\nThree-dimensional (3D) ultrasound was used to examine the metopic and coronal sutures in seven fetuses with Apert syndrome at 22-27 weeks of gestation. The gap between the frontal bones in the transverse plane of the head at the level of the cavum septi pellucidi was measured and compared to findings in 120 anatomically normal fetuses undergoing routine ultrasound examination at 16-32 weeks.\n\n\nRESULTS\nIn the normal group, the gap between the frontal bones in the metopic suture at the level of the cavum septi pellucidi, decreased significantly with gestation from a mean of 2.2 mm (5th and 95th centiles: 1.5 mm and 2.9 mm) at 16 weeks to 0.9 mm (5th and 95th centiles: 0.3 mm and 1.6 mm) at 32 weeks. In the seven cases with Apert syndrome, two-dimensional ultrasound examination demonstrated the characteristic features of frontal bossing, depressed nasal bridge and bilateral syndactyly. On 3D examination there was complete closure of the coronal suture and a wide gap in the metopic suture (15-23 mm).\n\n\nCONCLUSION\nIn normal fetuses, cranial bones are believed to grow in response to the centrifugal pressure from the expanding brain and proximity of the dura to the suture is critical in maintaining its patency. In Apert syndrome, the frontal bossing may be a mere consequence of a genetically predetermined premature closure of the coronal suture. Alternatively, there is a genetically predetermined deformation of the brain, which in turn, through differential stretch of the dura in the temporal and frontal regions, causes premature closure of the coronal suture and impaired closure of the metopic suture.",
"title": ""
},
{
"docid": "a3fafe73615c434375cd3f35323c939e",
"text": "In this paper, Magnetic Resonance Images,T2 weighte d modality , have been pre-processed by bilateral filter to reduce th e noise and maintaining edges among the different tissues. Four different t echniques with morphological operations have been applied to extra c the tumor region. These were: Gray level stretching and Sobel edge de tection, K-Means Clustering technique based on location and intensit y, Fuzzy C-Means Clustering, and An Adapted K-Means clustering techn ique and Fuzzy CMeans technique. The area of the extracted tumor re gions has been calculated. The present work showed that the four i mplemented techniques can successfully detect and extract the brain tumor and thereby help doctors in identifying tumor's size and region.",
"title": ""
},
{
"docid": "182bb07fb7dbbaf17b6c7a084f1c4fb2",
"text": "Network Functions Virtualization (NFV) is an upcoming paradigm where network functionality is virtualized and split up into multiple building blocks that can be chained together to provide the required functionality. This approach increases network flexibility and scalability as these building blocks can be allocated and reallocated at runtime depending on demand. The success of this approach depends on the existence and performance of algorithms that determine where, and how these building blocks are instantiated. In this paper, we present and evaluate a formal model for resource allocation of virtualized network functions within NFV environments, a problem we refer to as Virtual Network Function Placement (VNF-P). We focus on a hybrid scenario where part of the services may be provided by dedicated physical hardware, and where part of the services are provided using virtualized service instances. We evaluate the VNF-P model using a small service provider scenario and two types of service chains, and evaluate its execution speed. We find that the algorithms finish in 16 seconds or less for a small service provider scenario, making it feasible to react quickly to changing demand.",
"title": ""
},
{
"docid": "77d616dc746e74db02215dcf2fdb6141",
"text": "It is almost a quarter of a century since the launch in 1968 of NASA's Pioneer 9 spacecraft on the first mission into deep-space that relied on coding to enhance communications on the critical downlink channel. [The channel code used was a binary convolutional code that was decoded with sequential decoding--we will have much to say about this code in the sequel.] The success of this channel coding system had repercussions that extended far beyond NASA's space program. It is no exaggeration to say that the Pioneer 9 mission provided communications engineers with the first incontrovertible demonstration of the practical utility of channel coding techniques and thereby paved the way for the successful application of coding to many other channels.",
"title": ""
},
{
"docid": "43a84d7fc14e52e93ab2df5db6660a2b",
"text": "The advent of regenerative medicine has brought us the opportunity to regenerate, modify and restore human organs function. Stem cells, a key resource in regenerative medicine, are defined as clonogenic, self-renewing, progenitor cells that can generate into one or more specialized cell types. Stem cells have been classified into three main groups: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs) and adult/postnatal stem cells (ASCs). The present review focused the attention on ASCs, which have been identified in many perioral tissues such as dental pulp, periodontal ligament, follicle, gingival, alveolar bone and papilla. Human dental pulp stem cells (hDPSCs) are ectodermal-derived stem cells, originating from migrating neural crest cells and possess mesenchymal stem cell properties. During last decade, hDPSCs have received extensive attention in the field of tissue engineering and regenerative medicine due to their accessibility and ability to differentiate in several cell phenotypes. In this review, we have carefully described the potential of hDPSCs to differentiate into odontoblasts, osteocytes/osteoblasts, adipocytes, chondrocytes and neural cells.",
"title": ""
},
{
"docid": "1a2d9da5b42a7ae5a8dcf5fef48cfe26",
"text": "The space of bio-inspired hardware can be partitioned along three axes: phylogeny, ontogeny, and epigenesis. We refer to this as the POE model. Our Embryonics (for embryonic electronics) project is situated along the ontogenetic axis of the POE model and is inspired by the processes of molecular biology and by the embryonic development of living beings. We will describe the architecture of multicellular automata that are endowed with self-replication and self-repair properties. In the conclusion, we will present our major on-going project: a giant self-repairing electronic watch, the BioWatch, built on a new reconfigurable tissue, the electronic wall or e–wall.",
"title": ""
},
{
"docid": "ce8f565a80deadb7b35adf93d2afbd4c",
"text": "Graph ranking plays an important role in many applications, such as page ranking on web graphs and entity ranking on social networks. In applications, besides graph structure, rich information on nodes and edges and explicit or implicit human supervision are often available. In contrast, conventional algorithms (e.g., PageRank and HITS) compute ranking scores by only resorting to graph structure information. A natural question arises here, that is, how to effectively and efficiently leverage all the information to more accurately calculate graph ranking scores than the conventional algorithms, assuming that the graph is also very large. Previous work only partially tackled the problem, and the proposed solutions are also not satisfying. This paper addresses the problem and proposes a general framework as well as an efficient algorithm for graph ranking. Specifically, we define a semi-supervised learning framework for ranking of nodes on a very large graph and derive within our proposed framework an efficient algorithm called Semi-Supervised PageRank. In the algorithm, the objective function is defined based upon a Markov random walk on the graph. The transition probability and the reset probability of the Markov model are defined as parametric models based on features on nodes and edges. By minimizing the objective function, subject to a number of constraints derived from supervision information, we simultaneously learn the optimal parameters of the model and the optimal ranking scores of the nodes. Finally, we show that it is possible to make the algorithm efficient to handle a billion-node graph by taking advantage of the sparsity of the graph and implement it in the MapReduce logic. Experiments on real data from a commercial search engine show that the proposed algorithm can outperform previous algorithms on several tasks.",
"title": ""
},
{
"docid": "e05d92ac29261f1560e8d9775d39f6b4",
"text": "The Architecture Engineering Construction Facilities Management (AEC/FM) industry is currently plagued with inefficiencies in access and retrieval of relevant information across the various stakeholders and actors, because the vast amount of project related information is not only diverse but the information is also highly fragmented and distributed across different sources and actors. More often than not, even if a good part of the project and task related information may be stored in the distributed information systems, the knowledge of where what is stored, and how that information can be accessed remains a tacit knowledge stored in the minds of the people involved in the project. Consequently, navigating through this distributed and fragmented information in the current practice is heavily reliant on the knowledge and experience of the individual actors in the network, who are able to guide each other to relevant information source, and in the process answering questions such as: who knows what? What information is where? Etc. Thus, to be able to access and effectively use the distributed knowledge and information held by different actors and information systems within a project, each actor needs to know the information access path, which in turn is mediated by other actors and their knowledge of the distribution of the information. In this type of distributed-knowledge network and “actor-focused thinking” when the key actor or actors leave the project, the access path to the relevant knowledge for the associated queries may also disappear, breaking the chain of queries. Therefore, we adopt an “information-focused thinking” where all project actors are considered and represented as computational and information storage entities in a knowledge network, building on the concepts and theories of Transactive Memory Systems (TMS), which primarily deal with effective management and usage of distributed knowledge sources. We further extend the explicit representation of the information entities to visual objects such that the actors can effectively understand, construct and recognize contextual relationships among the information entities through visual management and communication. The merits and challenges of such an approach towards visual transactive memory system for project information management are discussed using a prototype information management platform, VisuaLynk, developed around graph and linked-data concepts, and currently configured for the use phase of a project.",
"title": ""
},
{
"docid": "e49aa0d0f060247348f8b3ea0a28d3c6",
"text": "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.",
"title": ""
},
{
"docid": "420719690b6249322927153daedba87b",
"text": "• In-domain: 91% F1 on the dev set, 5 we reduced the learning rate from 10−4 to 10−5. We then stopped the training when F1 was not improved after 20 epochs. We did the same for ment-norm except that the learning rate was changed at 91.5% F1. Note that all the hyper-parameters except K and the turning point for early stopping were set to the values used by Ganea and Hofmann (2017). Systematic tuning is expensive though may have further ncreased the result of our models.",
"title": ""
},
{
"docid": "68b38404198f2360c9fa9dccf3d49f8e",
"text": "A space-filling curve is a linear traversal of a discrete finite multidimensional space. In order for this traversal to be useful in many applications, the curve should preserve \"locality\". We quantify \"locality\" and bound the locality of multidimensional space-filling curves. Classic Hilbert space-filling curves come close to achieving optimal locality.",
"title": ""
},
{
"docid": "bdfb3a761d7d9dbb96fa4f07bc2c1f89",
"text": "We present an algorithm for recognition and reconstruction of scanned 3D indoor scenes. 3D indoor reconstruction is particularly challenging due to object interferences, occlusions and overlapping which yield incomplete yet very complex scene arrangements. Since it is hard to assemble scanned segments into complete models, traditional methods for object recognition and reconstruction would be inefficient. We present a search-classify approach which interleaves segmentation and classification in an iterative manner. Using a robust classifier we traverse the scene and gradually propagate classification information. We reinforce classification by a template fitting step which yields a scene reconstruction. We deform-to-fit templates to classified objects to resolve classification ambiguities. The resulting reconstruction is an approximation which captures the general scene arrangement. Our results demonstrate successful classification and reconstruction of cluttered indoor scenes, captured in just few minutes.",
"title": ""
},
{
"docid": "da4180a563d4395f642203abc19281b3",
"text": "PKCS#11 defines an API for cryptographic devices that has been widely adopted in industry. However, it has been shown to be vulnerable to a variety of attacks that could, for example, compromise the sensitive keys stored on the device. In this paper, we set out a formal model of the operation of the API, which differs from previous security API models notably in that it accounts for non-monotonic mutable global state. We give decidability results for our formalism, and describe an implementation of the resulting decision procedure using the model checker NuSMV. We report some new attacks and prove the safety of some configurations of the API in our model. We also analyse proprietary extensions proposed by nCipher (Thales) and Eracom (Safenet), designed to address the shortcomings of PKCS#11.",
"title": ""
},
{
"docid": "dc546e170054e505842a510ca04dc137",
"text": "Machine learning (ML) and pattern matching (PM) are powerful computer science techniques which can derive knowledge from big data, and provide prediction and matching. Since nanometer VLSI design and manufacturing have extremely high complexity and gigantic data, there has been a surge recently in applying and adapting machine learning and pattern matching techniques in VLSI physical design (including physical verification), e.g., lithography hotspot detection and data/pattern-driven physical design, as ML and PM can raise the level of abstraction from detailed physics-based simulations and provide reasonably good quality-of-result. In this paper, we will discuss key techniques and recent results of machine learning and pattern matching, with their applications in physical design.",
"title": ""
},
{
"docid": "5a7ab5ea251e45c3a1ced2ff044c228a",
"text": "In recent years, there has been growing interest in learning to rank. The introduction of feature selection into different learning problems has been proven effective. These facts motivate us to investigate the problem of feature selection for learning to rank. We propose a joint convex optimization formulation which minimizes ranking errors while simultaneously conducting feature selection. This optimization formulation provides a flexible framework in which we can easily incorporate various importance measures and similarity measures of the features. To solve this optimization problem, we use the Nesterov's approach to derive an accelerated gradient algorithm with a fast convergence rate O(1/T2). We further develop a generalization bound for the proposed optimization problem using the Rademacher complexities. Extensive experimental evaluations are conducted on the public LETOR benchmark datasets. The results demonstrate that the proposed method shows: 1) significant ranking performance gain compared to several feature selection baselines for ranking, and 2) very competitive performance compared to several state-of-the-art learning-to-rank algorithms.",
"title": ""
},
{
"docid": "3686b88ab4b0fdfe690bb1b8869dce5c",
"text": "In recent years, several special multiple-parameter discrete fractional transforms (MPDFRTs) have been proposed, and their advantages have been demonstrated in the fields of communication systems and information security. However, the general theoretical framework of MPDFRTs has not yet been established. In this paper, we propose two separate theoretical frameworks called the type I and II MPDFRT that can include existing multiple-parameter transforms as special cases. The properties of the type I and II MPDFRT have been analyzed in detail and their high-dimensional operators have been defined. Under the theoretical frameworks, we can construct new types of transforms that may be useful in signal processing and information security. Finally, we perform two applications about image encryption and image feature extraction in the type I and II MPDFRT domain. The simulation results demonstrate that the typical transforms constructed under the proposed theoretical frameworks yield promising results in these applications.",
"title": ""
},
{
"docid": "ff4088094e114f9a5682bd12ee046645",
"text": "In this paper, we present a multi-sensor system for automatic landing of fixed wing UAVs. The system is composed of a high precision aircraft controller and a vision module which is currently used for detection and tracking of runways. Designing the system we paid special attention to its robustness. The runway detection algorithm uses a maximum amount of information in images and works with high level geometrical models. It allows detecting a runway under different weather conditions even if only a small part is visible in the image. In order to increase landing reliability under sub-optimal wind conditions, an additional loop was introduced into the altitude controller. All control and image processing is performed onboard. The system has been successfully tested in flight experiments with two different fixed wing platforms at various weather conditions, in summer, fall and winter.",
"title": ""
},
{
"docid": "4ef861b705c207c95d93687571caea89",
"text": "Mounting of the acute inflammatory response is crucial for host defense and pivotal to the development of chronic inflammation, fibrosis, or abscess formation versus the protective response and the need of the host tissues to return to homeostasis. Within self-limited acute inflammatory exudates, novel families of lipid mediators are identified, named resolvins (Rv), protectins, and maresins, which actively stimulate cardinal signs of resolution, namely, cessation of leukocytic infiltration, counterregulation of proinflammatory mediators, and the uptake of apoptotic neutrophils and cellular debris. The biosynthesis of these resolution-phase mediators in sensu stricto is initiated during lipid-mediator class switching, in which the classic initiators of acute inflammation, prostaglandins and leukotrienes (LTs), switch to produce specialized proresolving mediators (SPMs). In this work, we review recent evidence on the structure and functional roles of these novel lipid mediators of resolution. Together, these show that leukocyte trafficking and temporal spatial signals govern the resolution of self-limited inflammation and stimulate homeostasis.",
"title": ""
}
] |
scidocsrr
|
229de5dcb9698fb7648b7286752147a1
|
A Feature Learning and Object Recognition Framework for Underwater Fish Images
|
[
{
"docid": "112fc675cce705b3bab9cb66ca1c08da",
"text": "Our Approach, 0.66 GIST 29.7 Spa>al Pyramid HOG 29.8 Spa>al Pyramid SIFT 34.4 ROI-‐GIST 26.5 Scene DPM 30.4 MM-‐Scene 28.0 Object Bank 37.6 Ours 38.1 Ours+GIST 44.0 Ours+SP 46.4 Ours+GIST+SP 47.5 Ours+DPM 42.4 Ours+GIST+DPM 46.9 Ours+SP+DPM 46.4 GIST+SP+DPM 43.1 Ours+GIST+SP+DPM 49.4 Two key requirements • representa,ve: Need to occur frequently enough • discrimina,ve: Need to be different enough from the rest of the “visual world” Goal: a mid-‐level visual representa>on Experimental Analysis Bonus: works even be`er if weakly supervised!",
"title": ""
},
{
"docid": "a076df910e5d61d07dacad420dadc242",
"text": "Recognizing objects in fine-grained domains can be extremely challenging due to the subtle differences between subcategories. Discriminative markings are often highly localized, leading traditional object recognition approaches to struggle with the large pose variation often present in these domains. Pose-normalization seeks to align training exemplars, either piecewise by part or globally for the whole object, effectively factoring out differences in pose and in viewing angle. Prior approaches relied on computationally-expensive filter ensembles for part localization and required extensive supervision. This paper proposes two pose-normalized descriptors based on computationally-efficient deformable part models. The first leverages the semantics inherent in strongly-supervised DPM parts. The second exploits weak semantic annotations to learn cross-component correspondences, computing pose-normalized descriptors from the latent parts of a weakly-supervised DPM. These representations enable pooling across pose and viewpoint, in turn facilitating tasks such as fine-grained recognition and attribute prediction. Experiments conducted on the Caltech-UCSD Birds 200 dataset and Berkeley Human Attribute dataset demonstrate significant improvements of our approach over state-of-art algorithms.",
"title": ""
}
] |
[
{
"docid": "097e2c17a34db96ba37f68e28058ceba",
"text": "ARTICLE The healing properties of compassion have been written about for centuries. The Dalai Lama often stresses that if you want others to be happy – focus on compassion; if you want to be happy yourself – focus on compassion (Dalai Lama 1995, 2001). Although all clinicians agree that compassion is central to the doctor–patient and therapist–client relationship, recently the components of com passion have been looked at through the lens of Western psychological science and research 2003a,b). Compassion can be thought of as a skill that one can train in, with in creasing evidence that focusing on and practising com passion can influence neurophysiological and immune systems (Davidson 2003; Lutz 2008). Compassionfocused therapy refers to the under pinning theory and process of applying a compassion model to psy chotherapy. Compassionate mind training refers to specific activities designed to develop compassion ate attributes and skills, particularly those that influence affect regula tion. Compassionfocused therapy adopts the philosophy that our under standing of psychological and neurophysiological processes is developing at such a rapid pace that we are now moving beyond 'schools of psychotherapy' towards a more integrated, biopsycho social science of psycho therapy (Gilbert 2009). Compassionfocused therapy and compassionate mind training arose from a number of observations. First, people with high levels of shame and self criticism can have enormous difficulty in being kind to themselves, feeling selfwarmth or being selfcompassionate. Second, it has long been known that problems of shame and selfcriticism are often rooted in histories of abuse, bullying, high expressed emo tion in the family, neglect and/or lack of affection Individuals subjected to early experiences of this type can become highly sensitive to threats of rejection or criticism from the outside world and can quickly become selfattacking: they experience both their external and internal worlds as easily turning hostile. Third, it has been recognised that working with shame and selfcriticism requires a thera peutic focus on memories of such early experiences And fourth, there are clients who engage with the cognitive and behavioural tasks of a therapy, and become skilled at generating (say) alternatives for their negative thoughts and beliefs, but who still do poorly in therapy (Rector 2000). They are likely to say, 'I understand the logic of my alterna tive thinking but it doesn't really help me feel much better' or 'I know I'm not to blame for the abuse but I still feel that I …",
"title": ""
},
{
"docid": "94ea3cbf3df14d2d8e3583cb4714c13f",
"text": "The emergence of computers as an essential tool in scientific research has shaken the very foundations of differential modeling. Indeed, the deeply-rooted abstraction of smoothness, or differentiability, seems to inherently clash with a computer's ability of storing only finite sets of numbers. While there has been a series of computational techniques that proposed discretizations of differential equations, the geometric structures they are supposed to simulate are often lost in the process.",
"title": ""
},
{
"docid": "8b6758fdd357384c2032afd405bf2c6a",
"text": "A novel 1200 V Insulated Gate Bipolar Transistor (IGBT) for high-speed switching that combines Shorted Dummy-cell (SD) to control carrier extraction at the emitter side and P/P- collector to reduce hole injection from the backside is proposed. The SD-IGBT with P/P- collector has achieved 37 % reduction of turn-off power dissipation compared with a conventional Floating Dummy-cell (FD) IGBT. The SD-IGBT with P/P- collector also has high turn-off current capability because it extracts carriers uniformly from the dummy-cell. These results show the proposed device has a ideal carrier profile for high-speed switching.",
"title": ""
},
{
"docid": "52ebff6e9509b27185f9f12bc65d86f8",
"text": "We address the problem of simplifying Portuguese texts at the sentence level by treating it as a \"translation task\". We use the Statistical Machine Translation (SMT) framework to learn how to translate from complex to simplified sentences. Given a parallel corpus of original and simplified texts, aligned at the sentence level, we train a standard SMT system and evaluate the \"translations\" produced using both standard SMT metrics like BLEU and manual inspection. Results are promising according to both evaluations, showing that while the model is usually overcautious in producing simplifications, the overall quality of the sentences is not degraded and certain types of simplification operations, mainly lexical, are appropriately captured.",
"title": ""
},
{
"docid": "dcdd6419d4cdbd53f07cf8a9eba48e8c",
"text": "The use of RFID devices for real-time production monitoring in modern factories is impeded by the inherent unreliability of RFID devices. In this paper we present a consistency stack that conceptually divides the different consistency issues in production monitoring into separate layers. In addition to this we have built a consistency management framework to ensure consistent real-time production monitoring, using unreliable RFID devices. In detail, we deal with the problem of detecting object sequences by a set of unreliable RFID readers that are installed along production lines. We propose a probabilistic sequence detection algorithm that assigns probabilities to objects detected by RFID devices and provides probabilistic guarantees regarding the real-time sequences of objects on the production lines.",
"title": ""
},
{
"docid": "7d38b4b2d07c24fdfb2306116017cd5e",
"text": "Science Technology Engineering, Art, Mathematics (STEAM) is an integration of art into Science Technology Engineering, Mathematics (STEM). Connecting art to science makes learning more effective and innovative. This study aims to determine the increase in mastery of the concept of high school students after the application of STEAM education in learning with the theme of Water and Us. The research method used is one group Pretestposttest design with students of class VII (n = 37) junior high school. The instrument used in the form of question of mastery of concepts in the form of multiple choices amounted to 20 questions and observation sheet of learning implementation. The results of the study show that there is an increase in conceptualization on the theme of Water and Us which is categorized as medium (<g>=0, 46) after the application of the STEAM approach. The conclusion obtained that by applying STEAM approach in learning can improve the mastery of concept",
"title": ""
},
{
"docid": "087f9c2abb99d8576645a2460298c1b5",
"text": "In a community cloud, multiple user groups dynamically share a massive number of data blocks. The authors present a new associative data sharing method that uses virtual disks in the MeePo cloud, a research storage cloud built at Tsinghua University. Innovations in the MeePo cloud design include big data metering, associative data sharing, data block prefetching, privileged access control (PAC), and privacy preservation. These features are improved or extended from competing features implemented in DropBox, CloudViews, and MySpace. The reported results support the effectiveness of the MeePo cloud.",
"title": ""
},
{
"docid": "32b49f58ef5e54c35224c3ffa434a84a",
"text": "PURPOSE\nThe aim of this study was to characterize a new generation stationary digital breast tomosynthesis system with higher tube flux and increased angular span over a first generation system.\n\n\nMETHODS\nThe linear CNT x-ray source was designed, built, and evaluated to determine its performance parameters. The second generation system was then constructed using the CNT x-ray source and a Hologic gantry. Upon construction, test objects and phantoms were used to characterize system resolution as measured by the modulation transfer function (MTF), and artifact spread function (ASF).\n\n\nRESULTS\nThe results indicated that the linear CNT x-ray source was capable of stable operation at a tube potential of 49 kVp, and measured focal spot sizes showed source-to-source consistency with a nominal focal spot size of 1.1 mm. After construction, the second generation (Gen 2) system exhibited entrance surface air kerma rates two times greater the previous s-DBT system. System in-plane resolution as measured by the MTF is 7.7 cycles/mm, compared to 6.7 cycles/mm for the Gen 1 system. As expected, an increase in the z-axis depth resolution was observed, with a decrease in the ASF from 4.30 mm to 2.35 mm moving from the Gen 1 system to the Gen 2 system as result of an increased angular span.\n\n\nCONCLUSIONS\nThe results indicate that the Gen 2 stationary digital breast tomosynthesis system, which has a larger angular span, increased entrance surface air kerma, and faster image acquisition time over the Gen 1 s-DBT system, results in higher resolution images. With the detector operating at full resolution, the Gen 2 s-DBT system can achieve an in-plane resolution of 7.7 cycles per mm, which is better than the current commercial DBT systems today, and may potentially result in better patient diagnosis.",
"title": ""
},
{
"docid": "6f1144f64fdd1bacfff35b7ac846ede4",
"text": "BACKGROUND\nIn 2004, the U.S. Preventive Services Task Force determined that evidence was insufficient to recommend behavioral interventions and counseling to prevent child abuse and neglect.\n\n\nPURPOSE\nTo review new evidence on the effectiveness of behavioral interventions and counseling in health care settings for reducing child abuse and neglect and related health outcomes, as well as adverse effects of interventions.\n\n\nDATA SOURCES\nMEDLINE and PsycINFO (January 2002 to June 2012), Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews (through the second quarter of 2012), Scopus, and reference lists.\n\n\nSTUDY SELECTION\nEnglish-language trials of the effectiveness of behavioral interventions and counseling and studies of any design about adverse effects.\n\n\nDATA EXTRACTION\nInvestigators extracted data about study populations, designs, and outcomes and rated study quality using established criteria.\n\n\nDATA SYNTHESIS\nEleven fair-quality randomized trials of interventions and no studies of adverse effects met inclusion criteria. A trial of risk assessment and interventions for abuse and neglect in pediatric clinics for families with children aged 5 years or younger indicated reduced physical assault, Child Protective Services (CPS) reports, nonadherence to medical care, and immunization delay among screened children. Ten trials of early childhood home visitation reported reduced CPS reports, emergency department visits, hospitalizations, and self-reports of abuse and improved adherence to immunizations and well-child care, although results were inconsistent.\n\n\nLIMITATION\nTrials were limited by heterogeneity, low adherence, high loss to follow-up, and lack of standardized measures.\n\n\nCONCLUSION\nRisk assessment and behavioral interventions in pediatric clinics reduced abuse and neglect outcomes for young children. Early childhood home visitation also reduced abuse and neglect, but results were inconsistent. Additional research on interventions to prevent child abuse and neglect is needed.\n\n\nPRIMARY FUNDING SOURCE\nAgency for Healthcare Research and Quality.",
"title": ""
},
{
"docid": "a91a57326a2d961e24d13b844a3556cf",
"text": "This paper describes an interactive and adaptive streaming architecture that exploits temporal concatenation of H.264/AVC video bit-streams to dynamically adapt to both user commands and network conditions. The architecture has been designed to improve the viewing experience when accessing video content through individual and potentially bandwidth constrained connections. On the one hand, the user commands typically gives the client the opportunity to select interactively a preferred version among the multiple video clips that are made available to render the scene, e.g. using different view angles, or zoomed-in and slowmotion factors. On the other hand, the adaptation to the network bandwidth ensures effective management of the client buffer, which appears to be fundamental to reduce the client-server interaction latency, while maximizing video quality and preventing buffer underflow. In addition to user interaction and network adaptation, the deployment of fully autonomous infrastructures for interactive content distribution also requires the development of automatic versioning methods. Hence, the paper also surveys a number of approaches proposed for this purpose in surveillance and sport event contexts. Both objective metrics and subjective experiments are exploited to assess our system.",
"title": ""
},
{
"docid": "b3d42332cd9572813bc08efc670d34d7",
"text": "Context: The use of Systematic Literature Review (SLR) requires expertise and poses many challenges for novice researchers. The experiences of those who have used this research methodology can benefit novice researchers in effectively dealing with these challenges. Objective: The aim of this study is to record the reported experiences of conducting Systematic Literature Reviews, for the benefit of new researchers. Such a review will greatly benefit the researchers wanting to conduct SLR for the very first time. Method: We conducted a tertiary study to gather the experiences published by researchers. Studies that have used the SLR research methodology in software engineering and have implicitly or explicitly reported their experiences are included in this review. Results: Our research has revealed 116 studies relevant to the theme. The data has been extracted by two researchers working independently and conflicts resolved after discussion with third researcher. Findings from these studies highlight Search Strategy, Online Databases, Planning and Data Extraction as the most challenging phases of SLR. Lack of standard terminology in software engineering papers, poor quality of abstracts and problems with search engines are some of the most cited challenges. Conclusion: Further research and guidelines is required to facilitate novice researchers in conducting these phases properly.",
"title": ""
},
{
"docid": "dbc61bb20a0902d9c16145e72157c7ca",
"text": "Two new improved recursive least-squares adaptive-filtering algorithms, one with a variable forgetting factor and the other with a variable convergence factor are proposed. Optimal forgetting and convergence factors are obtained by minimizing the mean square of the noise-free a posteriori error signal. The determination of the optimal forgetting and convergence factors requires information about the noise-free a priori error which is obtained by solving a known L1-L2 minimization problem. Simulation results in system-identification and channel-equalization applications are presented which demonstrate that improved steady-state misalignment, tracking capability, and readaptation can be achieved relative to those in some state-of-the-art competing algorithms.",
"title": ""
},
{
"docid": "5869ef6be3ca9a36dbf964c41e9b17b1",
"text": " The Short Messaging Service (SMS), one of the most successful cellular services, generating millions of dollars in revenue for mobile operators yearly. Current estimations indicate that billions of SMSs are sent every day. Nevertheless, text messaging is becoming a source of customer dissatisfaction due to the rapid surge of messaging abuse activities. Although spam is a well tackled problem in the email world, SMS spam experiences a yearly growth larger than 500%. In this paper we expand our previous analysis on SMS spam traffic from a tier-1 cellular operator presented in [1], aiming to highlight the main characteristics of such messaging fraud activity. Communication patterns of spammers are compared to those of legitimate cell-phone users and Machine to Machine (M2M) connected appliances. The results indicate that M2M systems exhibit communication profiles similar to spammers, which could mislead spam filters. We find the main geographical sources of messaging abuse in the US. We also find evidence of spammer mobility, voice and data traffic resembling the behavior of legitimate customers. Finally, we include new findings on the invariance of the main characteristics of spam messages and spammers over time. Also, we present results that indicate a clear device reuse strategy in SMS spam activities.",
"title": ""
},
{
"docid": "bb444221c5a8eefad3e2a9a175bfccbc",
"text": "This paper presents new experimental results of angle of arrival (AoA) measurements for localizing passive RFID tags in the UHF frequency range. The localization system is based on the principle of a phased array with electronic beam steering mechanism. This approach has been successfully applied within a UHF RFID system and it allows the precise determination of the angle and the position of small passive RFID tags. The paper explains the basic principle, the experimental setup with the phased array and shows results of the measurements.",
"title": ""
},
{
"docid": "9d1c0462c27516974a2b4e520916201e",
"text": "The current method of grading prostate cancer on histology uses the Gleason system, which describes five increasingly malignant stages of cancer according to qualitative analysis of tissue architecture. The Gleason grading system has been shown to suffer from inter- and intra-observer variability. In this paper we present a new method for automated and quantitative grading of prostate biopsy specimens. A total of 102 graph-based, morphological, and textural features are extracted from each tissue patch in order to quantify the arrangement of nuclei and glandular structures within digitized images of histological prostate tissue specimens. A support vector machine (SVM) is used to classify the digitized histology slides into one of four different tissue classes: benign epithelium, benign stroma, Gleason grade 3 adenocarcinoma, and Gleason grade 4 adenocarcinoma. The SVM classifier was able to distinguish between all four types of tissue patterns, achieving an accuracy of 92.8% when distinguishing between Gleason grade 3 and stroma, 92.4% between epithelium and stroma, and 76.9% between Gleason grades 3 and 4. Both textural and graph-based features were found to be important in discriminating between different tissue classes. This work suggests that the current Gleason grading scheme can be improved by utilizing quantitative image analysis to aid pathologists in producing an accurate and reproducible diagnosis",
"title": ""
},
{
"docid": "a423435c1dc21c33b93a262fa175f5c5",
"text": "The study investigated several teacher characteristics, with a focus on two measures of teaching experience, and their association with second grade student achievement gains in low performing, high poverty schools in a Mid-Atlantic state. Value-added models using three-level hierarchical linear modeling were used to analyze the data from 1,544 students, 154 teachers, and 53 schools. Results indicated that traditional teacher qualification characteristics such as licensing status and educational attainment were not statistically significant in producing student achievement gains. Total years of teaching experience was also not a significant predictor but a more specific measure, years of teaching experience at a particular grade level, was significantly associated with increased student reading achievement. We caution researchers and policymakers when interpreting results from studies that have used only a general measure of teacher experience as effects are possibly underestimated. Policy implications are discussed.",
"title": ""
},
{
"docid": "970fed17476873ab69b0359f6d74ab40",
"text": "The smart grid is an innovative energy network that will improve the conventional electrical grid network to be more reliable, cooperative, responsive, and economical. Within the context of the new capabilities, advanced data sensing, communication, and networking technology will play a significant role in shaping the future of the smart grid. The smart grid will require a flexible and efficient framework to ensure the collection of timely and accurate information from various locations in power grid to provide continuous and reliable operation. This article presents a tutorial on the sensor data collection, communications, and networking issues for the smart grid. First, the applications of data sensing in the smart grid are reviewed. Then, the requirements for data sensing and collection, the corresponding sensors and actuators, and the communication and networking architecture are discussed. The communication technologies and the data communication network architecture and protocols for the smart grid are described. Next, different emerging techniques for data sensing, communications, and sensor data networking are reviewed. The issues related to security of data sensing and communications in the smart grid are then discussed. To this end, the standardization activities and use cases related to data sensing and communications in the smart grid are summarized. Finally, several open issues and challenges are outlined. Copyright © 2012 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "561c671e4d9a466cd83205bf7645b5ba",
"text": "There is surprising confusion surrounding the concept of biological totipotency, both within the scientific community and in society at large. Increasingly, ethical objections to scientific research have both practical and political implications. Ethical controversy surrounding an area of research can have a chilling effect on investors and industry, which in turn slows the development of novel medical therapies. In this context, clarifying precisely what is meant by \"totipotency\" and how it is experimentally determined will both avoid unnecessary controversy and potentially reduce inappropriate barriers to research. Here, the concept of totipotency is discussed, and the confusions surrounding this term in the scientific and nonscientific literature are considered. A new term, \"plenipotent,\" is proposed to resolve this confusion. The requirement for specific, oocyte-derived cytoplasm as a component of totipotency is outlined. Finally, the implications of twinning for our understanding of totipotency are discussed.",
"title": ""
},
{
"docid": "1aeb86588b3864eec62c47290c57e3bb",
"text": "The programming language, Prolog, was born of a project aimed not at producing a programming language but at processing natural languages; in this case, French. The project gave rise to a preliminary version of Prolog at the end of 1971 and a more definitive version at the end of 1972. This article gives the history of this project and describes in detail the preliminary and then the final versions of Prolog. The authors also felt it appropriate to describe the Q-systems since it was a language which played a prominent part in Prolog's genesis.",
"title": ""
}
] |
scidocsrr
|
35e489219fbf598e961d6ee1668b92bb
|
Smart city architecture for community level services through the internet of things
|
[
{
"docid": "a36d019f5016d0e86ac8d7c412a3c9fd",
"text": "Increasing population density in urban centers demands adequate provision of services and infrastructure to meet the needs of city inhabitants, encompassing residents, workers, and visitors. The utilization of information and communications technologies to achieve this objective presents an opportunity for the development of smart cities, where city management and citizens are given access to a wealth of real-time information about the urban environment upon which to base decisions, actions, and future planning. This paper presents a framework for the realization of smart cities through the Internet of Things (IoT). The framework encompasses the complete urban information system, from the sensory level and networking support structure through to data management and Cloud-based integration of respective systems and services, and forms a transformational part of the existing cyber-physical system. This IoT vision for a smart city is applied to a noise mapping case study to illustrate a new method for existing operations that can be adapted for the enhancement and delivery of important city services.",
"title": ""
},
{
"docid": "461ee7b6a61a6d375a3ea268081f80f5",
"text": "In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as could computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine the several representative applications of big data, including enterprise management, Internet of Things, online social networks, medial applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big-picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions.",
"title": ""
}
] |
[
{
"docid": "265dfe8d0338889a6bf1766421d89972",
"text": "Retroperitoneal fibrosis represents a rare inflammatory disease. About two thirds of all cases seem to be idiopathic (= Ormond's disease). The remaining one third is secondary and may be ascribed to infections, trauma, radiation therapy, malignant diseases, and the use of certain drugs. Up to 15 % of patients have additional fibrotic processes outside the retroperitoneum. The clinical symptoms of retroperitoneal fibrosis are non-specific. In sonography retroperitoneal fibrosis appears as a retroperitoneal hypoechoic mass which can involve the ureters and thus cause hydronephrosis. Intravenous urography and MR urography can demonstrate the typical triad of medial deviation and extrinsic compression of the ureters and hydronephrosis. CT and MRI are the modalities of choice for the diagnosis and follow-up of this disease. The lesion typically begins at the level of the fourth or fifth lumbar vertebra and appears as a plaque, encasing the aorta and the inferior vena cava and often enveloping and medially displacing the ureters. In unenhanced CT, retroperitoneal fibrosis appears as a mass that is isodense with muscle. When using MRI, the mass is hypointense in T 1-weighted images and of variable intensity in T 2-weighted images according to its stage: it may be hyperintense in early stages, while the tissue may have a low signal in late stages. After the administration of contrast media, enhancement is greatest in the early inflammatory phase and minimal in the late fibrotic phase. Dynamic gadolinium enhancement can be useful for assessing disease activity, monitoring response to treatment, and detecting relapse. To differentiate retroperitoneal masses, diffusion-weighted MRI may provide useful information.",
"title": ""
},
{
"docid": "fc3f8ffc3ae33a3049214a13a7578e67",
"text": "By combining the false belief (FB) and photo (PH) vignettes to identify theory-of-mind areas with the false sign (FS) vignettes, we re-establish the functional asymmetry between the left and right temporo-parietal junction (TPJ). The right TPJ (TPJ-R) is specially sensitive to processing belief information, whereas the left TPJ (TPJ-L) is equally responsible for FBs as well as FSs. Measuring BOLD at two time points in each vignette, at the time the FB-inducing information (or lack of information) is presented and at the time the test question is processed, made clear that the FB is processed spontaneously as soon as the relevant information is presented and not on demand for answering the question in contrast to extant behavioral data. Finally, a fourth, true belief vignette (TB) required teleological reasoning, that is, prediction of a rational action without any doubts being raised about the adequacy of the actor's information about reality. Activation by this vignette supported claims that the TPJ-R is activated by TBs as well as FBs.",
"title": ""
},
{
"docid": "e5ad17a5e431c8027ae58337615a60bd",
"text": "In this paper, we focus on learning structure-aware document representations from data without recourse to a discourse parser or additional annotations. Drawing inspiration from recent efforts to empower neural networks with a structural bias (Cheng et al., 2016; Kim et al., 2017), we propose a model that can encode a document while automatically inducing rich structural dependencies. Specifically, we embed a differentiable non-projective parsing algorithm into a neural model and use attention mechanisms to incorporate the structural biases. Experimental evaluations across different tasks and datasets show that the proposed model achieves state-of-the-art results on document modeling tasks while inducing intermediate structures which are both interpretable and meaningful.",
"title": ""
},
{
"docid": "a446793baa99390a00ea58e799fbf6e3",
"text": "A survey has been carried out to study the occurrence and distribution of Trichodorus primitivus, T. sparsus and T. viruliferus in the Czech Republic under the rhizosphere of orchards, forests, vineyards and strawberry. Total 208 sites were surveyed and only 29 sites were found positive for these species. All three species are reported in the Czech Republic for the first time.",
"title": ""
},
{
"docid": "fc67a42a0c1d278994f0255e6cf3331a",
"text": "ibrant public securities markets rely on complex systems of supporting institutions that promote the governance of publicly traded companies. Corporate governance structures serve: 1) to ensure that minority shareholders receive reliable information about the value of firms and that a company’s managers and large shareholders do not cheat them out of the value of their investments, and 2) to motivate managers to maximize firm value instead of pursuing personal objectives.1 Institutions promoting the governance of firms include reputational intermediaries such as investment banks and audit firms, securities laws and regulators such as the Securities and Exchange Commission (SEC) in the United States, and disclosure regimes that produce credible firm-specific information about publicly traded firms. In this paper, we discuss economics-based research focused primarily on the governance role of publicly reported financial accounting information. Financial accounting information is the product of corporate accounting and external reporting systems that measure and routinely disclose audited, quantitative data concerning the financial position and performance of publicly held firms. Audited balance sheets, income statements, and cash-flow statements, along with supporting disclosures, form the foundation of the firm-specific information set available to investors and regulators. Developing and maintaining a sophisticated financial disclosure regime is not cheap. Countries with highly developed securities markets devote substantial resources to producing and regulating the use of extensive accounting and disclosure rules that publicly traded firms must follow. Resources expended are not only financial, but also include opportunity costs associated with deployment of highly educated human capital, including accountants, lawyers, academicians, and politicians. In the United States, the SEC, under the oversight of the U.S. Congress, is responsible for maintaining and regulating the required accounting and disclosure rules that firms must follow. These rules are produced both by the SEC itself and through SEC oversight of private standards-setting bodies such as the Financial Accounting Standards Board and the Emerging Issues Task Force, which in turn solicit input from business leaders, academic researchers, and regulators around the world. In addition to the accounting standards-setting investments undertaken by many individual countries and securities exchanges, there is currently a major, well-funded effort in progress, under the auspices of the International Accounting Standards Board (IASB), to produce a single set of accounting standards that will ultimately be acceptable to all countries as the basis for cross-border financing transactions.2 The premise behind governance research in accounting is that a significant portion of the return on investment in accounting regimes derives from enhanced governance of firms, which in turn facilitates the operation of securities Robert M. Bushman and Abbie J. Smith",
"title": ""
},
{
"docid": "fabc65effd31f3bb394406abfa215b3e",
"text": "Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find detailed description of the theory (including proofs).",
"title": ""
},
{
"docid": "8051535c66ecd4a8553a7d33051b1ad4",
"text": "There are several invariant features of pointto-point human arm movements: trajectories tend to be straight, smooth, and have bell-shaped velocity profiles. One approach to accounting for these data is via optimization theory; a movement is specified implicitly as the optimum of a cost function, e.g., integrated jerk or torque change. Optimization models of trajectory planning, as well as models not phrased in the optimization framework, generally fall into two main groups-those specified in kinematic coordinates and those specified in dynamic coordinates. To distinguish between these two possibilities we have studied the effects of artificial visual feedback on planar two-joint arm movements. During self-paced point-to-point arm movements the visual feedback of hand position was altered so as to increase the perceived curvature of the movement. The perturbation was zero at both ends of the movement and reached a maximum at the midpoint of the movement. Cost functions specified by hand coordinate kinematics predict adaptation to increased curvature so as to reduce the visual curvature, while dynamically specified cost functions predict no adaptation in the underlying trajectory planner, provided the final goal of the movement can still be achieved. We also studied the effects of reducing the perceived curvature in transverse movements, which are normally slightly curved. Adaptation should be seen in this condition only if the desired trajectory is both specified in kinematic coordinates and actually curved. Increasing the perceived curvature of normally straight sagittal movements led to significant (P<0.001) corrective adaptation in the curvature of the actual hand movement; the hand movement became curved, thereby reducing the visually perceived curvature. Increasing the curvature of the normally curved transverse movements produced a significant (P<0.01) corrective adaptation; the hand movement became straighter, thereby again reducing the visually perceived curvature. When the curvature of naturally curved transverse movements was reduced, there was no significant adaptation (P>0.05). The results of the curvature-increasing study suggest that trajectories are planned in visually based kinematic coordinates. The results of the curvature-reducing study suggest that the desired trajectory is straight in visual space. These results are incompatible with purely dynamicbased models such as the minimum torque change model. We suggest that spatial perception-as mediated by vision-plays a fundamental role in trajectory planning.",
"title": ""
},
{
"docid": "ea28d601dfbf1b312904e39802ce25b8",
"text": "In this paper, we present the implementation and performance evaluation of security functionalities at the link layer of IEEE 802.15.4-compliant IoT devices. Specifically, we implement the required encryption and authentication mechanisms entirely in software and as well exploit the hardware ciphers that are made available by our IoT platform. Moreover, we present quantitative results on the memory footprint, the execution time and the energy consumption of selected implementation modes and discuss some relevant tradeoffs. As expected, we find that hardware-based implementations are not only much faster, leading to latencies shorter than two orders of magnitude compared to software-based security suites, but also provide substantial savings in terms of ROM memory occupation, i.e. up to six times, and energy consumption. Furthermore, the addition of hardware-based security support at the link layer only marginally impacts the network lifetime metric, leading to worst-case reductions of just 2% compared to the case where no security is employed. This is due to the fact that energy consumption is dominated by other factors, including the transmission and reception of data packets and the control traffic that is required to maintain the network structures for routing and data collection. On the other hand, entirely software-based implementations are to be avoided as the network lifetime reduction in this case can be as high as 25%.",
"title": ""
},
{
"docid": "1c6a9910a51656a47a8599a98dba77bb",
"text": "In real life facial expressions show mixture of emotions. This paper proposes a novel expression descriptor based expression map that can efficiently represent pure, mixture and transition of facial expressions. The expression descriptor is the integration of optic flow and image gradient values and the descriptor value is accumulated in temporal scale. The expression map is realized using self-organizing map. We develop an objective scheme to find the percentage of different prototypical pure emotions (e.g., happiness, surprise, disgust etc.) that mix up to generate a real facial expression. Experimental results show that the expression map can be used as an effective classifier for facial expressions.",
"title": ""
},
{
"docid": "fa81463948ef7d6f5eb3f6e928567b15",
"text": "Many web sites collect reviews of products and services and use them provide rankings of their quality. However, such rankings are not personalized. We investigate how the information in the reviews written by a particular user can be used to personalize the ranking she is shown. We propose a new technique, topic profile collaborative filtering, where we build user profiles from users’ review texts and use these profiles to filter other review texts with the eyes of this user. We verify on data from an actual review site that review texts and topic profiles indeed correlate with ratings, and show that topic profile collaborative filtering provides both a better mean average error when predicting ratings and a better approximation of user preference orders.",
"title": ""
},
{
"docid": "b1b2a83d67456c0f0bf54092cbb06e65",
"text": "The transmission of voice communications as datagram packets over IP networks, commonly known as voice-over-IP (VoIP) telephony, is rapidly gaining wide acceptance. With private phone conversations being conducted on insecure public networks, security of VoIP communications is increasingly important. We present a structured security analysis of the VoIP protocol stack, which consists of signaling (SIP), session description (SDP), key establishment (SDES, MIKEY, and ZRTP) and secure media transport (SRTP) protocols. Using a combination of manual and tool-supported formal analysis, we uncover several design flaws and attacks, most of which are caused by subtle inconsistencies between the assumptions that protocols at different layers of the VoIP stack make about each other. The most serious attack is a replay attack on SDES, which causes SRTP to repeat the keystream used for media encryption, thus completely breaking transport-layer security. We also demonstrate a man-in-the-middle attack on ZRTP, which allows the attacker to convince the communicating parties that they have lost their shared secret. If they are using VoIP devices without displays and thus cannot execute the \"human authentication\" procedure, they are forced to communicate insecurely, or not communicate at all, i.e., this becomes a denial of service attack. Finally, we show that the key derivation process used in MIKEY cannot be used to prove security of the derived key in the standard cryptographic model for secure key exchange.",
"title": ""
},
{
"docid": "a713b20398c1eb4d8490ccf2681a748f",
"text": "The discovery of liposome or lipid vesicle emerged from self forming enclosed lipid bi-layer upon hydration; liposome drug delivery systems have played a significant role in formulation of potent drug to improve therapeutics. Recently the liposome formulations are targeted to reduce toxicity and increase accumulation at the target site. There are several new methods of liposome preparation based on lipid drug interaction and liposome disposition mechanism including the inhibition of rapid clearance of liposome by controlling particle size, charge and surface hydration. Most clinical applications of liposomal drug delivery are targeting to tissue with or without expression of target recognition molecules on lipid membrane. The liposomes are characterized with respect to physical, chemical and biological parameters. The sizing of liposome is also critical parameter which helps characterize the liposome which is usually performed by sequential extrusion at relatively low pressure through polycarbonate membrane (PCM). This mode of drug delivery lends more safety and efficacy to administration of several classes of drugs like antiviral, antifungal, antimicrobial, vaccines, anti-tubercular drugs and gene therapeutics. Present applications of the liposomes are in the immunology, dermatology, vaccine adjuvant, eye disorders, brain targeting, infective disease and in tumour therapy. The new developments in this field are the specific binding properties of a drug-carrying liposome to a target cell such as a tumor cell and specific molecules in the body (antibodies, proteins, peptides etc.); stealth liposomes which are especially being used as carriers for hydrophilic (water soluble) anticancer drugs like doxorubicin, mitoxantrone; and bisphosphonate-liposome mediated depletion of macrophages. This review would be a help to the researchers working in the area of liposomal drug delivery.",
"title": ""
},
{
"docid": "ec38bccf7c613016615198348f8f7ea6",
"text": "We present a new type of probabilistic model which we call DISsimilarity COefficient Networks (DISCO Nets). DISCO Nets allow us to efficiently sample from a posterior distribution parametrised by a neural network. During training, DISCO Nets are learned by minimising the dissimilarity coefficient between the true distribution and the estimated distribution. This allows us to tailor the training to the loss related to the task at hand. We empirically show that (i) by modeling uncertainty on the output value, DISCO Nets outperform equivalent non-probabilistic predictive networks and (ii) DISCO Nets accurately model the uncertainty of the output, outperforming existing probabilistic models based on deep neural networks.",
"title": ""
},
{
"docid": "6f8559ae0c06383d30ded2b2651beeff",
"text": "Gradient-based meta-learning methods leverage gradient descent to learn the commonalities among various tasks. While previous such methods have been successful in meta-learning tasks, they resort to simple gradient descent during metatesting. Our primary contribution is the MT-net, which enables the meta-learner to learn on each layer’s activation space a subspace that the taskspecific learner performs gradient descent on. Additionally, a task-specific learner of an MT-net performs gradient descent with respect to a metalearned distance metric, which warps the activation space to be more sensitive to task identity. We demonstrate that the dimension of this learned subspace reflects the complexity of the task-specific learner’s adaptation task, and also that our model is less sensitive to the choice of initial learning rates than previous gradient-based meta-learning methods. Our method achieves state-of-the-art or comparable performance on few-shot classification and regression tasks.",
"title": ""
},
{
"docid": "0ac2926f57bbe02e193a65388640b3b9",
"text": "Non-biological experimental variation or \"batch effects\" are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes ( > 25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that is robust to outliers in small sample sizes and performs comparable to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/.",
"title": ""
},
{
"docid": "9362781ea97715077d54e8e9645552e2",
"text": "Web sites are often a mixture of static sites and programs that integrate relational databases as a back-end. Software that implements Web sites continuously evolve to meet ever-changing user needs. As a Web sites evolve, new versions of programs, interactions and functionalities are added and existing ones are removed or modified. Web sites require configuration and programming attention to assure security, confidentiality, and trustiness of the published information. During evolution of Web software, from one version to the next one, security flaws may be introduced, corrected, or ignored. This paper presents an investigation of the evolution of security vulnerabilities as detected by propagating and combining granted authorization levels along an inter-procedural control flow graph (CFG) together with required security levels for DB accesses with respect to SQL-injection attacks. The paper reports results about experiments performed on 31 versions of phpBB, that is a publicly available bulletin board written in PHP, version 1.0.0 (9547 LOC) to version 2.0.22 (40663 LOC) have been considered as a case study. Results show that the vulnerability analysis can be used to observe and monitor the evolution of security vulnerabilities in subsequent versions of the same software package. Suggestions for further research are also presented.",
"title": ""
},
{
"docid": "f06d083ebd1449b1fd84e826898c2fda",
"text": "The resolution of any linear imaging system is given by its point spread function (PSF) that quantifies the blur of an object point in the image. The sharper the PSF, the better the resolution is. In standard fluorescence microscopy, however, diffraction dictates a PSF with a cigar-shaped main maximum, called the focal spot, which extends over at least half the wavelength of light (λ = 400–700 nm) in the focal plane and >λ along the optical axis (z). Although concepts have been developed to sharpen the focal spot both laterally and axially, none of them has reached their ultimate goal: a spherical spot that can be arbitrarily downscaled in size. Here we introduce a fluorescence microscope that creates nearly spherical focal spots of 40–45 nm (λ/16) in diameter. Fully relying on focused light, this lens-based fluorescence nanoscope unravels the interior of cells noninvasively, uniquely dissecting their sub-λ–sized organelles.",
"title": ""
},
{
"docid": "395afccf9891cfcc8e14d82a6e968918",
"text": "In this paper, we present an ultra-low-power smart visual sensor architecture. A 10.6-μW low-resolution contrast-based imager featuring internal analog preprocessing is coupled with an energy-efficient quad-core cluster processor that exploits near-threshold computing within a few milliwatt power envelope. We demonstrate the capability of the smart camera on a moving object detection framework. The computational load is distributed among mixed-signal pixel and digital parallel processing. Such local processing reduces the amount of digital data to be sent out of the node by 91%. Exploiting context aware analog circuits, the imager only dispatches meaningful postprocessed data to the processing unit, lowering the sensor-to-processor bandwidth by 31× with respect to transmitting a full pixel frame. To extract high-level features, an event-driven approach is applied to the sensor data and optimized for parallel runtime execution. A 57.7× system energy saving is reached through the event-driven approach with respect to frame-based processing, on a low-power MCU node. The near-threshold parallel processor further reduces the processing energy cost by 6.64×, achieving an overall system energy cost of 1.79 μJ per frame, which results to be 21.8× and up to 383× lower than, respectively, an event-based imaging system based on an asynchronous visual sensor and a traditional frame-based smart visual sensor.",
"title": ""
},
{
"docid": "c2571f794304a6b0efdc4fe22bac89e5",
"text": "PURPOSE\nThe aim of this study was to analyse the psychometric properties of the Portuguese version of the body image scale (BIS; Hopwood, P., Fletcher, I., Lee, A., Al Ghazal, S., 2001. A body image scale for use with cancer patients. European Journal of Cancer, 37, 189-197). This is a brief and psychometric robust measure of body image for use with cancer patients, independently of age, cancer type, treatment or stage of the disease and it was developed in collaboration with the European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Study Group.\n\n\nMETHOD\nThe sample is comprised of 173 Portuguese postoperative breast cancer patients that completed a battery of measures that included the BIS and other scales of body image and quality of life, in order to explore its construct validity.\n\n\nRESULTS\nThe Portuguese version of BIS confirmed the original unidimensional structure and demonstrated adequate internal consistency, both in the global sample (alpha=.93) as in surgical subgroups (mastectomy=.92 and breast-conserving surgery=.93). Evidence for the construct validity was provided through moderate to largely sized correlations between the BIS and other related measures. In further support of its discriminant validity, significant differences in BIS scores were found between women who underwent mastectomy and those who underwent breast-conserving surgery, with the former presenting higher scores. Age and time since diagnosis were not associated with BIS scores.\n\n\nCONCLUSIONS\nThe Portuguese BIS proved to be a reliable and valid measure of body image concerns in a sample of breast cancer patients, allowing a brief and comprehensive assessment, both on clinical and research settings.",
"title": ""
},
{
"docid": "5d527ad4493860a8d96283a5c58c3979",
"text": "Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. More than four decades after it was first proposed, the seminal error reduction algorithm of Gerchberg and Saxton and Fienup is still the popular choice for solving many variants of this problem. The algorithm is based on alternating minimization; i.e., it alternates between estimating the missing phase information, and the candidate solution. Despite its wide usage in practice, no global convergence guarantees for this algorithm are known. In this paper, we show that a (resampling) variant of this approach converges geometrically to the solution of one such problem-finding a vector x from y, A, where y = |ATx| and |z| denotes a vector of element-wise magnitudes of z-under the assumption that A is Gaussian. Empirically, we demonstrate that alternating minimization performs similar to recently proposed convex techniques for this problem (which are based on “lifting” to a convex matrix problem) in sample complexity and robustness to noise. However, it is much more efficient and can scale to large problems. Analytically, for a resampling version of alternating minimization, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the first theoretical guarantee for alternating minimization (albeit with resampling) for any variant of phase retrieval problems in the non-convex setting.",
"title": ""
}
] |
scidocsrr
|
979c28052086afd6f024520b1a8df730
|
Automatic Text Simplification For Handling Intellectual Property (The Case of Multiple Patent Claims)
|
[
{
"docid": "6ee0c9832d82d6ada59025d1c7bb540e",
"text": "Advances in computational linguistics and discourse processing have made it possible to automate many language- and text-processing mechanisms. We have developed a computer tool called Coh-Metrix, which analyzes texts on over 200 measures of cohesion, language, and readability. Its modules use lexicons, part-of-speech classifiers, syntactic parsers, templates, corpora, latent semantic analysis, and other components that are widely used in computational linguistics. After the user enters an English text, CohMetrix returns measures requested by the user. In addition, a facility allows the user to store the results of these analyses in data files (such as Text, Excel, and SPSS). Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.",
"title": ""
}
] |
[
{
"docid": "4834d8ed2d60cb419b8dc9256911ba09",
"text": "In this paper we present a complete measurement study that compares YouTube traffic generated by mobile devices (smart-phones,tablets) with traffic generated by common PCs (desktops, notebooks, netbooks). We investigate the users' behavior and correlate it with the system performance. Our measurements are performed using unique data sets which are collected from vantage points in nation-wide ISPs and University campuses from two countries in Europe and the U.S.\n Our results show that the user access patterns are similar across a wide range of user locations, access technologies and user devices. Users stick with default player configurations, e.g., not changing video resolution or rarely enabling full screen playback. Furthermore it is very common that users abort video playback, with 60% of videos watched for no more than 20% of their duration.\n We show that the YouTube system is highly optimized for PC access and leverages aggressive buffering policies to guarantee excellent video playback. This however causes 25%-39% of data to be unnecessarily transferred, since users abort the playback very early. This waste of data transferred is even higher when mobile devices are considered. The limited storage offered by those devices makes the video download more complicated and overall less efficient, so that clients typically download more data than the actual video size. Overall, this result calls for better system optimization for both, PC and mobile accesses.",
"title": ""
},
{
"docid": "ef691f3d6dedd7c2b5ac1cf457271ea8",
"text": "This paper presents the application of a substrate integrated waveguide (SIW) for the design of a leaky wave antenna radiating from a slot in the broad wall. The antenna radiates into a beam split into two main lobes and its gain is about 7 dB at 19 GHz. The characteristics and radiation aspects of the antenna are discussed here. The measured antenna characteristics are in good agreement with those predicted by the simulation. Due to the SIW technology, the antenna is suitable for integration into T/X circuits and antenna arrays.",
"title": ""
},
{
"docid": "338ca06cef026cb107d3fbaa181f58eb",
"text": "This paper proposes and analyzes a new bridgeless flyback power factor correction rectifier for ac-dc power conversion. By eliminating four bridge diodes and adding a few circuit elements, the proposed rectifier reduces the primary side conduction loss and improves efficiency. The addition of the new elements has minimal effect on the circuit simplicity because it does not need any additional gate driver and magnetic elements. The losses in the semiconductor devices of the proposed circuit are analyzed and compared with that of the conventional one, followed by transformer design guideline. Experimental results with the practically implemented prototype prove its higher efficiency than its conventional counterparts.",
"title": ""
},
{
"docid": "e4f5b211598570faf43fc08e961faf86",
"text": "Activities such as Web Services and the Semantic Web are working to create a web of distributed machine understandable data. In this paper we present an application called 'Semantic Search' which is built on these supporting technologies and is designed to improve traditional web searching. We provide an overview of TAP, the application framework upon which the Semantic Search is built. We describe two implemented Semantic Search systems which, based on the denotation of the search query, augment traditional search results with relevant data aggregated from distributed sources. We also discuss some general issues related to searching and the Semantic Web and outline how an understanding of the semantics of the search terms can be used to provide better results.",
"title": ""
},
{
"docid": "a0aff8fc2e7766dc8c6a786e2d4ebfc9",
"text": "While significant progress has been made separately on analytics systems for scalable stochastic gradient descent (SGD) and private SGD, none of the major scalable analytics frameworks have incorporated differentially private SGD. There are two inter-related issues for this disconnect between research and practice: (1) low model accuracy due to added noise to guarantee privacy, and (2) high development and runtime overhead of the private algorithms. This paper takes a first step to remedy this disconnect and proposes a private SGD algorithm to address both issues in an integrated manner. In contrast to the white-box approach adopted by previous work, we revisit and use the classical technique of output perturbation to devise a novel ``bolt-on'' approach to private SGD. While our approach trivially addresses (2), it makes (1) even more challenging. We address this challenge by providing a novel analysis of the L2-sensitivity of SGD, which allows, under the same privacy guarantees, better convergence of SGD when only a constant number of passes can be made over the data. We integrate our algorithm, as well as other state-of-the-art differentially private SGD, into Bismarck, a popular scalable SGD-based analytics system on top of an RDBMS. Extensive experiments show that our algorithm can be easily integrated, incurs virtually no overhead, scales well, and most importantly, yields substantially better (up to 4X) test accuracy than the state-of-the-art algorithms on many real datasets.",
"title": ""
},
{
"docid": "05d8383eb6b1c6434f75849859c35fd0",
"text": "This paper proposes a robust approach for image based floor detection and segmentation from sequence of images or video. In contrast to many previous approaches, which uses a priori knowledge of the surroundings, our method uses combination of modified sparse optical flow and planar homography for ground plane detection which is then combined with graph based segmentation for extraction of floor from images. We also propose a probabilistic framework which makes our method adaptive to the changes in the surroundings. We tested our algorithm on several common indoor environment scenarios and were able to extract floor even under challenging circumstances. We obtained extremely satisfactory results in various practical scenarios such as where the floor and non floor areas are of same color, in presence of textured flooring, and where illumination changes are steep.",
"title": ""
},
{
"docid": "0a9cad7b636fe99fba3ea6e23f33708c",
"text": "Crowdsourcing websites (e.g. Yahoo! Answers, Amazon Mechanical Turk, and etc.) emerged in recent years that allow requesters from all around the world to post tasks and seek help from an equally global pool of workers. However, intrinsic incentive problems reside in crowdsourcing applications as workers and requester are selfish and aim to strategically maximize their own benefit. In this paper, we propose to provide incentives for workers to exert effort using a novel game-theoretic model based on repeated games. As there is always a gap in the social welfare between the non-cooperative equilibria emerging when workers pursue their self-interests and the desirable Pareto efficient outcome, we propose a novel class of incentive protocols based on social norms which integrates reputation mechanisms into the existing pricing schemes currently implemented on crowdsourcing websites, in order to improve the performance of the non-cooperative equilibria emerging in such applications. We first formulate the exchanges on a crowdsourcing website as a two-sided market where requesters and workers are matched and play gift-giving games repeatedly. Subsequently, we study the protocol designer's problem of finding an optimal and sustainable (equilibrium) protocol which achieves the highest social welfare for that website. We prove that the proposed incentives protocol can make the website operate close to Pareto efficiency. Moreover, we also examine an alternative scenario, where the protocol designer aims at maximizing the revenue of the website and evaluate the performance of the optimal protocol.",
"title": ""
},
{
"docid": "290796519b7757ce7ec0bf4d37290eed",
"text": "A freely available English thesaurus of related words is presented that has been automatically compiled by analyzing the distributional similarities of words in the British National Corpus. The quality of the results has been evaluated by comparison with human judgments as obtained from non-native and native speakers of English who were asked to provide rankings of word similarities. According to this measure, the results generated by our system are better than the judgments of the non-native speakers and come close to the native speakers’ performance. An advantage of our approach is that it does not require syntactic parsing and therefore can be more easily adapted to other languages. As an example, a similar thesaurus for German has already been completed.",
"title": ""
},
{
"docid": "98f8c85de43a551dfbcf14b6ad2dc6cb",
"text": "ly, schema based data can be defined as a set of data (which is denoted as 'S') that satisfies the following properties: there exists a set of finite size of dimension (which is denoted as 'D') such that every element of S can be expressed as a linear combination of elements from D. Flexible schema based data is the negation of Schema based data. That is, there does NOT exit a set of finite size of dimension D such that every element of S can be expressed as a linear combination of elements from set D. Intuitively, schema based data can have unbounded number of elements but has a bounded dimensions as schema definition whereas flexible schema based data has unbounded dimensions. Because schema based data has finite dimensions, therefore, schema based data can be processed by separating the data away from its dimension so that an element in a schema based data set can be expressed by a vector of values, each of which represents the projection of the element in a particular dimension. All the dimensions are known as schema. Flexible schema based data cannot be processed by separating the data away from its dimension. Each element in a flexible schema based data has to keep track of its dimensions and the corresponding value. An element in a flexible schema based data is expressed by a vector of dimension and value (namevalue pair). Therefore, flexible schema based data requires store, query and index both schema and data together. 3.2 FSD Storage Current Practises Self-contained Document-object-store model: The current practice for storing FSD is to store FSD instances in a FSD collection using document-object-store model where both structure and data are stored together for each FSD instance so that it is self-descriptive without relying on a central schema dictionary. New structures can be added on a per-record basis without dealing with schema evolution. Aggregated storage supports full document-object retrieval efficiently without the cost of querying and stitching pieces of data from multiple relational tables. Each FSD instance can be independently imported, exported, distributed without any schema dependency. Table1 shows DDL to create resumeDoc_tab collection of resume XML documents, a shoppingCar_tab collection of shopping cart JSON objects. SQL/XML standard defines XML as a built-in datatype in SQL. For upcoming SQL/JSON standard [21], it supports storing JSON in SQL varchar, varbinary, CLOB, BLOB datatype with the new ‘IS JSON’ check constraint to ensure the data stored in the column is a valid JSON object. Adding a new domain FSD by storing into existing SQL datatype, such as varchar or LOB, without adding a new SQL type allows the new domain FSD to have full data operational completeness capability (Transactions, Replication, Partition, Security, Provenance, Export/Export, Client APIs etc) support with minimal development efforts. T1 CREATE TABLE resumeDoc_tab (id number, docEnterDate date, docVerifyDate date, resume XMLType) T2 CREATE TABLE shoppingCar_tab (oid number, shoppingCar BLOB check (shoppingCar IS JSON)) Table 1 – Document-Object-Store Table Examples Data-Guide as soft Schema: The data-guide can be computed from FSD collections to understand the complete structures of the data which helps to form queries over FSD collection. That is, FSD management with data-guide supports the paradigm of “storage without schema but query with schema”. 
For common top-level scalar attributes that exist in all FSD instances of a FSD collection, they can be automatically projected out as virtual columns or flexible table view [21, 22, 24]. For nested master-detail hierarchical structures exist in FSD instances, relational table indexes [11] and materialized views [35], are defined using FSD_TABLE() table function (Q4 in Table 2). They can be built as secondary structures on top of the primary hierarchical FSD storage to provide efficient relational view access of FSD. FSD_TABLE() serves as a bridge between FSD data and relational data. They are flexible because they can be created on demand. See section 5.2 for how to manage FSD_TABLE() and virtual columns as indexing or in-memory columnar structures. Furthermore, to ensure data integrity, soft schema can be defined as check constraint as verification mechanism but not storage mechanism. 3.3 FSD Storage Limitations and Research Challenges Single Hierarchy: The document-object-storage model is essentially a de-normalized storage model with single root hierarchy. When XML support was added into RDBMSs, the IMS hierarchical data model issues were brought up [32]. Fundamentally, the hierarchy storage model re-surfaces the single root hierarchy problem that relational model has resolved successfully. In particular, supporting m-n relationship in one hierarchy is quite awkward. Therefore, a research challenge is how to resolve single hierarchy problem in document-objectstorage mode that satisfies ‘data first, structural later’ requirement. Is there an aggregated storage model, other than E/R model, that can support multi-hierarchy access efficiently? Papers [20, 23] have proposed ideas on approaching certain aspects of this problem. Optimal instance level binary FSD format: The documentobject-storage model is essentially a de-normalized storage where master and detail data are stored together as one hierarchical tree structure, therefore, it is feasible to achieve better query performance than with normalized storage at the expense of update. Other than storing FSD instances in textual form, they can also be stored in a compact binary form native to the FSD domain data so that the binary storage format can be used to efficiently process FSD domain specific query language [3, 22]. In particular, since FSD is a hierarchical structure based, the domain language for hierarchical data is path-driven. The underlying native binary storage form of FSD is tree navigation friendly which improves significant performance improvement than text parsing based processing. The challenge in designing the binary storage format of FSD instance is to optimize the format for both query and update. A query friendly format typically uses compact structures to achieve ultra query performance while leaving no room for accommodating update, especially for the delta-update of a FSD instance involving structural change instead of just leaf value change. The current practise is to do full FSD instance update physically even though logically only components of a FSD instance need to be updated. Although typically a FSD instance is of small to medium size, the update may still cause larger transaction log than updating simple relational columns. A command level logging approach [27] can be investigated to see if it is optimal for high frequent delta-update of FSD instances. 
Optimal FSD instance size: Although the size of FSD collections can be scaled to very large number, in practise, each FSD instances is of small to medium size instead of single large size. In fact, many vendors have imposed size limit per FSD instance. This is because each FSD instance provides a logical unit for concurrency access control, document and Index update and logging granularity. Supporting single large FSD instance requires RDBMS locking, logging to provide intra-document scalability [43] in addition to the current mature inter-document scalability. 4. Querying and Updating FSD 4.1 FSD Query and Update Requirements A FSD collection is stored as a table of FSD instances. A FSD instance itself is domain specific and typically has its own domain-specific query language. For FSD of XML documents, the domain-specific query language is XQuery. For FSD of JSON objects, the domain-specific query language is the SQL/JSON path language as described in [21]. Table 2 shows the example of SQL/XML[10] and SQL/JSON[21] queries and DML statements embedding XQuery and SQL/JSON path language. In general, the domain-specific query language provides the following requirements: • Capability of querying and navigating document-object structures declaratively: A FSD instance is not shredded into tables since hierarchies in a FSD can be flexible and dynamic without being modelled as a fixed master-detail join pattern. Therefore, it is natural to express hierarchical traversal of FSD as path navigation with value predicate constructs in the FSD domain language. The path name can contain a wildcard name match and the path step can be recursive to facilitate exploratory query of the FSD data. For example, capabilities of the wildcard tag name match and recursive descendant tag match in XPath expressions support the notation of navigating structures without knowing the exact names or the exact hierarchy of the structures. See ‘.//experience’ XPath expression in Q1 and Q2. Such capability is needed to provide flexibility of writing explorative and discovery queries. • Capability of doing full context aware text search declaratively: FSD instances can be document centric with mixture of textual content and structures. There is a significant amount of full text content in FSD that are subject to full text search. However, unlike plain textual document, FSD has text content that is embedded inside hierarchical structure. Full text search can be further confined within a context identified by path navigation into the FSD instance. Therefore, context aware full text search is needed in FSD domain languages. See XQuery full text search expression in XMLEXISTS() predicate of Q1 and Q2 and path-aware full text search expression in JSON_TEXTCONTAINS() predicate of Q3. • Capability of projecting, transforming object component and constructing new document or object: Unlike relational query results which are tuples of scalar data, results of path navigational queries can be fragments of FSD. New FSD can be constructed by extracting components of existing FSD and combine them through construction and transformation. Therefore, constructing and transform",
"title": ""
},
{
"docid": "ae8f5c568b2fdbb2dbef39ac277ddb24",
"text": "Knowledge graph construction consists of two tasks: extracting information from external resources (knowledge population) and inferring missing information through a statistical analysis on the extracted information (knowledge completion). In many cases, insufficient external resources in the knowledge population hinder the subsequent statistical inference. The gap between these two processes can be reduced by an incremental population approach. We propose a new probabilistic knowledge graph factorisation method that benefits from the path structure of existing knowledge (e.g. syllogism) and enables a common modelling approach to be used for both incremental population and knowledge completion tasks. More specifically, the probabilistic formulation allows us to develop an incremental population algorithm that trades off exploitation-exploration. Experiments on three benchmark datasets show that the balanced exploitation-exploration helps the incremental population, and the additional path structure helps to predict missing information in knowledge completion.",
"title": ""
},
{
"docid": "fc8e2b38273e13a70bdcc9c7487e647f",
"text": "Feature set partitioning generalizes the task of fe atur selection by partitioning the feature set int o subsets of features that are collectively useful, rather th an by finding a single useful subset of features. T his paper presents a novel feature set partitioning approach that is based on a genetic algorithm. As part of th is new approach a new encoding schema is also proposed and its properties are discussed. We examine the effectiveness of using a Vapnik-Chervonenkis dimens ion bound for evaluating the fitness function of multiple, oblivious tree classifiers. The new algor ithm was tested on various datasets and the results indicate the superiority of the proposed algorithm to other methods.",
"title": ""
},
{
"docid": "83651ca357b0f978400de4184be96443",
"text": "The most common temporomandibular joint (TMJ) pathologic disease is anterior-medial displacement of the articular disk, which can lead to TMJ-related symptoms.The indication for disk repositioning surgery is irreversible TMJ damage associated with temporomandibular pain. We describe a surgical technique using a preauricular approach with a high condylectomy to reshape the condylar head. The disk is anchored with a bioabsorbable microanchor (Mitek Microfix QuickAnchor Plus 1.3) to the lateral aspect of the condylar head. The anchor is linked with a 3.0 Ethibond absorbable suture to fix the posterolateral side of the disk above the condyle.The aims of this surgery were to alleviate temporomandibular pain, headaches, and neck pain and to restore good jaw mobility. In the long term, we achieved these objectives through restoration of the physiological position and function of the disk and the lower articular compartment.In our opinion, the bioabsorbable anchor is the best choice for this type of surgery because it ensures the stability of the restored disk position and leaves no artifacts in the long term that might impede follow-up with magnetic resonance imaging.",
"title": ""
},
{
"docid": "598a2753e9e63f6930c63d36960e1aae",
"text": "Recent advances in understanding prejudice and intergroup behavior have made clear that emotions help explain people's reactions to social groups and their members. Intergroup emotions theory (D. M. Mackie, T. Devos, & E. R. Smith, 2000; E. R. Smith, 1993) holds that intergroup emotions are experienced by individuals when they identify with a social group, making the group part of the psychological self. What differentiates such group-level emotions from emotions that occur purely at the individual level? The authors argue that 4 key criteria define group-level emotions: Group emotions are distinct from the same person's individual-level emotions, depend on the person's degree of group identification, are socially shared within a group, and contribute to regulating intragroup and intergroup attitudes and behavior. Evidence from 2 studies supports all 4 of these predictions and thus points to the meaningfulness, coherence, and functionality of group-level emotions.",
"title": ""
},
{
"docid": "70e5b3af4496ccae2523ed1cdf1d57a2",
"text": "Modern languages for shared-memory parallelism are moving from a bulk-synchronous Single Program Multiple Data (SPMD) execution model to lightweight Task Parallel execution models for improved productivity. This shift is intended to encourage programmers to express the ideal parallelism in an application at a fine granularity that is natural for the underlying domain, while delegating to the compiler and runtime system the job of extracting coarser-grained useful parallelism for a given target system. A simple and important example of this separation of concerns between ideal and useful parallelism can be found in chunking of parallel loops, where the programmer expresses ideal parallelism by declaring all iterations of a loop to be parallel and the implementation exploits useful parallelism by executing iterations of the loop in sequential chunks.\n Though chunking of parallel loops has been used as a standard transformation for several years, it poses some interesting challenges when the parallel loop may directly or indirectly (via procedure calls) perform synchronization operations such as barrier, signal or wait statements. In such cases, a straightforward transformation that attempts to execute a chunk of loops in sequence in a single thread may violate the semantics of the original parallel program. In this paper, we address the problem of chunking parallel loops that may contain synchronization operations. We present a transformation framework that uses a combination of transformations from past work (e.g., loop strip-mining, interchange, distribution, unswitching) to obtain an equivalent set of parallel loops that chunk together statements from multiple iterations while preserving the semantics of the original parallel program. These transformations result in reduced synchronization and scheduling overheads, thereby improving performance and scalability. Our experimental results for 11 benchmark programs on an UltraSPARC II multicore processor showed a geometric mean speedup of 0.52x for the unchunked case and 9.59x for automatic chunking using the techniques described in this paper. This wide gap underscores the importance of using these techniques in future compiler and runtime systems for programming models with lightweight parallelism.",
"title": ""
},
{
"docid": "450c45a66296ae39712b01f387e6b0d5",
"text": "The Bitcoin protocol allows to save arbitrary data on the blockchain through a special instruction of the scripting language, called OP RETURN. A growing number of protocols exploit this feature to extend the range of applications of the Bitcoin blockchain beyond transfer of currency. A point of debate in the Bitcoin community is whether loading data through OP RETURN can negatively affect the performance of the Bitcoin network with respect to its primary goal. This paper is an empirical study of the usage of OP RETURN over the years. We identify several protocols based on OP RETURN, which we classify by their application domain. We measure the evolution in time of the usage of each protocol, the distribution of OP RETURN transactions by application domain, and their space consumption.",
"title": ""
},
{
"docid": "d1d862185a20e1f1efc7d3dc7ca8524b",
"text": "In what ways do the online behaviors of wizards and ogres map to players’ actual leadership status in the offline world? What can we learn from players’ experience in Massively Multiplayer Online games (MMOGs) to advance our understanding of leadership, especially leadership in online settings (E-leadership)? As part of a larger agenda in the emerging field of empirically testing the ‘‘mapping’’ between the online and offline worlds, this study aims to tackle a central issue in the E-leadership literature: how have technology and technology mediated communications transformed leadership-diagnostic traits and behaviors? To answer this question, we surveyed over 18,000 players of a popular MMOG and also collected behavioral data of a subset of survey respondents over a four-month period. Motivated by leadership theories, we examined the connection between respondents’ offline leadership status and their in-game relationship-oriented and task-related-behaviors. Our results indicate that individuals’ relationship-oriented behaviors in the virtual world are particularly relevant to players’ leadership status in voluntary organizations, while their task-oriented behaviors are marginally linked to offline leadership status in voluntary organizations, but not in companies. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e086ff89154da60ab858dadaa0fbaba0",
"text": "Pressure ulcer prevention strategies include the prevention, and early recognition, of deep tissue injury (DTI), which can evolve into a Stage III or Stage IV pressure ulcer. In addition to their role in pressure-induced ischemia, shearing forces are believed to contribute substantially to the risk of DTI. Because the visual manifestation of a DTI may not occur until many hours after tissues were damaged, research to explore methods for early detection is on-going. For example, rhabdomyolysis is a common complication of deep tissue damage; its detection via blood chemistry and urinalysis is explored as a possible diagnostic tool of early DTI in anatomical areas where muscle is present. Substances released from injured muscle cells have a predictable time frame for detection in blood and urine, possibly enabling the clinician to estimate the time of the tissue death. Several small case studies suggest the potential validity and reliability of ultrasoun for visualizing soft tissue damage also deserve further research. While recommendations to reduce mechanical pressure and shearing damage in high-risk patients remain unchanged, their implementation is not always practical, feasible, or congruent with the overall plan of patient care. Early detection of existing tissue damage will help clinicians implement appropriate care plans that also may prevent further damage. Research to evaluate the validity, reliability, sensitivity, and specificity of diagnostic studies to detect pressure-related tissue death is warranted.",
"title": ""
},
{
"docid": "cd0bd7ac3aead17068c7f223fc19da60",
"text": "In this letter, a class of wideband impedance transformers based on multisection quarter-wave transmission lines and short-circuited stubs are proposed to be incorporated with good passband frequency selectivity. A synthesis approach is then presented to design this two-port asymmetrical transformer with Chebyshev frequency response. For the specified impedance transformation ratio, bandwidth, and in-band return loss, the required impedance parameters can be directly determined. Next, a transformer with two section transmission lines in the middle is characterized, where a set of design curves are given for practical design. Theoretically, the proposed multisection transformer has attained good passband frequency selectivity against the reported counterparts. Finally, a 50-110 Ω impedance transformer with a fractional bandwidth of 77.8% and 15 dB in-band return loss is designed, fabricated and measured to verify the prediction.",
"title": ""
},
{
"docid": "189d0b173f8a9e0b3deb21398955dc3c",
"text": "Do investments in customer satisfaction lead to excess returns? If so, are these returns associated with higher stock market risk? The empirical evidence presented in this article suggests that the answer to the first question is yes, but equally remarkable, the answer to the second question is no, suggesting that satisfied customers are economic assets with high returns/low risk. Although these results demonstrate stock market imperfections with respect to the time it takes for share prices to adjust, they are consistent with previous studies in marketing in that a firm’s satisfied customers are likely to improve both the level and the stability of net cash flows. The implication, implausible as it may seem in other contexts, is high return/low risk. Specifically, the authors find that customer satisfaction, as measured by the American Customer Satisfaction Index (ACSI), is significantly related to market value of equity. Yet news about ACSI results does not move share prices. This apparent inconsistency is the catalyst for examining whether excess stock returns might be generated as a result. The authors present two stock portfolios: The first is a paper portfolio that is back tested, and the second is an actual case. At low systematic risk, both outperform the market by considerable margins. In other words, it is possible to beat the market consistently by investing in firms that do well on the ACSI.",
"title": ""
},
{
"docid": "8e3b73204d1d62337c4b2aabdbaa8973",
"text": "The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space. We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary. Through a systematic empirical investigation, we show that state-of-the-art deep nets learn connected classification regions, and that the decision boundary in the vicinity of datapoints is flat along most directions. We further draw an essential connection between two seemingly unrelated properties of deep networks: their sensitivity to additive perturbations in the inputs, and the curvature of their decision boundary. The directions where the decision boundary is curved in fact characterize the directions to which the classifier is the most vulnerable. We finally leverage a fundamental asymmetry in the curvature of the decision boundary of deep nets, and propose a method to discriminate between original images, and images perturbed with small adversarial examples. We show the effectiveness of this purely geometric approach for detecting small adversarial perturbations in images, and for recovering the labels of perturbed images.",
"title": ""
}
] |
scidocsrr
|
94a705d50be98fa7c64b5803fd6582bd
|
Botnet spam campaigns can be long lasting: evidence, implications, and analysis
|
[
{
"docid": "22554a4716f348a6f43299f193d5534f",
"text": "Unsolicited bulk e-mail, or SPAM, is a means to an end. For virtually all such messages, the intent is to attract the recipient into entering a commercial transaction — typically via a linked Web site. While the prodigious infrastructure used to pump out billions of such solicitations is essential, the engine driving this process is ultimately th e “point-of-sale” — the various money-making “scams” that extract value from Internet users. In the hopes of better understanding the business pressures exerted on spammers, this paper focuses squarely on the Internet infrastructure used to host and support such scams. We describe an opportunistic measurement technique called spamscatterthat mines emails in real-time, follows the embedded link structure, and automatically clusters the destination Web sites using image shinglingto capture graphical similarity between rendered sites. We have implemented this approach on a large real-time spam feed (over 1M messages per week) and have identified and analyzed over 2,000 distinct scams on 7,000 distinct servers.",
"title": ""
},
{
"docid": "c89b740ec1d752415eaea873a1bbe55d",
"text": "Spam filters often use the reputation of an IP address (or IP address range) to classify email senders. This approach worked well when most spam originated from senders with fixed IP addresses, but spam today is also sent from IP addresses for which blacklist maintainers have outdated or inaccurate information (or no information at all). Spam campaigns also involve many senders, reducing the amount of spam any particular IP address sends to a single domain; this method allows spammers to stay \"under the radar\". The dynamism of any particular IP address begs for blacklisting techniques that automatically adapt as the senders of spam change.\n This paper presents SpamTracker, a spam filtering system that uses a new technique called behavioral blacklisting to classify email senders based on their sending behavior rather than their identity. Spammers cannot evade SpamTracker merely by using \"fresh\" IP addresses because blacklisting decisions are based on sending patterns, which tend to remain more invariant. SpamTracker uses fast clustering algorithms that react quickly to changes in sending patterns. We evaluate SpamTracker's ability to classify spammers using email logs for over 115 email domains; we find that SpamTracker can correctly classify many spammers missed by current filtering techniques. Although our current datasets prevent us from confirming SpamTracker's ability to completely distinguish spammers from legitimate senders, our evaluation shows that SpamTracker can identify a significant fraction of spammers that current IP-based blacklists miss. SpamTracker's ability to identify spammers before existing blacklists suggests that it can be used in conjunction with existing techniques (e.g., as an input to greylisting). SpamTracker is inherently distributed and can be easily replicated; incorporating it into existing email filtering infrastructures requires only small modifications to mail server configurations.",
"title": ""
}
] |
[
{
"docid": "ddccad7ce01cad45413e0bcc06ba6668",
"text": "This article highlights the thus far unexplained social and professional effects raised by robotization in surgical applications, and further develops an understanding of social acceptance among professional users of robots in the healthcare sector. It presents findings from ethnographic workplace research on human-robot interactions (HRI) in a population of twenty-three professionals. When considering all the findings, the latest da Vinci system equipped with four robotic arms substitutes two table-side surgical assistants, in contrast to the single-arm AESOP robot that only substitutes one surgical assistant. The adoption of robots and the replacement of surgical assistants provide clear evidence that robots are well-accepted among operating surgeons. Because HRI decrease the operating surgeon’s dependence on social assistance and since they replace the work tasks of surgical assistants, the robot is considered a surrogate artificial work partner and worker. This finding is consistent with prior HRI research indicating that users, through their cooperation with robots, often become less reliant on supportive social actions. This research relates to societal issues and provides the first indication that highly educated knowledge workers are beginning to be replaced by robot technology in working life and therefore points towards a paradigm shift in the service sector.",
"title": ""
},
{
"docid": "d41365a01ad3ce3ec30141ed08132aa1",
"text": "This paper reports on a mixed-method study in progress. The qualitative part has been completed and the quantitative part is underway. The findings of the qualitative study -- the theory of Integral Decision-Making (IDM) -- are introduced, and the research method to test IDM is discussed. It is expected that the integration of the qualitative and quantitative studies will provide insight into how data, information, and knowledge capacities can lead to more effective management decisions by incorporating more human inputs in the decision-and policy-making process. Implications for theory and practice will be suggested.",
"title": ""
},
{
"docid": "9244acef01812d757639bd4f09631c22",
"text": "This paper describes the results of the first shared task on Multilingual Emoji Prediction, organized as part of SemEval 2018. Given the text of a tweet, the task consists of predicting the most likely emoji to be used along such tweet. Two subtasks were proposed, one for English and one for Spanish, and participants were allowed to submit a system run to one or both subtasks. In total, 49 teams participated in the English subtask and 22 teams submitted a system run to the Spanish subtask. Evaluation was carried out emoji-wise, and the final ranking was based on macro F-Score. Data and further information about this task can be found at https://competitions. codalab.org/competitions/17344.",
"title": ""
},
{
"docid": "fed9defe1a4705390d72661f96b38519",
"text": "Multivariate resultants generalize the Sylvester resultant of two polynomials and characterize the solvability of a polynomial system. They also reduce the computation of all common roots to a problem in linear algebra. We propose a determinantal formula for the sparse resultant of an arbitrary system of n + 1 polynomials in n variables. This resultant generalizes the classical one and has significantly lower degree for polynomials that are sparse in the sense that their mixed volume is lower than their Bézout number. Our algorithm uses a mixed polyhedral subdivision of the Minkowski sum of the Newton polytopes in order to construct a Newton matrix. Its determinant is a nonzero multiple of the sparse resultant and the latter equals the GCD of at most n + 1 such determinants. This construction implies a restricted version of an effective sparse Nullstellensatz. For an arbitrary specialization of the coefficients, there are two methods that use one extra variable and yield the sparse resultant. This is the first algorithm to handle the general case with complexity polynomial in the resultant degree and simply exponential in n. We conjecture its extension to producing an exact rational expression for the sparse resultant.",
"title": ""
},
{
"docid": "fe13ddb78243e3bbb03917be0752872e",
"text": "One of the powerful applications of Booiean expression is to allow users to extract relevant information from a database. Unfortunately, previous research has shown that users have difficulty specifying Boolean queries. In an attempt to overcome this limitation, a graphical Filter/Flow representation of Boolean queries was designed to provide users with an interface that visually conveys the meaning of the Booiean operators (AND, OR, and NOT). This was accomplished by impiementing a graphical interface prototype that uses the metaphor of water flowing through filters. Twenty subjects having no experience with Boolean logic participated in an experiment comparing the Booiean operations represented in the Filter/Flow interface with a text-oniy SQL interface. The subjects independently performed five comprehension tasks and five composition tasks in each of the interfaces. A significant difference (p < 0.05) in the total number of correct queries in each of the comprehension and composition tasks was found favoring Filter/Flow.",
"title": ""
},
{
"docid": "dcf4278becbc530d9648b5df4a64ec53",
"text": "Variable speed operation is essential for large wind turbines in order to optimize the energy capture under variable wind speed conditions. Variable speed wind turbines require a power electronic interface converter to permit connection with the grid. The power electronics can be either partially-rated or fully-rated [1]. A popular interface method for large wind turbines that is based on a partiallyrated interface is the doubly-fed induction generator (DFIG) system [2]. In the DFIG system, the power electronic interface controls the rotor currents in order to control the electrical torque and thus the rotational speed. Because the power electronics only process the rotor power, which is typically less than 25% of the overall output power, the DFIG offers the advantages of speed control for a reduction in cost and power losses. This report presents a DFIG wind turbine system that is modeled in PLECS and Simulink. A full electrical model that includes the switching converter implementation for the rotor-side power electronics and a dq model of the induction machine is given. The aerodynamics of the wind turbine and the mechanical dynamics of the induction machine are included to extend the use of the model to simulating system operation under variable wind speed conditions. For longer simulations that include these slower mechanical and wind dynamics, an averaged PWM converter model is presented. The averaged electrical model offers improved simulation speed at the expense of neglecting converter switching detail.",
"title": ""
},
{
"docid": "b0e5507d6e5ba443d55cba948800819e",
"text": "In this letter, a four-element wideband multiple-input–multiple-output (MIMO) configuration consisting of inverted L-monopole antenna (ILA) elements is proposed. An additional low-frequency operating mode arises in the MIMO system due to symmetric arrangement of elements with interconnected ground, apart from the resonance of the isolated ILA. Utilizing this mode, the proposed MIMO antenna operates in wide frequency range of <inline-formula> <tex-math notation=\"LaTeX\">$\\text{2.70-4.94}$</tex-math></inline-formula> GHz (impedance bandwidth <inline-formula> <tex-math notation=\"LaTeX\">$=58.6\\%$</tex-math></inline-formula>). The proposed four-element MIMO system occupies compact total area of <inline-formula><tex-math notation=\"LaTeX\">$0.13\\lambda _{0}^{2}$</tex-math></inline-formula> ( <inline-formula><tex-math notation=\"LaTeX\">$\\lambda _{0}$</tex-math></inline-formula> = highest operating wavelength) and has no complex decoupling scheme. Satisfactory interelement isolation (<inline-formula><tex-math notation=\"LaTeX\"> $\\geq $</tex-math></inline-formula>11 dB) and directional pattern with average gain <inline-formula> <tex-math notation=\"LaTeX\">$\\approx 4$</tex-math></inline-formula> dBi are achieved throughout the operating band of the proposed MIMO antenna. Furthermore, envelope correlation coefficient <inline-formula><tex-math notation=\"LaTeX\"> $< 0.1$</tex-math></inline-formula> and mean effective gain ratio close to 1 are obtained in the working frequencies, confirming satisfactory MIMO/diversity performance.",
"title": ""
},
{
"docid": "4019beb9fa6ec59b4b19c790fe8ff832",
"text": "R. Cropanzano, D. E. Rupp, and Z. S. Byrne (2003) found that emotional exhaustion (i.e., 1 dimension of burnout) negatively affects organizational citizenship behavior (OCB). The authors extended this research by investigating relationships among 3 dimensions of burnout (emotional exhaustion, depersonalization, and diminished personal accomplishment) and OCB. They also affirmed the mediating effect of job involvement on these relationships. Data were collected from 296 paired samples of service employees and their supervisors from 12 hotels and restaurants in Taiwan. Findings demonstrated that emotional exhaustion and diminished personal accomplishment were related negatively to OCB, whereas depersonalization had no independent effect on OCB. Job involvement mediated the relationships among emotional exhaustion, diminished personal accomplishment, and OCB.",
"title": ""
},
{
"docid": "35a27f366b2a1f04f511eee99e1bfbb5",
"text": "Physical unclonable functions (PUFs) have emerged as a promising security primitive for low-cost authentication and cryptographic key generation. However, PUF stability with respect to temporal variations still limits its utility and widespread acceptance. Previous techniques in the literature have focused on improving PUF robustness against voltage and temperature variations, but the issues associated with aging have been largely neglected. In this paper, we address aging in the popular ring oscillator (RO)-PUF. We propose a new aging-resistant design that reduces sensitivity to negative-bias temperature instability and hot-carrier injection stresses. Simulation results demonstrate that our aging-resistant RO-PUF (ARO-PUF) can produce unique, random, and more reliable keys. On an average, only 3.8% bits of an ARO-PUF flip over a ten-year operational period because of aging, compared with a 12.8% bit flip for a conventional RO-PUF. The proposed ARO-PUF allows us to eliminate the need for error correction by adding extra ROs. The result shows that an ARO-PUF saves ~32x area overhead compared with a conventional RO-PUF with required error correction schemes for a reliable key.",
"title": ""
},
{
"docid": "112867f6b760184a310baaab53ea51c9",
"text": "Here I give an overview of recent work on natural language realization with Combinatory Categorial Grammar, done by Michael White and his colleagues, with some more specific descriptions of the algorithms used, where they were unclear to me. In particular, I focus on the work presented in his 2007 paper [5], in which White et al describe a process for extracting a grammar from the CCGBank and using it to generate text based on semantic descriptions given in HLDS, the Hybrid Logic Dependency Semantics. I also give some background from his earlier work, a description of some background ideas.",
"title": ""
},
{
"docid": "40fda9cba754c72f1fba17dd3a5759b2",
"text": "Humans can easily recognize handwritten words, after gaining basic knowledge of languages. This knowledge needs to be transferred to computers for automatic character recognition. The work proposed in this paper tries to automate recognition of handwritten hindi isolated characters using multiple classifiers. For feature extraction, it uses histogram of oriented gradients as one feature and profile projection histogram as another feature. The performance of various classifiers has been evaluated using theses features experimentally and quadratic SVM has been found to produce better results.",
"title": ""
},
{
"docid": "ddc18f2d129d95737b8f0591560d202d",
"text": "A variety of real-life mobile sensing applications are becoming available, especially in the life-logging, fitness tracking and health monitoring domains. These applications use mobile sensors embedded in smart phones to recognize human activities in order to get a better understanding of human behavior. While progress has been made, human activity recognition remains a challenging task. This is partly due to the broad range of human activities as well as the rich variation in how a given activity can be performed. Using features that clearly separate between activities is crucial. In this paper, we propose an approach to automatically extract discriminative features for activity recognition. Specifically, we develop a method based on Convolutional Neural Networks (CNN), which can capture local dependency and scale invariance of a signal as it has been shown in speech recognition and image recognition domains. In addition, a modified weight sharing technique, called partial weight sharing, is proposed and applied to accelerometer signals to get further improvements. The experimental results on three public datasets, Skoda (assembly line activities), Opportunity (activities in kitchen), Actitracker (jogging, walking, etc.), indicate that our novel CNN-based approach is practical and achieves higher accuracy than existing state-of-the-art methods.",
"title": ""
},
{
"docid": "d64b3b68f094ade7881f2bb0f2572990",
"text": "Large-scale transactional systems still suffer from not viable trust management strategies. Given its intrinsic characteristics, blockchain technology appears as interesting from this perspective. A semantic layer built upon a basic blockchain infrastructure would join the benefits of flexible resource/service discovery and validation by consensus. This paper proposes a novel Service-oriented Architecture (SOA) based on a semantic blockchain. Registration, discovery, selection and payment operations are implemented as smart contracts, allowing decentralized execution and trust. Potential applications include material and immaterial resource marketplaces and trustless collaboration among autonomous entities, spanning many areas of interest for smart cities and communities.",
"title": ""
},
{
"docid": "ce5ede79daee56d50f5b086ad8f18a28",
"text": "In order to improve the efficiency and classification ability of Support vector machines (SVM) based on stochastic gradient descent algorithm, three algorithms of improved stochastic gradient descent (SGD) are used to solve support vector machine, which are Momentum, Nesterov accelerated gradient (NAG), RMSprop. The experimental results show that the algorithm based on RMSprop for solving the linear support vector machine has faster convergence speed and higher testing precision on five datasets (Alpha, Gamma, Delta, Mnist, Usps).",
"title": ""
},
{
"docid": "1d101f1b8075ea375894365fbd545c36",
"text": "We propose a novel formal model for optimizing interactive information retrieval interfaces. To model interactive retrieval in a general way, we frame the task of an interactive retrieval system as to choose a sequence of interface cards to present to the user. At each interaction lap, the system's goal is to choose an interface card that can maximize the expected gain of relevant information for the user while minimizing the effort of the user with consideration of the user's action model and any desired constraints on the interface card. We show that such a formal interface card model can not only cover the Probability Ranking Principle for Interactive Information Retrieval as a special case by making multiple simplification assumptions, but also be used to derive a novel formal interface model for adaptively optimizing navigational interfaces in a retrieval system. Experimental results show that the proposed model is effective in automatically generating adaptive navigational interfaces, which outperform the baseline pre-designed static interfaces.",
"title": ""
},
{
"docid": "13dde006bafe07a259b15ffade01e972",
"text": "Although studies on employee recovery accumulate at a stunning pace, the commonly used theory (Effort-Recovery model) that explains how recovery occurs has not been explicitly tested. We aimed to unravel the recovery process by examining whether off-job activities enhance next morning vigor to the extent that they enable employees to relax and detach from work. In addition, we investigated whether adequate recovery also helps employees to work with more enthusiasm and vigor on the next workday. On five consecutive days, a total of 74 employees (356 data points) reported the hours they spent on various off-job activities, their feelings of psychological detachment, and feelings of relaxation before going to sleep. Feelings of vigor were reported on the next morning, and day-levels of work engagement were reported after work. As predicted, leisure activities (social, low-effort, and physical activities) increased next morning vigor through enhanced psychological detachment and relaxation. High-duty off-job activities (work and household tasks) reduced vigor because these activities diminished psychological detachment and relaxation. Moreover, off-job activities significantly affected next day work engagement. Our results support the assumption that recovery occurs when employees engage in off-job activities that allow for relaxation and psychological detachment. The findings also underscore the significance of recovery after work: Adequate recovery not only enhances vigor in the morning, but also helps employees to stay engaged during the next workday.",
"title": ""
},
{
"docid": "ae1c2bd24b72cd7a2321a935b7df1e8c",
"text": "This research aims at providing a decision-support method for the government and the public of their water resource projects allocation. The Water Poverty Index (WPI) is introduced to evaluate the extent of water supply shortage, and the WPI driving factors of each evaluated unit are analyzed using the Least Square Error (LSE) method. Then 32 types of water-supply related projects are organized by their contribution to WPI components. This paper provides a method to calculate a decision matrix bases on the results of WPI driving factor analysis and 32 types of water resource projects. The result of decision matrix calculation can be visualized to illustrate suggestions for the government and the public that which projects are most effective for a certain administrative unit. Qianjiang district, a mountainous poverty district in Chongqing city, southwest of China, is chosen as the study case in this research.",
"title": ""
},
{
"docid": "96452a8d943c391bdc09fa9b19e9a76f",
"text": "A statistical active appearance model (AAM) is developed to track and detect eye blinking. The model has been designed to be robust to variations of head pose or gaze. In particular we analyze and determine the model parameters which encode the variations caused by blinking. This global model is further extended using a series of sub-models to enable independent modeling and tracking of the two eye regions. Several methods to enable measurement and detection of eye-blink are proposed and evaluated. The results of various tests on different image databases are presented to validate each model.",
"title": ""
},
{
"docid": "93388c2897ec6ec7141bcc820ab6734c",
"text": "We address the task of single depth image inpainting. Without the corresponding color images, previous or next frames, depth image inpainting is quite challenging. One natural solution is to regard the image as a matrix and adopt the low rank regularization just as color image inpainting. However, the low rank assumption does not make full use of the properties of depth images. A shallow observation inspires us to penalize the nonzero gradients by sparse gradient regularization. However, statistics show that though most pixels have zero gradients, there is still a non-ignorable part of pixels, whose gradients are small but nonzero. Based on this property of depth images, we propose a low gradient regularization method in which we reduce the penalty for small gradients while penalizing the nonzero gradients to allow for gradual depth changes. The proposed low gradient regularization is integrated with the low rank regularization into the low rank low gradient approach for depth image inpainting. We compare our proposed low gradient regularization with the sparse gradient regularization. The experimental results show the effectiveness of our proposed approach.",
"title": ""
}
] |
scidocsrr
|
69f362eb5aa81f0c179830c69eeb49f2
|
Fabrication of omni-directional driving system using unconstrained steel ball
|
[
{
"docid": "51fbebff61232e46381b243023c35dc5",
"text": "In this paper, mechanical design of a novel spherical wheel shape for a omni-directional mobile robot is presented. The wheel is used in a omnidirectional mobile robot realizing high step-climbing capability with its hemispherical wheel. Conventional Omniwheels can realize omnidirectional motion, however they have a poor step overcoming ability due to the sub-wheel small size. The proposed design solves this drawback by means of a 4 wheeled design. \"Omni-Ball\" is formed by two passive rotational hemispherical wheels and one active rotational axis. An actual prototype model has been developed to illustrate the concept and to perform preliminary motion experiments, through which the basic performance of the Omnidirectional vehicle with this proposed Omni-Ball mechanism was confirmed. An prototype has been developed to illustrate the concept. Motion experiments, with a test vehicle are also presented.",
"title": ""
}
] |
[
{
"docid": "41b7b8638fa1d3042873ca70f9c338f1",
"text": "The LC50 (78, 85 ppm) and LC90 (88, 135 ppm) of Anagalis arvensis and Calendula micrantha respectively against Biomphalaria alexandrina were higher than those of the non-target snails, Physa acuta, Planorbis planorbis, Helisoma duryi and Melanoides tuberculata. In contrast, the LC50 of Niclosamide (0.11 ppm) and Copper sulphate (CuSO4) (0.42 ppm) against B. alexandrina were lower than those of the non-target snails. The mortalities percentage among non-target snails ranged between 0.0 & 20% when sublethal concentrations of CuSO4 against B. alexandrina mixed with those of C. micrantha and between 0.0 & 40% when mixed with A. arvensis. Mortalities ranged between 0.0 & 50% when Niclosamide was mixed with each of A. arvensis and C. micrantha. A. arvensis induced 100% mortality on Oreochromis niloticus after 48 hrs exposure and after 24 hrs for Gambusia affinis. C. micrantha was non-toxic to the fish. The survival rate of O. niloticus and G. affinis after 48 hrs exposure to 0.11 ppm of Niclosamide were 83.3% & 100% respectively. These rates were 91.7% & 93.3% respectively when each of the two fish species was exposed to 0.42 ppm of CuSO4. Mixture of sub-lethal concentrations of A. arvensis against B. alexandrina and those of Niclosamide or CuSO4 at ratios 10:40 & 25:25 induced 66.6% mortalities on O. niloticus and 83.3% at 40:10. These mixtures caused 100% mortalities on G. affinis at all ratios. A. arvensis CuSO4 mixtures at 10:40 induced 83.3% & 40% mortalities on O. niloticus and G. affinis respectively and 100% mortalities on both fish species at ratios 25:25 & 40:10. A mixture of sub-lethal concentrations of C. micrantha against B. alexandrina and of Niclosamide or CuSO4 caused mortalities of O. niloticus between 0.0 & 33.3% and between 5% & 35% of G. affinis. The residue of Cu in O. niloticus were 4.69, 19.06 & 25.37 mg/1kgm fish after 24, 48 & 72 hrs exposure to LC0 of CuSO4 against B. alexandrina respectively.",
"title": ""
},
{
"docid": "9e32ff523f592d1988b79b5a8a56ef81",
"text": "We propose a semi-automatic method to obtain foreground object masks for a large set of related images. We develop a stagewise active approach to propagation: in each stage, we actively determine the images that appear most valuable for human annotation, then revise the foreground estimates in all unlabeled images accordingly. In order to identify images that, once annotated, will propagate well to other examples, we introduce an active selection procedure that operates on the joint segmentation graph over all images. It prioritizes human intervention for those images that are uncertain and influential in the graph, while also mutually diverse. We apply our method to obtain foreground masks for over 1 million images. Our method yields state-of-the-art accuracy on the ImageNet and MIT Object Discovery datasets, and it focuses human attention more effectively than existing propagation strategies.",
"title": ""
},
{
"docid": "3e9f54363d930c703dfe20941b2568b0",
"text": "Organizations are looking to new graduate nurses to fill expected staffing shortages over the next decade. Creative and effective onboarding programs will determine the success or failure of these graduates as they transition from student to professional nurse. This longitudinal quantitative study with repeated measures used the Casey-Fink Graduate Nurse Experience Survey to investigate the effects of offering a prelicensure extern program and postlicensure residency program on new graduate nurses and organizational outcomes versus a residency program alone. Compared with the nurse residency program alone, the combination of extern program and nurse residency program improved neither the transition factors most important to new nurse graduates during their first year of practice nor a measure important to organizations, retention rates. The additional cost of providing an extern program should be closely evaluated when making financially responsible decisions.",
"title": ""
},
{
"docid": "301fc0a18bec8128165ec73e15e66eb1",
"text": "data structure queries (A). Some queries check properties of abstract data struct [11][131] such as stacks, hash tables, trees, and so on. These queries are not domain because the data structures can hold data of any domain. These queries are also differ the programming construct queries, because they check the constraints of well-defined a data structures. For example, a query about a binary tree may find the number of its nod have only one child. On the other hand, programming construct queries usually span di data structures. Abstract data structure queries can usually be expressed as class invar could be packaged with the class that implements an ADT. However, the queries that p information rather than detect violations are best answered by dynamic queries. For ex monitoring B+ trees using queries may indicate whether this data structure is efficient f underlying problem. Program construct queries (P). Program construct queries verify object relationships that related to the program implementation and not directly to the problem domain. Such q verify and visualize groups of objects that have to conform to some constraints because lower level of program design and implementation. For example, in a graphical user int implementation, every window object has a parent window, and this window referenc children widgets through the widget_collection collection (section 5.2.2). Such construct is n",
"title": ""
},
{
"docid": "16d7767e9f2216ce0789b8a92d8d65e4",
"text": "In the rst genetic programming (GP) book John Koza noticed that tness histograms give a highly informative global view of the evolutionary process (Koza, 1992). The idea is further developed in this paper by discussing GP evolution in analogy to a physical system. I focus on three interrelated major goals: (1) Study the the problem of search eeort allocation in GP; (2) Develop methods in the GA/GP framework that allow adap-tive control of diversity; (3) Study ways of adaptation for faster convergence to optimal solution. An entropy measure based on phenotype classes is introduced which abstracts tness histograms. In this context, entropy represents a measure of population diversity. An analysis of entropy plots and their correlation with other statistics from the population enables an intelligent adaptation of search control.",
"title": ""
},
{
"docid": "3962a6ca8200000b650d210dae7899ec",
"text": "Mental fatigue is often characterized by reduced motivation for effortful activity and impaired task performance. We used subjective, behavioral (performance), and psychophysiological (P3, pupil diameter) measures during an n-back task to investigate the link between mental fatigue and task disengagement. After 2 h, we manipulated the rewards to examine a possible reengagement effect. Analyses showed that, with increasing fatigue and time-on-task, performance, P3 amplitude, and pupil diameter decreased. After increasing the rewards, all measures reverted to higher levels. Multilevel analysis revealed positive correlations between the used measures with time-on-task. We interpret these results as support for a strong link between task disengagement and mental fatigue.",
"title": ""
},
{
"docid": "51d534721e7003cf191189be37342394",
"text": "This paper addresses the problem of automatic player identification in broadcast sports videos filmed with a single side-view medium distance camera. Player identification in this setting is a challenging task because visual cues such as faces and jersey numbers are not clearly visible. Thus, this task requires sophisticated approaches to capture distinctive features from players to distinguish them. To this end, we use Convolutional Neural Networks (CNN) features extracted at multiple scales and encode them with an advanced pooling, called Fisher vector. We leverage it for exploring representations that have sufficient discriminatory power and ability to magnify subtle differences. We also analyze the distinguishing parts of the players and present a part based pooling approach to use these distinctive feature points. The resulting player representation is able to identify players even in difficult scenes. It achieves state-of-the-art results up to 96% on NBA basketball clips.",
"title": ""
},
{
"docid": "5fc192fc2f5be64a69eea7c4e848dd95",
"text": "Hypertrophic scars and keloids are fibroproliferative disorders that may arise after any deep cutaneous injury caused by trauma, burns, surgery, etc. Hypertrophic scars and keloids are cosmetically problematic, and in combination with functional problems such as contractures and subjective symptoms including pruritus, these significantly affect patients' quality of life. There have been many studies on hypertrophic scars and keloids; but the mechanisms underlying scar formation have not yet been well established, and prophylactic and treatment strategies remain unsatisfactory. In this review, the authors introduce and summarize classical concepts surrounding wound healing and review recent understandings of the biology, prevention and treatment strategies for hypertrophic scars and keloids.",
"title": ""
},
{
"docid": "46eaa1108cf5027b5427fda8fc9197ff",
"text": "ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.",
"title": ""
},
{
"docid": "70ff662c629aa2eb4e524387476826aa",
"text": "Interconnection of number of devices through internet describes the Internet of things (IoT). Every object is connected with each other through unique identifier so that data can be transferred without human to human interaction. It allows establishing solutions for better management of natural resources. The smart objects embedded with sensors enables interaction with the physical and logical worlds according to the concept of IoT. In this paper proposed system is based on IoT that uses real time input data. Smart farm irrigation system uses android phone for remote monitoring and controlling of drips through wireless sensor network. Zigbee is used for communication between sensor nodes and base station. Real time sensed data handling and demonstration on the server is accomplished using web based java graphical user interface. Wireless monitoring of field irrigation system reduces human intervention and allows remote monitoring and controlling on android phone. Cloud Computing is an attractive solution to the large amount of data generated by the wireless sensor network. This paper proposes and evaluates a cloud-based wireless communication system to monitor and control a set of sensors and actuators to assess the plants water need.",
"title": ""
},
{
"docid": "9a1986c78681a8601d760dccf57f4302",
"text": "Perceptron training is widely applied in the natural language processing community for learning complex structured models. Like all structured prediction learning frameworks, the structured perceptron can be costly to train as training complexity is proportional to inference, which is frequently non-linear in example sequence length. In this paper we investigate distributed training strategies for the structured perceptron as a means to reduce training times when computing clusters are available. We look at two strategies and provide convergence bounds for a particular mode of distributed structured perceptron training based on iterative parameter mixing (or averaging). We present experiments on two structured prediction problems – namedentity recognition and dependency parsing – to highlight the efficiency of this method.",
"title": ""
},
{
"docid": "4e4bd38230dba0012227d8b40b01e867",
"text": "In this paper, we present a travel guidance system W2Go (Where to Go), which can automatically recognize and rank the landmarks for travellers. In this system, a novel Automatic Landmark Ranking (ALR) method is proposed by utilizing the tag and geo-tag information of photos in Flickr and user knowledge from Yahoo Travel Guide. ALR selects the popular tourist attractions (landmarks) based on not only the subjective opinion of the travel editors as is currently done on sites like WikiTravel and Yahoo Travel Guide, but also the ranking derived from popularity among tourists. Our approach utilizes geo-tag information to locate the positions of the tag-indicated places, and computes the probability of a tag being a landmark/site name. For potential landmarks, impact factors are calculated from the frequency of tags, user numbers in Flickr, and user knowledge in Yahoo Travel Guide. These tags are then ranked based on the impact factors. Several representative views for popular landmarks are generated from the crawled images with geo-tags to describe and present them in context of information derived from several relevant reference sources. The experimental comparisons to the other systems are conducted on eight famous cities over the world. User-based evaluation demonstrates the effectiveness of the proposed ALR method and the W2Go system.",
"title": ""
},
{
"docid": "137cb8666a1b5465abf8beaf394e3a30",
"text": "Person re-identification (re-ID) has been gaining in popularity in the research community owing to its numerous applications and growing importance in the surveillance industry. Recent methods often employ partial features for person re-ID and offer fine-grained information beneficial for person retrieval. In this paper, we focus on learning improved partial discriminative features using a deep convolutional neural architecture, which includes a pyramid spatial pooling module for efficient person feature representation. Furthermore, we propose a multi-task convolutional network that learns both personal attributes and identities in an end-to-end framework. Our approach incorporates partial features and global features for identity and attribute prediction, respectively. Experiments on several large-scale person re-ID benchmark data sets demonstrate the accuracy of our approach. For example, we report rank-1 accuracies of 85.37% (+3.47 %) and 92.81% (+0.51 %) on the DukeMTMC re-ID and Market-1501 data sets, respectively. The proposed method shows encouraging improvements compared with the state-of-the-art methods.",
"title": ""
},
{
"docid": "aecc5e00e4be529c76d6d629310c8b5c",
"text": "For a user to perceive continuous interactive response time in a visualization tool, the rule of thumb is that it must process, deliver, and display rendered results for any given interaction in under 100 milliseconds. In many visualization systems, successive interactions trigger independent queries and caching of results. Consequently, computationally expensive queries like multidimensional clustering cannot keep up with rapid sequences of interactions, precluding visual benefits such as motion parallax. In this paper, we describe a heuristic prefetching technique to improve the interactive response time of KMeans clustering in dynamic query visualizations of multidimensional data. We address the tradeoff between high interaction and intense query computation by observing how related interactions on overlapping data subsets produce similar clustering results, and characterizing these similarities within a parameter space of interaction. We focus on the two-dimensional parameter space defined by the minimum and maximum values of a time range manipulated by dragging and stretching a one-dimensional filtering lens over a plot of time series data. Using calculation of nearest neighbors of interaction points in parameter space, we reuse partial query results from prior interaction sequences to calculate both an immediate best-effort clustering result and to schedule calculation of an exact result. The method adapts to user interaction patterns in the parameter space by reprioritizing the interaction neighbors of visited points in the parameter space. A performance study on Mesonet meteorological data demonstrates that the method is a significant improvement over the baseline scheme in which interaction triggers on-demand, exact-range clustering with LRU caching. We also present initial evidence that approximate, temporary clustering results are sufficiently accurate (compared to exact results) to convey useful cluster structure during rapid and protracted interaction.",
"title": ""
},
{
"docid": "81f9a52b6834095cd7be70b39af0e7f0",
"text": "In this paper we present BatchDB, an in-memory database engine designed for hybrid OLTP and OLAP workloads. BatchDB achieves good performance, provides a high level of data freshness, and minimizes load interaction between the transactional and analytical engines, thus enabling real time analysis over fresh data under tight SLAs for both OLTP and OLAP workloads.\n BatchDB relies on primary-secondary replication with dedicated replicas, each optimized for a particular workload type (OLTP, OLAP), and a light-weight propagation of transactional updates. The evaluation shows that for standard TPC-C and TPC-H benchmarks, BatchDB can achieve competitive performance to specialized engines for the corresponding transactional and analytical workloads, while providing a level of performance isolation and predictable runtime for hybrid workload mixes (OLTP+OLAP) otherwise unmet by existing solutions.",
"title": ""
},
{
"docid": "50d60d023f55ec1f24e700198835e8c8",
"text": "Given the increasing interest for future connected vehicles, the long term evolution (LTE) specifications are being enhanced by 3GPP to cope with the vehicle-to-everything (V2X) scenarios starting from the upcoming Release 14, which will include, among others, vehicle-to-vehicle (V2V) direct communications. Since the main service expected for connected vehicles is the cooperative awareness, in this work we aim at deriving the number of neighbors that can be managed by LTE-V2V. In addition to the normal half duplex (HD) radios, which strictly limit the granularity of resource allocation, advanced full duplex (FD) radios are also considered. Results show that LTE-V2V with HD radios is able to manage up to few tens of neighbors, whereas FD can increase this limit significantly at the expense of an increase of complexity.",
"title": ""
},
{
"docid": "7b1b0e31384cb99caf0f3d8cf8134a53",
"text": "Toxic epidermal necrolysis (TEN) is one of the most threatening adverse reactions to various drugs. No case of concomitant occurrence TEN and severe granulocytopenia following the treatment with cefuroxime has been reported to date. Herein we present a case of TEN that developed eighteen days of the initiation of cefuroxime axetil therapy for urinary tract infection in a 73-year-old woman with chronic renal failure and no previous history of allergic diathesis. The condition was associated with severe granulocytopenia and followed by gastrointestinal hemorrhage, severe sepsis and multiple organ failure syndrome development. Despite intensive medical treatment the patient died. The present report underlines the potential of cefuroxime to simultaneously induce life threatening adverse effects such as TEN and severe granulocytopenia. Further on, because the patient was also taking furosemide for chronic renal failure, the possible unfavorable interactions between the two drugs could be hypothesized. Therefore, awareness of the possible drug interaction is necessary, especially when given in conditions of their altered pharmacokinetics as in case of chronic renal failure.",
"title": ""
},
{
"docid": "8f79bd3f51ec54a3e86553514881088c",
"text": "A time series is a sequence of observations collected over fixed sampling intervals. Several real-world dynamic processes can be modeled as a time series, such as stock price movements, exchange rates, temperatures, among others. As a special kind of data stream, a time series may present concept drift, which affects negatively time series analysis and forecasting. Explicit drift detection methods based on monitoring the time series features may provide a better understanding of how concepts evolve over time than methods based on monitoring the forecasting error of a base predictor. In this paper, we propose an online explicit drift detection method that identifies concept drifts in time series by monitoring time series features, called Feature Extraction for Explicit Concept Drift Detection (FEDD). Computational experiments showed that FEDD performed better than error-based approaches in several linear and nonlinear artificial time series with abrupt and gradual concept drifts.",
"title": ""
},
{
"docid": "3dbe750d6e06963c8b983343f1341a41",
"text": "Automatic bug finding with static analysis requires precise tracking of different memory object values. This paper describes a memory modeling method for static analysis of C programs. It is particularly suitable for precise path-sensitive analyses, e.g., symbolic execution. It can handle almost all kinds of C expressions, including arbitrary levels of pointer dereferences, pointer arithmetic, composite array and struct data types, arbitrary type casts, dynamic memory allocation, etc. It maps aliased lvalue expressions to the identical object without extra alias analysis. The model has been implemented in the Clang static analyzer and enhanced the analyzer a lot by enabling it to have precise value tracking ability.",
"title": ""
},
{
"docid": "68bb80aa94c4d4119d1974f644cf9190",
"text": "This paper introduces the design of a l.8 V low dropout voltage regulator (LDO) and a foldback current limit circuit which limits the output current to 3 mA when load over-current occurs. The LDO was implemented in a 0.18 μm CMOS technology. The measured result reveals that the LDO′s power supply rejection (PSR) is about −58 dB and –54 dB at 20 Hz and 1 kHz respectively, the response time is 4 μs and the quiescent current is 20 μA. The designed LDO regulator can work with a supply voltage down to 2.0 V with a drop-out voltage of 200 mV at a maximum load current of 240 mA.",
"title": ""
}
] |
scidocsrr
|
9bbd18072b5f4665fde503174efba01e
|
Shifting the Baseline: Single Modality Performance on Visual Navigation & QA
|
[
{
"docid": "9c44b6e7b91ecfeab5bba95a25d59401",
"text": "Many recent papers address reading comprehension, where examples consist of (question, passage, answer) tuples. Presumably, a model must combine information from both questions and passages to predict corresponding answers. However, despite intense interest in the topic, with hundreds of published papers vying for leaderboard dominance, basic questions about the difficulty of many popular benchmarks remain unanswered. In this paper, we establish sensible baselines for the bAbI, SQuAD, CBT, CNN, and Whodid-What datasets, finding that questionand passage-only models often perform surprisingly well. On 14 out of 20 bAbI tasks, passage-only models achieve greater than 50% accuracy, sometimes matching the full model. Interestingly, while CBT provides 20-sentence passages, only the last is needed for comparably accurate prediction. By comparison, SQuAD and CNN appear better-constructed.",
"title": ""
},
{
"docid": "658f2d045fe005ee1a4016b2de0ae1b1",
"text": "Given a partial description like “she opened the hood of the car,” humans can reason about the situation and anticipate what might come next (“then, she examined the engine”). In this paper, we introduce the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning. We present Swag, a new dataset with 113k multiple choice questions about a rich spectrum of grounded situations. To address the recurring challenges of the annotation artifacts and human biases found in many existing datasets, we propose Adversarial Filtering (AF), a novel procedure that constructs a de-biased dataset by iteratively training an ensemble of stylistic classifiers, and using them to filter the data. To account for the aggressive adversarial filtering, we use state-of-theart language models to massively oversample a diverse set of potential counterfactuals. Empirical results demonstrate that while humans can solve the resulting inference problems with high accuracy (88%), various competitive models struggle on our task. We provide comprehensive analysis that indicates significant opportunities for future research.",
"title": ""
},
{
"docid": "09860eae7ecc85460c3c3d50ae749f69",
"text": "Following verbal route instructions requires knowledge of language, space, action and perception. We present MARCO, an agent that follows free-form, natural language route instructions by representing and executing a sequence of compound action specifications that model which actions to take under which conditions. MARCO infers implicit actions from knowledge of both linguistic conditional phrases and from spatial action and local configurations. Thus, MARCO performs explicit actions, implicit actions necessary to achieve the stated conditions, and exploratory actions to learn about the world. We gathered a corpus of 786 route instructions from six people in three large-scale virtual indoor environments. Thirtysix other people followed these instructions and rated them for quality. These human participants finished at the intended destination on 69% of the trials. MARCO followed the same instructions in the same environments, with a success rate of 61%. We measured the efficacy of action inference with MARCO variants lacking action inference: executing only explicit actions, MARCO succeeded on just 28% of the trials. For this task, inferring implicit actions is essential to follow poor instructions, but is also crucial for many highly-rated route instructions.",
"title": ""
}
] |
[
{
"docid": "d71faafdcf1b97951e979f13dbe91cb2",
"text": "We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrasebased statistical machine translation.",
"title": ""
},
{
"docid": "2cea5f37c8c03fc0b6abc9e5d70bb1b3",
"text": "This paper summarize our approach to author profiling task – a part of evaluation lab PAN’13. We have used ensemble-based classification on large features set. All the features are roughly described and experimental section provides evaluation of different methods and classification approaches.",
"title": ""
},
{
"docid": "edbad8d3889a431c16e4a51d0c1cc19c",
"text": "We propose to automatically create capsule wardrobes. Given an inventory of candidate garments and accessories, the algorithm must assemble a minimal set of items that provides maximal mix-and-match outfits. We pose the task as a subset selection problem. To permit efficient subset selection over the space of all outfit combinations, we develop submodular objective functions capturing the key ingredients of visual compatibility, versatility, and user-specific preference. Since adding garments to a capsule only expands its possible outfits, we devise an iterative approach to allow near-optimal submodular function maximization. Finally, we present an unsupervised approach to learn visual compatibility from \"in the wild\" full body outfit photos; the compatibility metric translates well to cleaner catalog photos and improves over existing methods. Our results on thousands of pieces from popular fashion websites show that automatic capsule creation has potential to mimic skilled fashionistas in assembling flexible wardrobes, while being significantly more scalable.",
"title": ""
},
{
"docid": "4804ace4dcbc0c58ae460d0791d00ec7",
"text": "The development and usage of Unmanned Aerial Vehicles (UAVs) quickly increased in the last decades, mainly for military purposes. This technology is also now of high interest in non-military contexts like logistics, environmental studies and different areas of civil protection. While the technology for operating a single UAV is rather mature, additional efforts are still necessary for using UAVs in fleets (or swarms). The Aid to SItuation Management based on MUltimodal, MUltiUAVs, MUltilevel acquisition Techniques (ASIMUT) project which is supported by the European Defence Agency (EDA) aims at investigating and demonstrating dedicated surveillance services based on fleets of UAVs. The aim is to enhance the situation awareness of an operator and to decrease his workload by providing support for the detection of threats based on multi-sensor multi-source data fusion. The operator is also supported by the combination of information delivered by the heterogeneous swarms of UAVs and by additional information extracted from intelligence databases. As a result, a distributed surveillance system increasing detection, high-level data fusion capabilities and UAV autonomy is proposed.",
"title": ""
},
{
"docid": "4e35031e5d0e6698f90bfec7a1e6bfb8",
"text": "Numerous studies have examined the neuronal inputs and outputs of many areas within the mammalian cerebral cortex, but how these areas are organized into neural networks that communicate across the entire cortex is unclear. Over 600 labeled neuronal pathways acquired from tracer injections placed across the entire mouse neocortex enabled us to generate a cortical connectivity atlas. A total of 240 intracortical connections were manually reconstructed within a common neuroanatomic framework, forming a cortico-cortical connectivity map that facilitates comparison of connections from different cortical targets. Connectivity matrices were generated to provide an overview of all intracortical connections and subnetwork clusterings. The connectivity matrices and cortical map revealed that the entire cortex is organized into four somatic sensorimotor, two medial, and two lateral subnetworks that display unique topologies and can interact through select cortical areas. Together, these data provide a resource that can be used to further investigate cortical networks and their corresponding functions.",
"title": ""
},
{
"docid": "7e6fafe512ccb0a9760fab1b14aa374f",
"text": "Studying execution of concurrent real-time online systems, to identify far-reaching and hard to reproduce latency and performance problems, requires a mechanism able to cope with voluminous information extracted from execution traces. Furthermore, the workload must not be disturbed by tracing, thereby causing the problematic behavior to become unreproducible.\n In order to satisfy this low-disturbance constraint, we created the LTTng kernel tracer. It is designed to enable safe and race-free attachment of probes virtually anywhere in the operating system, including sites executed in non-maskable interrupt context.\n In addition to being reentrant with respect to all kernel execution contexts, LTTng offers good performance and scalability, mainly due to its use of per-CPU data structures, local atomic operations as main buffer synchronization primitive, and RCU (Read-Copy Update) mechanism to control tracing.\n Given that kernel infrastructure used by the tracer could lead to infinite recursion if traced, and typically requires non-atomic synchronization, this paper proposes an asynchronous mechanism to inform the kernel that a buffer is ready to read. This ensures that tracing sites do not require any kernel primitive, and therefore protects from infinite recursion.\n This paper presents the core of LTTng's buffering algorithms and measures its performance.",
"title": ""
},
{
"docid": "c523c8870d864a760fbb78bfbf4b00b6",
"text": "Deep neural networks are among the most influential architectures of deep learning algorithms, being deployed in many mobile intelligent applications. End-side services, such as intelligent personal assistants (IPAs), autonomous cars, and smart home services often employ either simple local models or complex remote models on the cloud. Mobile-only and cloud-only computations are currently the status-quo approaches. In this paper, we propose an efficient, adaptive, and practical engine, JointDNN, for collaborative computation between a mobile device and cloud for DNNs in both inference and training phase. JointDNN not only provides an energy and performance efficient method of querying DNNs for the mobile side, but also benefits the cloud server by reducing the amount of its workload and communications compared to the cloud-only approach. Given the DNN architecture, we investigate the efficiency of processing some layers on the mobile device and some layers on the cloud server. We provide optimization formulations at layer granularity for forward and backward propagation in DNNs, which can adapt to mobile battery limitations and cloud server load constraints and quality of service. JointDNN achieves up to 18× and 32× reductions on the latency and mobile energy consumption of querying DNNs compared to the status-quo approaches, respectively.",
"title": ""
},
{
"docid": "4261306ca632ada117bdb69af81dcb3f",
"text": "Real-world deployments of wireless sensor networks (WSNs) require secure communication. It is important that a receiver is able to verify that sensor data was generated by trusted nodes. In some cases it may also be necessary to encrypt sensor data in transit. Recently, WSNs and traditional IP networks are more tightly integrated using IPv6 and 6LoWPAN. Available IPv6 protocol stacks can use IPsec to secure data exchange. Thus, it is desirable to extend 6LoWPAN such that IPsec communication with IPv6 nodes is possible. It is beneficial to use IPsec because the existing end-points on the Internet do not need to be modified to communicate securely with the WSN. Moreover, using IPsec, true end-to-end security is implemented and the need for a trustworthy gateway is removed. In this paper we provide End-to-End (E2E) secure communication between an IP enabled sensor nodes and a device on traditional Internet. This is the first compressed lightweight design, implementation, and evaluation of 6LoWPAN extension for IPsec on Contiki. Our extension supports both IPsec’s Authentication Header (AH) and Encapsulation Security Payload (ESP). Thus, communication endpoints are able to authenticate, encrypt and check the integrity of messages using standardized and established IPv6 mechanisms.",
"title": ""
},
{
"docid": "16da6b46cd53304923720ba4b5e92427",
"text": "Despite its unambiguous advantages, cellular phone use has been associated with harmful or potentially disturbing behaviors. Problematic use of the mobile phone is considered as an inability to regulate one’s use of the mobile phone, which eventually involves negative consequences in daily life (e.g., financial problems). The current article describes what can be considered dysfunctional use of the mobile phone and emphasizes its multifactorial nature. Validated assessment instruments to measure problematic use of the mobile phone are described. The available literature on risk factors for dysfunctional mobile phone use is then reviewed, and a pathways model that integrates the existing literature is proposed. Finally, the assumption is made that dysfunctional use of the mobile phone is part of a spectrum of cyber addictions that encompasses a variety of dysfunctional behaviors and implies involvement in specific online activities (e.g., video games, gambling, social networks, sex-related websites).",
"title": ""
},
{
"docid": "3433b283726a7e95ba5cb2a3c97cd195",
"text": "Black silicon (BSi) represents a very active research area in renewable energy materials. The rise of BSi as a focus of study for its fundamental properties and potentially lucrative practical applications is shown by several recent results ranging from solar cells and light-emitting devices to antibacterial coatings and gas-sensors. In this paper, the common BSi fabrication techniques are first reviewed, including electrochemical HF etching, stain etching, metal-assisted chemical etching, reactive ion etching, laser irradiation and the molten salt Fray-Farthing-Chen-Cambridge (FFC-Cambridge) process. The utilization of BSi as an anti-reflection coating in solar cells is then critically examined and appraised, based upon strategies towards higher efficiency renewable solar energy modules. Methods of incorporating BSi in advanced solar cell architectures and the production of ultra-thin and flexible BSi wafers are also surveyed. Particular attention is given to routes leading to passivated BSi surfaces, which are essential for improving the electrical properties of any devices incorporating BSi, with a special focus on atomic layer deposition of Al2O3. Finally, three potential research directions worth exploring for practical solar cell applications are highlighted, namely, encapsulation effects, the development of micro-nano dual-scale BSi, and the incorporation of BSi into thin solar cells. It is intended that this paper will serve as a useful introduction to this novel material and its properties, and provide a general overview of recent progress in research currently being undertaken for renewable energy applications.",
"title": ""
},
{
"docid": "3609f4923b9aebc3d18f31ac6ae78bea",
"text": "Cloud computing is playing an ever larger role in the IT infrastructure. The migration into the cloud means that we must rethink and adapt our security measures. Ultimately, both the cloud provider and the customer have to accept responsibilities to ensure security best practices are followed. Firewalls are one of the most critical security features. Most IaaS providers make firewalls available to their customers. In most cases, the customer assumes a best-case working scenario which is often not assured. In this paper, we studied the filtering behavior of firewalls provided by five different cloud providers. We found that three providers have firewalls available within their infrastructure. Based on our findings, we developed an open-ended firewall monitoring tool which can be used by cloud customers to understand the firewall's filtering behavior. This information can then be efficiently used for risk management and further security considerations. Measuring today's firewalls has shown that they perform well for the basics, although may not be fully featured considering fragmentation or stateful behavior.",
"title": ""
},
{
"docid": "17e087f27a3178e46dbe14fb25027641",
"text": "Social media has become an important tool for the business of marketers. Increasing exposure and traffics are the main two benefits of social media marketing. Most marketers are using social media to develop loyal fans and gain marketplace intelligence. Marketers reported increased benefits across all categories since 2013 and trademarks increased the number of loyal fans and sales [1]. Therefore, 2013 was a significant year for social media. Feeling the power of Instagram may be one of the most interesting cases. Social media is an effective key for fashion brands as they allow them to communicate directly with their consumers, promote various events and initiatives, and build brand awareness. As the increasing use of visual info graphic and marketing practices in social media, trademarks has begun to show more interest in Instagram. There is also no language barriers in Instagram and provides visuals which are very crucial for fashion industry. The purpose of this study is to determine and contrast the content sharing types of 10 well-known fashion brands (5 Turkish brands and 5 international brands), and to explain their attitude in Instagram. Hence, the content of Instagram accounts of those brands were examined according to post type (photo/video), content type (9 elements), number of likes and reviews, photo type (amateur/professional), shooting place (studio/outdoor/shops/etc.), and brand comments on their posts. This study provides a snapshot of how fashion brands utilize Instagram in their efforts of marketing.",
"title": ""
},
{
"docid": "f182fdd2f5bae84b5fc38284f83f0c27",
"text": "We adopted an approach based on an LSTM neural network to monitor and detect faults in industrial multivariate time series data. To validate the approach we created a Modelica model of part of a real gasoil plant. By introducing hacks into the logic of the Modelica model, we were able to generate both the roots and causes of fault behavior in the plant. Having a self-consistent data set with labeled faults, we used an LSTM architecture with a forecasting error threshold to obtain precision and recall quality metrics. The dependency of the quality metric on the threshold level is considered. An appropriate mechanism such as “one handle” was introduced for filtering faults that are outside of the plant operator field of interest.",
"title": ""
},
{
"docid": "1007a655557a8e4c99cd9caf904ceb5c",
"text": "OBJECTIVE\nTo compare the efficacy of 2 strategies, errorless learning (EL) and self-instruction training (SIT), for remediating emotion perception deficits in individuals with traumatic brain injury (TBI).\n\n\nDESIGN\nRandomized controlled trial comparing groups receiving 25 hours (across 10 weeks) of treatment with either EL or SIT with waitlist control.\n\n\nSETTING AND PARTICIPANTS\nEighteen adult outpatient volunteers with severe TBI who were at least 6 months postinjury.\n\n\nMAIN OUTCOMES MEASURES\nPhotograph-based emotion recognition tasks, The Awareness of Social Inferences Test, and questionnaire measures, for example, the Sydney Psychosocial Reintegration Scale.\n\n\nRESULTS\nBoth treatment groups showed modest improvement in emotion perception ability. Limited evidence suggests that SIT may be a favorable approach for this type of remediation.\n\n\nCONCLUSIONS\nAlthough further research is needed, there are reasons for optimism regarding rehabilitation of emotion perception following TBI.",
"title": ""
},
{
"docid": "c16499b3945603d04cf88fec7a2c0a85",
"text": "Recovering structure and motion parameters given a image pair or a sequence of images is a well studied problem in computer vision. This is often achieved by employing Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM) algorithms based on the real-time requirements. Recently, with the advent of Convolutional Neural Networks (CNNs) researchers have explored the possibility of using machine learning techniques to reconstruct the 3D structure of a scene and jointly predict the camera pose. In this work, we present a framework that achieves state-of-the-art performance on single image depth prediction for both indoor and outdoor scenes. The depth prediction system is then extended to predict optical flow and ultimately the camera pose and trained end-to-end. Our framework outperforms previous deep-learning based motion prediction approaches, and we also demonstrate that the state-of-the-art metric depths can be further improved using the knowledge of pose.",
"title": ""
},
{
"docid": "6992e0712e99e11b9ebe862c01c0882b",
"text": "This paper is in many respects a continuation of the earlier paper by the author published in Proc. R. Soc. A in 1998 entitled ‘A comprehensive methodology for the design of ships (and other complex systems)’. The earlier paper described the approach to the initial design of ships developedby the author during some 35years of design practice, including two previous secondments to teach ship design atUCL.Thepresent paper not only takes thatdevelopment forward, it also explains how the research tool demonstrating the author’s approach to initial ship design has now been incorporated in an industry based design system to provide a working graphically and numerically integrated design system. This achievement is exemplified by a series of practical design investigations, undertaken by the UCL Design Research Centre led by the author, which were mainly undertaken for industry clients in order to investigate real problems towhich the approachhasbrought significant insights.The other new strand in the present paper is the emphasis on the human factors or large scale ergonomics dimension, vital to complex and large scale design products but rarely hitherto beengiven sufficientprominence in the crucial formative stagesof large scale designbecauseof the inherent difficulties in doing so. The UCL Design Building Block approach has now been incorporated in the established PARAMARINE ship design system through a module entitled SURFCON. Work is now underway on an Engineering and Physical Sciences Research Council joint project with the University of Greenwich to interface the latter’s escape simulation toolmaritimeEXODUSwithSURFCONtoprovide initial design guidance to ship designers on personnelmovement. The paper’s concluding section considers the wider applicability of the integration of simulation during initial design with the graphically driven synthesis to other complex and large scale design tasks. The paper concludes by suggesting how such an approach to complex design can contribute to the teaching of designers and, moreover, how this designapproach can enable a creative qualitative approach to engineering design to be sustained despite the risk that advances in computer based methods might encourage emphasis being accorded to solely to quantitative analysis.",
"title": ""
},
{
"docid": "73b76fa13443a4c285dc9a97cfaa22dd",
"text": "As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a general mechanism, called packet leashes, for detecting and, thus defending against wormhole attacks, and we present a specific protocol, called TIK, that implements leashes. We also discuss topology-based wormhole detection, and show that it is impossible for these approaches to detect some wormhole topologies.",
"title": ""
},
{
"docid": "22d8bfa59bb8e25daa5905dbb9e1deea",
"text": "BACKGROUND\nSubacromial impingement syndrome (SAIS) is a painful condition resulting from the entrapment of anatomical structures between the anteroinferior corner of the acromion and the greater tuberosity of the humerus.\n\n\nOBJECTIVE\nThe aim of this study was to evaluate the short-term effectiveness of high-intensity laser therapy (HILT) versus ultrasound (US) therapy in the treatment of SAIS.\n\n\nDESIGN\nThe study was designed as a randomized clinical trial.\n\n\nSETTING\nThe study was conducted in a university hospital.\n\n\nPATIENTS\nSeventy patients with SAIS were randomly assigned to a HILT group or a US therapy group.\n\n\nINTERVENTION\nStudy participants received 10 treatment sessions of HILT or US therapy over a period of 2 consecutive weeks.\n\n\nMEASUREMENTS\nOutcome measures were the Constant-Murley Scale (CMS), a visual analog scale (VAS), and the Simple Shoulder Test (SST).\n\n\nRESULTS\nFor the 70 study participants (42 women and 28 men; mean [SD] age=54.1 years [9.0]; mean [SD] VAS score at baseline=6.4 [1.7]), there were no between-group differences at baseline in VAS, CMS, and SST scores. At the end of the 2-week intervention, participants in the HILT group showed a significantly greater decrease in pain than participants in the US therapy group. Statistically significant differences in change in pain, articular movement, functionality, and muscle strength (force-generating capacity) (VAS, CMS, and SST scores) were observed after 10 treatment sessions from the baseline for participants in the HILT group compared with participants in the US therapy group. In particular, only the difference in change of VAS score between groups (1.65 points) surpassed the accepted minimal clinically important difference for this tool.\n\n\nLIMITATIONS\nThis study was limited by sample size, lack of a control or placebo group, and follow-up period.\n\n\nCONCLUSIONS\nParticipants diagnosed with SAIS showed greater reduction in pain and improvement in articular movement functionality and muscle strength of the affected shoulder after 10 treatment sessions of HILT than did participants receiving US therapy over a period of 2 consecutive weeks.",
"title": ""
},
{
"docid": "be6ae3d9324fec5a4a5a5e8b5f0d6e0f",
"text": "ive Summarization Improved by WordNet-based Extractive Sentences Niantao Xie, Sujian Li, Huiling Ren, and Qibin Zhai 1 MOE Key Laboratory of Computational Linguistics, Peking University, China 2 Institute of Medical Information, Chinese Academy of Medical Sciences 3 MOE Information Security Lab, School of Software & Microelectronics, Peking University, China {xieniantao,lisujian}@pku.edu.cn ren.huiling@imicams.ac.cn qibinzhai@ss.pku.edu.cn Abstract. Recently, the seq2seq abstractive summarization models have achieved good results on the CNN/Daily Mail dataset. Still, how to improve abstractive methods with extractive methods is a good research direction, since extractive methods have their potentials of exploiting various efficient features for extracting important sentences in one text. In this paper, in order to improve the semantic relevance of abstractive summaries, we adopt the WordNet based sentence ranking algorithm to extract the sentences which are most semantically to one text. Then, we design a dual attentional seq2seq framework to generate summaries with consideration of the extracted information. At the same time, we combine pointer-generator and coverage mechanisms to solve the problems of out-of-vocabulary (OOV) words and duplicate words which exist in the abstractive models. Experiments on the CNN/Daily Mail dataset show that our models achieve competitive performance with the state-of-theart ROUGE scores. Human evaluations also show that the summaries generated by our models have high semantic relevance to the original text. Recently, the seq2seq abstractive summarization models have achieved good results on the CNN/Daily Mail dataset. Still, how to improve abstractive methods with extractive methods is a good research direction, since extractive methods have their potentials of exploiting various efficient features for extracting important sentences in one text. In this paper, in order to improve the semantic relevance of abstractive summaries, we adopt the WordNet based sentence ranking algorithm to extract the sentences which are most semantically to one text. Then, we design a dual attentional seq2seq framework to generate summaries with consideration of the extracted information. At the same time, we combine pointer-generator and coverage mechanisms to solve the problems of out-of-vocabulary (OOV) words and duplicate words which exist in the abstractive models. Experiments on the CNN/Daily Mail dataset show that our models achieve competitive performance with the state-of-theart ROUGE scores. Human evaluations also show that the summaries generated by our models have high semantic relevance to the original text.",
"title": ""
},
{
"docid": "29af3fa5673624831be3ba4a64a078d6",
"text": "A low-profile single-fed, wideband, circularly-polarized slot antenna is proposed. The antenna comprises a square slot fed by a U-shaped microstrip line which provides a wide impedance bandwidth. Wideband circular polarization is obtained by incorporating a metasurface consisting of a 9 × 9 lattice of periodic metal plates. It is shown that this metasurface generates additional resonances, lowers the axial ratio (AR) of the radiating structure, and enhances the radiation pattern stability at higher frequencies. The overall size of the antenna is only 28 mm × 28 mm (0.3 λo × 0.3 λo). The proposed antenna shows an impedance bandwidth from 2.6 GHz to 9 GHz (110.3%) for |S11| > −10 dB, and axial ratio bandwidth from 3.5 GHz to 6.1 GHz (54.1%) for AR > 3 dB. The antenna has a stable radiation pattern and a gain of greater than 3 dBi over the entire frequency band.",
"title": ""
}
] |
scidocsrr
|
a80223434c2eb92f2914c6fd63630443
|
An open software architecture for virtual reality interaction
|
[
{
"docid": "8745e21073db143341e376bad1f0afd7",
"text": "The Virtual Reality (VR) user interface style allows natural hand and body motions to manipulate virtual objects in 3D environments using one or more 3D input devices. This style is best suited to application areas where traditional two-dimensional styles fall short, such as scienti c visualization, architectural visualization, and remote manipulation. Currently, the programming e ort required to produce a VR application is too large, and many pitfalls must be avoided in the creation of successful VR programs. In this paper we describe the Decoupled Simulation Model for creating successful VR applications, and a software system that embodies this model. The MR Toolkit simpli es the development of VR applications by providing standard facilities required by a wide range of VR user interfaces. These facilities include support for distributed computing, head-mounted displays, room geometry management, performance monitoring, hand input devices, and sound feedback. The MR Toolkit encourages programmers to structure their applications to take advantage of the distributed computing capabilities of workstation networks improving the application's performance. In this paper, the motivations and the architecture of the toolkit are outlined, the programmer's view is described, and a simple application is brie y described. CR",
"title": ""
}
] |
[
{
"docid": "86da740a4eab22a9914c72376ae4e9e3",
"text": "A new system for automatic detection of angry speech is proposed. Using simulation of far-end-noise-corrupted telephone speech and the widely used Berlin database of emotional speech, autoregressive prediction of features across speech frames is shown to contribute significantly to both the clean speech performance and the robustness of the system. The autoregressive models are learned from the training data in order to capture long-term temporal dynamics of the features. Additionally, linear predictive spectrum analysis outperforms conventional Fourier spectrum analysis in terms of robustness in the computation of mel-frequency cepstral coefficients in the feature extraction stage.",
"title": ""
},
{
"docid": "3e23069ba8a3ec3e4af942727c9273e9",
"text": "This paper describes an automated tool called Dex (difference extractor) for analyzing syntactic and semantic changes in large C-language code bases. It is applied to patches obtained from a source code repository, each of which comprises the code changes made to accomplish a particular task. Dex produces summary statistics characterizing these changes for all of the patches that are analyzed. Dex applies a graph differencing algorithm to abstract semantic graphs (ASGs) representing each version. The differences are then analyzed to identify higher-level program changes. We describe the design of Dex, its potential applications, and the results of applying it to analyze bug fixes from the Apache and GCC projects. The results include detailed information about the nature and frequency of missing condition defects in these projects.",
"title": ""
},
{
"docid": "931b7407043777109a04e365b475c40c",
"text": "Fine-grained image labels are desirable for many computer vision applications, such as visual search or mobile AI assistant. These applications rely on image classification models that can produce hundreds of thousands (e.g. 100K) of diversified fine-grained image labels on input images. However, training a network at this vocabulary scale is challenging, and suffers from intolerable large model size and slow training speed, which leads to unsatisfying classification performance. A straightforward solution would be training separate expert networks (specialists), with each specialist focusing on learning one specific vertical (e.g. cars, birds...). However, deploying dozens of expert networks in a practical system would significantly increase system complexity and inference latency, and consumes large amounts of computational resources. To address these challenges, we propose a Knowledge Concentration method, which effectively transfers the knowledge from dozens of specialists (multiple teacher networks) into one single model (one student network) to classify 100K object categories. There are three salient aspects in our method: (1) a multi-teacher single-student knowledge distillation framework; (2) a self-paced learning mechanism to allow the student to learn from different teachers at various paces; (3) structurally connected layers to expand the student network capacity with limited extra parameters. We validate our method on OpenImage and a newly collected dataset, Entity-Foto-Tree (EFT), with 100K categories, and show that the proposed model performs significantly better than the baseline generalist model.",
"title": ""
},
{
"docid": "2af08bcc4f088c6550610b257469d1df",
"text": "The appropriate deployment of technology contributes to the improvement in the quality of healthcare delivered, the containment of cost, and to increased access to services offered by the healthcare system. Over the past one-hundred years, the dependence of the healthcare system on medical technology for the delivery of its services has continuously grown. In this system, the technology facilitates the delivery of the \"human touch.\" Medical technology enables practitioners to collaboratively intervene together with other caregivers to treat patients in a cost-effective and efficient manner. Technology also enables integration and systems management in a way that contributes to improvements in the level of health indicators. Hospital and clinical administrators are faced with the expectation for return on investment that meets accounting guidelines and financial pressures. This article describes the emerging process for managing medical technology in the hospital and the role that clinical engineers are fulfilling.",
"title": ""
},
{
"docid": "0e521af53f9faf4fee38843a22ec2185",
"text": "Steering of main beam of radiation at fixed millimeter wave frequency in a Substrate Integrated Waveguide (SIW) Leaky Wave Antenna (LWA) has not been investigated so far in literature. In this paper a Half-Mode Substrate Integrated Waveguide (HMSIW) LWA is proposed which has the capability to steer its main beam at fixed millimeter wave frequency of 24GHz. Beam steering is made feasible by changing the capacitance of the capacitors, connected at the dielectric side of HMSIW. The full wave EM simulations show that the main beam scans from 36° to 57° in the first quadrant.",
"title": ""
},
{
"docid": "4f3be105eaaad6d3741c370caa8e764e",
"text": "Ankylosing spondylitis (AS) is a chronic systemic inflammatory disease that affects mainly the axial skeleton and causes significant pain and disability. Aquatic (water-based) exercise may have a beneficial effect in various musculoskeletal conditions. The aim of this study was to compare the effectiveness of aquatic exercise interventions with land-based exercises (home-based exercise) in the treatment of AS. Patients with AS were randomly assigned to receive either home-based exercise or aquatic exercise treatment protocol. Home-based exercise program was demonstrated by a physiotherapist on one occasion and then, exercise manual booklet was given to all patients in this group. Aquatic exercise program consisted of 20 sessions, 5× per week for 4 weeks in a swimming pool at 32–33 °C. All the patients in both groups were assessed for pain, spinal mobility, disease activity, disability, and quality of life. Evaluations were performed before treatment (week 0) and after treatment (week 4 and week 12). The baseline and mean values of the percentage changes calculated for both groups were compared using independent sample t test. Paired t test was used for comparison of pre- and posttreatment values within groups. A total of 69 patients with AS were included in this study. We observed significant improvements for all parameters [pain score (VAS) visual analog scale, lumbar flexion/extension, modified Schober test, chest expansion, bath AS functional index, bath AS metrology index, bath AS disease activity index, and short form-36 (SF-36)] in both groups after treatment at week 4 and week 12 (p < 0.05). Comparison of the percentage changes of parameters both at week 4 and week 12 relative to pretreatment values showed that improvement in VAS (p < 0.001) and bodily pain (p < 0.001), general health (p < 0.001), vitality (p < 0.001), social functioning (p < 0.001), role limitations due to emotional problems (p < 0.001), and general mental health (p < 0.001) subparts of SF-36 were better in aquatic exercise group. It is concluded that a water-based exercises produced better improvement in pain score and quality of life of the patients with AS compared with home-based exercise.",
"title": ""
},
{
"docid": "84301afe8fa5912dc386baab84dda7ea",
"text": "There is a growing understanding that machine learning architectures have to be much bigger and more complex to approach any intelligent behavior. There is also a growing understanding that purely supervised learning is inadequate to train such systems. A recent paradigm of artificial recurrent neural network (RNN) training under the umbrella-name Reservoir Computing (RC) demonstrated that training big recurrent networks (the reservoirs) differently than supervised readouts from them is often better. It started with Echo State Networks (ESNs) and Liquid State Machines ten years ago where the reservoir was generated randomly and only linear readouts from it were trained. Rather surprisingly, such simply and fast trained ESNs outperformed classical fully-trained RNNs in many tasks. While full supervised training of RNNs is problematic, intuitively there should also be something better than a random network. In recent years RC became a vivid research field extending the initial paradigm from fixed random reservoir and trained output into using different methods for training the reservoir and the readout. In this thesis we overview existing and investigate new alternatives to the classical supervised training of RNNs and their hierarchies. First we present a taxonomy and a systematic overview of the RNN training approaches under the RC umbrella. Second, we propose and investigate the use of two different neural network models for the reservoirs together with several unsupervised adaptation techniques, as well as unsupervisedly layer-wise trained deep hierarchies of such models. We rigorously empirically test the proposed methods on two temporal pattern recognition datasets, comparing it to the classical reservoir computing state of art.",
"title": ""
},
{
"docid": "100b664ee1bba4ecf2694ec4c60d4346",
"text": "This paper explores two modulation techniques for power factor corrector (PFC) based on critical conduction mode (CRM) and proposes a new modulation technique which has the benefits of CRM and allows quasi constant switching frequency also. The converter is designed for MHz range switching frequency as high frequency reduces the size of the EMI filter. However at high frequency, the switching losses become the dominant losses making soft switching a necessity. CRM allows zero current switching (ZCS) turn-on but it's not able to achieve zero voltage switching (ZVS) turn-on when the input voltage is greater than half of the output voltage. To achieve ZVS turn-on over the entire mains cycle, triangular current mode (TCM) was proposed by Marxgut, C. et al.[1] but both these methods have the drawback of variable switching frequency. The new method proposed modifies TCM so that the switching frequency is quasi constant and ZVS turn-on is also achieved over the entire mains cycle. Based on analytical loss model of Cascode GaN transistor, the efficiency of the three modulation techniques is compared. Also, the parameters of the EMI filter required are compared based on simulated noise measurement. As variable switching frequency is not preferred in three phase systems, the quasi constant frequency approach finds its benefits in three phase PFC.",
"title": ""
},
{
"docid": "a52673140d86780db6c73787e5f53139",
"text": "Human papillomavirus (HPV) is the most important etiological factor for cervical cancer. A recent study demonstrated that more than 20 HPV types were thought to be oncogenic for uterine cervical cancer. Notably, more than one-half of women show cervical HPV infections soon after their sexual debut, and about 90 % of such infections are cleared within 3 years. Immunity against HPV might be important for elimination of the virus. The innate immune responses involving macrophages, natural killer cells, and natural killer T cells may play a role in the first line of defense against HPV infection. In the second line of defense, adaptive immunity via cytotoxic T lymphocytes (CTLs) targeting HPV16 E2 and E6 proteins appears to eliminate cells infected with HPV16. However, HPV can evade host immune responses. First, HPV does not kill host cells during viral replication and therefore neither presents viral antigen nor induces inflammation. HPV16 E6 and E7 proteins downregulate the expression of type-1 interferons (IFNs) in host cells. The lack of co-stimulatory signals by inflammatory cytokines including IFNs during antigen recognition may induce immune tolerance rather than the appropriate responses. Moreover, HPV16 E5 protein downregulates the expression of HLA-class 1, and it facilitates evasion of CTL attack. These mechanisms of immune evasion may eventually support the establishment of persistent HPV infection, leading to the induction of cervical cancer. Considering such immunological events, prophylactic HPV16 and 18 vaccine appears to be the best way to prevent cervical cancer in women who are immunized in adolescence.",
"title": ""
},
{
"docid": "db43034e91dbc74fc7db7f1fc02ccd7e",
"text": "We describe our experience using both Amazon Mechanical Turk (MTurk) and CrowdFlower to collect simple named entity annotations for Twitter status updates. Unlike most genres that have traditionally been the focus of named entity experiments, Twitter is far more informal and abbreviated. The collected annotations and annotation techniques will provide a first step towards the full study of named entity recognition in domains like Facebook and Twitter. We also briefly describe how to use MTurk to collect judgements on the quality of “word clouds.”",
"title": ""
},
{
"docid": "58e27ab73a264718f78effb4460c471d",
"text": "Cross-chain communication is one of the major design considerations in current blockchain systems [4-7] such as Ethereum[8]. Currently, Blockchain operates like information isolated island, they cannot obtain external data or execute transactions on their own.\n Motivated by recent studies [1-3] on blockchain's multiChain framework, we investigate the cross-chain communication. We introduces blockchain router, which empowers blockchains to connect and communicate cross chains. By establishing an economic model, blockchain router enables different blockchains in the network communicate with each other same like Internet network. In the network of blockchain router, some blockchain plays the role of a router which, according to the communication protocol, analyzes and transmits communication requests, dynamically maintaining a topology structure of the blockchain network.",
"title": ""
},
{
"docid": "496fdf000074eb55f9e42e356d97b4b1",
"text": "Attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, for many tasks we may want to model richer structural dependencies without abandoning end-to-end training. In this work, we experiment with incorporating richer structural distributions, encoded using graphical models, within deep networks. We show that these structured attention networks are simple extensions of the basic attention procedure, and that they allow for extending attention beyond the standard softselection approach, such as attending to partial segmentations or to subtrees. We experiment with two different classes of structured attention networks: a linearchain conditional random field and a graph-based parsing model, and describe how these models can be practically implemented as neural network layers. Experiments show that this approach is effective for incorporating structural biases, and structured attention networks outperform baseline attention models on a variety of synthetic and real tasks: tree transduction, neural machine translation, question answering, and natural language inference. We further find that models trained in this way learn interesting unsupervised hidden representations that generalize simple attention.",
"title": ""
},
{
"docid": "69ce0e617b13feb42b72ea33355ff2e5",
"text": "Sarcasm can radically alter or invert a phrase’s meaning. Sarcasm detection can therefore help improve natural language processing (NLP) tasks. The majority of prior research has modeled sarcasm detection as classification, with two important limitations: 1. Balanced datasets, when sarcasm is actually rather rare. 2. Using Twitter users’ self-declarations in the form of hashtags to label data, when sarcasm can take many forms. To address these issues, we create an unbalanced corpus of manually annotated Twitter conversations. We compare human and machine ability to recognize sarcasm on this data under varying amounts of context. Our results indicate that both class imbalance and labelling method affect performance, and should both be considered when designing automatic sarcasm detection systems. We conclude that for progress to be made in real-world sarcasm detection, we will require a new class labelling scheme that is able to access the ‘common ground’ held between conversational parties.",
"title": ""
},
{
"docid": "74b2697e6faf8339ec11b29092758272",
"text": "A tactile sense is key to advanced robotic grasping and manipulation. By touching an object it is possible to measure contact properties such as contact forces, torques, and contact position. From these, we can estimate object properties such as geometry, stiffness, and surface condition. This information can then be used to control grasping or manipulation, to detect slip, and also to create or improve object models. This paper presents an overview of tactile sensing in intelligent robotic manipulation. The history, the common issues, and applications are reviewed. Sensor performance is briefly discussed and compared to the human tactile sense. Advantages and disadvantages of the most common sensor approaches are discussed. Some examples are given of sensors widely available today. Eventually the state of the art in applying tactile sensing experimentally is presented.",
"title": ""
},
{
"docid": "b587de667df04de627a3f4b5cc658341",
"text": "Terrorism has led to many problems in Thai societies, not only property damage but also civilian casualties. Predicting terrorism activities in advance can help prepare and manage risk from sabotage by these activities. This paper proposes a framework focusing on event classification in terrorism domain using fuzzy inference systems (FISs). Each FIS is a decisionmaking model combining fuzzy logic and approximate reasoning. It is generated in five main parts: the input interface, the fuzzification interface, knowledge base unit, decision making unit and output defuzzification interface. Adaptive neuro-fuzzy inference system (ANFIS) is a FIS model adapted by combining the fuzzy logic and neural network. The ANFIS utilizes automatic identification of fuzzy logic rules and adjustment of membership function (MF). Moreover, neural network can directly learn from data set to construct fuzzy logic rules and MF implemented in various applications. FIS settings are evaluated based on two comparisons. The first evaluation is the comparison between unstructured and structured events using the same FIS setting. The second comparison is the model settings between FIS and ANFIS for classifying structured events. The data set consists of news articles related to terrosim events in three southern provinces of Thailand. The experimental results show that the classification performance of the FIS resulting from structured events achieves satisfactory accuracy and is better than the unstructured events. In addition, the classification of structured events using ANFIS gives higher performance than the events using only FIS in the prediction of terrorism events. KeywordsEvent classification; terrorism domain; fuzzy inference system (FIS); adaptive neuro-fuzzy inference system (ANFIS); membership function (MF)",
"title": ""
},
{
"docid": "8e4c56d70394c2b91081621bf8220aad",
"text": "Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase of time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising as deep learning has seen very successful applications in the last years. DNNs have indeed revolutionized the field of computer vision especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community where we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.",
"title": ""
},
{
"docid": "29ac2afc399bbf61927c4821d3a6e0a0",
"text": "A well used approach for echo cancellation is the two-path method, where two adaptive filters in parallel are utilized. Typically, one filter is continuously updated, and when this filter is considered better adjusted to the echo-path than the other filter, the coefficients of the better adjusted filter is transferred to the other filter. When this transfer should occur is controlled by the transfer logic. This paper proposes transfer logic that is both more robust and more simple to tune, owing to fewer parameters, than the conventional approach. Extensive simulations show the advantages of the proposed method.",
"title": ""
},
{
"docid": "ecb82d413c47cff0e054c76360f09d48",
"text": "Grades often decline during the high school transition, creating stress. The present research integrates the biopsychosocial model of challenge and threat with the implicit theories model to understand who shows maladaptive stress responses. A diary study measured declines in grades in the first few months of high school: salivary cortisol (N = 360 students, N = 3,045 observations) and daily stress appraisals (N = 499 students, N = 3,854 observations). Students who reported an entity theory of intelligence (i.e., the belief that intelligence is fixed) showed higher cortisol when grades were declining. Moreover, daily academic stressors showed a different lingering effect on the next day's cortisol for those with different implicit theories. Findings support a process model through which beliefs affect biological stress responses during difficult adolescent transitions.",
"title": ""
},
{
"docid": "9737feb4befdaf995b1f9e88535577ec",
"text": "This paper addresses the problem of detecting the presence of malware that leaveperiodictraces innetworktraffic. This characteristic behavior of malware was found to be surprisingly prevalent in a parallel study. To this end, we propose a visual analytics solution that supports both automatic detection and manual inspection of periodic signals hidden in network traffic. The detected periodic signals are visually verified in an overview using a circular graph and two stacked histograms as well as in detail using deep packet inspection. Our approach offers the capability to detect complex periodic patterns, but avoids the unverifiability issue often encountered in related work. The periodicity assumption imposed on malware behavior is a relatively weak assumption, but initial evaluations with a simulated scenario as well as a publicly available network capture demonstrate its applicability.",
"title": ""
},
{
"docid": "fc6214a4b20dba903a1085bd1b6122e0",
"text": "a r t i c l e i n f o Keywords: CRM technology use Marketing capability Customer-centric organizational culture Customer-centric management system Customer relationship management (CRM) technology has attracted significant attention from researchers and practitioners as a facilitator of organizational performance. Even though companies have made tremendous investments in CRM technology, empirical research offers inconsistent support that CRM technology enhances organizational performance. Given this equivocal effect and the increasing need for the generalization of CRM implementation research outside western context, the authors, using data from Korean companies, address the process concerning how CRM technology translates into business outcomes. The results highlight that marketing capability mediates the association between CRM technology use and performance. Moreover, a customer-centric organizational culture and management system facilitate CRM technology use. This study serves not only to clarify the mechanism between CRM technology use and organizational performance, but also to generalize the CRM results in the Korean context. In today's competitive business environment, the success of firm increasingly hinges on the ability to operate customer relationship management (CRM) that enables the development and implementation of more efficient and effective customer-focused strategies. Based on this belief, many companies have made enormous investment in CRM technology as a means to actualize CRM efficiently. Despite conceptual underpinnings of CRM technology and substantial financial implications , empirical research examining the CRM technology-performance link has met with equivocal results. Recent studies demonstrate that only 30% of the organizations introducing CRM technology achieved improvements in their organizational performance (Bull, 2003; Corner and Hinton, 2002). These conflicting findings hint at the potential influences of unexplored mediating or moderating factors and the need of further research on the mechanism by which CRM technology leads to improved business performance. Such inconsistent results of CRM technology implementation are not limited to western countries which most of previous CRM research originated from. Even though Korean companies have poured tremendous resources to CRM initiatives since 2000, they also cut down investment in CRM technology drastically due to disappointing returns (Knowledge Research Group, 2004). As a result, Korean companies are increasingly eager to corroborate the returns from investment in CRM. In the eastern culture like Korea that promotes holistic thinking focusing on the relationships between a focal object and overall context (Monga and John, 2007), CRM operates as a two-edged sword. Because eastern culture with holistic thinking tends to value existing relationship with firms or contact point persons …",
"title": ""
}
] |
scidocsrr
|
37b1819bf31055475f458221ac62a4e4
|
JointDNN: An Efficient Training and Inference Engine for Intelligent Mobile Cloud Computing Services
|
[
{
"docid": "6e9e687db8f202a8fa6d49c5996e7141",
"text": "Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called PALEO. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, PALEO can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that PALEO is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.",
"title": ""
}
] |
[
{
"docid": "96d123a5c9a01922ebb99623fddd1863",
"text": "Previous studies have shown that Wnt signaling is involved in postnatal mammalian myogenesis; however, the downstream mechanism of Wnt signaling is not fully understood. This study reports that the murine four-and-a-half LIM domain 1 (Fhl1) could be stimulated by β-catenin or LiCl treatment to induce myogenesis. In contrast, knockdown of the Fhl1 gene expression in C2C12 cells led to reduced myotube formation. We also adopted reporter assays to demonstrate that either β-catenin or LiCl significantly activated the Fhl1 promoter, which contains four putative consensus TCF/LEF binding sites. Mutations of two of these sites caused a significant decrease in promoter activity by luciferase reporter assay. Thus, we suggest that Wnt signaling induces muscle cell differentiation, at least partly, through Fhl1 activation.",
"title": ""
},
{
"docid": "b0ae3875b79f8453a3752d1e684abeaa",
"text": "This study applied a functional approach to the assessment of self-mutilative behavior (SMB) among adolescent psychiatric inpatients. On the basis of past conceptualizations of different forms of self-injurious behavior, the authors hypothesized that SMB is performed because of the automatically reinforcing (i.e., reinforced by oneself; e.g., emotion regulation) and/or socially reinforcing (i.e., reinforced by others; e.g., attention, avoidance-escape) properties associated with such behaviors. Data were collected from 108 adolescent psychiatric inpatients referred for self-injurious thoughts or behaviors. Adolescents reported engaging in SMB frequently, using multiple methods, and having an early age of onset. Moreover, the results supported the structural validity and reliability of the hypothesized functional model of SMB. Most adolescents engaged in SMB for automatic reinforcement, although a sizable portion endorsed social reinforcement functions as well. These findings have direct implications for the understanding, assessment, and treatment of SMB.",
"title": ""
},
{
"docid": "2a9b63323552de6aec737b90f0be1e9c",
"text": "The Smodels system implements the stable model semantics for normal logic programs. It handles a subclass of programs which contain no function symbols and are domain-restricted but supports extensions including built-in functions as well as cardinality and weight constraints. On top of this core engine more involved systems can be built. As an example, we have implemented total and partial stable model computation for disjunctive logic programs. An interesting application method is based on answer set programming, i.e., encoding an application problem as a set of rules so that its solutions are captured by the stable models of the rules. Smodels has been applied to a number of areas including planning, model checking, reachability analysis, product configuration, dynamic constraint satisfaction, and feature interaction.",
"title": ""
},
{
"docid": "c7afa12d10877eb7397176f2c4ab143e",
"text": "Software-defined networking (SDN) has received a great deal of attention from both academia and industry in recent years. Studies on SDN have brought a number of interesting technical discussions on network architecture design, along with scientific contributions. Researchers, network operators, and vendors are trying to establish new standards and provide guidelines for proper implementation and deployment of such novel approach. It is clear that many of these research efforts have been made in the southbound of the SDN architecture, while the northbound interface still needs improvements. By focusing in the SDN northbound, this paper surveys the body of knowledge and discusses the challenges for developing SDN software. We investigate the existing solutions and identify trends and challenges on programming for SDN environments. We also discuss future developments on techniques, specifications, and methodologies for programmable networks, with the orthogonal view from the software engineering discipline.",
"title": ""
},
{
"docid": "e7522c776e1219196aa52147834b6f61",
"text": "Machine learning deals with the issue of how to build programs that improve their performance at some task through experience. Machine learning algorithms have proven to be of great practical value in a variety of application domains. They are particularly useful for (a) poorly understood problem domains where littl e knowledge exists for the humans to develop effective algorithms; (b) domains where there are large databases containing valuable implicit regularities to be discovered; or (c) domains where programs must adapt to changing conditions. Not surprisingly, the field of software engineering turns out to be a fertile ground where many software development tasks could be formulated as learning problems and approached in terms of learning algorithms. In this paper, we first take a look at the characteristics and applicabilit y of some frequently utili zed machine learning algorithms. We then provide formulations of some software development tasks using learning algorithms. Finally, a brief summary is given of the existing work.",
"title": ""
},
{
"docid": "6db6819627305ab61c2e5d8de70e9c2e",
"text": "Purpose – The purpose of this paper is to critically assess current developments in the theory and practice of supply management and through such an assessment to identify barriers, possibilities and key trends. Design/methodology/approach – The paper is based on a three-year detailed study of six supply chains which encompassed 72 companies in Europe. The focal firms in each instance were sophisticated, blue-chip corporations operating on an international scale. Managers across at least four echelons of the supply chain were interviewed and the supply chains were traced and observed. Findings – The paper reveals that supply management is, at best, still emergent in terms of both theory and practice. Few practitioners were able – or even seriously aspired – to extend their reach across the supply chain in the manner prescribed in much modern theory. The paper identifies the range of key barriers and enablers to supply management and it concludes with an assessment of the main trends. Research limitations/implications – The research presents a number of challenges to existing thinking about supply strategy and supply chain management. It reveals the substantial gaps between theory and practice. A number of trends are identified which it is argued may work in favour of better prospects for SCM in the future and for the future of supply management as a discipline. Practical implications – A central challenge concerns who could or should manage the supply chain. Barriers to effective supply management are identified and some practical steps to surmount them are suggested. Originality/value – The paper is original in the way in which it draws on an extensive systematic study to critically assess current theory and current developments. The paper points the way for theorists and practitioners to meet future challenges.",
"title": ""
},
{
"docid": "bd30e7918a0187ff3d01d3653258bf27",
"text": "Recursive neural network is one of the most successful deep learning models for natural language processing due to the compositional nature of text. The model recursively composes the vector of a parent phrase from those of child words or phrases, with a key component named composition function. Although a variety of composition functions have been proposed, the syntactic information has not been fully encoded in the composition process. We propose two models, Tag Guided RNN (TGRNN for short) which chooses a composition function according to the part-ofspeech tag of a phrase, and Tag Embedded RNN/RNTN (TE-RNN/RNTN for short) which learns tag embeddings and then combines tag and word embeddings together. In the fine-grained sentiment classification, experiment results show the proposed models obtain remarkable improvement: TG-RNN/TE-RNN obtain remarkable improvement over baselines, TE-RNTN obtains the second best result among all the top performing models, and all the proposed models have much less parameters/complexity than their counterparts.",
"title": ""
},
{
"docid": "76d22feb7da3dbc14688b0d999631169",
"text": "Guilt proneness is a personality trait indicative of a predisposition to experience negative feelings about personal wrongdoing, even when the wrongdoing is private. It is characterized by the anticipation of feeling bad about committing transgressions rather than by guilty feelings in a particular moment or generalized guilty feelings that occur without an eliciting event. Our research has revealed that guilt proneness is an important character trait because knowing a person’s level of guilt proneness helps us to predict the likelihood that they will behave unethically. For example, online studies of adults across the U.S. have shown that people who score high in guilt proneness (compared to low scorers) make fewer unethical business decisions, commit fewer delinquent behaviors, and behave more honestly when they make economic decisions. In the workplace, guilt-prone employees are less likely to engage in counterproductive behaviors that harm their organization.",
"title": ""
},
{
"docid": "1860f5f97e79d400fcb4e3be6737e47a",
"text": "High-Q tunable filters are in demand in both wireless and satellite applications. The need for tunability and configurability in wireless systems arises when deploying different systems that coexist geographically. Such deployments take place regularly when an operator has already installed a network and needs to add a new-generation network, for example, to add a long-term evolution (LTE) network to an existing third-generation (3G) network. The availability of tunable/reconfigurable hardware will also provide the network operator the means for efficiently managing hardware resources, while accommodating multistandards requirements and achieving network traffic/capacity optimization. Wireless systems can also benefit from tunable filter technologies in other areas; for example, installing wireless infrastructure equipment, such as a remote radio unit (RRU) on top of a 15-story high communication tower, is a very costly task. By using tunable filters, one installation can serve many years since if there is a need to change the frequency or bandwidth, it can be done through remote electronic tuning, rather than installing a new filter. Additionally, in urban areas, there is a very limited space for wireless service providers to install their base stations due to expensive real estate and/or maximum weight loading constrains on certain installation locations such as light poles or power lines. Therefore, once an installation site is acquired, it is natural for wireless service providers to use tunable filters to pack many functions, such as multistandards and multibands, into one site.",
"title": ""
},
{
"docid": "421a0d89557ea20216e13dee9db317ca",
"text": "Online advertising is progressively moving towards a programmatic model in which ads are matched to actual interests of individuals collected as they browse the web. Letting the huge debate around privacy aside, a very important question in this area, for which little is known, is: How much do advertisers pay to reach an individual?\n In this study, we develop a first of its kind methodology for computing exactly that - the price paid for a web user by the ad ecosystem - and we do that in real time. Our approach is based on tapping on the Real Time Bidding (RTB) protocol to collect cleartext and encrypted prices for winning bids paid by advertisers in order to place targeted ads. Our main technical contribution is a method for tallying winning bids even when they are encrypted. We achieve this by training a model using as ground truth prices obtained by running our own \"probe\" ad-campaigns. We design our methodology through a browser extension and a back-end server that provides it with fresh models for encrypted bids. We validate our methodology using a one year long trace of 1600 mobile users and demonstrate that it can estimate a user's advertising worth with more than 82% accuracy.",
"title": ""
},
{
"docid": "3c1297b61456db30faefefc19bc079bd",
"text": "The present paper examined the structure of Dutch adolescents’ music preferences, the stability of music preferences and the relations between Big-Five personality characteristics and (changes in) music preferences. Exploratory and confirmatory factor analyses of music-preference data from 2334 adolescents aged 12–19 revealed four clearly interpretable music-preference dimensions: Rock, Elite, Urban and Pop/Dance. One thousand and forty-four randomly selected adolescents from the original sample filled out questionnaires on music preferences and personality at three follow-up measurements. In addition to being relatively stable over 1, 2 and 3-year intervals, music preferences were found to be consistently related to personality characteristics, generally confirming prior research in the United States. Personality characteristics were also found to predict changes in music preferences over a 3-year interval. Copyright # 2007 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "3a12c19fce9d9fbde7fdb6afa161bb7e",
"text": "The accurate diagnosis of Alzheimer's disease (AD) is essential for patient care and will be increasingly important as disease modifying agents become available, early in the course of the disease. Although studies have applied machine learning methods for the computer-aided diagnosis of AD, a bottleneck in the diagnostic performance was shown in previous methods, due to the lacking of efficient strategies for representing neuroimaging biomarkers. In this study, we designed a novel diagnostic framework with deep learning architecture to aid the diagnosis of AD. This framework uses a zero-masking strategy for data fusion to extract complementary information from multiple data modalities. Compared to the previous state-of-the-art workflows, our method is capable of fusing multimodal neuroimaging features in one setting and has the potential to require less labeled data. A performance gain was achieved in both binary classification and multiclass classification of AD. The advantages and limitations of the proposed framework are discussed.",
"title": ""
},
{
"docid": "e0f66f533c0af19126565160ff423949",
"text": "Antibiotic resistance, prompted by the overuse of antimicrobial agents, may arise from a variety of mechanisms, particularly horizontal gene transfer of virulence and antibiotic resistance genes, which is often facilitated by biofilm formation. The importance of phenotypic changes seen in a biofilm, which lead to genotypic alterations, cannot be overstated. Irrespective of if the biofilm is single microbe or polymicrobial, bacteria, protected within a biofilm from the external environment, communicate through signal transduction pathways (e.g., quorum sensing or two-component systems), leading to global changes in gene expression, enhancing virulence, and expediting the acquisition of antibiotic resistance. Thus, one must examine a genetic change in virulence and resistance not only in the context of the biofilm but also as inextricably linked pathologies. Observationally, it is clear that increased virulence and the advent of antibiotic resistance often arise almost simultaneously; however, their genetic connection has been relatively ignored. Although the complexities of genetic regulation in a multispecies community may obscure a causative relationship, uncovering key genetic interactions between virulence and resistance in biofilm bacteria is essential to identifying new druggable targets, ultimately providing a drug discovery and development pathway to improve treatment options for chronic and recurring infection.",
"title": ""
},
{
"docid": "25b183ce7ecc4b9203686c7ea68aacea",
"text": "A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability. In this paper, we propose new techniques that employ a classification–based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data. These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets. Our analysis indicates that GANs have significant problems in reproducing the more distributional properties of the training dataset. In particular, when seen through the lens of classification, the diversity of GAN data is orders of magnitude less than that of the original data.",
"title": ""
},
{
"docid": "a1b6fc8362fab0c062ad31a205e74898",
"text": "Air-gapped computers are disconnected from the Internet physically and logically. This measure is taken in order to prevent the leakage of sensitive data from secured networks. It has been shown that malware can exfiltrate data from air-gapped computers by transmitting ultrasonic signals via the computer’s speakers. However, such acoustic communication relies on the availability of speakers on a computer.",
"title": ""
},
{
"docid": "0c9b46fba19b6604570ff41fcb400640",
"text": "Tangible user interfaces (TUIs) provide physical form to digital information and computation, facilitating the direct manipulation of bits. Our goal in TUI development is to empower collaboration, learning, and design by using digital technology and at the same time taking advantage of human abilities to grasp and manipulate physical objects and materials. This paper discusses a model of TUI, key properties, genres, applications, and summarizes the contributions made by the Tangible Media Group and other researchers since the publication of the first Tangible Bits paper at CHI 1997. http://tangible.media.mit.edu/",
"title": ""
},
{
"docid": "b0b8839c77452dde0f5f3d57417dedb8",
"text": "We propose ‘Hide-and-Seek’ a general purpose data augmentation technique, which is complementary to existing data augmentation techniques and is beneficial for various visual recognition tasks. The key idea is to hide patches in a training image randomly, in order to force the network to seek other relevant content when the most discriminative content is hidden. Our approach only needs to modify the input image and can work with any network to improve its performance. During testing, it does not need to hide any patches. The main advantage of Hide-and-Seek over existing data augmentation techniques is its ability to improve object localization accuracy in the weakly-supervised setting, and we therefore use this task to motivate the approach. However, Hide-and-Seek is not tied only to the image localization task, and can generalize to other forms of visual input like videos, as well as other recognition tasks like image classification, temporal action localization, semantic segmentation, emotion recognition, age/gender estimation, and person re-identification. We perform extensive experiments to showcase the advantage of Hide-and-Seek on these various visual recognition problems.",
"title": ""
},
{
"docid": "45cea05e301d47ade7eae2f442529435",
"text": "As consumer depth sensors become widely available, estimating scene flow from RGBD sequences has received increasing attention. Although the depth information allows the recovery of 3D motion from a single view, it poses new challenges. In particular, depth boundaries are not well-aligned with RGB image edges and therefore not reliable cues to localize 2D motion boundaries. In addition, methods that extend the 2D optical flow formulation to 3D still produce large errors in occlusion regions. To better use depth for occlusion reasoning, we propose a layered RGBD scene flow method that jointly solves for the scene segmentation and the motion. Our key observation is that the noisy depth is sufficient to decide the depth ordering of layers, thereby avoiding a computational bottleneck for RGB layered methods. Furthermore, the depth enables us to estimate a per-layer 3D rigid motion to constrain the motion of each layer. Experimental results on both the Middlebury and real-world sequences demonstrate the effectiveness of the layered approach for RGBD scene flow estimation.",
"title": ""
},
{
"docid": "41b87466db128bee207dd157a9fef761",
"text": "Systems that enforce memory safety for today’s operating system kernels and other system software do not account for the behavior of low-level software/hardware interactions such as memory-mapped I/O, MMU configuration, and context switching. Bugs in such low-level interactions can lead to violations of the memory safety guarantees provided by a safe execution environment and can lead to exploitable vulnerabilities in system software . In this work, we present a set of program analysis and run-time instrumentation techniques that ensure that errors in these low-level operations do not violate the assumptions made by a safety checking system. Our design introduces a small set of abstractions and interfaces for manipulating processor state, kernel stacks, memory mapped I/O objects, MMU mappings, and self modifying code to achieve this goal, without moving resource allocation and management decisions out of the kernel. We have added these techniques to a compiler-based virtual machine called Secure Virtual Architecture (SVA), to which the standard Linux kernel has been ported previously. Our design changes to SVA required only an additional 100 lines of code to be changed in this kernel. Our experimental results show that our techniques prevent reported memory safety violations due to low-level Linux operations and that these violations are not prevented by SVA without our techniques . Moreover, the new techniques in this paper introduce very little overhead over and above the existing overheads of SVA. Taken together, these results indicate that it is clearly worthwhile to add these techniques to an existing memory safety system.",
"title": ""
},
{
"docid": "f829097794802117bf37ea8ce891611a",
"text": "Manually crafted combinatorial features have been the \"secret sauce\" behind many successful models. For web-scale applications, however, the variety and volume of features make these manually crafted features expensive to create, maintain, and deploy. This paper proposes the Deep Crossing model which is a deep neural network that automatically combines features to produce superior models. The input of Deep Crossing is a set of individual features that can be either dense or sparse. The important crossing features are discovered implicitly by the networks, which are comprised of an embedding and stacking layer, as well as a cascade of Residual Units. Deep Crossing is implemented with a modeling tool called the Computational Network Tool Kit (CNTK), powered by a multi-GPU platform. It was able to build, from scratch, two web-scale models for a major paid search engine, and achieve superior results with only a sub-set of the features used in the production models. This demonstrates the potential of using Deep Crossing as a general modeling paradigm to improve existing products, as well as to speed up the development of new models with a fraction of the investment in feature engineering and acquisition of deep domain knowledge.",
"title": ""
}
] |
scidocsrr
|
63e21fd09293a3a1550d8f06d1157222
|
Control of elastic soft robots based on real-time finite element method
|
[
{
"docid": "7c68d608762cf466a9ac3b4f7676135c",
"text": "Abstract. An implicit non-linear finite element (FE) numerical procedure for the simulation of biological muscular tissues is presented. The method has been developed for studying the motion of muscular hydrostats, such as squid and octopus arms and its general framework is applicable to other muscular tissues. The FE framework considered is suitable for the dynamic numerical simulations of three-dimensional non-linear nearly incompressible hyperelastic materials that undergo large displacements and deformations. Human and animal muscles, consisting of fibers and connective tissues, belong to this class of materials. The stress distribution inside the muscular FE model is considered as the superposition of stresses along the muscular fibers and the connective tissues. The stresses along the fibers are modeled as the sum of active and passive stresses, according to the muscular model of Van Leeuwen and Kier (1997) Philos. Trans. R. Soc. London, 352: 551-571. Passive stress distribution is an experimentally-defined function of fibers’ deformation; while active stress distribution is the product of an activation level time function, a force-stretch function and a force-stretch ratio function. The mechanical behavior of the surrounding tissues is determined adopting a Mooney-Rivlin constitutive model. The incompressibility criterion is met by enforcing large bulk modulus and by introducing modified deformation measures. Due to the non-linear nature of the problem,",
"title": ""
}
] |
[
{
"docid": "1c94a04fdeb39ba00357e4dcc87d3862",
"text": "Automatic segmentation of speech is an important problem that is useful in speech recognition, synthesis and coding. We explore in this paper, the robust parameter set, weighting function and distance measure for reliable segmentation of noisy speech. It is found that the MFCC parameters, successful in speech recognition. holds the best promise for robust segmentation also. We also explored a variety of symmetric and asymmetric weighting lifters. from which it is found that a symmetric lifter of the form 1 + A sin1/2(πn/L), 0 ≤ n ≤ L − 1, for MFCC dimension L, is most effective. With regard to distance measure, the direct L2 norm is found adequate.",
"title": ""
},
{
"docid": "cc5ede31b7dd9faa2cce9d2aa8819a3c",
"text": "Despite considerable research on systems, algorithms and hardware to speed up deep learning workloads, there is no standard means of evaluating end-to-end deep learning performance. Existing benchmarks measure proxy metrics, such as time to process one minibatch of data, that do not indicate whether the system as a whole will produce a high-quality result. In this work, we introduce DAWNBench, a benchmark and competition focused on end-to-end training time to achieve a state-of-the-art accuracy level, as well as inference time with that accuracy. Using time to accuracy as a target metric, we explore how different optimizations, including choice of optimizer, stochastic depth, and multi-GPU training, affect end-to-end training performance. Our results demonstrate that optimizations can interact in non-trivial ways when used in conjunction, producing lower speed-ups and less accurate models. We believe DAWNBench will provide a useful, reproducible means of evaluating the many trade-offs in deep learning systems.",
"title": ""
},
{
"docid": "19a1aab60faad5a9376bb220352dc081",
"text": "BACKGROUND\nPatients with type 2 diabetes mellitus (T2DM) struggle with the management of their condition due to difficulty relating lifestyle behaviors with glycemic control. While self-monitoring of blood glucose (SMBG) has proven to be effective for those treated with insulin, it has been shown to be less beneficial for those only treated with oral medications or lifestyle modification. We hypothesized that the effective self-management of non-insulin treated T2DM requires a behavioral intervention that empowers patients with the ability to self-monitor, understand the impact of lifestyle behaviors on glycemic control, and adjust their self-care based on contextualized SMBG data.\n\n\nOBJECTIVE\nThe primary objective of this randomized controlled trial (RCT) is to determine the impact of bant2, an evidence-based, patient-centered, behavioral mobile app intervention, on the self-management of T2DM. Our second postulation is that automated feedback delivered through the mobile app will be as effective, less resource intensive, and more scalable than interventions involving additional health care provider feedback.\n\n\nMETHODS\nThis study is a 12-month, prospective, multicenter RCT in which 150 participants will be randomly assigned to one of two groups: the control group will receive current standard of care, and the intervention group will receive the mobile phone app system in addition to standard of care. The primary outcome measure is change in glycated hemoglobin A1c from baseline to 12 months.\n\n\nRESULTS\nThe first patient was enrolled on July 28, 2015, and we anticipate completing this study by September, 2018.\n\n\nCONCLUSIONS\nThis RCT is one of the first to evaluate an evidence-based mobile app that focuses on facilitating lifestyle behavior change driven by contextualized and structured SMBG. The results of this trial will provide insights regarding the usage of mobile tools and consumer-grade devices for diabetes self-care, the economic model of using incentives to motivate behavior change, and the consumption of test strips when following a rigorously structured approach for SMBG.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02370719; https://clinicaltrials.gov/ct2/show/NCT02370719 (Archived at http://www.webcitation.org/6jpyjfVRs).",
"title": ""
},
{
"docid": "7292afc542ddc8a9d9f048dff221b0d8",
"text": "Spatial modulation (SM) has emerged as a low-complexity and energy-efficient multiple-input multiple-output transmission technique, where the information bits are not only transmitted by amplitude phase modulation but also conveyed by the index of activated transmit antenna (TA). By deploying SM in downlink multi-user (DL-MU) scenarios, conventional orthogonal multiple access-based SM (OMA-SM) allocates exclusive time-frequency resources to users, but suffers from low spectral efficiency. TA grouping-based SM (TAG-SM) divides TAs into sub-groups to serve different users independently, but suffers from severe inter-user interference. By introducing non-OMA (NOMA) into SM for DL-MU transmission, NOMA-based SM (NOMA-SM) is proposed to mitigate inter-user interference, while maintaining high spectral efficiency. Specifically, by applying successive interference cancellation at user side, the inter-user interference could be effectively eliminated with the sacrifice of increased computational complexity. Afterward, based on a symbol error rate analysis, a low-complexity power allocation scheme is provided to achieve high spectral efficiency through power domain multiplexing. When considering near-far effect from user distribution, user pairing issue is also discussed. Numerical simulation compares NOMA-SM with OMA-SM and TAG-SM, and verifies the effectiveness of the proposed low-complexity power allocation and user pairing methodologies.",
"title": ""
},
{
"docid": "db3fc6ae924c0758bb58cd04f395520e",
"text": "Engineering from the University of Michigan, and a Ph.D. in Information Technologies from the MIT Sloan School of Management. His current research interests include IT adoption and diffusion, management of technology and innovation, software development tools and methods, and real options. He has published in Abstract The extent of organizational innovation with IT, an important construct in the IT innovation literature, has been measured in many different ways. Some measures are more narrowly focused while others aggregate innovative behaviors across a set of innovations or across stages in the assimilation lifecycle within organizations. There appear to be some significant tradeoffs involving aggregation. More aggregated measures can be more robust and generalizable and can promote stronger predictive validity, while less aggregated measures allow more context-specific investigations and can preserve clearer theoretical interpretations. This article begins with a conceptual analysis that identifies the circumstances when these tradeoffs are most likely to favor aggregated measures. It is found that aggregation should be favorable when: (1) the researcher's interest is in general innovation or a model that generalizes to a class of innovations, (2) antecedents have effects in the same direction in all assimilation stages, (3) characteristics of organizations can be treated as constant across the innovations in the study, (4) characteristics of innovations can not be treated as constant across organizations in the study, (5) the set of innovations being aggregated includes substitutes or moderate complements, and (6) sources of noise in the measurement of innovation may be present. The article then presents an empirical study using data on the adoption of software process technologies by 608 US based corporations. This study—which had circumstances quite favorable to aggregation—found that aggregating across three innovations within a technology class more than doubled the variance explained compared to single innovation models. Aggregating across assimilation stages had a slight positive effect on predictive validity. Taken together, these results provide initial confirmation of the conclusions from the conceptual analysis regarding the circumstances favoring aggregation.",
"title": ""
},
{
"docid": "df5778fce3318029d249de1ff37b0715",
"text": "The Switched Reluctance Machine (SRM) is a robust machine and is a candidate for ultra high speed applications. Until now the area of ultra high speed machines has been dominated by permanent magnet machines (PM). The PM machine has a higher torque density and some other advantages compared to SRMs. However, the soaring prices of the rare earth materials are driving the efforts to find an alternative to PM machines without significantly impacting the performance. At the same time significant progress has been made in the design and control of the SRM. This paper reviews the progress of the SRM as a high speed machine and proposes a novel rotor structure design to resolve the challenge of high windage losses at ultra high speed. It then elaborates on the path of modifying the design to achieve optimal performance. The simulation result of the final design is verified on FEA software. Finally, a prototype machine with similar design is built and tested to verify the simulation model. The experimental waveform indicates good agreement with the simulation result. Therefore, the performance of the prototype machine is analyzed and presented at the end of this paper.",
"title": ""
},
{
"docid": "405b908921f36dc6526653229d723d63",
"text": "Bitcoin is a computerized digital money and exchange network, represents an essential change in financial sectors, an interesting number of customers and excellent evaluation of channel inspection. In this research, dataset related to ten cryptocurrencies are used and created a new dataset by taking the closing price of each cryptocurrency for the research goal to ascertain how the direction and accuracy of price of the Bitcoin can be predicted by using data mining methods. Features engineering evaluated that all the ten cryptocurrencies are strongly correlated with each other. The task is achieved by implementation of supervised learning method in which random forest, support vector classifier, gradient boosting classifier, and neural network classifier are used under classification category and linear regression, recurrent neural network, gradient boosting regressor are used under regression category. In the classification category, support vector classifier achieved the highest accuracy of 62.31% and precision value 0.77. In regression category, gradient boosting regressor got the highest R-squared value 0.99.",
"title": ""
},
{
"docid": "68295a432f68900911ba29e5a6ca5e42",
"text": "In many forecasting applications, it is valuable to predict not only the value of a signal at a certain time point in the future, but also the values leading up to that point. This is especially true in clinical applications, where the future state of the patient can be less important than the patient's overall trajectory. This requires multi-step forecasting, a forecasting variant where one aims to predict multiple values in the future simultaneously. Standard methods to accomplish this can propagate error from prediction to prediction, reducing quality over the long term. In light of these challenges, we propose multi-output deep architectures for multi-step forecasting in which we explicitly model the distribution of future values of the signal over a prediction horizon. We apply these techniques to the challenging and clinically relevant task of blood glucose forecasting. Through a series of experiments on a real-world dataset consisting of 550K blood glucose measurements, we demonstrate the effectiveness of our proposed approaches in capturing the underlying signal dynamics. Compared to existing shallow and deep methods, we find that our proposed approaches improve performance individually and capture complementary information, leading to a large improvement over the baseline when combined (4.87 vs. 5.31 absolute percentage error (APE)). Overall, the results suggest the efficacy of our proposed approach in predicting blood glucose level and multi-step forecasting more generally.",
"title": ""
},
{
"docid": "99511c1267d396d3745f075a40a06507",
"text": "Problem Description: It should be well known that processors are outstripping memory performance: specifically that memory latencies are not improving as fast as processor cycle time or IPC or memory bandwidth. Thought experiment: imagine that a cache miss takes 10000 cycles to execute. For such a processor instruction level parallelism is useless, because most of the time is spent waiting for memory. Branch prediction is also less effective, since most branches can be determined with data already in registers or in the cache; branch prediction only helps for branches which depend on outstanding cache misses. At the same time, pressures for reduced power consumption mount. Given such trends, some computer architects in industry (although not Intel EPIC) are talking seriously about retreating from out-of-order superscalar processor architecture, and instead building simpler, faster, dumber, 1-wide in-order processors with high degrees of speculation. Sometimes this is proposed in combination with multiprocessing and multithreading: tolerate long memory latencies by switching to other processes or threads. I propose something different: build narrow fast machines but use intelligent logic inside the CPU to increase the number of outstanding cache misses that can be generated from a single program. By MLP I mean simply the number of outstanding cache misses that can be generated (by a single thread, task, or program) and executed in an overlapped manner. It does not matter what sort of execution engine generates the multiple outstanding cache misses. An out-of-order superscalar ILP CPU may generate multiple outstanding cache misses, but 1-wide processors can be just as effective. Change the metrics: total execution time remains the overall goal, but instead of reporting IPC as an approximation to this, we must report MLP. Limit studies should be in terms of total number of non-overlapped cache misses on critical path. Now do the research: Many present-day hot topics in computer architecture help ILP, but do not help MLP. As mentioned above, predicting branch directions for branches that can be determined from data already in the cache or in registers does not help MLP for extremely long latencies. Similarly, prefetching of data cache misses for array processing codes does not help MLP – it just moves it around. Instead, investigate microarchitectures that help MLP: (0) Trivial case – explicit multithreading, like SMT. (1) Slightly less trivial case – implicitly multithread single programs, either by compiler software on an MT machine, or by a hybrid, such as …",
"title": ""
},
{
"docid": "e5667a65bc628b93a1d5b0e37bfb8694",
"text": "The problem of determining whether an object is in motion, irrespective of camera motion, is far from being solved. We address this challenging task by learning motion patterns in videos. The core of our approach is a fully convolutional network, which is learned entirely from synthetic video sequences, and their ground-truth optical flow and motion segmentation. This encoder-decoder style architecture first learns a coarse representation of the optical flow field features, and then refines it iteratively to produce motion labels at the original high-resolution. We further improve this labeling with an objectness map and a conditional random field, to account for errors in optical flow, and also to focus on moving things rather than stuff. The output label of each pixel denotes whether it has undergone independent motion, i.e., irrespective of camera motion. We demonstrate the benefits of this learning framework on the moving object segmentation task, where the goal is to segment all objects in motion. Our approach outperforms the top method on the recently released DAVIS benchmark dataset, comprising real-world sequences, by 5.6%. We also evaluate on the Berkeley motion segmentation database, achieving state-of-the-art results.",
"title": ""
},
{
"docid": "000961818e2e0e619f1fc0464f69a496",
"text": "Database query languages can be intimidating to the non-expert, leading to the immense recent popularity for keyword based search in spite of its significant limitations. The holy grail has been the development of a natural language query interface. We present NaLIX, a generic interactive natural language query interface to an XML database. Our system can accept an arbitrary English language sentence as query input, which can include aggregation, nesting, and value joins, among other things. This query is translated, potentially after reformulation, into an XQuery expression that can be evaluated against an XML database. The translation is done through mapping grammatical proximity of natural language parsed tokens to proximity of corresponding elements in the result XML. In this demonstration, we show that NaLIX, while far from being able to pass the Turing test, is perfectly usable in practice, and able to handle even quite complex queries in a variety of application domains. In addition, we also demonstrate how carefully designed features in NaLIX facilitate the interactive query process and improve the usability of the interface.",
"title": ""
},
{
"docid": "8c56987e08f33c4d763341ec251cc463",
"text": "BACKGROUND\nA neonatal haemoglobinopathy screening programme was implemented in Brussels more than a decade ago and in Liège 5 years ago; the programme was adapted to the local situation.\n\n\nMETHODS\nNeonatal screening for haemoglobinopathies was universal, performed using liquid cord blood and an isoelectric focusing technique. All samples with abnormalities underwent confirmatory testing. Major and minor haemoglobinopathies were reported. Affected children were referred to a specialist centre. A central database in which all screening results were stored was available and accessible to local care workers. A central clinical database to monitor follow-up is under construction.\n\n\nRESULTS\nA total of 191,783 newborns were screened. One hundred and twenty-three (1:1559) newborns were diagnosed with sickle cell disease, seven (1:27,398) with beta thalassaemia major, five (1:38,357) with haemoglobin H disease, and seven (1:27,398) with haemoglobin C disease. All major haemoglobinopathies were confirmed, and follow-up of the infants was undertaken except for three infants who did not attend the first medical consultation despite all efforts.\n\n\nCONCLUSIONS\nThe universal neonatal screening programme was effective because no case of major haemoglobinopathy was identified after the neonatal period. The affected children received dedicated medical care from birth. The screening programme, and specifically the reporting of minor haemoglobinopathies, has been an excellent health education tool in Belgium for more than 12 years.",
"title": ""
},
{
"docid": "8a301043492d870d62b0f8c7e1d9228f",
"text": "Plenoptic cameras or light field cameras are a recent type of imaging devices that are starting to regain some popularity. These cameras are able to acquire the plenoptic function (4D light field) and, consequently, able to output the depth of a scene, by making use of the redundancy created by the multi-view geometry, where a single 3D point is imaged several times. Despite the attention given in the literature to standard plenoptic cameras, like Lytro, due to their simplicity and lower price, we did our work based on results obtained from a multi-focus plenoptic camera (Raytrix, in our case), due to their quality and higher resolution images. In this master thesis, we present an automatic method to estimate the virtual depth of a scene. Since the capture is done using a multi-focus plenoptic camera, we are working with multi-view geometry and lens with different focal lengths, and we can use that to back trace the rays in order to obtain the depth. We start by finding salient points and their respective correspondences using a scaled SAD (sum of absolute differences) method. In order to perform the referred back trace, obtaining robust results, we developed a RANSAC-like method, which we call COMSAC (Complete Sample Consensus). It is an iterative method that back trace the ligth rays in order to estimate the depth, eliminating the outliers. Finally, and since the depth map obtained is sparse, we developed a way to make it dense, by random growing. Since we used a publicly available dataset from Raytrix, comparisons between our results and the manufacturers’ ones are also presented. A paper was also submitted to 3DV 2014 (International Conference on 3D Vision), a conference on three-dimensional vision.",
"title": ""
},
{
"docid": "e98a987fce667f1bb0123448f1b08ce4",
"text": "Commonly, HoG/SVM classifier uses rectangular images for HoG feature descriptor extraction and training. This means that significant additional work has to be done to process irrelevant pixels belonging to the background surrounding the object of interest. Moreover, some areas of the foreground also can be eliminated from the processing to improve the algorithm speed and memory wise. In Boundary-Bitmap HoG approach proposed in this paper, the boundary of irregular shape of the object is represented by a bitmap to avoid processing of extra background and (partially) foreground pixels. Bitmap, derived from the training dataset, encodes those portions of an image to be used to train a classifier. Experimental results show that not only the proposed algorithm decreases the workload associated with HoG/SVM classifiers by 92.5% compared to the state-of-the-art, but also it shows an average increase about 6% in recall and a decrease about 3% in precision in comparison with standard HoG.",
"title": ""
},
{
"docid": "68649624bbd2aa73acd98df12f06fd28",
"text": "Grey wolf optimizer (GWO) is one of recent metaheuristics swarm intelligence methods. It has been widely tailored for a wide variety of optimization problems due to its impressive characteristics over other swarm intelligence methods: it has very few parameters, and no derivation information is required in the initial search. Also it is simple, easy to use, flexible, scalable, and has a special capability to strike the right balance between the exploration and exploitation during the search which leads to favourable convergence. Therefore, the GWO has recently gained a very big research interest with tremendous audiences from several domains in a very short time. Thus, in this review paper, several research publications using GWO have been overviewed and summarized. Initially, an introductory information about GWO is provided which illustrates the natural foundation context and its related optimization conceptual framework. The main operations of GWO are procedurally discussed, and the theoretical foundation is described. Furthermore, the recent versions of GWO are discussed in detail which are categorized into modified, hybridized and paralleled versions. The main applications of GWO are also thoroughly described. The applications belong to the domains of global optimization, power engineering, bioinformatics, environmental applications, machine learning, networking and image processing, etc. The open source software of GWO is also provided. The review paper is ended by providing a summary conclusion of the main foundation of GWO and suggests several possible future directions that can be further investigated.",
"title": ""
},
{
"docid": "3ab4c2383569fc02f0395e79070dc16d",
"text": "A report released last week by the US National Academies makes recommendations for tackling the issues surrounding the era of petabyte science.",
"title": ""
},
{
"docid": "25779dfc55dc29428b3939bb37c47d50",
"text": "Human daily activity recognition using mobile personal sensing technology plays a central role in the field of pervasive healthcare. One major challenge lies in the inherent complexity of human body movements and the variety of styles when people perform a certain activity. To tackle this problem, in this paper, we present a novel human activity recognition framework based on recently developed compressed sensing and sparse representation theory using wearable inertial sensors. Our approach represents human activity signals as a sparse linear combination of activity signals from all activity classes in the training set. The class membership of the activity signal is determined by solving a l1 minimization problem. We experimentally validate the effectiveness of our sparse representation-based approach by recognizing nine most common human daily activities performed by 14 subjects. Our approach achieves a maximum recognition rate of 96.1%, which beats conventional methods based on nearest neighbor, naive Bayes, and support vector machine by as much as 6.7%. Furthermore, we demonstrate that by using random projection, the task of looking for “optimal features” to achieve the best activity recognition performance is less important within our framework.",
"title": ""
},
{
"docid": "62db862b080e12decd61b09878e4b893",
"text": "OBJECTIVE\nThe purpose of this study was to estimate the incidence of postpartum hemorrhage (PPH) in the United States and to assess trends.\n\n\nSTUDY DESIGN\nPopulation-based data from the 1994-2006 National Inpatient Sample were used to identify women who were hospitalized with postpartum hemorrhage. Data for each year were plotted, and trends were assessed. Multivariable logistic regression was used in an attempt to explain the difference in PPH incidence between 1994 and 2006.\n\n\nRESULTS\nPPH increased 26% between 1994 and 2006 from 2.3% (n = 85,954) to 2.9% (n = 124,708; P < .001). The increase primarily was due to an increase in uterine atony, from 1.6% (n = 58,597) to 2.4% (n = 99,904; P < .001). The increase in PPH could not be explained by changes in rates of cesarean delivery, vaginal birth after cesarean delivery, maternal age, multiple birth, hypertension, or diabetes mellitus.\n\n\nCONCLUSION\nPopulation-based surveillance data signal an apparent increase in PPH caused by uterine atony. More nuanced clinical data are needed to understand the factors that are associated with this trend.",
"title": ""
},
{
"docid": "e029a189f85f9cb47a5ad0a766efad1d",
"text": "\"Next generation\" data acquisition technologies are allowing scientists to collect exponentially more data at a lower cost. These trends are broadly impacting many scientific fields, including genomics, astronomy, and neuroscience. We can attack the problem caused by exponential data growth by applying horizontally scalable techniques from current analytics systems to accelerate scientific processing pipelines.\n In this paper, we describe ADAM, an example genomics pipeline that leverages the open-source Apache Spark and Parquet systems to achieve a 28x speedup over current genomics pipelines, while reducing cost by 63%. From building this system, we were able to distill a set of techniques for implementing scientific analyses efficiently using commodity \"big data\" systems. To demonstrate the generality of our architecture, we then implement a scalable astronomy image processing system which achieves a 2.8--8.9x improvement over the state-of-the-art MPI-based system.",
"title": ""
},
{
"docid": "5ef2d7bacc85c9fe598248ec6ace70b2",
"text": "The form of hidden activation functions has been always an important issue in deep neural network (DNN) design. The most common choices for acoustic modelling are the standard Sigmoid and rectified linear unit (ReLU), which are normally used with fixed function shapes and no adaptive parameters. Recently, there have been several papers that have studied the use of parameterised activation functions for both computer vision and speaker adaptation tasks. In this paper, we investigate generalised forms of both Sigmoid and ReLU with learnable parameters, as well as their integration with the standard DNN acoustic model training process. Experiments using conversational telephone speech (CTS) Mandarin data, result in an average of 3.4% and 2.0% relative word error rate (WER) reduction with Sigmoid and ReLU parameterisations.",
"title": ""
}
] |
scidocsrr
|
0181a29f058e0bef4f856386a9c3332b
|
Primate anterior cingulate cortex: Where motor control, drive and cognition interface
|
[
{
"docid": "522eb9e461e3108308e668e746f2ee42",
"text": "We found that medial frontal cortex activity associated with action monitoring (detecting errors and behavioral conflict) depended on activity in the lateral prefrontal cortex. We recorded the error-related negativity (ERN), an event-related brain potential proposed to reflect anterior cingulate action monitoring, from individuals with lateral prefrontal damage or age-matched or young control participants. In controls, error trials generated greater ERN activity than correct trials. In individuals with lateral prefrontal damage, however, correct-trial ERN activity was equal to error-trial ERN activity. Lateral prefrontal damage also affected corrective behavior. Thus the lateral prefrontal cortex seemed to interact with the anterior cingulate cortex in monitoring behavior and in guiding compensatory systems.",
"title": ""
}
] |
[
{
"docid": "05e3d07db8f5ecf3e446a28217878b56",
"text": "In this paper, we investigate the topic of gender identification for short length, multi-genre, content-free e-mails. We introduce for the first time (to our knowledge), psycholinguistic and gender-linked cues for this problem, along with traditional stylometric features. Decision tree and Support Vector Machines learning algorithms are used to identify the gender of the author of a given e-mail. The experiment results show that our approach is promising with an average accuracy of 82.2%.",
"title": ""
},
{
"docid": "55908d56d0a3e702c8c42267e2bc433a",
"text": "In this paper, a brain computer interface (BCI) is designed using electroencephalogram (EEG) signals where the subjects have to think of only a single mental task. The method uses spectral power and power difference in 4 bands: delta and theta, beta, alpha and gamma. This could be used as an alternative to the existing BCI designs that require classification of several mental tasks. In addition, an attempt is made to show that different subjects require different mental task for minimising the error in BCI output. In the experimental study, EEG signals were recorded from 4 subjects while they were thinking of 4 different mental tasks. Combinations of resting (baseline) state and another mental task are studied at a time for each subject. Spectral powers in the 4 bands from 6 channels are computed using the energy of the elliptic FIR filter output. The mental tasks are detected by a neural network classifier. The results show that classification accuracy up to 97.5% is possible, provided that the most suitable mental task is used. As an application, the proposed method could be used to move a cursor on the screen. If cursor movement is used with a translation scheme like Morse code, the subjects could use the proposed BCI for constructing letters/words. This would be very useful for paralysed individuals to communicate with their external surroundings",
"title": ""
},
{
"docid": "994bebd20ef2594f5337387d97c6bd12",
"text": "In complex, open, and heterogeneous environments, agents must be able to reorganize towards the most appropriate organizations to adapt unpredictable environment changes within Multi-Agent Systems (MAS). Types of reorganization can be seen from two different levels. The individual agents level (micro-level) in which an agent changes its behaviors and interactions with other agents to adapt its local environment. And the organizational level (macro-level) in which the whole system changes it structure by adding or removing agents. This chapter is dedicated to overview different aspects of what is called MAS Organization including its motivations, paradigms, models, and techniques adopted for statically or dynamically organizing agents in MAS.",
"title": ""
},
{
"docid": "4843d4b24161d1dd594d2c0a0fb61ef1",
"text": "Cells release nano-sized membrane vesicles that are involved in intercellular communication by transferring biological information between cells. It is generally accepted that cells release at least three types of extracellular vesicles (EVs): apoptotic bodies, microvesicles and exosomes. While a wide range of putative biological functions have been attributed to exosomes, they are assumed to represent a homogenous population of EVs. We hypothesized the existence of subpopulations of exosomes with defined molecular compositions and biological properties. Density gradient centrifugation of isolated exosomes revealed the presence of two distinct subpopulations, differing in biophysical properties and their proteomic and RNA repertoires. Interestingly, the subpopulations mediated differential effects on the gene expression programmes in recipient cells. In conclusion, we demonstrate that cells release distinct exosome subpopulations with unique compositions that elicit differential effects on recipient cells. Further dissection of exosome heterogeneity will advance our understanding of exosomal biology in health and disease and accelerate the development of exosome-based diagnostics and therapeutics.",
"title": ""
},
{
"docid": "2fa193c95bf2f932f7020a6b78c33183",
"text": "The cost and efficiency of a photovoltaic (PV)-based grid-connected system depends upon the number of components and stages involved in the power conversion. This has led to the development of several single-stage configurations that can perform voltage transformation, maximum power point tracking (MPPT), inversion, and current shaping-all in one stage. Such configurations would usually require at least a couple of current and voltage sensors and a relatively complex control strategy. With a view to minimize the overall cost and control complexity, this paper presents a novel MPPT scheme with reduced number of sensors. The proposed scheme is applicable to any single-stage, single-phase grid-connected inverter operating in continuous conduction mode (CCM). The operation in CCM is desirable as it drastically reduces the stress on the components. Unlike other MPPT methods, which sense both PV array's output current and voltage, only PV array's output voltage is required to be sensed to implement MPPT. Only one current sensor is used for shaping the buck-boost inductor current as well as for MPPT. The information about power output of the array is obtained indirectly from array's voltage and the inductor current amplitude. Detailed analysis and the flowchart of the algorithm for the proposed scheme are included. Simulation and experimental results are also presented to highlight the usefulness of the scheme.",
"title": ""
},
{
"docid": "447c5b2db5b1d7555cba2430c6d73a35",
"text": "Recent years have seen a proliferation of complex Advanced Driver Assistance Systems (ADAS), in particular, for use in autonomous cars. These systems consist of sensors and cameras as well as image processing and decision support software components. They are meant to help drivers by providing proper warnings or by preventing dangerous situations. In this paper, we focus on the problem of design time testing of ADAS in a simulated environment. We provide a testing approach for ADAS by combining multi-objective search with surrogate models developed based on neural networks. We use multi-objective search to guide testing towards the most critical behaviors of ADAS. Surrogate modeling enables our testing approach to explore a larger part of the input search space within limited computational resources. We characterize the condition under which the multi-objective search algorithm behaves the same with and without surrogate modeling, thus showing the accuracy of our approach. We evaluate our approach by applying it to an industrial ADAS system. Our experiment shows that our approach automatically identifies test cases indicating critical ADAS behaviors. Further, we show that combining our search algorithm with surrogate modeling improves the quality of the generated test cases, especially under tight and realistic computational resources.",
"title": ""
},
{
"docid": "922ae1491235e249e809199ff983bf19",
"text": "Creativity has been defined in many different ways by different authors. This article explores these different definitions of creativity; the relationship between creativity and intelligence, and those factors which affect creativity, such as convergent and divergent thinking. In addition, the article explores the importance of computer technology for testing ideas and the importance of reflective thinking and the evaluation of thoughts. It concludes with a synthesis of the basic attributes of highly creative students and present some ideas of what scholars have said about strategies we can use to enhance creativity in students. Although originality and creative imagination are private, guidance and training can substantially increase the learner’s output.",
"title": ""
},
{
"docid": "e20a1bb5b9b02bc27f04bad55b82483b",
"text": "Acute heart failure syndromes (AHFS) are a heterogeneous group of commonly encountered and difficult to manage clinical syndromes associated with high morbidity and mortality. Dyspnoea, pulmonary, and systemic congestion often characterize AHFS due to acutely elevated intracardiac filling pressures and fluid overload. Diuresis, respiratory support, vasodilator therapy, and gradual attenuation of the activation of renin-angiotensin-aldosterone system (RAAS) and sympathetic nervous system (SNS) are the keystones of AHFS management. Despite available therapies, post-discharge mortality and re-hospitalization rates remain unacceptably high in AHFS. Neurohumoral-mediated cardiorenal dysfunction and congestion may contribute to these high event rates. Mineralocorticoid receptor antagonists (MRAs) serve a dual therapeutic role by enhancing diuresis and attenuating the pathological effects of RAAS and SNS activation. Although these agents are indicated in patients with chronic, severe heart failure with reduced ejection fraction (HF/REF) and in patients with HF/REF post-myocardial infarction (MI), they have not been systematically studied in patients with AHFS. The purpose of this review is to explore the potential efficacy and safety of MRAs in AHFS.",
"title": ""
},
{
"docid": "9097c75f98fcf355ce802f91b7599704",
"text": "LetM be an asymptotically flat 3-manifold of nonnegative scalar curvature. The Riemannian Penrose Inequality states that the area of an outermost minimal surface N in M is bounded by the ADM mass m according to the formula |N | ≤ 16πm2. We develop a theory of weak solutions of the inverse mean curvature flow, and employ it to prove this inequality for each connected component of N using Geroch’s monotonicity formula for the ADM mass. Our method also proves positivity of Bartnik’s gravitational capacity by computing a positive lower bound for the mass purely in terms of local geometry. 0. Introduction In this paper we develop the theory of weak solutions for the inverse mean curvature flow of hypersurfaces in a Riemannian manifold, and apply it to prove the Riemannian Penrose Inequality for a connected horizon, to wit: the total mass of an asymptotically flat 3-manifold of nonnegative scalar curvature is bounded below in terms of the area of each smooth, compact, connected, “outermost” minimal surface in the 3-manifold. A minimal surface is called outermost if it is not separated from infinity by any other compact minimal surface. The result was announced in [51]. The first author acknowledges the support of Sonderforschungsbereich 382, Tübingen. The second author acknowledges the support of an NSF Postdoctoral Fellowship, NSF Grants DMS-9626405 and DMS-9708261, a Sloan Foundation Fellowship, and the Max-Planck-Institut for Mathematics in the Sciences, Leipzig. Received May 15, 1998.",
"title": ""
},
{
"docid": "74acfe91e216c8494b7304cff03a8c66",
"text": "Diagnostic accuracy of the talar tilt test is not well established in a chronic ankle instability (CAI) population. Our purpose was to determine the diagnostic accuracy of instrumented and manual talar tilt tests in a group with varied ankle injury history compared with a reference standard of self-report questionnaire. Ninety-three individuals participated, with analysis occurring on 88 (39 CAI, 17 ankle sprain copers, and 32 healthy controls). Participants completed the Cumberland Ankle Instability Tool, arthrometer inversion talar tilt tests (LTT), and manual medial talar tilt stress tests (MTT). The ability to determine CAI status using the LTT and MTT compared with a reference standard was performed. The sensitivity (95% confidence intervals) of LTT and MTT was low [LTT = 0.36 (0.23-0.52), MTT = 0.49 (0.34-0.64)]. Specificity was good to excellent (LTT: 0.72-0.94; MTT: 0.78-0.88). Positive likelihood ratio (+ LR) values for LTT were 1.26-6.10 and for MTT were 2.23-4.14. Negative LR for LTT were 0.68-0.89 and for MTT were 0.58-0.66. Diagnostic odds ratios ranged from 1.43 to 8.96. Both clinical and arthrometer laxity testing appear to have poor overall diagnostic value for evaluating CAI as stand-alone measures. Laxity testing to assess CAI may only be useful to rule in the condition.",
"title": ""
},
{
"docid": "b29caaa973e60109fbc2f68e0eb562a6",
"text": "This correspondence introduces a new approach to characterize textures at multiple scales. The performance of wavelet packet spaces are measured in terms of sensitivity and selectivity for the classification of twenty-five natural textures. Both energy and entropy metrics were computed for each wavelet packet and incorporated into distinct scale space representations, where each wavelet packet (channel) reflected a specific scale and orientation sensitivity. Wavelet packet representations for twenty-five natural textures were classified without error by a simple two-layer network classifier. An analyzing function of large regularity ( 0 2 0 ) was shown to be slightly more efficient in representation and discrimination than a similar function with fewer vanishing moments (Ds) . In addition, energy representations computed from the standard wavelet decomposition alone (17 features) provided classification without error for the twenty-five textures included in our study. The reliability exhibited by texture signatures based on wavelet packets analysis suggest that the multiresolution properties of such transforms are beneficial for accomplishing segmentation, classification and subtle discrimination of texture.",
"title": ""
},
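As a brief aside on the texture-signature passage in the preceding record: the per-subband energy features it describes (energies of a standard 2-D wavelet decomposition used as texture descriptors) can be sketched in a few lines. This is only a minimal illustration under stated assumptions, not the authors' implementation; the wavelet choice ("db4"), the decomposition depth, and the normalization are assumptions, and the sketch relies on the third-party PyWavelets (pywt) package.

```python
import numpy as np
import pywt  # PyWavelets; assumed available in the environment

def wavelet_energy_features(image, wavelet="db4", level=2):
    """Return normalized per-subband energies from a standard 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
    # coeffs = [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
    subbands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    energies = np.array([np.sum(np.square(band)) for band in subbands])
    return energies / energies.sum()  # normalize so features are comparable across images

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    texture = rng.standard_normal((64, 64))           # stand-in for a texture patch
    print(wavelet_energy_features(texture).round(4))  # one feature per subband
```

The resulting feature vector would then be fed to any classifier; the passage reports a simple two-layer network for that step.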
{
"docid": "0a3d649baf7483245167979fbbb008d2",
"text": "Students participate more in a classroom and also report a better understanding of course concepts when steps are taken to actively engage them. The Student Engagement (SE) Survey was developed and used in this study for measuring student engagement at the class level and consisted of 14 questions adapted from the original National Survey of Student Engagement (NSSE) survey. The adapted survey examined levels of student engagement in 56 classes at an upper mid-western university in the USA. Campus-wide faculty members participated in a program for training them in innovative teaching methods including problem-based learning (PBL). Results of this study typically showed a higher engagement in higher-level classes and also those classes with fewer students. In addition, the level of engagement was typically higher in those classrooms with more PBL.",
"title": ""
},
{
"docid": "4d520e0a64c5ba95df44901646e145cf",
"text": "This paper describes the installation of a mathematical formula recognition module into an open source OCR system: OCRopus. In particular we consider the identification of inline formulas utilizing existing modules. Text lines including math formulas are first processed using a N-gram language model to reduce the number of formula candidates by thresholding the conditional probability of words. Then the formula candidates are classified into formulas and texts by SVM using geometric features associated with the bounding boxes of symbols.",
"title": ""
},
{
"docid": "a6c3a4dfd33eb902f5338f7b8c7f78e5",
"text": "A grey wolf optimizer for modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural networks architectures to perform human recognition, and to prove its effectiveness benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists in finding optimal parameters of its architecture; these parameters are the number of subgranules, percentage of data for the training phase, learning algorithm, goal error, number of hidden layers, and their number of neurons. Nowadays, there is a great variety of approaches and new techniques within the evolutionary computing area, and these approaches and techniques have emerged to help find optimal solutions to problems or models and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to know which of these techniques provides better results when applied to human recognition.",
"title": ""
},
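Since the preceding record centres on the grey wolf optimizer (GWO), a compact sketch of the canonical GWO update may help readers unfamiliar with it. This is a generic, minimal version on a toy objective, not the paper's modular granular neural network setup; the population size, iteration count, and sphere objective are illustrative assumptions.

```python
import numpy as np

def gwo(objective, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    """Minimal grey wolf optimizer: wolves move toward the three best solutions (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([objective(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]  # copies of the three leaders
        a = 2.0 - 2.0 * t / n_iter                            # control parameter decays from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            wolves[i] = np.clip(new_pos / 3.0, lo, hi)        # average of the three pulls
    best = min(wolves, key=objective)
    return best, objective(best)

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))  # toy cost standing in for a network-design objective
    best, cost = gwo(sphere, dim=5, bounds=(-10, 10))
    print(cost)
```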
{
"docid": "ea96aa3b9f162c69c738be2b190db9e0",
"text": "Batteries are currently being developed to power an increasingly diverse range of applications, from cars to microchips. How can scientists achieve the performance that each application demands? How will batteries be able to power the many other portable devices that will no doubt be developed in the coming years? And how can batteries become a sustainable technology for the future? The technological revolution of the past few centuries has been fuelled mainly by variations of the combustion reaction, the fire that marked the dawn of humanity. But this has come at a price: the resulting emissions of carbon dioxide have driven global climate change. For the sake of future generations, we urgently need to reconsider how we use energy in everything from barbecues to jet aeroplanes and power stations. If a new energy economy is to emerge, it must be based on a cheap and sustainable energy supply. One of the most flagrantly wasteful activities is travel, and here battery devices can potentially provide a solution, especially as they can be used to store energy from sustainable sources such as the wind and solar power. Because batteries are inherently simple in concept, it is surprising that their development has progressed much more slowly than other areas of electronics. As a result, they are often seen as being the heaviest, costliest and least-green components of any electronic device. It was the lack of good batteries that slowed down the deployment of electric cars and wireless communication, which date from at least 1899 and 1920, respectively (Fig. 1). The slow progress is due to the lack of suitable electrode materials and electrolytes, together with difficulties in mastering the interfaces between them. All batteries are composed of two electrodes connected by an ionically conductive material called an electrolyte. The two electrodes have different chemical potentials, dictated by the chemistry that occurs at each. When these electrodes are connected by means of an external device, electrons spontaneously flow from the more negative to the more positive potential. Ions are transported through the electrolyte, maintaining the charge balance, and electrical energy can be tapped by the external circuit. In secondary, or rechargeable, batteries, a larger voltage applied in the opposite direction can cause the battery to recharge. The amount of electrical energy per mass or volume that a battery can deliver is a function of the cell's voltage and capacity, which are dependent on the …",
"title": ""
},
{
"docid": "33126812301dfc04b475ecbc9c8ae422",
"text": "From fishtail to princess braids, these intricately woven structures define an important and popular class of hairstyle, frequently used for digital characters in computer graphics. In addition to the challenges created by the infinite range of styles, existing modeling and capture techniques are particularly constrained by the geometric and topological complexities. We propose a data-driven method to automatically reconstruct braided hairstyles from input data obtained from a single consumer RGB-D camera. Our approach covers the large variation of repetitive braid structures using a family of compact procedural braid models. From these models, we produce a database of braid patches and use a robust random sampling approach for data fitting. We then recover the input braid structures using a multi-label optimization algorithm and synthesize the intertwining hair strands of the braids. We demonstrate that a minimal capture equipment is sufficient to effectively capture a wide range of complex braids with distinct shapes and structures.",
"title": ""
},
{
"docid": "2cb0c74e57dea6fead692d35f8a8fac6",
"text": "Matching local image descriptors is a key step in many computer vision applications. For more than a decade, hand-crafted descriptors such as SIFT have been used for this task. Recently, multiple new descriptors learned from data have been proposed and shown to improve on SIFT in terms of discriminative power. This paper is dedicated to an extensive experimental evaluation of learned local features to establish a single evaluation protocol that ensures comparable results. In terms of matching performance, we evaluate the different descriptors regarding standard criteria. However, considering matching performance in isolation only provides an incomplete measure of a descriptors quality. For example, finding additional correct matches between similar images does not necessarily lead to a better performance when trying to match images under extreme viewpoint or illumination changes. Besides pure descriptor matching, we thus also evaluate the different descriptors in the context of image-based reconstruction. This enables us to study the descriptor performance on a set of more practical criteria including image retrieval, the ability to register images under strong viewpoint and illumination changes, and the accuracy and completeness of the reconstructed cameras and scenes. To facilitate future research, the full evaluation pipeline is made publicly available.",
"title": ""
},
{
"docid": "7998670588bee1965fd5a18be9ccb0d9",
"text": "In this letter, a hybrid visual servoing with a hierarchical task-composition control framework is described for aerial manipulation, i.e., for the control of an aerial vehicle endowed with a robot arm. The proposed approach suitably combines into a unique hybrid-control framework the main benefits of both image-based and position-based control schemes. Moreover, the underactuation of the aerial vehicle has been explicitly taken into account in a general formulation, together with a dynamic smooth activation mechanism. Both simulation case studies and experiments are presented to demonstrate the performance of the proposed technique.",
"title": ""
},
{
"docid": "32bb9f12da68d89a897c8fc7937c0a7d",
"text": "In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing request of creating immersive game experiences. The Microsoft Kinect sensor allows acquiring RGB, IR and depth images with a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor itself and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize every single imaging sensor. Experimental results show that the quality of the delivered model improved applying the proposed calibration procedure, which is applicable to both point clouds and the mesh model created with the Microsoft Fusion Libraries.",
"title": ""
},
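As a brief illustration of the kind of depth calibration described in the preceding record, the error-versus-distance model can be estimated by fitting a low-order polynomial to measured (distance, error) pairs. This is a hedged sketch only: the quadratic model form and the sample numbers below are assumptions for illustration, not the model or values reported in the paper.

```python
import numpy as np

# Hypothetical calibration samples: reference distance (m) vs. mean depth error (m).
distances = np.array([0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2, 3.6, 4.0])
errors    = np.array([0.002, 0.003, 0.005, 0.008, 0.011, 0.015, 0.020, 0.026, 0.033])

# Fit error(d) ~ a*d^2 + b*d + c; the quadratic form is an assumed choice, not the paper's model.
model = np.poly1d(np.polyfit(distances, errors, deg=2))

# Corrected depth = raw depth minus the predicted systematic error at that range.
raw_depth = 2.5
corrected = raw_depth - model(raw_depth)
print(f"predicted error at {raw_depth} m: {model(raw_depth):.4f} m, corrected depth: {corrected:.4f} m")
```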
{
"docid": "1fb40441cd6d439a0e024fd888de0b2d",
"text": "Purpose – The aim of this research is to define a data model of theses and dissertations that enables data exchange with CERIF-compatible CRIS systems and data exchange according to OAI-PMH protocol in different metadata formats (Dublin Core, EDT-MS, etc.). Design/methodology/approach – Various systems that contain metadata about theses and dissertations are analyzed. There are different standards and protocols that enable the interoperability of those systems: CERIF standard, AOI-PMH protocol, etc. A physical data model that enables interoperability with almost all of those systems is created using the PowerDesigner CASE tool. Findings – A set of metadata about theses and dissertations that contain all the metadata required by CERIF data model, Dublin Core format, EDT-MS format and all the metadata prescribed by the University of Novi Sad is defined. Defined metadata can be stored in the CERIF-compatible data model based on the MARC21 format. Practical implications – CRIS-UNS is a CRIS which has been developed at the University of Novi Sad since 2008. The system is based on the proposed data model, which enables the system’s interoperability with other CERIF-compatible CRIS systems. Also, the system based on the proposed model can become a member of NDLTD. Social implications – A system based on the proposed model increases the availability of theses and dissertations, and thus encourages the development of the knowledge-based society. Originality/value – A data model of theses and dissertations that enables interoperability with CERIF-compatible CRIS systems is proposed. A software system based on the proposed model could become a member of NDLTD and exchange metadata with institutional repositories. The proposed model increases the availability of theses and dissertations.",
"title": ""
}
] |
scidocsrr
|
d80c839d0a8de145abed209d5acb4852
|
A Survey on Wireless Indoor Localization from the Device Perspective
|
[
{
"docid": "5ed31e2c0b4f958996df1ac8f5dfd6cc",
"text": "Through-wall tracking has gained a lot of attentions in civilian applications recently. Many applications would benefit from such device-free tracking, e.g. elderly people surveillance, intruder detection, gaming, etc. In this work, we present a system, named Tadar, for tracking moving objects without instrumenting them us- ing COTS RFID readers and tags. It works even through walls and behind closed doors. It aims to enable a see-through-wall technology that is low-cost, compact, and accessible to civilian purpose. In traditional RFID systems, tags modulate their IDs on the backscatter signals, which is vulnerable to the interferences from the ambient reflections. Unlike past work, which considers such vulnerability as detrimental, our design exploits it to detect surrounding objects even through walls. Specifically, we attach a group of RFID tags on the outer wall and logically convert them into an antenna array, receiving the signals reflected off moving objects. This paper introduces two main innovations. First, it shows how to eliminate the flash (e.g. the stronger reflections off walls) and extract the reflections from the backscatter signals. Second, it shows how to track the moving object based on HMM (Hidden Markov Model) and its reflections. To the best of our knowledge, we are the first to implement a through-wall tracking using the COTS RFID systems. Empirical measurements with a prototype show that Tadar can detect objects behind 5\" hollow wall and 8\" concrete wall, and achieve median tracking errors of 7.8cm and 20cm in the X and Y dimensions.",
"title": ""
},
{
"docid": "dee069074d1bb5ae383ee7b3d3dd8f74",
"text": "WiFi based indoor positioning has recently gained more attention due to the advent of the IEEE 802.11v standard, requirements by the FCC for E911 calls, and increased interest in location-based services. While there exist several indoor localization techniques, we find that these techniques tradeoff either accuracy, scalability, pervasiveness or cost -- all of which are important requirements for a truly deployable positioning solution. Wireless signal-strength based approaches suffer from location errors, whereas time-of-flight (ToF) based solutions provide good accuracy but are not scalable. Recent solutions address these issues by augmenting WiFi with either smartphone sensing or mobile crowdsourcing. However, they require tight coupling between WiFi infrastructure and a client device, or they can determine the client's location only if it is mobile. In this paper, we present CUPID2.0 which improved our previously proposed CUPID indoor positioning system to overcome these limitations. We achieve this by addressing the fundamental limitations in Time-of-Flight based localization and combining ToF with signal strength to address scalability. Experiments from $6$ cities using $40$ different mobile devices, comprising of more than $2.5$ million location fixes demonstrate feasibility. CUPID2.0 is currently under production, and we expect CUPID2.0 to ignite the wide adoption of WLAN-based positioning systems and their services.",
"title": ""
},
{
"docid": "e84c15551a746b18936cf43a7d7f1c63",
"text": "Indoor localization is of great importance to a wide range of applications in the era of mobile computing. Current mainstream solutions rely on Received Signal Strength (RSS) of wireless signals as fingerprints to distinguish and infer locations. However, those methods suffer from fingerprint ambiguity that roots in multipath fading and temporal dynamics of wireless signals. Though pioneer efforts have resorted to motion-assisted or peer-assisted localization, they neither work in real time nor work without the help of peer users, which introduces extra costs and constraints, and thus degrades their practicality. To get over these limitations, we propose Argus, an image-assisted localization system for mobile devices. The basic idea of Argus is to extract geometric constraints from crowdsourced photos, and to reduce fingerprint ambiguity by mapping the constraints jointly against the fingerprint space. We devise techniques for photo selection, geometric constraint extraction, joint location estimation, and build a prototype that runs on commodity phones. Extensive experiments show that Argus triples the localization accuracy of classic RSS-based method, in time no longer than normal WiFi scanning, with negligible energy consumption.",
"title": ""
}
] |
[
{
"docid": "07f1caa5f4c0550e3223e587239c0a14",
"text": "Due to the unavailable GPS signals in indoor environments, indoor localization has become an increasingly heated research topic in recent years. Researchers in robotics community have tried many approaches, but this is still an unsolved problem considering the balance between performance and cost. The widely deployed low-cost WiFi infrastructure provides a great opportunity for indoor localization. In this paper, we develop a system for WiFi signal strength-based indoor localization and implement two approaches. The first is improved KNN algorithm-based fingerprint matching method, and the other is the Gaussian Process Regression (GPR) with Bayes Filter approach. We conduct experiments to compare the improved KNN algorithm with the classical KNN algorithm and evaluate the localization performance of the GPR with Bayes Filter approach. The experiment results show that the improved KNN algorithm can bring enhancement for the fingerprint matching method compared with the classical KNN algorithm. In addition, the GPR with Bayes Filter approach can provide about 2m localization accuracy for our test environment.",
"title": ""
},
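To make the fingerprinting idea in the preceding record concrete, here is a minimal weighted-KNN position estimate from WiFi RSS vectors. It is an illustrative sketch only: the toy radio map, the value of k, and the inverse-distance weighting are assumptions, not the improved KNN variant evaluated in that paper.

```python
import numpy as np

def knn_localize(rss_query, fingerprint_rss, fingerprint_xy, k=3):
    """Weighted k-nearest-neighbour localization on a WiFi RSS radio map."""
    dists = np.linalg.norm(fingerprint_rss - rss_query, axis=1)  # Euclidean distance in signal space
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)                      # closer fingerprints count more
    return np.average(fingerprint_xy[nearest], axis=0, weights=weights)

if __name__ == "__main__":
    # Toy radio map: RSS (dBm) from 3 access points at 4 surveyed positions (x, y in metres).
    fingerprint_rss = np.array([[-40, -70, -60],
                                [-55, -50, -65],
                                [-70, -45, -55],
                                [-60, -65, -40]], dtype=float)
    fingerprint_xy = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
    print(knn_localize(np.array([-50.0, -55.0, -62.0]), fingerprint_rss, fingerprint_xy))
```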
{
"docid": "2c63d6e44d9582355d9ac4b471fe28c3",
"text": "Introduction Immediate implant placement is a well-recognized and successful treatment option following tooth removal.1 Although the success rates for both immediate and delayed implant techniques are comparable, the literature reports that one can expect there to be recession of the buccal / facial gingiva of at least 1 mm following immediate implant placement, with the recession to possibly worsen in thin gingival biotypes.2 Low aesthetic value areas may be of less concern, however this recession and ridge collapse can pose an aesthetic disaster in areas such as the anterior maxilla. Compromised aesthetics may be masked to some degree by a low lip-line, thick gingival biotype, when treating single tooth cases, and so forth, but when implant therapy is carried out in patients with high lip-lines, patients with high aesthetic demands, with a very thin gingival biotype or multiple missing teeth where there is more extensive tissue deficit, then the risk for an aesthetic failure is far greater.3 The socket-shield (SS) technique provides a promising treatment adjunct to better manage these risks and preserve the post-extraction tissues in aesthetically challenging cases.4 The principle is to prepare the root of a tooth indicated for extraction in such a manner that the buccal / facial root section remains in-situ with its physiologic relation to the buccal plate intact. The tooth root section’s periodontal attachment apparatus (periodontal ligament (PDL), attachment fibers, vascularization, root cementum, bundle bone, alveolar bone) is intended to remain vital and undamaged so as to prevent the expected post-extraction socket remodeling and to support the buccal / facial tissues. Hereafter a case is presented where the SS technique was carried out at implant placement and the results from the case followed up at 1 year post-treatment demonstrate the degree of facial ridge tissue preservation achieved. C L I N I C A L",
"title": ""
},
{
"docid": "d6dba7a89bc123bc9bb616df6faee2bc",
"text": "Continuing interest in digital games indicated that it would be useful to update [Authors’, 2012] systematic literature review of empirical evidence about the positive impacts an d outcomes of games. Since a large number of papers was identified in th e period from 2009 to 2014, the current review focused on 143 papers that provided higher quality evidence about the positive outcomes of games. [Authors’] multidimensional analysis of games and t heir outcomes provided a useful framework for organising the varied research in this area. The mo st frequently occurring outcome reported for games for learning was knowledge acquisition, while entertain me t games addressed a broader range of affective, behaviour change, perceptual and cognitive and phys iological outcomes. Games for learning were found across varied topics with STEM subjects and health the most popular. Future research on digital games would benefit from a systematic programme of experi m ntal work, examining in detail which game features are most effective in promoting engagement and supporting learning.",
"title": ""
},
{
"docid": "d1796cd063e0d1ea03462d2002c4dae5",
"text": "This paper describes the experimental characterization of MOS bipolar pseudo-resistors for a general purpose technology. Very-high resistance values can be obtained in small footprint layouts, allowing the development of high-pass filters with RC constants over 1 second. The pseudo-resistor presents two different behavior regions, and as described in this work, in bio-amplifiers applications, important functions are assigned to each of these regions. 0.13 μm 8HP technology from GlobalFoundries was chosen as the target technology for the prototypes, because of its versatility. Due to the very-low current of pseudo-resistors, a circuit for indirect resistance measurement was proposed and applied. The fabricated devices presented resistances over 1 teraohm and preserved both the linear and the exponential operation regions, proving that they are well suited for bio-amplifier applications.",
"title": ""
},
{
"docid": "1824ca63290cd19394e6257cb18b198d",
"text": "We sought to assess the prevalence of methicillin-resistance among Staphylococcus aureus isolates in Africa. We included articles published in 2005 or later reporting for the prevalence of MRSA among S. aureus clinical isolates. Thirty-two studies were included. In Tunisia, the prevalence of MRSA increased from 16% to 41% between 2002-2007, while in Libya it was 31% in 2007. In South Africa, the prevalence decreased from 36% in 2006 to 24% during 2007-2011. In Botswana, the prevalence varied from 23-44% between 2000-2007. In Algeria and Egypt, the prevalence was 45% and 52% between 2003-2005, respectively. In Nigeria, the prevalence was greater in the northern than the southern part. In Ethiopia and the Ivory Coast, the prevalence was 55% and 39%, respectively. The prevalence of MRSA was lower than 50% in most of the African countries, although it appears to have risen since 2000 in many African countries, except for South Africa.",
"title": ""
},
{
"docid": "b9ac19895dbc80f6b732cd8967fb92fb",
"text": "Tracking congestion throughout the network road is a critical component of Intelligent transportation network management systems. Understanding how the traffic flows and short-term prediction of congestion occurrence due to rush-hour or incidents can be beneficial to such systems to effectively manage and direct the traffic to the most appropriate detours. Many of the current traffic flow prediction systems are designed by utilizing a central processing component where the prediction is carried out through aggregation of the information gathered from all measuring stations. However, centralized systems are not scalable and fail provide real-time feedback to the system whereas in a decentralized scheme, each node is responsible to predict its own short-term congestion based on the local current measurements in neighboring nodes. We propose a decentralized deep learning-based method where each node accurately predicts its own congestion state in realtime based on the congestion state of the neighboring stations. Moreover, historical data from the deployment site is not required, which makes the proposed method more suitable for newly installed stations. In order to achieve higher performance, we introduce a regularized euclidean loss function that favors high congestion samples over low congestion samples to avoid the impact of the unbalanced training dataset. A novel dataset for this purpose is designed based on the traffic data obtained from traffic control stations in northern California. Extensive experiments conducted on the designed benchmark reflect a successful congestion prediction.",
"title": ""
},
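The "regularized Euclidean loss that favors high-congestion samples" mentioned in the preceding record can be illustrated with a simple weighted squared-error loss. The weighting rule, threshold, and constants below are assumptions chosen purely for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def weighted_congestion_loss(y_pred, y_true, high_weight=4.0, threshold=0.7):
    """Squared-error loss that up-weights samples whose true congestion level exceeds a threshold."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    weights = np.where(y_true >= threshold, high_weight, 1.0)  # rare congested samples count more
    return float(np.mean(weights * (y_pred - y_true) ** 2))

# Example: under-predicting a congested sample is penalized more than a similar error on free flow.
print(weighted_congestion_loss([0.5, 0.2], [0.9, 0.2]))  # the congested sample dominates the loss
print(weighted_congestion_loss([0.2, 0.5], [0.2, 0.1]))
```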
{
"docid": "048081246f39fc80273d08493c770016",
"text": "Skin detection is used in many applications, such as face recognition, hand tracking, and human-computer interaction. There are many skin color detection algorithms that are used to extract human skin color regions that are based on the thresholding technique since it is simple and fast for computation. The efficiency of each color space depends on its robustness to the change in lighting and the ability to distinguish skin color pixels in images that have a complex background. For more accurate skin detection, we are proposing a new threshold based on RGB and YUV color spaces. The proposed approach starts by converting the RGB color space to the YUV color model. Then it separates the Y channel, which represents the intensity of the color model from the U and V channels to eliminate the effects of luminance. After that the threshold values are selected based on the testing of the boundary of skin colors with the help of the color histogram. Finally, the threshold was applied to the input image to extract skin parts. The detected skin regions were quantitatively compared to the actual skin parts in the input images to measure the accuracy and to compare the results of our threshold to the results of other’s thresholds to prove the efficiency of our approach. The results of the experiment show that the proposed threshold is more robust in terms of dealing with the complex background and light conditions than others. Keyword: Skin segmentation; Thresholding technique; Skin detection; Color space",
"title": ""
},
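A hedged sketch of the thresholding pipeline described in the preceding record: convert RGB to YUV, discard the luminance channel Y, and keep pixels whose U and V fall inside a skin range. The conversion uses the standard BT.601 coefficients, but the U/V bounds below are illustrative placeholders, not the thresholds derived in the paper.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """BT.601 RGB->YUV conversion; rgb is an (H, W, 3) float array with values in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return y, u, v

def skin_mask(rgb, u_range=(-30.0, 5.0), v_range=(5.0, 45.0)):
    """Return a boolean mask of likely skin pixels; the bounds are illustrative, not calibrated."""
    _, u, v = rgb_to_yuv(np.asarray(rgb, dtype=float))  # Y is dropped to reduce lighting effects
    return (u >= u_range[0]) & (u <= u_range[1]) & (v >= v_range[0]) & (v <= v_range[1])

if __name__ == "__main__":
    patch = np.full((2, 2, 3), [200.0, 150.0, 130.0])   # a skin-like RGB colour
    print(skin_mask(patch))
```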
{
"docid": "3c27b3e11ba9924e9c102fc9ba7907b6",
"text": "The Visagraph IITM Eye Movement Recording System is an instrument that assesses reading eye movement efficiency and related parameters objectively. It also incorporates automated data analysis. In the standard protocol, the patient reads selections only at the level of their current school grade, or at the level that has been determined by a standardized reading test. In either case, deficient reading eye movements may be the consequence of a language-based reading disability, an oculomotor-based reading inefficiency, or both. We propose an addition to the standard protocol: the patient’s eye movements are recorded a second time with text that is significantly below the grade level of the initial reading. The goal is to determine which factor is primarily contributing to the patient’s reading problem, oculomotor or language. This concept is discussed in the context of two representative cases.",
"title": ""
},
{
"docid": "b055d5ce27758b4d759fdc66dc24144b",
"text": "We present an approach for forecasting labor demand in a services business. We introduce an arrangement of machine learning techniques, each constructed by necessity to overcome issues with data veracity and high dimensionality.",
"title": ""
},
{
"docid": "2f6ed4c2988391cc4ad95fe742994a1d",
"text": "The negative effect of increasing atmospheric nitrogen (N) pollution on grassland biodiversity is now incontrovertible. However, the recent introduction of cleaner technologies in the UK has led to reductions in the emissions of nitrogen oxides, with concomitant decreases in N deposition. The degree to which grassland biodiversity can be expected to ‘bounce back’ in response to these improvements in air quality is uncertain, with a suggestion that long-term chronic N addition may lead to an alternative low biodiversity state. Here we present evidence from the 160-year-old Park Grass Experiment at Rothamsted Research, UK, that shows a positive response of biodiversity to reducing N addition from either atmospheric pollution or fertilizers. The proportion of legumes, species richness and diversity increased across the experiment between 1991 and 2012 as both wet and dry N deposition declined. Plots that stopped receiving inorganic N fertilizer in 1989 recovered much of the diversity that had been lost, especially if limed. There was no evidence that chronic N addition has resulted in an alternative low biodiversity state on the Park Grass plots, except where there has been extreme acidification, although it is likely that the recovery of plant communities has been facilitated by the twice-yearly mowing and removal of biomass. This may also explain why a comparable response of plant communities to reduced N inputs has yet to be observed in the wider landscape.",
"title": ""
},
{
"docid": "c77fec3ea0167df15cfd4105a7101a1e",
"text": "This paper is about extending the reach and endurance of outdoor localisation using stereo vision. At the heart of the localisation is the fundamental task of discovering feature correspondences between recorded and live images. One aspect of this problem involves deciding where to look for correspondences in an image and the second is deciding what to look for. This latter point, which is the main focus of our paper, requires understanding how and why the appearance of visual features can change over time. In particular, such knowledge allows us to better deal with abrupt and challenging changes in lighting. We show how by instantiating a parallel image processing stream which operates on illumination-invariant images, we can substantially improve the performance of an outdoor visual navigation system. We will demonstrate, explain and analyse the effect of the RGB to illumination-invariant transformation and suggest that for little cost it becomes a viable tool for those concerned with having robots operate for long periods outdoors.",
"title": ""
},
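The RGB-to-illumination-invariant transformation referenced in the preceding record is commonly written as a weighted combination of log channel responses. The sketch below follows that general form only; the mixing parameter alpha is camera-dependent, and the value used here is an assumed placeholder, not a value from the paper.

```python
import numpy as np

def illumination_invariant(rgb, alpha=0.48):
    """One-channel illumination-invariant image from a linear RGB image with values in (0, 1].

    I = 0.5 + log(G) - alpha * log(B) - (1 - alpha) * log(R); alpha depends on the camera's
    spectral response, and 0.48 here is only an assumed example value.
    """
    rgb = np.clip(np.asarray(rgb, dtype=float), 1e-6, None)  # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)

if __name__ == "__main__":
    shaded = np.array([[[0.2, 0.3, 0.25]]])  # the same surface under dim light
    sunlit = np.array([[[0.4, 0.6, 0.5]]])   # and under twice the illumination
    # A uniform brightness scaling cancels in the log combination, so the two outputs match.
    print(illumination_invariant(shaded), illumination_invariant(sunlit))
```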
{
"docid": "de4677a8bb9d1e43a4b6fe4f2e6b6106",
"text": "Reinforcement learning (RL) has developed into a large research field. The current state-ofthe-art is comprised of several subfields dealing with, for example, hierarchical abstraction and relational representations. This overview is targeted at researchers interested in RL who want to know where to start when studying RL in general, and where to start within the field of RL when faced with specific problem domains. This overview is by no means complete, nor does it describe all relevant texts. In fact, there are many more. The main function of this overview is to provide a reasonable amount of good entry points into the rich field of RL. All texts are widely available and most of them are online. General and Introductory Texts There are many texts that introduce the exciting field of RL and Markov decision processes (see for example the mentioned PhD theses at the end of this overview). Furthermore, many recent AI and machine learning textbooks cover basic RL. Some of the core texts in the field are the following. I M. L. Puterman. Markov Decision Processes—Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, 1994 I D. P. Bertsekas and J. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996 I L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996 I S. S. Keerthi and B. Ravindran. Reinforcement learning. In E. Fiesler and R. Beale, editors, Handbook of Neural Computation, chapter C3. Institute of Physics and Oxford University Press, New York, New York, 1997 I R. S. Sutton and A. G. Barto. Reinforcement Learning: an Introduction. The MIT Press, Cambridge, 1998 I C. Boutilier, T. Dean, and S. Hanks. Decision theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 11:1–94, 1999 I M. van Otterlo. The Logic of Adaptive Behavior: Knowledge Representation and Algorithms for Adaptive Sequential Decision Making under Uncertainty in First-Order and Relational Domains. IOS Press, Amsterdam, The Netherlands, 2009 The book by Sutton and Barto is available online, for free. You can find it at http://www.cs.ualberta.ca/∼ sutton/book/the-book.html Function Approximation, Generalization and Abstraction Because most problems are too large to represent explicitly, the majority of techniques in current RL research employs some form of generalization, abstraction or function approximation. Ergo, there are innumerable texts that deal with these matters. Some interesting starting points are the following.",
"title": ""
},
{
"docid": "3ecb8f96641ea6b3f24161ac9b8c14ad",
"text": "This paper describes different arrangements for a dual-rotor, radial-flux, and permanent-magnet brushless dc motor for application to variable-speed air conditioners. In conventional air conditioners, two motors of appropriate ratings are usually used to drive the condenser and evaporator separately. Alternatively, a motor with two output shafts may be employed, and this is studied here. The motor has inner and outer rotors with a stator in between which is toroidally wound or axially wound with inner and outer slotted stator surfaces. The power sharing on the two rotors is designed to meet the requirement of the condenser and evaporator. Finite element analysis (FEA) is employed to verify the designs. A prototype is made and tested to evaluate the performance. Alternative windings are investigated to assess the possibilities of decoupling the rotors so that they run independently. In the final section, a new and novel arrangement is proposed, where one three-phase winding set and one two-phase winding set (both toroidal) are wound on the same stator to control two rotors of different pole numbers. The two winding sets can be bifilar or share the same set of phase windings. This design simplifies the winding (because it is toroidal) and reduces the copper loss or amount of copper required. The design is tested using FEA solutions, and the initial results indicate that this machine could operate successfully.",
"title": ""
},
{
"docid": "6141b0cb5d5b2f24336714453a29b03f",
"text": "We present the Extended Paraphrase Typology (EPT) and the Extended Typology Paraphrase Corpus (ETPC). The EPT typology addresses several practical limitations of existing paraphrase typologies: it is the first typology that copes with the non-paraphrase pairs in the paraphrase identification corpora and distinguishes between contextual and habitual paraphrase types. ETPC is the largest corpus to date annotated with atomic paraphrase types. It is the first corpus with detailed annotation of both the paraphrase and the non-paraphrase pairs and the first corpus annotated with paraphrase and negation. Both new resources contribute to better understanding the paraphrase phenomenon, and allow for studying the relationship between paraphrasing and negation. To the developers of Paraphrase Identification systems ETPC corpus offers better means for evaluation and error analysis. Furthermore, the EPT typology and ETPC corpus emphasize the relationship with other areas of NLP such as Semantic Similarity, Textual Entailment, Summarization and Simplification.",
"title": ""
},
{
"docid": "953505ad3cb7991a051225e10ae8f2db",
"text": "Recent botnets such as Conficker, Kraken, and Torpig have used DNS-based \"domain fluxing\" for command-and-control, where each Bot queries for existence of a series of domain names and the owner has to register only one such domain name. In this paper, we develop a methodology to detect such \"domain fluxes\" in DNS traffic by looking for patterns inherent to domain names that are generated algorithmically, in contrast to those generated by humans. In particular, we look at distribution of alphanumeric characters as well as bigrams in all domains that are mapped to the same set of IP addresses. We present and compare the performance of several distance metrics, including K-L distance, Edit distance, and Jaccard measure. We train by using a good dataset of domains obtained via a crawl of domains mapped to all IPv4 address space and modeling bad datasets based on behaviors seen so far and expected. We also apply our methodology to packet traces collected at a Tier-1 ISP and show we can automatically detect domain fluxing as used by Conficker botnet with minimal false positives, in addition to discovering a new botnet within the ISP trace. We also analyze a campus DNS trace to detect another unknown botnet exhibiting advanced domain-name generation technique.",
"title": ""
},
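To make the distribution-comparison idea in the preceding record concrete, here is a minimal sketch that compares the character unigram distribution of a group of domain labels against a baseline of benign names using K-L divergence. The baseline list, smoothing constant, and interpretation threshold are illustrative assumptions, not the paper's datasets or settings.

```python
import math
from collections import Counter

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def char_distribution(domains, smoothing=1.0):
    """Smoothed unigram distribution over alphanumerics in the given domain labels."""
    counts = Counter(c for d in domains for c in d.lower() if c in ALPHABET)
    total = sum(counts.values()) + smoothing * len(ALPHABET)
    return {c: (counts[c] + smoothing) / total for c in ALPHABET}

def kl_divergence(p, q):
    """D_KL(p || q); both are dicts over the same alphabet with nonzero entries."""
    return sum(p[c] * math.log(p[c] / q[c]) for c in ALPHABET)

benign = ["google", "wikipedia", "weather", "university", "shopping", "news"]  # assumed baseline
suspect = ["xk2qzv9f", "q8rjw3np", "zv4tk7xq", "m9qxw2zr"]                      # DGA-like labels

score = kl_divergence(char_distribution(suspect), char_distribution(benign))
print(f"K-L score: {score:.3f}")  # higher score -> character usage far from the benign baseline
```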
{
"docid": "1c5591bec1b8bfab63309aa2eb488e83",
"text": "When performing visualization and classification, people often confront the problem of dimensionality reduction. Isomap is one of the most promising nonlinear dimensionality reduction techniques. However, when Isomap is applied to real-world data, it shows some limitations, such as being sensitive to noise. In this paper, an improved version of Isomap, namely S-Isomap, is proposed. S-Isomap utilizes class information to guide the procedure of nonlinear dimensionality reduction. Such a kind of procedure is called supervised nonlinear dimensionality reduction. In S-Isomap, the neighborhood graph of the input data is constructed according to a certain kind of dissimilarity between data points, which is specially designed to integrate the class information. The dissimilarity has several good properties which help to discover the true neighborhood of the data and, thus, makes S-Isomap a robust technique for both visualization and classification, especially for real-world problems. In the visualization experiments, S-Isomap is compared with Isomap, LLE, and WeightedIso. The results show that S-Isomap performs the best. In the classification experiments, S-Isomap is used as a preprocess of classification and compared with Isomap, WeightedIso, as well as some other well-established classification methods, including the K-nearest neighbor classifier, BP neural network, J4.8 decision tree, and SVM. The results reveal that S-Isomap excels compared to Isomap and WeightedIso in classification, and it is highly competitive with those well-known classification methods.",
"title": ""
},
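The class-guided dissimilarity that drives the supervised variant in the preceding record can be illustrated as follows: distances between same-class points are shrunk and distances between different-class points are inflated before the neighborhood graph is built. The functional form and the constants beta and alpha below are illustrative choices under that general idea, not necessarily the exact definition used in S-Isomap.

```python
import numpy as np

def supervised_dissimilarity(X, y, beta=None, alpha=0.1):
    """Class-aware dissimilarity: same-class pairs get smaller values, different-class pairs larger."""
    X = np.asarray(X, dtype=float)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # squared Euclidean distances
    if beta is None:
        beta = np.mean(d2)                                      # scale parameter; a simple heuristic
    same = np.asarray(y)[:, None] == np.asarray(y)[None, :]
    shrink = np.sqrt(1.0 - np.exp(-d2 / beta))                  # bounded, grows slowly within a class
    inflate = np.sqrt(np.exp(d2 / beta)) - alpha                # grows quickly across classes
    return np.where(same, shrink, inflate)

if __name__ == "__main__":
    X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
    y = np.array([0, 0, 1])
    print(supervised_dissimilarity(X, y).round(3))              # the cross-class entries dominate
```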
{
"docid": "6ed1985653d20180383948515edaf9a8",
"text": "The paradigm of ubiquitous computing has the potential to enhance classroom behavior management. In this work, we used an action research approach to examine the use of a tablet-based behavioral data collection system by school practitioners, and co-design an interface for displaying the behavioral data to their students. We present a wall-mounted display prototype and discuss its potential for supplementing existing classroom behavior management practices. We found that wall-mounted displays could help school practitioners to provide a wider range of behavioral reinforces and deliver specific and immediate feedback to students.",
"title": ""
},
{
"docid": "f1a162f64838817d78e97a3c3087fae4",
"text": "Most literature on support vector machines (SVMs) concentrates on the dual optimization problem. In this letter, we point out that the primal problem can also be solved efficiently for both linear and nonlinear SVMs and that there is no reason for ignoring this possibility. On the contrary, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.",
"title": ""
},
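As a small companion to the preceding record, here is a linear SVM trained directly in the primal by sub-gradient descent on the regularized hinge loss, rather than via the dual. The step-size schedule, regularization constant, and toy data are assumptions for illustration; the letter itself discusses a broader family of primal methods.

```python
import numpy as np

def train_primal_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Linear SVM in the primal: minimize lam/2 * ||w||^2 + mean hinge loss by sub-gradient descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                  # decreasing step size (Pegasos-style schedule)
            margin = y[i] * (X[i] @ w + b)
            w *= (1.0 - eta * lam)                 # gradient step on the regularizer
            if margin < 1:                         # sub-gradient of the hinge loss is active
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

if __name__ == "__main__":
    X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
    y = np.array([1, 1, -1, -1])                   # labels must be +1 / -1
    w, b = train_primal_svm(X, y)
    print(np.sign(X @ w + b))                      # should reproduce the training labels
```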
{
"docid": "dbf5fd755e91c4a67446dcce2d8759ba",
"text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org. .",
"title": ""
},
{
"docid": "cb088e1c4aaf5f021f7120b2a4f388ad",
"text": "This paper investigates an explicit dynamic model of the PUMA 560 robot manipulators, based on standard Denavit-Hartenberg approach and without any mathematical simplifications. The presented model obviates the existing shortcomings in reference model in MATLAB robotic toolbox and it can be an appropriate substitution for robotic toolbox. A numerical comparison, employing different inputs, is utilized to illustrate the accuracy of the mentioned model.",
"title": ""
}
] |
scidocsrr
|
6e254f1a3e0039abac80b9b06f4b8a6f
|
Using Proactive Fault-Tolerance Approach to Enhance Cloud Service Reliability
|
[
{
"docid": "1b04911f677767284063133908ab4bb1",
"text": "An increasing number of companies are beginning to deploy services/applications in the cloud computing environment. Enhancing the reliability of cloud service has become a critical and challenging research problem. In the cloud computing environment, all resources are commercialized. Therefore, a reliability enhancement approach should not consume too much resource. However, existing approaches cannot achieve the optimal effect because of checkpoint image-sharing neglect, and checkpoint image inaccessibility caused by node crashing. To address this problem, we propose a cloud service reliability enhancement approach for minimizing network and storage resource usage in a cloud data center. In our proposed approach, the identical parts of all virtual machines that provide the same service are checkpointed once as the service checkpoint image, which can be shared by those virtual machines to reduce the storage resource consumption. Then, the remaining checkpoint images only save the modified page. To persistently store the checkpoint image, the checkpoint image storage problem is modeled as an optimization problem. Finally, we present an efficient heuristic algorithm to solve the problem. The algorithm exploits the data center network architecture characteristics and the node failure predicator to minimize network resource usage. To verify the effectiveness of the proposed approach, we extend the renowned cloud simulator Cloudsim and conduct experiments on it. Experimental results based on the extended Cloudsim show that the proposed approach not only guarantees cloud service reliability, but also consumes fewer network and storage resources than other approaches.",
"title": ""
},
{
"docid": "acaaa0a6316bffb3ed618da7ec4d8d80",
"text": "The rapid growth in demand for computational power driven by modern service applications combined with the shift to the Cloud computing model have led to the establishment of large-scale virtualized data centers. Such data centers consume enormous amounts of electrical energy resulting in high operating costs and carbon dioxide emissions. Dynamic consolidation of virtual machines (VMs) using live migration and switching idle nodes to the sleep mode allow Cloud providers to optimize resource usage and reduce energy consumption. However, the obligation of providing high quality of service to customers leads to the necessity in dealing with the energy-performance trade-off, as aggressive consolidation may lead to performance degradation. Due to the variability of workloads experienced by modern applications, the VM placement should be optimized continuously in an online manner. To understand the implications of the online nature of the problem, we conduct competitive analysis and prove competitive ratios of optimal online deterministic algorithms for the single VM migration and dynamic VM consolidation problems. Furthermore, we propose novel adaptive heuristics for dynamic consolidation of VMs based on an analysis of historical data from the resource usage by VMs. The proposed algorithms significantly reduce energy consumption, while ensuring a high level of adherence to the Service Level Agreements (SLA). We validate the high efficiency of the proposed algorithms by extensive simulations using real-world workload traces from more than a thousand PlanetLab VMs. Copyright c © 2012 John Wiley & Sons, Ltd.",
"title": ""
}
] |
[
{
"docid": "4f973dfbea2cd0273d060f6917eac0af",
"text": "For an understanding of the aberrant biology seen in mouse mutations and identification of more subtle phenotype variation, there is a need for a full clinical and pathological characterization of the animals. Although there has been some use of sophisticated techniques, the majority of behavioral and functional analyses in mice have been qualitative rather than quantitative in nature. There is, however, no comprehensive routine screening and testing protocol designed to identify and characterize phenotype variation or disorders associated with the mouse genome. We have developed the SHIRPA procedure to characterize the phenotype of mice in three stages. The primary screen utilizes standard methods to provide a behavioral and functional profile by observational assessment. The secondary screen involves a comprehensive behavioral assessment battery and pathological analysis. These protocols provide the framework for a general phenotype assessment that is suitable for a wide range of applications, including the characterization of spontaneous and induced mutants, the analysis of transgenic and gene-targeted phenotypes, and the definition of variation between strains. The tertiary screening stage described is tailored to the assessment of existing or potential models of neurological disease, as well as the assessment of phenotypic variability that may be the result of unknown genetic influences. SHIRPA utilizes standardized protocols for behavioral and functional assessment that provide a sensitive measure for quantifying phenotype expression in the mouse. These paradigms can be refined to test the function of specific neural pathways, which will, in turn, contribute to a greater understanding of neurological disorders.",
"title": ""
},
{
"docid": "02a130ee46349366f2df347119831e5c",
"text": "Low power ad hoc wireless networks operate in conditions where channels are subject to fading. Cooperative diversity mitigates fading in these networks by establishing virtual antenna arrays through clustering the nodes. A cluster in a cooperative diversity network is a collection of nodes that cooperatively transmits a single packet. There are two types of clustering schemes: static and dynamic. In static clustering all nodes start and stop transmission simultaneously, and nodes do not join or leave the cluster while the packet is being transmitted. Dynamic clustering allows a node to join an ongoing cooperative transmission of a packet as soon as the packet is received. In this paper we take a broad view of the cooperative network by examining packet flows, while still faithfully implementing the physical layer at the bit level. We evaluate both clustering schemes using simulations on large multi-flow networks. We demonstrate that dynamically-clustered cooperative networks substantially outperform both statically-clustered cooperative networks and classical point-to-point networks.",
"title": ""
},
{
"docid": "6dbf49c714f6e176273317d4274b93de",
"text": "Categorical compositional distributional model of [9] sug gests a way to combine grammatical composition of the formal, type logi cal models with the corpus based, empirical word representations of distribut ional semantics. This paper contributes to the project by expanding the model to al so capture entailment relations. This is achieved by extending the representatio s of words from points in meaning space to density operators, which are probabilit y d stributions on the subspaces of the space. A symmetric measure of similarity an d an asymmetric measure of entailment is defined, where lexical entailment i s measured using von Neumann entropy, the quantum variant of Kullback-Leibl er divergence. Lexical entailment, combined with the composition map on wo rd representations, provides a method to obtain entailment relations on the leve l of sentences. Truth theoretic and corpus-based examples are provided.",
"title": ""
},
{
"docid": "1557392e8482bafe53eb50fccfd60157",
"text": "A common practice among servers in restaurants is to give their dining parties an unexpected gift in the form of candy when delivering the check. Two studies were conducted to evaluate the impact of this gesture on the tip percentages received by servers. Study 1 found that customers who received a small piece of chocolate along with the check tipped more than did customers who received no candy. Study 2 found that tips varied with the amount of the candy given to the customers as well as with the manner in which it was offered. It is argued that reciprocity is a stronger explanation for these findings than either impression management or the good mood effect.",
"title": ""
},
{
"docid": "7535a7351849c5a6dd65611037d06678",
"text": "In this paper, we present an optimistic concurrency control solution. The proposed solution represents an excellent blossom in the concurrency control field. It deals with the concurrency control anomalies, and, simultaneously, assures the reliability of the data before read-write transactions and after successfully committed. It can be used within the distributed database to track data logs and roll back processes to overcome distributed database anomalies. The method is based on commit timestamps for validation and an integer flag that is incremented each time a successful update on the record is committed.",
"title": ""
},
{
"docid": "e6e78cf1e5dc6332e872bad7321f9c16",
"text": "Structural analysis and design is often conducted under the assumption of rigid base boundary conditions, particularly if the foundation system extends to bedrock, though the extent to which the actual flexibility of the soil-foundation system affects the predicted periods of vibration depends on the application. While soil-structure interaction has mostly received attention in seismic applications, lateral flexibility below the ground surface may in some cases influence the dynamic properties of tall, flexible structures, generally greater than 50 stories and dominated by wind loads. This study will explore this issue and develop a hybrid framework within which these effects can be captured and eventually be applied to existing finite element models of two tall buildings in the Chicago Full-Scale Monitoring Program. It is hypothesized that the extent to which the rigid base condition assumption applies in these buildings depends on the relative role of cantilever and frame actions in their structural systems. In this hybrid approach, the lateral and axial flexibility of the foundation systems are first determined in isolation and then introduced to the existing finite element models of the buildings as springs, replacing the rigid boundary conditions assumed by designers in the original finite element model development. The evaluation of the periods predicted by this hybrid framework, validated against companion studies and full-scale data, are used to quantify the sensitivity of foundation modeling to the super-structural system primary deformation mechanisms and soil type. Not only will this study demonstrate the viability of this hybrid approach, but also illustrate situations under which foundation flexibility in various degrees of freedom should be considered in the modeling process.",
"title": ""
},
{
"docid": "26b38a6dc48011af80547171a9f3ecbd",
"text": "This work addresses two classification problems that fall under the heading of domain adaptation, wherein the distributions of training and testing examples differ. The first problem studied is that of class proportion estimation, which is the problem of estimating the class proportions in an unlabeled testing data set given labeled examples of each class. Compared to previous work on this problem, our approach has the novel feature that it does not require labeled training data from one of the classes. This property allows us to address the second domain adaptation problem, namely, multiclass anomaly rejection. Here, the goal is to design a classifier that has the option of assigning a “reject” label, indicating that the instance did not arise from a class present in the training data. We establish consistent learning strategies for both of these domain adaptation problems, which to our knowledge are the first of their kind. We also implement the class proportion estimation technique and demonstrate its performance on several benchmark data sets.",
"title": ""
},
{
"docid": "b50498964a73a59f54b3a213f2626935",
"text": "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and it is then fine-tuned to recover its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.",
"title": ""
},
{
"docid": "85fe68b957a8daa69235ef65d92b1990",
"text": "Although Neural Machine Translation (NMT) models have advanced state-of-the-art performance in machine translation, they face problems like the inadequate translation. We attribute this to that the standard Maximum Likelihood Estimation (MLE) cannot judge the real translation quality due to its several limitations. In this work, we propose an adequacyoriented learning mechanism for NMT by casting translation as a stochastic policy in Reinforcement Learning (RL), where the reward is estimated by explicitly measuring translation adequacy. Benefiting from the sequence-level training of RL strategy and a more accurate reward designed specifically for translation, our model outperforms multiple strong baselines, including (1) standard and coverage-augmented attention models with MLE-based training, and (2) advanced reinforcement and adversarial training strategies with rewards based on both word-level BLEU and character-level CHRF3. Quantitative and qualitative analyses on different language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach.",
"title": ""
},
{
"docid": "c0ef15616ba357cb522b828e03a5298c",
"text": "This paper introduces the compact genetic algorithm (cGA) which represents the population as a probability distribution over the set of solutions and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover. It processes each gene independently and requires less memory than the simple GA. The development of the compact GA is guided by a proper understanding of the role of the GA’s parameters and operators. The paper clearly illustrates the mapping of the simple GA’s parameters into those of an equivalent compact GA. Computer simulations compare both algorithms in terms of solution quality and speed. Finally, this work raises important questions about the use of information in a genetic algorithm, and its ramifications show us a direction that can lead to the design of more efficient GA’s.",
"title": ""
},
{
"docid": "936353c90f0e0ce7946a11b4a60d494c",
"text": "This paper deals with multi-class classification problems. Many methods extend binary classifiers to operate a multi-class task, with strategies such as the one-vs-one and the one-vs-all schemes. However, the computational cost of such techniques is highly dependent on the number of available classes. We present a method for multi-class classification, with a computational complexity essentially independent of the number of classes. To this end, we exploit recent developments in multifunctional optimization in machine learning. We show that in the proposed algorithm, labels only appear in terms of inner products, in the same way as input data emerge as inner products in kernel machines via the so-called the kernel trick. Experimental results on real data show that the proposed method reduces efficiently the computational time of the classification task without sacrificing its generalization ability.",
"title": ""
},
{
"docid": "79b3dc474bc2a75185c6cb7486ad7dde",
"text": "BACKGROUND\nCanine rabies causes many thousands of human deaths every year in Africa, and continues to increase throughout much of the continent.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nThis paper identifies four common reasons given for the lack of effective canine rabies control in Africa: (a) a low priority given for disease control as a result of lack of awareness of the rabies burden; (b) epidemiological constraints such as uncertainties about the required levels of vaccination coverage and the possibility of sustained cycles of infection in wildlife; (c) operational constraints including accessibility of dogs for vaccination and insufficient knowledge of dog population sizes for planning of vaccination campaigns; and (d) limited resources for implementation of rabies surveillance and control. We address each of these issues in turn, presenting data from field studies and modelling approaches used in Tanzania, including burden of disease evaluations, detailed epidemiological studies, operational data from vaccination campaigns in different demographic and ecological settings, and economic analyses of the cost-effectiveness of dog vaccination for human rabies prevention.\n\n\nCONCLUSIONS/SIGNIFICANCE\nWe conclude that there are no insurmountable problems to canine rabies control in most of Africa; that elimination of canine rabies is epidemiologically and practically feasible through mass vaccination of domestic dogs; and that domestic dog vaccination provides a cost-effective approach to the prevention and elimination of human rabies deaths.",
"title": ""
},
{
"docid": "021c7631ac1ac3c47029468563f8d310",
"text": "It is widely accepted that variable names in computer programs should be meaningful, and that this aids program comprehension. \"Meaningful\" is commonly interpreted as favoring long descriptive names. However, there is at least some use of short and even single-letter names: using 'i' in loops is very common, and we show (by extracting variable names from 1000 popular github projects in 5 languages) that some other letters are also widely used. In addition, controlled experiments with different versions of the same functions (specifically, different variable names) failed to show significant differences in ability to modify the code. Finally, an online survey showed that certain letters are strongly associated with certain types and meanings. This implies that a single letter can in fact convey meaning. The conclusion from all this is that single letter variables can indeed be used beneficially in certain cases, leading to more concise code.",
"title": ""
},
{
"docid": "9cae19b4d3b4a8258b1013a9895a6c91",
"text": "Research has mainly neglected to examine if the possible antagonism of play/games and seriousness affects the educational potential of serious gaming. This article follows a microsociological approach and treats play and seriousness as different social frames, with each being indicated by significant symbols and containing unique social rules, adequate behavior and typical consequences of action. It is assumed that due to the specific qualities of these frames, serious frames are perceived as more credible but less entertaining than playful frames – regardless of subject matter. Two empirical studies were conducted to test these hypotheses. Results partially confirm expectations, but effects are not as strong as assumed and sometimes seem to be moderated by further variables, such as gender and attitudes. Overall, this article demonstrates that the educational potential of serious gaming depends not only on media design, but also on social context and personal variables.",
"title": ""
},
{
"docid": "7df6898369d5e307610f43c59ff048ea",
"text": "In the industrial fields, Mecanum robots have been widely used. The Mecanum Wheel can do omnidirectional movements by electric machinery drive. It's more flexible than ordinary robots. It has massive potential in some situation which has small space. The robots with control system can complete the function of location and the calculation of optimal route. The Astar algorithm is most common mothed. However, Due to the orthogonal turning point, this algorithm takes a lot of Adjusting time. The Improved algorithm raised in this paper can reduce the occurrence of orthogonal turning point. It can generate a new smooth path automatically. This method can greatly reduce the time of the motion of the path. At the same time, it is difficult to obtain satisfactory performance by using the traditional control algorithm because of the complicated road conditions and the difficulty of establishing the model of the robot, so we use fuzzy algorithm to control robots. In fuzzy algorithm, the use of static membership function will affect the control effect, therefore, for complex control environment, using PSO algorithm to dynamically determine the membership function. It can effectively improve the motion performance and improve the dynamic characteristics and the adjustment time of the robot.",
"title": ""
},
{
"docid": "f28170dcc3c4949c27ee609604c53bc2",
"text": "Debates over Cannabis sativa L. and C. indica Lam. center on their taxonomic circumscription and rank. This perennial puzzle has been compounded by the viral spread of a vernacular nomenclature, “Sativa” and “Indica,” which does not correlate with C. sativa and C. indica. Ambiguities also envelop the epithets of wild-type Cannabis: the spontanea versus ruderalis debate (i.e., vernacular “Ruderalis”), as well as another pair of Cannabis epithets, afghanica and kafirstanica. To trace the rise of vernacular nomenclature, we begin with the protologues (original descriptions, synonymies, type specimens) of C. sativa and C. indica. Biogeographical evidence (obtained from the literature and herbarium specimens) suggests 18th–19th century botanists were biased in their assignment of these taxa to field specimens. This skewed the perception of Cannabis biodiversity and distribution. The development of vernacular “Sativa,” “Indica,” and “Ruderalis” was abetted by twentieth century botanists, who ignored original protologues and harbored their own cultural biases. Predominant taxonomic models by Vavilov, Small, Schultes, de Meijer, and Hillig are compared and critiqued. Small’s model adheres closest to protologue data (with C. indica treated as a subspecies). “Sativa” and “Indica” are subpopulations of C. sativa subsp. indica; “Ruderalis” represents a protean assortment of plants, including C. sativa subsp. sativa and recent hybrids.",
"title": ""
},
{
"docid": "d056e5ea017eb3e5609dcc978e589158",
"text": "In this paper we study and evaluate rumor-like methods for combating the spread of rumors on a social network. We model rumor spread as a diffusion process on a network and suggest the use of an \"anti-rumor\" process similar to the rumor process. We study two natural models by which these anti-rumors may arise. The main metrics we study are the belief time, i.e., the duration for which a person believes the rumor to be true and point of decline, i.e., point after which anti-rumor process dominates the rumor process. We evaluate our methods by simulating rumor spread and anti-rumor spread on a data set derived from the social networking site Twitter and on a synthetic network generated according to the Watts and Strogatz model. We find that the lifetime of a rumor increases if the delay in detecting it increases, and the relationship is at least linear. Further our findings show that coupling the detection and anti-rumor strategy by embedding agents in the network, we call them beacons, is an effective means of fighting the spread of rumor, even if these beacons do not share information.",
"title": ""
},
{
"docid": "a457545baa59e39e6ef6d7e0d2f9c02e",
"text": "The domain adaptation problem in machine learning occurs when the test data generating distribution differs from the one that generates the training data. It is clear that the success of learning under such circumstances depends on similarities between the two data distributions. We study assumptions about the relationship between the two distributions that one needed for domain adaptation learning to succeed. We analyze the assumptions in an agnostic PAC-style learning model for a the setting in which the learner can access a labeled training data sample and an unlabeled sample generated by the test data distribution. We focus on three assumptions: (i) similarity between the unlabeled distributions, (ii) existence of a classifier in the hypothesis class with low error on both training and testing distributions, and (iii) the covariate shift assumption. I.e., the assumption that the conditioned label distribution (for each data point) is the same for both the training and test distributions. We show that without either assumption (i) or (ii), the combination of the remaining assumptions is not sufficient to guarantee successful learning. Our negative results hold with respect to any domain adaptation learning algorithm, as long as it does not have access to target labeled examples. In particular, we provide formal proofs that the popular covariate shift assumption is rather weak and does not relieve the necessity of the other assumptions. We also discuss the intuitively appealing Appearing in Proceedings of the 13 International Conference on Artificial Intelligence and Statistics (AISTATS) 2010, Chia Laguna Resort, Sardinia, Italy. Volume 9 of JMLR: W&CP 9. Copyright 2010 by the authors. paradigm of re-weighting the labeled training sample according to the target unlabeled distribution and show that, somewhat counter intuitively, we show that paradigm cannot be trusted in the following sense. There are DA tasks that are indistinguishable as far as the training data goes but in which re-weighting leads to significant improvement in one task while causing dramatic deterioration of the learning success in the other.",
"title": ""
},
{
"docid": "49e1dc71e71b45984009f4ee20740763",
"text": "The ecosystem of open source software (OSS) has been growing considerably in size. In addition, code clones - code fragments that are copied and pasted within or between software systems - are also proliferating. Although code cloning may expedite the process of software development, it often critically affects the security of software because vulnerabilities and bugs can easily be propagated through code clones. These vulnerable code clones are increasing in conjunction with the growth of OSS, potentially contaminating many systems. Although researchers have attempted to detect code clones for decades, most of these attempts fail to scale to the size of the ever-growing OSS code base. The lack of scalability prevents software developers from readily managing code clones and associated vulnerabilities. Moreover, most existing clone detection techniques focus overly on merely detecting clones and this impairs their ability to accurately find \"vulnerable\" clones. In this paper, we propose VUDDY, an approach for the scalable detection of vulnerable code clones, which is capable of detecting security vulnerabilities in large software programs efficiently and accurately. Its extreme scalability is achieved by leveraging function-level granularity and a length-filtering technique that reduces the number of signature comparisons. This efficient design enables VUDDY to preprocess a billion lines of code in 14 hour and 17 minutes, after which it requires a few seconds to identify code clones. In addition, we designed a security-aware abstraction technique that renders VUDDY resilient to common modifications in cloned code, while preserving the vulnerable conditions even after the abstraction is applied. This extends the scope of VUDDY to identifying variants of known vulnerabilities, with high accuracy. In this study, we describe its principles and evaluate its efficacy and effectiveness by comparing it with existing mechanisms and presenting the vulnerabilities it detected. VUDDY outperformed four state-of-the-art code clone detection techniques in terms of both scalability and accuracy, and proved its effectiveness by detecting zero-day vulnerabilities in widely used software systems, such as Apache HTTPD and Ubuntu OS Distribution.",
"title": ""
},
{
"docid": "aa7d94bebbd988af48bc7cb9f5e35a39",
"text": "Over the recent years, embedding methods have attracted increasing focus as a means for knowledge graph completion. Similarly, rule-based systems have been studied for this task in the past. What is missing so far is a common evaluation that includes more than one type of method. We close this gap by comparing representatives of both types of systems in a frequently used evaluation protocol. Leveraging the explanatory qualities of rule-based systems, we present a fine-grained evaluation that gives insight into characteristics of the most popular datasets and points out the different strengths and shortcomings of the examined approaches. Our results show that models such as TransE, RESCAL or HolE have problems in solving certain types of completion tasks that can be solved by a rulebased approach with high precision. At the same time, there are other completion tasks that are difficult for rule-based systems. Motivated by these insights, we combine both families of approaches via ensemble learning. The results support our assumption that the two methods complement each other in a beneficial way.",
"title": ""
}
] |
scidocsrr
|
53ac422681d002219e89783a9340f510
|
Why logical clocks are easy
|
[
{
"docid": "7530de11afdbb1e09c363644b0866bcb",
"text": "The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying cost and performance that depend on the degree of ordering desired. In particular, a protocol that enforces causal delivery orderings is introduced and shown to be a valuable alternative to conventional asynchronous communication protocols. The facility also ensures that the processes belonging to a fault-tolerant process group will observe consistent orderings of events affecting the group as a whole, including process failures, recoveries, migration, and dynamic changes to group properties like member rankings. A review of several uses for the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher level algorithms made possible by our approach.",
"title": ""
}
] |
[
{
"docid": "a95b9fbd2f5f6373fb9d04a29f1beab3",
"text": "Discovering and accessing hydrologic and climate data for use in research or water management can be a difficult task that consumes valuable time and personnel resources. Until recently, this task required discovering and navigating many different data repositories, each having its ownwebsite, query interface, data formats, and descriptive language. New advances in cyberinfrastructure and in semantic mediation technologies have provided the means for creating better tools supporting data discovery and access. In this paper we describe a freely available and open source software tool, called HydroDesktop, that can be used for discovering, downloading, managing, visualizing, and analyzing hydrologic data. HydroDesktop was created as a means for searching across and accessing hydrologic data services that have been published using the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS). We describe the design and architecture of HydroDesktop, its novel contributions in web services-based hydrologic data search and discovery, and its unique extensibility interface that enables developers to create custom data analysis and visualization plug-ins. The functionality of HydroDesktop and some of its existing plug-ins are introduced in the context of a case study for discovering, downloading, and visualizing data within the Bear River Watershed in Idaho, USA. 2012 Elsevier Ltd. All rights reserved. Software availability All CUAHSI HydroDesktop software and documentation can be accessed at http://his.cuahsi.org. Source code and additional documentation for HydroDesktop can be accessed at the HydroDesktop code repository website http://hydrodesktop.codeplex. com. HydroDesktop and its source code are released under the New Berkeley Software Distribution (BSD) License which allows for liberal reuse of the software and code.",
"title": ""
},
{
"docid": "a96d8a1763da1e806a8044f2b9338507",
"text": "Performing cellular long term evolution (LTE) communications in unlicensed spectrum using licensed assisted access LTE (LTE-LAA) is a promising approach to overcome wireless spectrum scarcity. However, to reap the benefits of LTE-LAA, a fair coexistence mechanism with other incumbent WiFi deployments is required. In this paper, a novel deep learning approach is proposed for modeling the resource allocation problem of LTE-LAA small base stations (SBSs). The proposed approach enables multiple SBSs to proactively perform dynamic channel selection, carrier aggregation, and f ractional spectrum access while guaranteeing fairness with existing WiFi networks and other LTE-LAA operators. Adopting a proactive coexistence mechanism enables future delay-tolerant LTE-LAA data demands to be served within a given prediction window ahead of their actual arrival time thus avoiding the underutilization of the unlicensed spectrum during off-peak hours while maximizing the total served LTE-LAA traffic load. To this end, a noncooperative game model is formulated in which SBSs are modeled as homo egualis agents that aim at predicting a sequence of future actions and thus achieving long-term equal weighted fairness with wireless local area network and other LTE-LAA operators over a given time horizon. The proposed deep learning algorithm is then shown to reach a mixed-strategy Nash equilibrium, when it converges. Simulation results using real data traces show that the proposed scheme can yield up to 28% and 11% gains over a conventional reactive approach and a proportional fair coexistence mechanism, respectively. The results also show that the proposed framework prevents WiFi performance degradation for a densely deployed LTE-LAA network.",
"title": ""
},
{
"docid": "f597c21404b091c0f4046b7c6429c98c",
"text": "We report on an architecture for the unsupervised discovery of talker-invariant subword embeddings. It is made out of two components: a dynamic-time warping based spoken term discovery (STD) system and a Siamese deep neural network (DNN). The STD system clusters word-sized repeated fragments in the acoustic streams while the DNN is trained to minimize the distance between time aligned frames of tokens of the same cluster, and maximize the distance between tokens of different clusters. We use additional side information regarding the average duration of phonemic units, as well as talker identity tags. For evaluation we use the datasets and metrics of the Zero Resource Speech Challenge. The model shows improvement over the baseline in subword unit modeling.",
"title": ""
},
{
"docid": "2b68a925b9056e150a67d794b993e7c7",
"text": "The rise and development of O2O e-commerce has brought new opportunities for the enterprise, and also proposed the new challenge to the traditional electronic commerce. The formation process of customer loyalty of O2O e-commerce environment is a complex psychological process. This paper will combine the characteristics of O2O e-commerce, customer's consumer psychology and consumer behavior characteristics to build customer loyalty formation mechanism model which based on the theory of reasoned action model. The related factors of the model including the customer perceived value, customer satisfaction, customer trust and customer switching costs. By exploring the factors affecting customer’ loyalty of O2O e-commerce can provide reference and basis for enterprises to develop e-commerce and better for O2O e-commerce enterprises to develop marketing strategy and enhance customer loyalty. At the end of this paper will also put forward some targeted suggestions for O2O e-commerce enterprises.",
"title": ""
},
{
"docid": "a93cd1c2e04e2d33c5174f18909dae9d",
"text": "For more than two decades, the key objective for synthesis of linear decompressors has been maximizing encoding efficiency. For combinational decompressors, encoding satisfiability is dynamically checked for each specified care bit. By contrast, for sequential linear decompressors (e.g. PRPGs), encoding is performed for each test cube; the resultant static encoding considers that a test cube is encodable only if all of its care bits are encodable. The paper introduces a new class of sequential linear decompressors that provides a trade-off between the computational complexity and the encoding efficiency of linear encoding. As a result, it becomes feasible to dynamically encode care bits before a test cube has been completed, and derive decompressor-implied scan cell values during test generation. The resultant dynamic encoding enables an identification of encoding conflicts during branch-and-bound search and a reduction of search space for dynamic compaction. Experimental results demonstrate that dynamic encoding consistently outperforms static encoding in a wide range of compression ratios.",
"title": ""
},
{
"docid": "5f068a11901763af752df9480b97e0c0",
"text": "Beginning with a brief review of CMOS scaling trends from 1 m to 0.1 m, this paper examines the fundamental factors that will ultimately limit CMOS scaling and considers the design issues near the limit of scaling. The fundamental limiting factors are electron thermal energy, tunneling leakage through gate oxide, and 2D electrostatic scale length. Both the standby power and the active power of a processor chip will increase precipitously below the 0.1m or 100-nm technology generation. To extend CMOS scaling to the shortest channel length possible while still gaining significant performance benefit, an optimized, vertically and laterally nonuniform doping design (superhalo) is presented. It is projected that room-temperature CMOS will be scaled to 20-nm channel length with the superhalo profile. Low-temperature CMOS allows additional design space to further extend CMOS scaling to near 10 nm.",
"title": ""
},
{
"docid": "76cd577955213ce193dcc5c821e05cf6",
"text": "Although much biological research depends upon species diagnoses, taxonomic expertise is collapsing. We are convinced that the sole prospect for a sustainable identification capability lies in the construction of systems that employ DNA sequences as taxon 'barcodes'. We establish that the mitochondrial gene cytochrome c oxidase I (COI) can serve as the core of a global bioidentification system for animals. First, we demonstrate that COI profiles, derived from the low-density sampling of higher taxonomic categories, ordinarily assign newly analysed taxa to the appropriate phylum or order. Second, we demonstrate that species-level assignments can be obtained by creating comprehensive COI profiles. A model COI profile, based upon the analysis of a single individual from each of 200 closely allied species of lepidopterans, was 100% successful in correctly identifying subsequent specimens. When fully developed, a COI identification system will provide a reliable, cost-effective and accessible solution to the current problem of species identification. Its assembly will also generate important new insights into the diversification of life and the rules of molecular evolution.",
"title": ""
},
{
"docid": "f526fac71caa2bc5709fdf724eedd6b7",
"text": "Anomaly based intrusion detection systems suffer from a lack of appropriate evaluation data sets. Often, existing data sets may not be published due to privacy concerns or do not reflect actual and current attack scenarios. In order to overcome these problems, we identify characteristics of good data sets and develop an appropriate concept for the generation of labelled flow-based data sets that satisfy these criteria. The concept is implemented based on OpenStack, thus demonstrating the suitability of virtual environments. Virtual environments offer advantages compared to static data sets by easily creating up-to-date data sets with recent trends in user behaviour and new attack scenarios. In particular, we emulate a small business environment which includes several clients and typical servers. Network traffic is generated by scripts which emulate typical user activities like surfing the web, writing emails, or printing documents on the clients. These scripts follow some guidelines to ensure that the user behaviour is as realistic as possible, also with respect to working hours and lunch breaks. The generated network traffic is recorded in unidirectional NetFlow format. For generating malicious traffic, attacks like Denial of Service, Brute Force, and Port Scans are executed within the network. Since origins, targets, and timestamps of executed attacks are known, labelling of recorded NetFlow data is easily possible. For inclusion of actual traffic, which has its origin outside the OpenStack environment, an external server with two services is deployed. This server has a public IP address and is exposed to real and up-to-date attacks from the internet. We captured approximately 32 million flows over a period of four weeks and categorized them into five classes. Further, the chronological sequence of the flows is analysed and the distribution of normal and malicious traffic is discussed in detail. The main contribution of this paper is the demonstration of a novel approach to use OpenStack as a basis for generating realistic data sets that can be used for the evaluation of network intrusion detection systems.",
"title": ""
},
{
"docid": "2d3649fb154297553ba913380d65a5f3",
"text": "Predictive analytics is a group of methods that uses statistical and other empirical techniques to predict future events, based on past occurrences. Predictive analytics can generate valuable information for the management of a supply chain company to improve decision-making. Even though the importance of the topic is clear, there is no clear overview of the use of predictive analytics in the supply chain currently. This research provides a state-ofthe-art by performing a systematic literature review. In the literature review we have found the models, methods, techniques and applications of predictive analytics in the supply chain and have determined the trends and literature gaps. The most important finding is that even though there is only a limited amount of literature available, the interest in this topic is growing gradually. We also provide future research directions for further research on this subject.",
"title": ""
},
{
"docid": "b54a2d0350ceac52ed92565af267b6e2",
"text": "In this paper, we address the problem of classifying image sets for face recognition, where each set contains images belonging to the same subject and typically covering large variations. By modeling each image set as a manifold, we formulate the problem as the computation of the distance between two manifolds, called manifold-manifold distance (MMD). Since an image set can come in three pattern levels, point, subspace, and manifold, we systematically study the distance among the three levels and formulate them in a general multilevel MMD framework. Specifically, we express a manifold by a collection of local linear models, each depicted by a subspace. MMD is then converted to integrate the distances between pairs of subspaces from one of the involved manifolds. We theoretically and experimentally study several configurations of the ingredients of MMD. The proposed method is applied to the task of face recognition with image sets, where identification is achieved by seeking the minimum MMD from the probe to the gallery of image sets. Our experiments demonstrate that, as a general set similarity measure, MMD consistently outperforms other competing nondiscriminative methods and is also promisingly comparable to the state-of-the-art discriminative methods.",
"title": ""
},
{
"docid": "0570bf6abea7b8c4dcad1fb05b9672c6",
"text": "The purpose of this chapter is to describe some similarities, as well as differences, between theoretical proposals emanating from the tradition of phenomenology and the currently popular approach to language and cognition known as cognitive linguistics (hence CL). This is a rather demanding and potentially controversial topic. For one thing, neither CL nor phenomenology constitute monolithic theories, and are actually rife with internal controversies. This forces me to make certain “schematizations”, since it is impossible to deal with the complexity of these debates in the space here allotted.",
"title": ""
},
{
"docid": "7e14a72a5bdb6d053e04de9b5e54d495",
"text": "This paper presents a power-efficient SNR enhancement technique for SAR ADCs. By accurately estimating the conversion residue, it can suppress both comparator noise and quantization error. Thus, it allows the use of a noisy low-power comparator and a relatively low resolution DAC to achieve high resolution. The proposed technique has low hardware complexity, requiring no change to the standard ADC operation except for repeating the LSB comparisons. A prototype ADC is designed in 65nm CMOS. Its SNR is improved by 7dB with the proposed technique. Overall, it achieves 10.5-b ENOB while operating at 100kS/s and consuming 645nW from a 0.7V power supply.",
"title": ""
},
{
"docid": "f99f522836431aae3e3f98564bcfc125",
"text": "Malaysia is a developing country and government’s urbanization policy in 1980s has encouraged migration of rural population to urban centres, consistent with the shift of economy orientation from agriculture base to industrial base. At present about 60% Malaysian live in urban areas. Live demands and labour shortage in industrial sector have forced mothers to join labour force. At present there are about 65% mothers with children below 15 years of age working fulltime outside homes. Issues related to parenting and children’s development becomes crucial especially in examination oriented society like Malaysia. Using 200 families as sample this study attempted to examine effects of parenting styles of dual-earner families on children behaviour and school achievement. Results of the study indicates that for mothers and fathers authoritative style have positive effects on children behaviour and school achievement. In contrast, the permissive and authoritarian styles have negative effects on children behaviour and school achievement. Effects of findings on children development are discussed.",
"title": ""
},
{
"docid": "cafdc8bb8b86171026d5a852e7273486",
"text": "A majority of the existing algorithms which mine graph datasets target complete, frequent sub-graph discovery. We describe the graph-based data mining system Subdue which focuses on the discovery of sub-graphs which are not only frequent but also compress the graph dataset, using a heuristic algorithm. The rationale behind the use of a compression-based methodology for frequent pattern discovery is to produce a fewer number of highly interesting patterns than to generate a large number of patterns from which interesting patterns need to be identified. We perform an experimental comparison of Subdue with the graph mining systems gSpan and FSG on the Chemical Toxicity and the Chemical Compounds datasets that are provided with gSpan. We present results on the performance on the Subdue system on the Mutagenesis and the KDD 2003 Citation Graph dataset. An analysis of the results indicates that Subdue can efficiently discover best-compressing frequent patterns which are fewer in number but can be of higher interest.",
"title": ""
},
{
"docid": "ccd64b0be6fee634e928206867ab4116",
"text": "CASE REPORT A 55 year old female was referred for investigation and possible surgery of a thyroid swelling. She had smoked 30 cigarettes per day for many years. Past medical history consisted of insertion of bilateral silicone breast implants 10 years previously. Clinical examination suggested a multinodular goitre and identified slight thickening superior to the left breast implant. Investigations revealed normal blood tests, and a multinodular goitre was confirmed on ultrasound scan. Routine chest X-ray (Fig. 1) identified an opacity in the left upper lobe showing features suggestive of a primary lung tumour. However, a lateral view failed to detect an abnormality in the thoracic cavity. CT scanning, performed with a view to percutaneous biopsy, revealed that the \"lung tumour\" was in fact related to the silicone implant (Fig. 2). Subsequent surgery confirmed rupture of the left breast prosthesis.",
"title": ""
},
{
"docid": "c7f0856c282d1039e44ba6ef50948d32",
"text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.",
"title": ""
},
{
"docid": "e59b7782cefc46191d36ba7f59d2f2b8",
"text": "Music is capable of evoking exceptionally strong emotions and of reliably affecting the mood of individuals. Functional neuroimaging and lesion studies show that music-evoked emotions can modulate activity in virtually all limbic and paralimbic brain structures. These structures are crucially involved in the initiation, generation, detection, maintenance, regulation and termination of emotions that have survival value for the individual and the species. Therefore, at least some music-evoked emotions involve the very core of evolutionarily adaptive neuroaffective mechanisms. Because dysfunctions in these structures are related to emotional disorders, a better understanding of music-evoked emotions and their neural correlates can lead to a more systematic and effective use of music in therapy.",
"title": ""
},
{
"docid": "1167ab5a79d1c29adcf90e2b0c28a79e",
"text": "Prior research has shown that within a racial category, people with more Afrocentric facial features are presumed more likely to have traits that are stereotypic of Black Americans compared with people with less Afrocentric features. The present study investigated whether this form of feature-based stereotyping might be observed in criminal-sentencing decisions. Analysis of a random sample of inmate records showed that Black and White inmates, given equivalent criminal histories, received roughly equivalent sentences. However, within each race, inmates with more Afrocentric features received harsher sentences than those with less Afrocentric features. These results are consistent with laboratory findings, and they suggest that although racial stereotyping as a function of racial category has been successfully removed from sentencing decisions, racial stereotyping based on the facial features of the offender is a form of bias that is largely overlooked.",
"title": ""
},
{
"docid": "bdd8384f470fbcbd48cec83585b7eeae",
"text": "Healthy diet with balanced nutrition is key to the prevention of life-threatening diseases such as obesity, cardiovascular disease, and cancer. Recent advances in smartphone and wearable sensor technologies have led to a proliferation of food monitoring applications based on automated food image processing and eating episode detection, with the goal to conquer drawbacks of the traditional manual food journaling that is time consuming, inaccurate, underreporting, and low adherent. In order to provide users feedback with nutritional information accompanied by insightful dietary advice, various techniques in light of the key computational learning principles have been explored. This survey presents a variety of methodologies and resources on this topic, along with unsolved problems, and closes with a perspective and boarder implications of this field.",
"title": ""
},
{
"docid": "15f6b6be4eec813fb08cb3dd8b9c97f2",
"text": "ACKNOWLEDGEMENTS First, I would like to thank my supervisor Professor H. Levent Akın for his guidance. This thesis would not have been possible without his encouragement and enthusiastic support. I would also like to thank all the staff at the Artificial Intelligence Laboratory for their encouragement throughout the year. Their success in RoboCup is always a good motivation. Sharing their precious ideas during the weekly seminars have always guided me to the right direction. Finally I am deeply grateful to my family and to my wife Derya. They always give me endless love and support, which has helped me to overcome the various challenges along the way. Thank you for your patience... The field of Intelligent Transport Systems (ITS) is improving rapidly in the world. Ultimate aim of such systems is to realize fully autonomous vehicle. The researches in the field offer the potential for significant enhancements in safety and operational efficiency. Lane tracking is an important topic in autonomous navigation because the navigable region usually stands between the lanes, especially in urban environments. Several approaches have been proposed, but Hough transform seems to be the dominant among all. A robust lane tracking method is also required for reducing the effect of the noise and achieving the required processing time. In this study, we present a new lane tracking method which uses a partitioning technique for obtaining Multiresolution Hough Transform (MHT) of the acquired vision data. After the detection process, a Hidden Markov Model (HMM) based method is proposed for tracking the detected lanes. Traffic signs are important instruments to indicate the rules on roads. This makes them an essential part of the ITS researches. It is clear that leaving traffic signs out of concern will cause serious consequences. Although the car manufacturers have started to deploy intelligent sign detection systems on their latest models, the road conditions and variations of actual signs on the roads require much more robust and fast detection and tracking methods. Localization of such systems is also necessary because traffic signs differ slightly between countries. This study also presents a fast and robust sign detection and tracking method based on geometric transformation and genetic algorithms (GA). Detection is done by a genetic algorithm (GA) approach supported by a radial symmetry check so that false alerts are considerably reduced. Classification v is achieved by a combination of SURF features with NN or SVM classifiers. A heuristic …",
"title": ""
}
] |
scidocsrr
|
2e7d78ea417684563f9c27165e3cbcd8
|
Generating Diverse Numbers of Diverse Keyphrases
|
[
{
"docid": "54d3d5707e50b979688f7f030770611d",
"text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.",
"title": ""
},
{
"docid": "ec37e61fcac2639fa6e605b362f2a08d",
"text": "Keyphrases that efficiently summarize a document’s content are used in various document processing and retrieval tasks. Current state-of-the-art techniques for keyphrase extraction operate at a phrase-level and involve scoring candidate phrases based on features of their component words. In this paper, we learn keyphrase taggers for research papers using token-based features incorporating linguistic, surfaceform, and document-structure information through sequence labeling. We experimentally illustrate that using withindocument features alone, our tagger trained with Conditional Random Fields performs on-par with existing state-of-the-art systems that rely on information from Wikipedia and citation networks. In addition, we are also able to harness recent work on feature labeling to seamlessly incorporate expert knowledge and predictions from existing systems to enhance the extraction performance further. We highlight the modeling advantages of our keyphrase taggers and show significant performance improvements on two recently-compiled datasets of keyphrases from Computer Science research papers.",
"title": ""
},
{
"docid": "97838cc3eb7b31d49db6134f8fc81c84",
"text": "We study the problem of semi-supervised question answering—-utilizing unlabeled text to boost the performance of question answering models. We propose a novel training framework, the Generative Domain-Adaptive Nets. In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models. We develop novel domain adaptation algorithms, based on reinforcement learning, to alleviate the discrepancy between the modelgenerated data distribution and the humangenerated data distribution. Experiments show that our proposed framework obtains substantial improvement from unlabeled text.",
"title": ""
},
{
"docid": "8f916f7be3048ae2a367096f4f82207d",
"text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.",
"title": ""
},
{
"docid": "73bf620a97b2eadeb2398dd718b85fe8",
"text": "The Semeval task 5 was an opportunity for experimenting with the key term extraction module of GROBID, a system for extracting and generating bibliographical information from technical and scientific documents. The tool first uses GROBID’s facilities for analyzing the structure of scientific articles, resulting in a first set of structural features. A second set of features captures content properties based on phraseness, informativeness and keywordness measures. Two knowledge bases, GRISP and Wikipedia, are then exploited for producing a last set of lexical/semantic features. Bagged decision trees appeared to be the most efficient machine learning algorithm for generating a list of ranked key term candidates. Finally a post ranking was realized based on statistics of cousage of keywords in HAL, a large Open Access publication repository.",
"title": ""
},
{
"docid": "1593fd6f9492adc851c709e3dd9b3c5f",
"text": "This paper addresses the problem of extracting keyphrases from scientific articles and categorizing them as corresponding to a task, process, or material. We cast the problem as sequence tagging and introduce semi-supervised methods to a neural tagging model, which builds on recent advances in named entity recognition. Since annotated training data is scarce in this domain, we introduce a graph-based semi-supervised algorithm together with a data selection scheme to leverage unannotated articles. Both inductive and transductive semi-supervised learning strategies outperform state-of-the-art information extraction performance on the 2017 SemEval Task 10 ScienceIE task.",
"title": ""
}
] |
[
{
"docid": "dcd9a430a69fc3a938ea1068273627ff",
"text": "Background Nursing theory should provide the principles that underpin practice and help to generate further nursing knowledge. However, a lack of agreement in the professional literature on nursing theory confuses nurses and has caused many to dismiss nursing theory as irrelevant to practice. This article aims to identify why nursing theory is important in practice. Conclusion By giving nurses a sense of identity, nursing theory can help patients, managers and other healthcare professionals to recognise the unique contribution that nurses make to the healthcare service ( Draper 1990 ). Providing a definition of nursing theory also helps nurses to understand their purpose and role in the healthcare setting.",
"title": ""
},
{
"docid": "9636c75bdbbd7527abdd8fbac1466d55",
"text": "Predicting the occurrence of a particular event of interest at future time points is the primary goal of survival analysis. The presence of incomplete observations due to time limitations or loss of data traces is known as censoring which brings unique challenges in this domain and differentiates survival analysis from other standard regression methods. The popularly used survival analysis methods such as Cox proportional hazard model and parametric survival regression suffer from some strict assumptions and hypotheses that are not realistic in most of the real-world applications. To overcome the weaknesses of these two types of methods, in this paper, we reformulate the survival analysis problem as a multi-task learning problem and propose a new multi-task learning based formulation to predict the survival time by estimating the survival status at each time interval during the study duration. We propose an indicator matrix to enable the multi-task learning algorithm to handle censored instances and incorporate some of the important characteristics of survival problems such as non-negative non-increasing list structure into our model through max-heap projection. We employ the L2,1-norm penalty which enables the model to learn a shared representation across related tasks and hence select important features and alleviate over-fitting in high-dimensional feature spaces; thus, reducing the prediction error of each task. To efficiently handle the two non-smooth constraints, in this paper, we propose an optimization method which employs Alternating Direction Method of Multipliers (ADMM) algorithm to solve the proposed multi-task learning problem. We demonstrate the performance of the proposed method using real-world microarray gene expression high-dimensional benchmark datasets and show that our method outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "cbdace4636017f925b89ecf266fde019",
"text": "It is traditionally known that wideband apertures lose bandwidth when placed over a ground plane. To overcome this issue, this paper introduces a new non-symmetric tightly coupled dipole element for wideband phased arrays. The proposed array antenna incorporates additional degrees of freedom to control capacitance and cancel the ground plane inductance. Specifically, each arm on the dipole is different than the other (or non-symmetric). The arms are identical near the center feed section but dissimilar towards the ends, forming a ball-and-cup. It is demonstrated that the non-symmetric qualities achieve wideband performance. Concurrently, a design example for planar installation with balun and matching network is presented to cover X-band. The balun avoids extraneous radiation, maintains the array's low-profile height and is printed on top of the ground plane connecting to the array aperture with 180° out of phase vertical twin-wire transmission lines. To demonstrate the concept, a 64-element array with integrated feed and matching network is designed, fabricated and verified experimentally. The array aperture is placed λ/7 (at 8 GHz) above the ground plane and shown to maintain a active VSWR less than 2 from 8-12.5 GHz while scanning up to 70° and 60° in E- and H-plane, respectively. The array's simulated diagonal plane cross-polarization is approximately 10 dB below the co-polarized component during 60° diagonal scan and follows the theoretical limit for an infinite current sheet.",
"title": ""
},
{
"docid": "eec15a5d14082d625824452bd070ec38",
"text": "Food waste is a major environmental issue. Expired products are thrown away, implying that too much food is ordered compared to what is sold and that a more accurate prediction model is required within grocery stores. In this study the two prediction models Long Short-Term Memory (LSTM) and Autoregressive Integrated Moving Average (ARIMA) were compared on their prediction accuracy in two scenarios, given sales data for different products, to observe if LSTM is a model that can compete against the ARIMA model in the field of sales forecasting in retail. In the first scenario the models predict sales for one day ahead using given data, while they in the second scenario predict each day for a week ahead. Using the evaluation measures RMSE and MAE together with a t-test the results show that the difference between the LSTM and ARIMA model is not of statistical significance in the scenario of predicting one day ahead. However when predicting seven days ahead, the results show that there is a statistical significance in the difference indicating that the LSTM model has higher accuracy. This study therefore concludes that the LSTM model is promising in the field of sales forecasting in retail and able to compete against the ARIMA model.",
"title": ""
},
{
"docid": "6d2adebf7fbdf67b778b60ac69ea5cd3",
"text": "In this paper, we propose Zero-Suppressed BDDs (0-Sup-BDDs), which are BDDs based on a new reduction rule. This data structure brings unique and compact representation of sets which appear in many combinatorial problems. Using 0-Sup-BDDs, we can manipulate such sets more simply and efficiently than using original BDDs. We show the properties of 0-Sup-BDDs, their manipulation algorithms, and good applications for LSI CAD systems.",
"title": ""
},
{
"docid": "d0c85b824d7d3491f019f47951d1badd",
"text": "A nine-year-old female Rottweiler with a history of repeated gastrointestinal ulcerations and three previous surgical interventions related to gastrointestinal ulceration presented with symptoms of anorexia and intermittent vomiting. Benign gastric outflow obstruction was diagnosed in the proximal duodenal area. The initial surgical plan was to perform a pylorectomy with gastroduodenostomy (Billroth I procedure), but owing to substantial scar tissue and adhesions in the area a palliative gastrojejunostomy was performed. This procedure provided a bypass for the gastric contents into the proximal jejunum via the new stoma, yet still allowed bile and pancreatic secretions to flow normally via the patent duodenum. The gastrojejunostomy technique was successful in the surgical management of this case, which involved proximal duodenal stricture in the absence of neoplasia. Regular telephonic followup over the next 12 months confirmed that the patient was doing well.",
"title": ""
},
{
"docid": "13748d365584ef2e680affb67cfcc882",
"text": "In this paper, we discuss the development of cost effective, wireless, and wearable vibrotactile haptic device for stiffness perception during an interaction with virtual objects. Our experimental setup consists of haptic device with five vibrotactile actuators, virtual reality environment tailored in Unity 3D integrating the Oculus Rift Head Mounted Display (HMD) and the Leap Motion controller. The virtual environment is able to capture touch inputs from users. Interaction forces are then rendered at 500 Hz and fed back to the wearable setup stimulating fingertips with ERM vibrotactile actuators. Amplitude and frequency of vibrations are modulated proportionally to the interaction force to simulate the stiffness of a virtual object. A quantitative and qualitative study is done to compare the discrimination of stiffness on virtual linear spring in three sensory modalities: visual only feedback, tactile only feedback, and their combination. A common psychophysics method called the Two Alternative Forced Choice (2AFC) approach is used for quantitative analysis using Just Noticeable Difference (JND) and Weber Fractions (WF). According to the psychometric experiment result, average Weber fraction values of 0.39 for visual only feedback was improved to 0.25 by adding the tactile feedback.",
"title": ""
},
{
"docid": "23641b410a3d1ae3f270bb19988ad4f5",
"text": "Brain Computer Interface systems rely on lengthy training phases that can last up to months due to the inherent variability in brainwave activity between users. We propose a BCI architecture based on the co-learning between the user and the system through different feedback strategies. Thus, we achieve an operational BCI within minutes. We apply our system to the piloting of an AR.Drone 2.0 quadricopter. We show that our architecture provides better task performance than traditional BCI paradigms within a shorter time frame. We further demonstrate the enthusiasm of users towards our BCI-based interaction modality and how they find it much more enjoyable than traditional interaction modalities.",
"title": ""
},
{
"docid": "4adfc2bf6907305fc4da20a5b753c2b1",
"text": "Book recommendation systems can benefit commercial websites, social media sites, and digital libraries, to name a few, by alleviating the knowledge acquisition process of users who look for books that are appealing to them. Even though existing book recommenders, which are based on either collaborative filtering, text content, or the hybrid approach, aid users in locating books (among the millions available), their recommendations are not personalized enough to meet users’ expectations due to their collective assumption on group preference and/or exact content matching, which is a failure. To address this problem, we have developed PBRecS, a book recommendation system that is based on social interactions and personal interests to suggest books appealing to users. PBRecS relies on the friendships established on a social networking site, such as LibraryThing, to generate more personalized suggestions by including in the recommendations solely books that belong to a user’s friends who share common interests with the user, in addition to applying word-correlation factors for partially matching book tags to disclose books similar in contents. The conducted empirical study on data extracted from LibraryThing has verified (i) the effectiveness of PBRecS using social-media data to improve the quality of book recommendations and (ii) that PBRecS outperforms the recommenders employed by Amazon and LibraryThing.",
"title": ""
},
{
"docid": "f095118c63d1531ebdbaec3565b0d91f",
"text": "BACKGROUND\nSystematic reviews are most helpful if they are up-to-date. We did a systematic review of strategies and methods describing when and how to update systematic reviews.\n\n\nOBJECTIVES\nTo identify, describe and assess strategies and methods addressing: 1) when to update systematic reviews and 2) how to update systematic reviews.\n\n\nSEARCH STRATEGY\nWe searched MEDLINE (1966 to December 2005), PsycINFO, the Cochrane Methodology Register (Issue 1, 2006), and hand searched the 2005 Cochrane Colloquium proceedings.\n\n\nSELECTION CRITERIA\nWe included methodology reports, updated systematic reviews, commentaries, editorials, or other short reports describing the development, use, or comparison of strategies and methods for determining the need for updating or updating systematic reviews in healthcare.\n\n\nDATA COLLECTION AND ANALYSIS\nWe abstracted information from each included report using a 15-item questionnaire. The strategies and methods for updating systematic reviews were assessed and compared descriptively with respect to their usefulness, comprehensiveness, advantages, and disadvantages.\n\n\nMAIN RESULTS\nFour updating strategies, one technique, and two statistical methods were identified. Three strategies addressed steps for updating and one strategy presented a model for assessing the need to update. One technique discussed the use of the \"entry date\" field in bibliographic searching. Statistical methods were cumulative meta-analysis and predicting when meta-analyses are outdated.\n\n\nAUTHORS' CONCLUSIONS\nLittle research has been conducted on when and how to update systematic reviews and the feasibility and efficiency of the identified approaches is uncertain. These shortcomings should be addressed in future research.",
"title": ""
},
{
"docid": "71fa9602c24916b8c868c24ba50a74e8",
"text": "In this paper, we review the research on virtual teams in an effort to assess the state of the literature. We start with an examination of the definitions of virtual teams used and propose an integrative definition that suggests that all teams may be defined in terms of their extent of virtualness. Next, we review findings related to team inputs, processes, and outcomes, and identify areas of agreement and inconsistency in the literature on virtual teams. Based on this review, we suggest avenues for future research, including methodological and theoretical considerations that are important to advancing our understanding of virtual teams. © 2004 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "64122833d6fa0347f71a9abff385d569",
"text": "We present a brief history and overview of statistical methods in frame-semantic parsing – the automatic analysis of text using the theory of frame semantics. We discuss how the FrameNet lexicon and frameannotated datasets have been used by statistical NLP researchers to build usable, state-of-the-art systems. We also focus on future directions in frame-semantic parsing research, and discuss NLP applications that could benefit from this line of work. 1 Frame-Semantic Parsing Frame-semantic parsing has been considered as the task of automatically finding semantically salient targets in text, disambiguating their semantic frame representing an event and scenario in discourse, and annotating arguments consisting of words or phrases in text with various frame elements (or roles). The FrameNet lexicon (Baker et al., 1998), an ontology inspired by the theory of frame semantics (Fillmore, 1982), serves as a repository of semantic frames and their roles. Figure 1 depicts a sentence with three evoked frames for the targets “million”, “created” and “pushed” with FrameNet frames and roles. Automatic analysis of text using framesemantic structures can be traced back to the pioneering work of Gildea and Jurafsky (2002). Although their experimental setup relied on a primitive version of FrameNet and only made use of “exemplars” or example usages of semantic frames (containing one target per sentence) as opposed to a “corpus” of sentences, it resulted in a flurry of work in the area of automatic semantic role labeling (Màrquez et al., 2008). However, the focus of semantic role labeling (SRL) research has mostly been on PropBank (Palmer et al., 2005) conventions, where verbal targets could evoke a “sense” frame, which is not shared across targets, making the frame disambiguation setup different from the representation in FrameNet. Furthermore, it is fair to say that early research on PropBank focused primarily on argument structure prediction, and the interaction between frame and argument structure analysis has mostly been unaddressed (Màrquez et al., 2008). There are exceptions, where the verb frame has been taken into account during SRL (Meza-Ruiz and Riedel, 2009; Watanabe et al., 2010). Moreoever, the CoNLL 2008 and 2009 shared tasks also include the verb and noun frame identification task in their evaluations, although the overall goal was to predict semantic dependencies based on PropBank, and not full argument spans (Surdeanu et al., 2008; Hajič",
"title": ""
},
{
"docid": "265bf26646113a56101c594f563cb6dc",
"text": "A system, particularly a decision-making concept, that facilitates highly automated driving on freeways in real traffic is presented. The system is capable of conducting fully automated lane change (LC) maneuvers with no need for driver approval. Due to the application in real traffic, a robust functionality and the general safety of all traffic participants are among the main requirements. Regarding these requirements, the consideration of measurement uncertainties demonstrates a major challenge. For this reason, a fully integrated probabilistic concept is developed. By means of this approach, uncertainties are regarded in the entire process of determining driving maneuvers. While this also includes perception tasks, this contribution puts a focus on the driving strategy and the decision-making process for the execution of driving maneuvers. With this approach, the BMW Group Research and Technology managed to drive 100% automated in real traffic on the freeway A9 from Munich to Ingolstadt, showing a robust, comfortable, and safe driving behavior, even during multiple automated LC maneuvers.",
"title": ""
},
{
"docid": "f8093849e9157475149d00782c60ae60",
"text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.",
"title": ""
},
{
"docid": "3848b727cfda3031742cec04abd74608",
"text": "This paper presents SemFrame, a system that induces frame semantic verb classes from WordNet and LDOCE. Semantic frames are thought to have significant potential in resolving the paraphrase problem challenging many languagebased applications. When compared to the handcrafted FrameNet, SemFrame achieves its best recall-precision balance with 83.2% recall (based on SemFrame's coverage of FrameNet frames) and 73.8% precision (based on SemFrame verbs’ semantic relatedness to frame-evoking verbs). The next best performing semantic verb classes achieve 56.9% recall and 55.0% precision.",
"title": ""
},
{
"docid": "3a549571e281b9b381a347fb49953d2c",
"text": "Social media has been gaining popularity among university students who use social media at higher rates than the general population. Students consequently spend a significant amount of time on social media, which may inevitably have an effect on their academic engagement. Subsequently, scholars have been intrigued to examine the impact of social media on students' academic engagement. Research that has directly explored the use of social media and its impact on students in tertiary institutions has revealed limited and mixed findings, particularly within a South African context; thus leaving a window of opportunity to further investigate the impact that social media has on students' academic engagement. This study therefore aims to investigate the use of social media in tertiary institutions, the impact that the use thereof has on students' academic engagement and to suggest effective ways of using social media in tertiary institutions to improve students' academic engagement from students' perspectives. This study used an interpretivist (inductive) approach in order to determine and comprehend student's perspectives and experiences towards the use of social media and the effects thereof on their academic engagement. A single case study design at Rhodes University was used to determine students' perceptions and data was collected using an online survey. The findings reveal that students use social media for both social and academic purposes. Students further perceived that social media has a positive impact on their academic engagement and suggest that using social media at tertiary level could be advantageous and could enhance students' academic engagement.",
"title": ""
},
{
"docid": "c4d816303790125c790a3a09edcf499b",
"text": "Predictive modeling techniques are increasingly being used by data scientists to understand the probability of predicted outcomes. However, for data that is high-dimensional, a critical step in predictive modeling is determining which features should be included in the models. Feature selection algorithms are often used to remove non-informative features from models. However, there are many different classes of feature selection algorithms. Deciding which one to use is problematic as the algorithmic output is often not amenable to user interpretation. This limits the ability for users to utilize their domain expertise during the modeling process. To improve on this limitation, we developed INFUSE, a novel visual analytics system designed to help analysts understand how predictive features are being ranked across feature selection algorithms, cross-validation folds, and classifiers. We demonstrate how our system can lead to important insights in a case study involving clinical researchers predicting patient outcomes from electronic medical records.",
"title": ""
},
{
"docid": "f9afcc134abda1c919cf528cbc975b46",
"text": "Multimodal question answering in the cultural heritage domain allows visitors to museums, landmarks or other sites to ask questions in a more natural way. This in turn provides better user experiences. In this paper, we propose the construction of a golden standard dataset dedicated to aiding research into multimodal question answering in the cultural heritage domain. The dataset, soon to be released to the public, contains multimodal content about the fascinating old-Egyptian Amarna period, including images of typical artworks, documents about these artworks (containing images) and over 800 multimodal queries integrating visual and textual questions. The multimodal questions and related documents are all in English. The multimodal questions are linked to relevant paragraphs in the related documents that contain the answer to the multimodal query.",
"title": ""
},
{
"docid": "6bdcd13e63a4f24561f575efcd232dad",
"text": "Men have called me mad,” wrote Edgar Allan Poe, “but the question is not yet settled, whether madness is or is not the loftiest intelligence— whether much that is glorious—whether all that is profound—does not spring from disease of thought—from moods of mind exalted at the expense of the general intellect.” Many people have long shared Poe’s suspicion that genius and insanity are entwined. Indeed, history holds countless examples of “that fine madness.” Scores of influential 18thand 19th-century poets, notably William Blake, Lord Byron and Alfred, Lord Tennyson, wrote about the extreme mood swings they endured. Modern American poets John Berryman, Randall Jarrell, Robert Lowell, Sylvia Plath, Theodore Roethke, Delmore Schwartz and Anne Sexton were all hospitalized for either mania or depression during their lives. And many painters and composers, among them Vincent van Gogh, Georgia O’Keeffe, Charles Mingus and Robert Schumann, have been similarly afflicted. Judging by current diagnostic criteria, it seems that most of these artists—and many others besides—suffered from one of the major mood disorders, namely, manic-depressive illness or major depression. Both are fairly common, very treatable and yet frequently lethal diseases. Major depression induces intense melancholic spells, whereas manic-depression, Manic-Depressive Illness and Creativity",
"title": ""
},
{
"docid": "f7deaa9b65be6b8de9f45fb0dec3879d",
"text": "This paper reports the first 8kV+ ESD-protected SP10T transmit/receive (T/R) antenna switch for quad-band (0.85/0.9/1.8/1.9-GHz) GSM and multiple W-CDMA smartphones fabricated in an 180-nm SOI CMOS. A novel physics-based switch-ESD co-design methodology is applied to ensure full-chip optimization for a SP10T test chip and its ESD protection circuit simultaneously.",
"title": ""
}
] |
scidocsrr
|
ec17753411d281b2fe7eae0bb0198bf0
|
Evaluating Shallow and Deep Neural Networks for Network Intrusion Detection Systems in Cyber Security
|
[
{
"docid": "1c6078d68891b6600727a82841812666",
"text": "Network traffic prediction aims at predicting the subsequent network traffic by using the previous network traffic data. This can serve as a proactive approach for network management and planning tasks. The family of recurrent neural network (RNN) approaches is known for time series data modeling which aims to predict the future time series based on the past information with long time lags of unrevealed size. RNN contains different network architectures like simple RNN, long short term memory (LSTM), gated recurrent unit (GRU), identity recurrent unit (IRNN) which is capable to learn the temporal patterns and long range dependencies in large sequences of arbitrary length. To leverage the efficacy of RNN approaches towards traffic matrix estimation in large networks, we use various RNN networks. The performance of various RNN networks is evaluated on the real data from GÉANT backbone networks. To identify the optimal network parameters and network structure of RNN, various experiments are done. All experiments are run up to 200 epochs with learning rate in the range [0.01-0.5]. LSTM has performed well in comparison to the other RNN and classical methods. Moreover, the performance of various RNN methods is comparable to LSTM.",
"title": ""
},
{
"docid": "61c268616851d28855ed8fe14a6de205",
"text": "Ransomware is one type of malware that covertly installs and executes a cryptovirology attack on a victims computer to demand a ransom payment for restoration of the infected resources. This kind of malware has been growing largely in recent days and causes tens of millions of dollars losses to consumers. In this paper, we evaluate shallow and deep networks for the detection and classification of ransomware. To characterize and distinguish ransomware over benign and various other families of ransomwares, we leverage the dominance of application programming interface (API) invocations. To select a best architecture for the multi-layer perceptron (MLP), we done various experiments related to network parameters and structures. All the experiments are run up to 500 epochs with a learning rate in the range [0.01-0.5]. Result obtained on our data set is more promising to distinguish ransomware not only from benign from its families too. On distinguishing the .EXE as either benign or ransomware, MLP has attained highest accuracy 1.0 and classifying the ransomware to their categories obtained highest accuracy 0.98. Moreover, MLP has performed well in detecting and classifying ransomwares in comparison to the other classical machine learning classifiers.",
"title": ""
}
] |
[
{
"docid": "028eb67d71987c33c4a331cf02c6ff00",
"text": "We explore the feasibility of using crowd workers from Amazon Mechanical Turk to identify and rank sidewalk accessibility issues from a manually curated database of 100 Google Street View images. We examine the effect of three different interactive labeling interfaces (Point, Rectangle, and Outline) on task accuracy and duration. We close the paper by discussing limitations and opportunities for future work.",
"title": ""
},
{
"docid": "54df0e1a435d673053f9264a4c58e602",
"text": "Next location prediction anticipates a person’s movement based on the history of previous sojourns. It is useful for proactive actions taken to assist the person in an ubiquitous environment. This paper evaluates next location prediction methods: dynamic Bayesian network, multi-layer perceptron, Elman net, Markov predictor, and state predictor. For the Markov and state predictor we use additionally an optimization, the confidence counter. The criterions for the comparison are the prediction accuracy, the quantity of useful predictions, the stability, the learning, the relearning, the memory and computing costs, the modelling costs, the expandability, and the ability to predict the time of entering the next location. For evaluation we use the same benchmarks containing movement sequences of real persons within an office building.",
"title": ""
},
{
"docid": "1ec8f7bb8de36b625cb8fee335557acf",
"text": "Airborne laser scanner technique is broadly the most appropriate way to acquire rapidly and with high density 3D data over a city. Once the 3D Lidar data are available, the next task is the automatic data processing, with major aim to construct 3D building models. Among the numerous automatic reconstruction methods, the techniques allowing the detection of 3D building roof planes are of crucial importance. Three main methods arise from the literature: region growing, Hough-transform and Random Sample Consensus (RANSAC) paradigm. Since region growing algorithms are sometimes not very transparent and not homogenously applied, this paper focuses only on the Hough-transform and the RANSAC algorithm. Their principles, their pseudocode rarely detailed in the related literature as well as their complete analyses are presented in this paper. An analytic comparison of both algorithms, in terms of processing time and sensitivity to cloud characteristics, shows that despite the limitation encountered in both methods, RANSAC algorithm is still more efficient than the first one. Under other advantages, its processing time is negligible even when the input data size is very large. On the other hand, Hough-transform is very sensitive to the segmentation parameters values. Therefore, RANSAC algorithm has been chosen and extended to exceed its limitations. Its major limitation is that it searches to detect the best mathematical plane among 3D building point cloud even if this plane does not always represent a roof plane. So the proposed extension allows harmonizing the mathematical aspect of the algorithm with the geometry of a roof. At last, it is shown that the extended approach provides very satisfying results, even in the case of very weak point density and for different levels of building complexity. Therefore, once the roof planes are successfully detected, the automatic building modelling can be carried out.",
"title": ""
},
{
"docid": "4b5ff1f0ef9e668f5e76a69b0c77c1e8",
"text": "This investigation was concerned with providing a rationale for the understanding and measurement of quality of life. The investigation proposes a modified version of Veenhoven’s Four-Qualities-of-Life Framework. Its main purpose is to bring order to the vast literature on measuring quality of life; another purpose is to provide a richer framework to guide public policy in the procurement of a better society. The framework is used to assess quality of life in Latin America; the purpose of this exercise is to illustrate the utility of the framework and to show that importance of conceptualizing what quality of life is before any attempt to measure it is undertaken.",
"title": ""
},
{
"docid": "d6d275b719451982fa67d442c55c186c",
"text": "Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.",
"title": ""
},
{
"docid": "c55e7c3825980d0be4546c7fadc812fe",
"text": "Individual graphene oxide sheets subjected to chemical reduction were electrically characterized as a function of temperature and external electric fields. The fully reduced monolayers exhibited conductivities ranging between 0.05 and 2 S/cm and field effect mobilities of 2-200 cm2/Vs at room temperature. Temperature-dependent electrical measurements and Raman spectroscopic investigations suggest that charge transport occurs via variable range hopping between intact graphene islands with sizes on the order of several nanometers. Furthermore, the comparative study of multilayered sheets revealed that the conductivity of the undermost layer is reduced by a factor of more than 2 as a consequence of the interaction with the Si/SiO2 substrate.",
"title": ""
},
{
"docid": "b5927458f6d34f2ff326f0f631a0e450",
"text": "Bipolar disorder (BD) is a common and disabling psychiatric condition with a severe socioeconomic impact. BD is treated with mood stabilizers, among which lithium represents the first-line treatment. Lithium alone or in combination is effective in 60% of chronically treated patients, but response remains heterogenous and a large number of patients require a change in therapy after several weeks or months. Many studies have so far tried to identify molecular and genetic markers that could help us to predict response to mood stabilizers or the risk for adverse drug reactions. Pharmacogenetic studies in BD have been for the most part focused on lithium, but the complexity and variability of the response phenotype, together with the unclear mechanism of action of lithium, limited the power of these studies to identify robust biomarkers. Recent pharmacogenomic studies on lithium response have provided promising findings, suggesting that the integration of genome-wide investigations with deep phenotyping, in silico analyses and machine learning could lead us closer to personalized treatments for BD. Nevertheless, to date none of the genes suggested by pharmacogenetic studies on mood stabilizers have been included in any of the genetic tests approved by the Food and Drug Administration (FDA) for drug efficacy. On the other hand, genetic information has been included in drug labels to test for the safety of carbamazepine and valproate. In this review, we will outline available studies investigating the pharmacogenetics and pharmacogenomics of lithium and other mood stabilizers, with a specific focus on the limitations of these studies and potential strategies to overcome them. We will also discuss FDA-approved pharmacogenetic tests for treatments commonly used in the management of BD.",
"title": ""
},
{
"docid": "45c9ecc06dca6e18aae89ebf509d31d2",
"text": "For estimating causal effects of treatments, randomized experiments are generally considered the gold standard. Nevertheless, they are often infeasible to conduct for a variety of reasons, such as ethical concerns, excessive expense, or timeliness. Consequently, much of our knowledge of causal effects must come from non-randomized observational studies. This article will advocate the position that observational studies can and should be designed to approximate randomized experiments as closely as possible. In particular, observational studies should be designed using only background information to create subgroups of similar treated and control units, where 'similar' here refers to their distributions of background variables. Of great importance, this activity should be conducted without any access to any outcome data, thereby assuring the objectivity of the design. In many situations, this objective creation of subgroups of similar treated and control units, which are balanced with respect to covariates, can be accomplished using propensity score methods. The theoretical perspective underlying this position will be presented followed by a particular application in the context of the US tobacco litigation. This application uses propensity score methods to create subgroups of treated units (male current smokers) and control units (male never smokers) who are at least as similar with respect to their distributions of observed background characteristics as if they had been randomized. The collection of these subgroups then 'approximate' a randomized block experiment with respect to the observed covariates.",
"title": ""
},
{
"docid": "af9b81a034c76a7706d362105beff3cf",
"text": "A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly. Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities. We first introduce a visual interaction environment that allows many types of tasks to be unified in a single framework. We then describe a reward map prediction scheme that learns new tasks robustly in the very large state and action spaces required by such an environment. We investigate how properties of module architecture influence efficiency of task learning, showing that a module motif incorporating specific design principles (e.g. early bottlenecks, low-order polynomial nonlinearities, and symmetry) significantly outperforms more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Finally, we present a meta-controller architecture for task switching based on a dynamic neural voting scheme, which allows new modules to use information learned from previouslyseen tasks to substantially improve their own learning efficiency.",
"title": ""
},
{
"docid": "fdc4d23fa336ca122fdfb12818901180",
"text": "Concept of communication systems, which use smart antennas is based on digital signal processing algorithms. Thus, the smart antennas system becomes capable to locate and track signals by the both: users and interferers and dynamically adapts the antenna pattern to enhance the reception in Signal-Of-Interest direction and minimizing interference in Signal-Of-Not-Interest direction. Hence, Space Division Multiple Access system, which uses smart antennas, is being used more often in wireless communications, because it shows improvement in channel capacity and co-channel interference. However, performance of smart antenna system greatly depends on efficiency of digital signal processing algorithms. The algorithm uses the Direction of Arrival (DOA) algorithms to estimate the number of incidents plane waves on the antenna array and their angle of incidence. This paper investigates performance of the DOA algorithms like MUSIC, ESPRIT and ROOT MUSIC on the uniform linear array in the presence of white noise. The simulation results show that MUSIC algorithm is the best. The resolution of the DOA techniques improves as number of snapshots, number of array elements and signalto-noise ratio increases.",
"title": ""
},
{
"docid": "6d07571fa4a7027a260bd6586d59e2bd",
"text": "As there is a need for innovative and new medical technologies in the healthcare, we identified Thalmic's “MYO Armband”, which is used for gaming systems and controlling applications in mobiles and computers. We can exploit this development in the field of medicine and healthcare to improve public health care system. So, we spotted “MYO diagnostics”, a computer-based application developed by Thalmic labs to understand Electromyography (EMG) lines (graphs), bits of vector data, and electrical signals of our complicated biology inside our arm. The human gestures will allow to gather huge amount of data and series of EMG lines which can be analysed to detect medical abnormalities and hand movements. This application has powerful algorithms which are translated into commands to recognise human hand gestures. The effect of doctors experience on user satisfaction metrics in using MYO armband can be measured in terms of effectiveness, efficiency and satisfaction which are based on the metrics-task completion, error counts, task times and satisfaction scores. In this paper, we considered only satisfaction metrics using a widely used System Usability Scale (SUS) questionnaire model to study the usability on the twenty-four medical students of the Brighton and Sussex Medical School. This helps in providing guidelines about the use of MYO armband for physiotherapy analysis by the doctors and patients. Another questionnaire with a focus on ergonomic (human factors) issues related to the use of the device such as social acceptability, ease of use and ease of learning, comfort and stress, attempted to discover characteristics of hand gestures using MYO. The results of this study can be used in a way to support the development of interactive physiotherapy analysis by individuals using MYO and hand gesture applications at their home for self-examination. Also, the relationship and correlation between the signals received will lead to a better understanding of the whole myocardium system and assist doctors in early diagnosis.",
"title": ""
},
{
"docid": "a9399439831a970fcce8e0101696325f",
"text": "We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings.",
"title": ""
},
{
"docid": "fa4480bbc460658bd1ea5804fdebc5ed",
"text": "This paper examines the problem of how to teach multiple tasks to a Reinforcement Learning (RL) agent. To this end, we use Linear Temporal Logic (LTL) as a language for specifying multiple tasks in a manner that supports the composition of learned skills. We also propose a novel algorithm that exploits LTL progression and off-policy RL to speed up learning without compromising convergence guarantees, and show that our method outperforms the state-of-the-art approach on randomly generated Minecraft-like grids.",
"title": ""
},
{
"docid": "98ca1c0100115646bb14a00f19c611a5",
"text": "The interconnected nature of graphs often results in difficult to interpret clutter. Typically techniques focus on either decluttering by clustering nodes with similar properties or grouping edges with similar relationship. We propose using mapper, a powerful topological data analysis tool, to summarize the structure of a graph in a way that both clusters data with similar properties and preserves relationships. Typically, mapper operates on a given data by utilizing a scalar function defined on every point in the data and a cover for scalar function codomain. The output of mapper is a graph that summarize the shape of the space. In this paper, we outline how to use this mapper construction on an input graphs, outline three filter functions that capture important structures of the input graph, and provide an interface for interactively modifying the cover. To validate our approach, we conduct several case studies on synthetic and real world data sets and demonstrate how our method can give meaningful summaries for graphs with various",
"title": ""
},
{
"docid": "d719fb1fe0faf76c14d24f7587c5345f",
"text": "This paper describes a framework for the estimation of shape from sparse or incomplete range data. It uses a shape representation called blending, which allows for the geometric combination of shapes into a unified model— selected regions of the component shapes are cut-out and glued together. Estimation of shape using this representation is realized using a physics-based framework, and also includes a process for deciding how to adapt the structure and topology of the model to improve the fit. The blending representation helps avoid abrupt changes in model geometry during fitting by allowing the smooth evolution of the shape, which improves the robustness of the technique. We demonstrate this framework with a series of experiments showing its ability to automatically extract structured representations from range data given both structurally and topologically complex objects. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-97-12. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/47 (appeared inIEEE Transactions on Pattern Analysis and Machine Intelligence , Vol. 20, No. 11, pp. 1186-1205, November 1998) Shape Evolution with Structural and Topological Changes using Blending Douglas DeCarlo and Dimitris Metaxas †",
"title": ""
},
{
"docid": "054c2e8fa9421c77939091e5adfc07e5",
"text": "Visualization is a powerful paradigm for exploratory data analysis. Visualizing large graphs, however, often results in excessive edges crossings and overlapping nodes. We propose a new scalable approach called FACETS that helps users adaptively explore large million-node graphs from a local perspective, guiding them to focus on nodes and neighborhoods that are most subjectively interesting to users. We contribute novel ideas to measure this interestingness in terms of how surprising a neighborhood is given the background distribution, as well as how well it matches what the user has chosen to explore. FACETS uses Jensen-Shannon divergence over information-theoretically optimized histograms to calculate the subjective user interest and surprise scores. Participants in a user study found FACETS easy to use, easy to learn, and exciting to use. Empirical runtime analyses demonstrated FACETS’s practical scalability on large real-world graphs with up to 5 million edges, returning results in fewer than 1.5 seconds.",
"title": ""
},
{
"docid": "dc81f63623020220eba19f4f6ae545e0",
"text": "In this paper, a new technique for human identification task based on heart sound signals has been proposed. It utilizes a feature level fusion technique based on canonical correlation analysis. For this purpose a robust pre-processing scheme based on the wavelet analysis of the heart sounds is introduced. Then, three feature vectors are extracted depending on the cepstral coefficients of different frequency scale representation of the heart sound namely; the mel, bark, and linear scales. Among the investigated feature extraction methods, experimental results show that the mel-scale is the best with 94.4% correct identification rate. Using a hybrid technique combining MFCC and DWT, a new feature vector is extracted improving the system's performance up to 95.12%. Finally, canonical correlation analysis is applied for feature fusion. This improves the performance of the proposed system up to 99.5%. The experimental results show significant improvements in the performance of the proposed system over methods adopting single feature extraction.",
"title": ""
},
{
"docid": "87bded10bc1a29a3c0dead2958defc2e",
"text": "Context: Web applications are trusted by billions of users for performing day-to-day activities. Accessibility, availability and omnipresence of web applications have made them a prime target for attackers. A simple implementation flaw in the application could allow an attacker to steal sensitive information and perform adversary actions, and hence it is important to secure web applications from attacks. Defensive mechanisms for securing web applications from the flaws have received attention from both academia and industry. Objective: The objective of this literature review is to summarize the current state of the art for securing web applications from major flaws such as injection and logic flaws. Though different kinds of injection flaws exist, the scope is restricted to SQL Injection (SQLI) and Cross-site scripting (XSS), since they are rated as the top most threats by different security consortiums. Method: The relevant articles recently published are identified from well-known digital libraries, and a total of 86 primary studies are considered. A total of 17 articles related to SQLI, 35 related to XSS and 34 related to logic flaws are discussed. Results: The articles are categorized based on the phase of software development life cycle where the defense mechanism is put into place. Most of the articles focus on detecting the flaws and preventing attacks against web applications. Conclusion: Even though various approaches are available for securing web applications from SQLI and XSS, they are still prevalent due to their impact and severity. Logic flaws are gaining attention of the researchers since they violate the business specifications of applications. There is no single solution to mitigate all the flaws. More research is needed in the area of fixing flaws in the source code of applications.",
"title": ""
},
{
"docid": "c8b9bba65b8561b48abe68a72c02f054",
"text": "The Bitcoin backbone protocol [Eurocrypt 2015] extracts basic properties of Bitcoin's underlying blockchain data structure, such as common pre x and chain quality, and shows how fundamental applications including consensus and a robust public transaction ledger can be built on top of them. The underlying assumptions are proofs of work (POWs), adversarial hashing power strictly less than 1/2 and no adversarial pre-computation or, alternatively, the existence of an unpredictable genesis block. In this paper we show how to remove the latter assumption, presenting a bootstrapped Bitcoin-like blockchain protocol relying on POWs that builds genesis blocks from scratch in the presence of adversarial pre-computation. The only known previous result in the same setting (unauthenticated parties, no trusted setup) [Crypto 2015] is indirect in the sense of creating a PKI rst and then employing conventional PKI-based authenticated communication. With our construction we establish that consensus can be solved directly by a blockchain protocol without trusted setup assuming an honest majority (in terms of computational power). We also formalize miner unlinkability, a privacy property for blockchain protocols, and demonstrate that our protocol retains the same level of miner unlinkability as Bitcoin itself.",
"title": ""
}
] |
scidocsrr
|
15debd1bfde240cb01ff2c4fbc0dfe95
|
A New Algorithm for SAR Image Target Recognition Based on an Improved Deep Convolutional Neural Network
|
[
{
"docid": "06755f8680ee8b43e0b3d512b4435de4",
"text": "Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have been recently proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. As hidden nodes in SAEs have to deal simultaneously with hundreds of features from hypercubes as inputs, this increases the complexity of the process and leads to limited abstraction and performance. As such, segmented SAE (S-SAE) is proposed by confronting the original features into smaller data segments, which are separately processed by different smaller SAEs. This has resulted in reduced complexity but improved efficacy of data abstraction and accuracy of data classification.",
"title": ""
},
{
"docid": "5ab8a8f4991f7c701c51e32de7f97b36",
"text": "Recent breakthroughs in computational capabilities and optimization algorithms have enabled a new class of signal processing approaches based on deep neural networks (DNNs). These algorithms have been extremely successful in the classification of natural images, audio, and text data. In particular, a special type of DNNs, called convolutional neural networks (CNNs) have recently shown superior performance for object recognition in image processing applications. This paper discusses modern training approaches adopted from the image processing literature and shows how those approaches enable significantly improved performance for synthetic aperture radar (SAR) automatic target recognition (ATR). In particular, we show how a set of novel enhancements to the learning algorithm, based on new stochastic gradient descent approaches, generate significant classification improvement over previously published results on a standard dataset called MSTAR.",
"title": ""
}
] |
[
{
"docid": "cc0c1c11d437060e9492a3a1218e1271",
"text": "Graph coloring problems, in which one would like to color the vertices of a given graph with a small number of colors so that no two adjacent vertices receive the same color, arise in many applications, including various scheduling and partitioning problems. In this paper the complexity and performance of algorithms which construct such colorings are investigated. For a graph <italic>G</italic>, let &khgr;(<italic>G</italic>) denote the minimum possible number of colors required to color <italic>G</italic> and, for any graph coloring algorithm <italic>A</italic>, let <italic>A</italic>(<italic>G</italic>) denote the number of colors used by <italic>A</italic> when applied to <italic>G</italic>. Since the graph coloring problem is known to be “NP-complete,” it is considered unlikely that any efficient algorithm can guarantee <italic>A</italic>(<italic>G</italic>) = &khgr;(<italic>G</italic>) for all input graphs. In this paper it is proved that even coming close to khgr;(<italic>G</italic>) with a fast algorithm is hard. Specifically, it is shown that if for some constant <italic>r</italic> < 2 and constant <italic>d</italic> there exists a polynomial-time algorithm <italic>A</italic> which guarantees <italic>A</italic>(<italic>G</italic>) ≤ <italic>r</italic>·&khgr;(<italic>G</italic>) + <italic>d</italic>, then there also exists a polynomial-time algorithm <italic>A</italic> which guarantees <italic>A</italic>(<italic>G</italic>) = &khgr;(<italic>G</italic>).",
"title": ""
},
{
"docid": "2b4b639973f54bdd7b987d5bc9bb3978",
"text": "Computational stereo is one of the classical problems in computer vision. Numerous algorithms and solutions have been reported in recent years focusing on developing methods for computing similarity, aggregating it to obtain spatial support and finally optimizing an energy function to find the final disparity. In this paper, we focus on the feature extraction component of stereo matching architecture and we show standard CNNs operation can be used to improve the quality of the features used to find point correspondences. Furthermore, we propose a simple space aggregation that hugely simplifies the correlation learning problem. Our results on benchmark data are compelling and show promising potential even without refining the solution.",
"title": ""
},
{
"docid": "3a7d3f98e4501e04e68334d492ad2df8",
"text": "Several studies focused on single human activity recognition, while the classification of group activities is still under-investigated. In this paper, we present an approach for classifying the activity performed by a group of people during daily life tasks at work. We address the problem in a hierarchical way by first examining individual person actions, reconstructed from data coming from wearable and ambient sensors. We then observe if common temporal/spatial dynamics exist at the level of group activity. We deployed a Multimodal Deep Learning Network, where the term multimodal is not intended to separately elaborate the considered different input modalities, but refers to the possibility of extracting activity-related features for each group member, and then merge them through shared levels. We evaluated the proposed approach in a laboratory environment, where the employees are monitored during their normal activities. The experimental results demonstrate the effectiveness of the proposed model with respect to an SVM benchmark.",
"title": ""
},
{
"docid": "1c6e591999cd8b0eff7a637bf0753927",
"text": "The last few years have seen the emergence of several Open Access (OA) options in scholarly communication, which can broadly be grouped into two areas referred to as gold and green roads. Several recent studies showed how big the extent of OA is, but there have been few studies showing impact of OA in the visibility of journals covering all scientific fields and geographical regions. This research shows the extent of OA from the perspective of the journals indexed in Scopus, as well as influence on visibility, in view of the various geographic and thematic distributions. The results show that in all the disciplinary groups the presence of green road journals widely surpasses the percentage of gold road publications. The peripheral and emerging regions have greater proportions of gold road journals. These journals pertain for the 2 most part to the last quartile. The benefits of open access on visibility of the journals are to be found on the green route, but paradoxically this advantage is not lent by the OA per se, but rather of the quality of the articles/journals themselves, regardless of their mode of access.",
"title": ""
},
{
"docid": "3b99d426e5b667ae7ab3ead13d3806ab",
"text": "The blockchain technology, including Bitcoin and other crypto currencies, has been adopted in many application areas during recent years. However, the main attention has been on the currency and not so much on the underlying blockchain technology, including peer-to-peer networking, security and consensus mechanisms. This paper argues that we need to look beyond the currency applications and investigate the potential use of the blockchain tech‐ nology in governmental tasks such as digital ID management and secure docu‐ ment handling. The paper discusses the use of blockchain technology as a plat‐ form for various applications in e-Government and furthermore as an emerging support infrastructure by showing that blockchain technology demonstrates a potential for authenticating many types of persistent documents.",
"title": ""
},
{
"docid": "8bf1b97320a6b7319e4b36dfc11b6c7b",
"text": "In recent years, virtual reality exposure therapy (VRET) has become an interesting alternative for the treatment of anxiety disorders. Research has focused on the efficacy of VRET in treating anxiety disorders: phobias, panic disorder, and posttraumatic stress disorder. In this systematic review, strict methodological criteria are used to give an overview of the controlled trials regarding the efficacy of VRET in patients with anxiety disorders. Furthermore, research into process variables such as the therapeutic alliance and cognitions and enhancement of therapy effects through cognitive enhancers is discussed. The implications for implementation into clinical practice are considered.",
"title": ""
},
{
"docid": "b759613b1eedd29d32fbbc118767b515",
"text": "Deep learning has been shown successful in a number of domains, ranging from acoustics, images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, a significant amount of research efforts have been devoted to this area, greatly advancing graph analyzing techniques. In this survey, we comprehensively review different kinds of deep learning methods applied to graphs. We divide existing methods into three main categories: semi-supervised methods including Graph Neural Networks and Graph Convolutional Networks, unsupervised methods including Graph Autoencoders, and recent advancements including Graph Recurrent Neural Networks and Graph Reinforcement Learning. We then provide a comprehensive overview of these methods in a systematic manner following their history of developments. We also analyze the differences of these methods and how to composite different architectures. Finally, we briefly outline their applications and discuss potential future directions.",
"title": ""
},
{
"docid": "88421f4be8de411ce0fe0c5e2e4e60c0",
"text": "The organization of HTML into a tag tree structure, which is rendered by browsers as roughly rectangular regions with embedded text and HREF links, greatly helps surfers locate and click on links that best satisfy their information need. Can an automatic program emulate this human behavior and thereby learn to predict the relevance of an unseen HREF target page w.r.t. an information need, based on information limited to the HREF source page? Such a capability would be of great interest in focused crawling and resource discovery, because it can fine-tune the priority of unvisited URLs in the crawl frontier, and reduce the number of irrelevant pages which are fetched and discarded.",
"title": ""
},
{
"docid": "20c309bbc6eea75fa9b57ee98b73cbc1",
"text": "Chua proposed an Elementary Circuit Element Quadrangle including the three classic elements (resistor, inductor, and capacitor) and his formulated, named memristor as the fourth element. Based on an observation that this quadrangle may not be symmetric, I proposed an Elementary Circuit Element Triangle, in which memristor as well as mem-capacitor and mem-inductor lead three basic element classes, respectively. An intrinsic mathematical relationship is found to support this new classification. It is believed that this triangle is concise, mathematically sound and aesthetically beautiful, compared with Chua's quadrangle. The importance of finding a correct circuit element table is similar to that of Mendeleev's periodic table of chemical elements in chemistry and the table of 61 elementary particles in physics, in terms of categorizing the existing elements and predicting new elements. A correct circuit element table would also request to rewrite the 20th century textbooks.",
"title": ""
},
{
"docid": "171d9acd0e2cb86a02d5ff56d4515f0d",
"text": "We explore two solutions to the problem of mistranslating rare words in neural machine translation. First, we argue that the standard output layer, which computes the inner product of a vector representing the context with all possible output word embeddings, rewards frequent words disproportionately, and we propose to fix the norms of both vectors to a constant value. Second, we integrate a simple lexical module which is jointly trained with the rest of the model. We evaluate our approaches on eight language pairs with data sizes ranging from 100k to 8M words, and achieve improvements of up to +4.3 BLEU, surpassing phrasebased translation in nearly all settings.1",
"title": ""
},
{
"docid": "599fb363d80fd1a7a6faaccbde3ecbb5",
"text": "In this survey a new application paradigm life and safety for critical operations and missions using wearable Wireless Body Area Networks (WBANs) technology is introduced. This paradigm has a vast scope of applications, including disaster management, worker safety in harsh environments such as roadside and building workers, mobile health monitoring, ambient assisted living and many more. It is often the case that during the critical operations and the target conditions, the existing infrastructure is either absent, damaged or overcrowded. In this context, it is envisioned that WBANs will enable the quick deployment of ad-hoc/on-the-fly communication networks to help save many lives and ensuring people's safety. However, to understand the applications more deeply and their specific characteristics and requirements, this survey presents a comprehensive study on the applications scenarios, their context and specific requirements. It explores details of the key enabling standards, existing state-of-the-art research studies, and projects to understand their limitations before realizing aforementioned applications. Application-specific challenges and issues are discussed comprehensively from various perspectives and future research and development directions are highlighted as an inspiration for new innovative solutions. To conclude, this survey opens up a good opportunity for companies and research centers to investigate old but still new problems, in the realm of wearable technologies, which are increasingly evolving and getting more and more attention recently.",
"title": ""
},
{
"docid": "54e541c0a2c8c90862ce5573899aacc7",
"text": "The moving sofa problem, posed by L. Moser in 1966, asks for the planar shape of maximal area that can move around a right-angled corner in a hallway of unit width. It is known that a maximal area shape exists, and that its area is at least 2.2195 . . .—the area of an explicit construction found by Gerver in 1992—and at most 2 √ 2 ≈ 2.82, with the lower bound being conjectured as the true value. We prove a new and improved upper bound of 2.37. The method involves a computer-assisted proof scheme that can be used to rigorously derive further improved upper bounds that converge to the correct value.",
"title": ""
},
{
"docid": "c526954906119aad904c27255d61b262",
"text": "Over the last two or three decades, growing numbers of parents in the industrialized world are choosing not to have their children vaccinated. In trying to explain why this is occurring, public health commentators refer to the activities of an anti-vaccination 'movement'. In the light of three decades of research on (new) social movements, what sense does it make to attribute decline in vaccination rates to the actions of an influential anti-vaccination movement? Two sorts of empirical data, drawn largely from UK and The Netherlands, are reviewed. These relate to the claims, actions and discourse of anti-vaccination groups on the one hand, and to the way parents of young children think about vaccines and vaccination on the other. How much theoretical sense it makes to view anti-vaccination groups as (new) social movement organizations (as distinct from pressure groups or self-help organizations) is as yet unclear. In any event there is no simple and unambiguous demarcation criterion. From a public health perspective, however, to focus attention on organized opponents of vaccination is appealing because it unites health professionals behind a banner of reason. At the same time it diverts attention from a potentially disruptive critique of vaccination practices; the critique in fact articulated by many parents. In the light of current theoretical discussion of 'scientific citizenship' this paper argues that identifying anti-vaccination groups with other social movements may ultimately have the opposite effect to that intended.",
"title": ""
},
{
"docid": "473baf99a816e24cec8dec2b03eb0958",
"text": "We propose a method that allows an unskilled user to create an accurate physical replica of a digital 3D model. We use a projector/camera pair to scan a work in progress, and project multiple forms of guidance onto the object itself that indicate which areas need more material, which need less, and where any ridges, valleys or depth discontinuities are. The user adjusts the model using the guidance and iterates, making the shape of the physical object approach that of the target 3D model over time. We show how this approach can be used to create a duplicate of an existing object, by scanning the object and using that scan as the target shape. The user is free to make the reproduction at a different scale and out of different materials: we turn a toy car into cake. We extend the technique to support replicating a sequence of models to create stop-motion video. We demonstrate an end-to-end system in which real-world performance capture data is retargeted to claymation. Our approach allows users to easily and accurately create complex shapes, and naturally supports a large range of materials and model sizes.",
"title": ""
},
{
"docid": "00e56a93a3b8ee3a3d2cdab2fd27375e",
"text": "Omnidirectional image and video have gained popularity thanks to availability of capture and display devices for this type of content. Recent studies have assessed performance of objective metrics in predicting visual quality of omnidirectional content. These metrics, however, have not been rigorously validated by comparing their prediction results with ground-truth subjective scores. In this paper, we present a set of 360-degree images along with their subjective quality ratings. The set is composed of four contents represented in two geometric projections and compressed with three different codecs at four different bitrates. A range of objective quality metrics for each stimulus is then computed and compared to subjective scores. Statistical analysis is performed in order to assess performance of each objective quality metric in predicting subjective visual quality as perceived by human observers. Results show the estimated performance of the state-of-the-art objective metrics for omnidirectional visual content. Objective metrics specifically designed for 360-degree content do not outperform conventional methods designed for 2D images.",
"title": ""
},
{
"docid": "563c0eeaeaaf4cbb005d97814be35aea",
"text": "Current multiprocessor systems execute parallel and concurrent software nondeterministically: even when given precisely the same input, two executions of the same program may produce different output. This severely complicates debugging, testing, and automatic replication for fault-tolerance. Previous efforts to address this issue have focused primarily on record and replay, but making execution actually deterministic would address the problem at the root. Our goals in this work are twofold: (1) to provide fully deterministic execution of arbitrary, unmodified, multithreaded programs as an OS service; and (2) to make all sources of intentional nondeterminism, such as network I/O, be explicit and controllable. To this end we propose a new OS abstraction, the Deterministic Process Group (DPG). All communication between threads and processes internal to a DPG happens deterministically, including implicit communication via sharedmemory accesses, as well as communication via OS channels such as pipes, signals, and the filesystem. To deal with fundamentally nondeterministic external events, our abstraction includes the shim layer, a programmable interface that interposes on all interaction between a DPG and the external world, making determinism useful even for reactive applications. We implemented the DPG abstraction as an extension to Linux and demonstrate its benefits with three use cases: plain deterministic execution; replicated execution; and record and replay by logging just external input. We evaluated our implementation on both parallel and reactive workloads, including Apache, Chromium, and PARSEC.",
"title": ""
},
{
"docid": "adc9f2a82ed4bccd2405eaf95d026962",
"text": "Each corner of the inhabited world is imaged from multiple viewpoints with increasing frequency. Online map services like Google Maps or Here Maps provide direct access to huge amounts of densely sampled, georeferenced images from street view and aerial perspective. There is an opportunity to design computer vision systems that will help us search, catalog and monitor public infrastructure, buildings and artifacts. We explore the architecture and feasibility of such a system. The main technical challenge is combining test time information from multiple views of each geographic location (e.g., aerial and street views). We implement two modules: det2geo, which detects the set of locations of objects belonging to a given category, and geo2cat, which computes the fine-grained category of the object at a given location. We introduce a solution that adapts state-of the-art CNN-based object detectors and classifiers. We test our method on \"Pasadena Urban Trees\", a new dataset of 80,000 trees with geographic and species annotations, and show that combining multiple views significantly improves both tree detection and tree species classification, rivaling human performance.",
"title": ""
},
{
"docid": "aad7697ce9d9af2b49cd3a46e441ef8e",
"text": "Soft pneumatic actuators (SPAs) are versatile robotic components enabling diverse and complex soft robot hardware design. However, due to inherent material characteristics exhibited by their primary constitutive material, silicone rubber, they often lack robustness and repeatability in performance. In this article, we present a novel SPA-based bending module design with shell reinforcement. The bidirectional soft actuator presented here is enveloped in a Yoshimura patterned origami shell, which acts as an additional protection layer covering the SPA while providing specific bending resilience throughout the actuator’s range of motion. Mechanical tests are performed to characterize several shell folding patterns and their effect on the actuator performance. Details on design decisions and experimental results using the SPA with origami shell modules and performance analysis are presented; the performance of the bending module is significantly enhanced when reinforcement is provided by the shell. With the aid of the shell, the bending module is capable of sustaining higher inflation pressures, delivering larger blocked torques, and generating the targeted motion trajectory.",
"title": ""
},
{
"docid": "5d98548bc4f65d66a8ece7e70cb61bc4",
"text": "0140-3664/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.comcom.2011.09.003 ⇑ Corresponding author. Tel.: +86 10 62283240. E-mail address: liwenmin02@hotmail.com (W. Li). Value-added applications in vehicular ad hoc network (VANET) come with the emergence of electronic trading. The restricted connectivity scenario in VANET, where the vehicle cannot communicate directly with the bank for authentication due to the lack of internet access, opens up new security challenges. Hence a secure payment protocol, which meets the additional requirements associated with VANET, is a must. In this paper, we propose an efficient and secure payment protocol that aims at the restricted connectivity scenario in VANET. The protocol applies self-certified key agreement to establish symmetric keys, which can be integrated with the payment phase. Thus both the computational cost and communication cost can be reduced. Moreover, the protocol can achieve fair exchange, user anonymity and payment security. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a72888742fafe05d5525890fd8187e80",
"text": "The uses of breeding programs for the Pacific white shrimp [Penaeus (Litopenaeus) vannamei] based on mixed linear models with pedigreed data are described. The application of these classic breeding methods yielded continuous progress of great value to increase the profitability of the shrimp industry in several countries. Recent advances in such areas as genomics in shrimp will allow for the development of new breeding programs in the near future that will increase genetic progress. In particular, these novel techniques may help increase disease resistance to specific emerging diseases, which is today a very important component of shrimp breeding programs. Thanks to increased selection accuracy, simulated genetic advance using genomic selection for survival to a disease challenge was up to 2.6 times that of phenotypic sib selection.",
"title": ""
}
] |
scidocsrr
|
dc5485657eed24774b979e7a98eb620f
|
Ch2R: A Chinese Chatter Robot for Online Shopping Guide
|
[
{
"docid": "16ccacd0f59bd5e307efccb9f15ac678",
"text": "This document presents the results from Inst. of Computing Tech., CAS in the ACLSIGHAN-sponsored First International Chinese Word Segmentation Bakeoff. The authors introduce the unified HHMM-based frame of our Chinese lexical analyzer ICTCLAS and explain the operation of the six tracks. Then provide the evaluation results and give more analysis. Evaluation on ICTCLAS shows that its performance is competitive. Compared with other system, ICTCLAS has ranked top both in CTB and PK closed track. In PK open track, it ranks second position. ICTCLAS BIG5 version was transformed from GB version only in two days; however, it achieved well in two BIG5 closed tracks. Through the first bakeoff, we could learn more about the development in Chinese word segmentation and become more confident on our HHMM-based approach. At the same time, we really find our problems during the evaluation. The bakeoff is interesting and helpful.",
"title": ""
}
] |
[
{
"docid": "89297a4aef0d3251e8d947ccc2acacc7",
"text": "We propose a novel probabilistic framework for learning visual models of 3D object categories by combining appearance information and geometric constraints. Objects are represented as a coherent ensemble of parts that are consistent under 3D viewpoint transformations. Each part is a collection of salient image features. A generative framework is used for learning a model that captures the relative position of parts within each of the discretized viewpoints. Contrary to most of the existing mixture of viewpoints models, our model establishes explicit correspondences of parts across different viewpoints of the object class. Given a new image, detection and classification are achieved by determining the position and viewpoint of the model that maximize recognition scores of the candidate objects. Our approach is among the first to propose a generative probabilistic framework for 3D object categorization. We test our algorithm on the detection task and the viewpoint classification task by using “car” category from both the Savarese et al. 2007 and PASCAL VOC 2006 datasets. We show promising results in both the detection and viewpoint classification tasks on these two challenging datasets.",
"title": ""
},
{
"docid": "4a89f20c4b892203be71e3534b32449c",
"text": "This paper draws together knowledge from a variety of fields to propose that innovation management can be viewed as a form of organisational capability. Excellent companies invest and nurture this capability, from which they execute effective innovation processes, leading to innovations in new product, services and processes, and superior business performance results. An extensive review of the literature on innovation management, along with a case study of Cisco Systems, develops a conceptual model of the firm as an innovation engine. This new operating model sees substantial investment in innovation capability as the primary engine for wealth creation, rather than the possession of physical assets. Building on the dynamic capabilities literature, an “innovation capability” construct is proposed with seven elements. These are vision and strategy, harnessing the competence base, organisational intelligence, creativity and idea management, organisational structures and systems, culture and climate, and management of technology.",
"title": ""
},
{
"docid": "3f83d41f66b2c3b6b62afb3d3a3d8562",
"text": "Many recommendation algorithms suffer from popularity bias in their output: popular items are recommended frequently and less popular ones rarely, if at all. However, less popular, long-tail items are precisely those that are often desirable recommendations. In this paper, we introduce a flexible regularization-based framework to enhance the long-tail coverage of recommendation lists in a learning-to-rank algorithm. We show that regularization provides a tunable mechanism for controlling the trade-off between accuracy and coverage. Moreover, the experimental results using two data sets show that it is possible to improve coverage of long tail items without substantial loss of ranking performance.",
"title": ""
},
{
"docid": "11538da6cfda3a81a7ddec0891aae1d9",
"text": "This work presents a dataset and annotation scheme for the new task of identifying “good” conversations that occur online, which we call ERICs: Engaging, Respectful, and/or Informative Conversations. We develop a taxonomy to reflect features of entire threads and individual comments which we believe contribute to identifying ERICs; code a novel dataset of Yahoo News comment threads (2.4k threads and 10k comments) and 1k threads from the Internet Argument Corpus; and analyze the features characteristic of ERICs. This is one of the largest annotated corpora of online human dialogues, with the most detailed set of annotations. It will be valuable for identifying ERICs and other aspects of argumentation, dialogue, and discourse.",
"title": ""
},
{
"docid": "6a993cdfbb701b43bb1cf287380e5b2e",
"text": "There is a growing need for real-time human pose estimation from monocular RGB images in applications such as human computer interaction, assisted living, video surveillance, people tracking, activity recognition and motion capture. For the task, depth sensors and multi-camera systems are usually more expensive and difficult to set up than conventional RGB video cameras. Recent advances in convolutional neural network research have allowed to replace of traditional methods with more efficient convolutional neural network based methods in many computer vision tasks. This thesis presents a method for real-time multi-person human pose estimation from video by utilizing convolutional neural networks. The method is aimed for use case specific applications, where good accuracy is essential and variation of the background and poses is limited. This enables to use a generic network architecture, which is both accurate and fast. The problem is divided into two phases: (1) pretraining and (2) fine-tuning. In pretraining, the network is learned with highly diverse input data from publicly available datasets, while in fine-tuning it is trained with application specific data recorded with Kinect. The method considers the whole system, including person detector, pose estimator and an automatic way to record application specific training material for fine-tuning. The method can be also thought of as a replacement for Kinect, and it can be used for higher level tasks such as gesture control, games, person tracking and action recognition.",
"title": ""
},
{
"docid": "b4ecf497c8240a48a6e60aef400d0e1e",
"text": "Skin color diversity is the most variable and noticeable phenotypic trait in humans resulting from constitutive pigmentation variability. This paper will review the characterization of skin pigmentation diversity with a focus on the most recent data on the genetic basis of skin pigmentation, and the various methodologies for skin color assessment. Then, melanocyte activity and amount, type and distribution of melanins, which are the main drivers for skin pigmentation, are described. Paracrine regulators of melanocyte microenvironment are also discussed. Skin response to sun exposure is also highly dependent on color diversity. Thus, sensitivity to solar wavelengths is examined in terms of acute effects such as sunburn/erythema or induced-pigmentation but also long-term consequences such as skin cancers, photoageing and pigmentary disorders. More pronounced sun-sensitivity in lighter or darker skin types depending on the detrimental effects and involved wavelengths is reviewed.",
"title": ""
},
{
"docid": "6c87cff16fb85eaa02c377fa047346bb",
"text": "BACKGROUND\n: Arterial and venous thoracic outlet syndrome (TOS) were recognized in the late 1800s and neurogenic TOS in the early 1900s. Diagnosis and treatment of the 2 vascular forms of TOS are generally accepted in all medical circles. On the other hand, neurogenic TOS is more difficult to diagnose because there is no standard objective test to confirm clinical impressions.\n\n\nREVIEW SUMMARY\n: The clinical features of arterial, venous, and neurogenic TOS are described. Because neurogenic TOS is by far the most common type, the pathology, pathophysiology, diagnostic tests, differential and associate diagnoses, and treatment are detailed and discussed. The controversial area of objective and subjective diagnostic criteria is addressed.\n\n\nCONCLUSION\n: Arterial and venous TOS are usually not difficult to recognize and the diagnosis can be confirmed by angiography. The diagnosis of neurogenic TOS is more challenging because its symptoms of nerve compression are not unique. The clinical diagnosis relies on documenting several positive findings on physical examination. To date there is still no reliable objective test to confirm the diagnosis, but measurements of the medial antebrachial cutaneous nerve appear promising.",
"title": ""
},
{
"docid": "15c715c3da3883e363aa8e442e903269",
"text": "A supervised learning rule for Spiking Neural Networks (SNNs) is presented that can cope with neurons that spike multiple times. The rule is developed by extending the existing SpikeProp algorithm which could only be used for one spike per neuron. The problem caused by the discontinuity in the spike process is counteracted with a simple but effective rule, which makes the learning process more efficient. Our learning rule is successfully tested on a classification task of Poisson spike trains. We also applied the algorithm on a temporal version of the XOR problem and show that it is possible to learn this classical problem using only one spiking neuron making use of a hairtrigger situation.",
"title": ""
},
{
"docid": "5c05b2d2086125bc8c6364b58c37971a",
"text": "In this exploratory field-study, we examined how normative messages (i.e., activating an injunctive norm, personal norm, or both) could encourage shoppers to use fewer free plastic bags for their shopping in addition to the supermarket‘s standard environmental message aimed at reducing plastic bags. In a one-way subjects-design (N = 200) at a local supermarket, we showed that shoppers used significantly fewer free plastic bags in the injunctive, personal and combined normative message condition than in the condition where only an environmental message was present. The combined normative message did result in the smallest uptake of free plastic bags compared to the injunctive and personal normative-only message, although these differences were not significant. Our findings imply that re-wording the supermarket‘s environmental message by including normative information could be a promising way to reduce the use of free plastic bags, which will ultimately benefit the environment.",
"title": ""
},
{
"docid": "7aff3e7bac49208478f2979ca591e059",
"text": "The postulate of independence of cause and mechanism (ICM) has recently led to several new causal discovery algorithms. The interpretation of independence and the way it is utilized, however, varies across these methods. Our aim in this paper is to propose a group theoretic framework for ICM to unify and generalize these approaches. In our setting, the cause-mechanism relationship is assessed by perturbing it with random group transformations. We show that the group theoretic view encompasses previous ICM approaches and provides a very general tool to study the structure of data generating mechanisms with direct applications to machine learning.",
"title": ""
},
{
"docid": "8f1cb692121899bb63e98f9a6ab3000e",
"text": "Magnet material prices has become an uncertain factor for electric machine development. Most of all, the output of ironless axial flux motors equipped with Halbach magnet arrays depend on the elaborated magnetic flux. Therefore, possibilities to reduce the manufacturing cost without negatively affecting the performance are studied in this paper. Both magnetostatic and transient 3D finite element analyses are applied to compare flux density distribution, elaborated output torque and induced back EMF. It is shown, that the proposed magnet shapes and magnetization pattern meet the requirements. Together with the assembly and measurements of functional linear Halbach magnet arrays, the prerequisite for the manufacturing of axial magnet arrays for an ironless in-wheel hub motor are given.",
"title": ""
},
{
"docid": "112ec676f74c22393d06bc23eaae50d8",
"text": "Multi-user multiple-input multiple-output (MU-MIMO) is the latest communication technology that promises to linearly increase the wireless capacity by deploying more antennas on access points (APs). However, the large number of MIMO antennas will generate a huge amount of digital signal samples in real time. This imposes a grand challenge on the AP design by multiplying the computation and the I/O requirements to process the digital samples. This paper presents BigStation, a scalable architecture that enables realtime signal processing in large-scale MIMO systems which may have tens or hundreds of antennas. Our strategy to scale is to extensively parallelize the MU-MIMO processing on many simple and low-cost commodity computing devices. Our design can incrementally support more antennas by proportionally adding more computing devices. To reduce the overall processing latency, which is a critical constraint for wireless communication, we parallelize the MU-MIMO processing with a distributed pipeline based on its computation and communication patterns. At each stage of the pipeline, we further use data partitioning and computation partitioning to increase the processing speed. As a proof of concept, we have built a BigStation prototype based on commodity PC servers and standard Ethernet switches. Our prototype employs 15 PC servers and can support real-time processing of 12 software radio antennas. Our results show that the BigStation architecture is able to scale to tens to hundreds of antennas. With 12 antennas, our BigStation prototype can increase wireless capacity by 6.8x with a low mean processing delay of 860μs. While this latency is not yet low enough for the 802.11 MAC, it already satisfies the real-time requirements of many existing wireless standards, e.g., LTE and WCDMA.",
"title": ""
},
{
"docid": "7ebd960866db666093fd61e22be6fe7b",
"text": "The elucidation of molecular targets of bioactive small organic molecules remains a significant challenge in modern biomedical research and drug discovery. This tutorial review summarizes strategies for the derivatization of bioactive small molecules and their use as affinity probes to identify cellular binding partners. Special emphasis is placed on logistical concerns as well as common problems encountered during such target identification experiments. The roadmap provided is a guide through the process of affinity probe selection, target identification, and downstream target validation.",
"title": ""
},
{
"docid": "92e50fc2351b4a05d573590f3ed05e81",
"text": "OBJECTIVE\nWe examined the effects of sensory-enhanced hatha yoga on symptoms of combat stress in deployed military personnel, compared their anxiety and sensory processing with that of stateside civilians, and identified any correlations between the State-Trait Anxiety Inventory scales and the Adolescent/Adult Sensory Profile quadrants.\n\n\nMETHOD\nSeventy military personnel who were deployed to Iraq participated in a randomized controlled trial. Thirty-five received 3 wk (≥9 sessions) of sensory-enhanced hatha yoga, and 35 did not receive any form of yoga.\n\n\nRESULTS\nSensory-enhanced hatha yoga was effective in reducing state and trait anxiety, despite normal pretest scores. Treatment participants showed significantly greater improvement than control participants on 16 of 18 mental health and quality-of-life factors. We found positive correlations between all test measures except sensory seeking. Sensory seeking was negatively correlated with all measures except low registration, which was insignificant.\n\n\nCONCLUSION\nThe results support using sensory-enhanced hatha yoga for proactive combat stress management.",
"title": ""
},
{
"docid": "4c4c25aba1600869f7899e20446fd75f",
"text": "This paper presents GRAPE, a parallel system for graph computations. GRAPE differs from prior systems in its ability to parallelize existing sequential graph algorithms as a whole. Underlying GRAPE are a simple programming model and a principled approach, based on partial evaluation and incremental computation. We show that sequential graph algorithms can be \"plugged into\" GRAPE with minor changes, and get parallelized. As long as the sequential algorithms are correct, their GRAPE parallelization guarantees to terminate with correct answers under a monotonic condition. Moreover, we show that algorithms in MapReduce, BSP and PRAM can be optimally simulated on GRAPE. In addition to the ease of programming, we experimentally verify that GRAPE achieves comparable performance to the state-of-the-art graph systems, using real-life and synthetic graphs.",
"title": ""
},
{
"docid": "7e4c00d8f17166cbfb3bdac8d5e5ad09",
"text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility.",
"title": ""
},
{
"docid": "03826954a304a4d6bdb2c1f55bbe8001",
"text": "This paper gives an overview of the channel access methods of three wireless technologies that are likely to be used in the environment of vehicle networks: IEEE 802.15.4, IEEE 802.11 and Bluetooth. Researching the coexistence of IEEE 802.15.4 with IEEE 802.11 and Bluetooth, results of experiments conducted in a radio frequency anechoic chamber are presented. The power densities of the technologies on a single IEEE 802.15.4 channel are compared. It is shown that the pure existence of an IEEE 802.11 access point leads to collisions due to different timing scales. Furthermore, the packet drop rate caused by Bluetooth is analyzed and an estimation formula for it is given.",
"title": ""
},
{
"docid": "d2fb10bdbe745ace3a2512ccfa414d4c",
"text": "In cloud computing environment, especially in big data era, adversary may use data deduplication service supported by the cloud service provider as a side channel to eavesdrop users' privacy or sensitive information. In order to tackle this serious issue, in this paper, we propose a secure data deduplication scheme based on differential privacy. The highlights of the proposed scheme lie in constructing a hybrid cloud framework, using convergent encryption algorithm to encrypt original files, and introducing differential privacy mechanism to resist against the side channel attack. Performance evaluation shows that our scheme is able to effectively save network bandwidth and disk storage space during the processes of data deduplication. Meanwhile, security analysis indicates that our scheme can resist against the side channel attack and related files attack, and prevent the disclosure of privacy information.",
"title": ""
},
{
"docid": "e88def1e0d709047f910b7d5d2319508",
"text": "This paper presents an asymmetrical control with phase lock loop for series resonant inverters. This control strategy is used in full-bridge topologies for induction cookers. The operating frequency is automatically tracked to maintain a small constant lagging phase angle when load parameters change. The switching loss is minimized by operating the IGBT in the zero voltage resonance modes. The output power can be adjusted by using asymmetrical voltage cancellation control which is regulated with a PWM duty cycle control strategy.",
"title": ""
},
{
"docid": "e0fe5ab372bd6d4e39dfc6974832da34",
"text": "Purpose – The purpose of this paper is to determine the factors that influence the intention to use and actual usage of a G2B system such as electronic procurement system (EPS) by various ministries in the Government of Malaysia. Design/methodology/approach – The research uses an extension of DeLone and McLean’s model of IS success by including trust, facilitating conditions, and web design quality. The model is tested using an empirical approach. A questionnaire was designed and responses from 358 users from various ministries were collected and analyzed using structural equation modeling (SEM). Findings – The findings of the study indicate that: perceived usefulness, perceived ease of use, assurance of service by service providers, responsiveness of service providers, facilitating conditions, web design (service quality) are strongly linked to intention to use EPS; and intention to use is strongly linked to actual usage behavior. Practical implications – Typically, governments of developing countries spend millions of dollars to implement e-government systems. The investments can be considered useful only if the usage rate is high. The study can help ICT decision makers in government to recognize the critical factors that are responsible for the success of a G2B system like EPS. Originality/value – The model used in the study is one of the few models designed to determine factors influencing intention to use and actual usage behavior in a G2B system in a fast-developing country like Malaysia.",
"title": ""
}
] |
scidocsrr
|
8c8a496884951e56048ff41b31f7f3c1
|
Super-resolution of compressed videos using convolutional neural networks
|
[
{
"docid": "4746d9ecd4773fa35d516bd40dbfb64b",
"text": "Deep learning has been successfully applied to image super resolution (SR). In this paper, we propose a deep joint super resolution (DJSR) model to exploit both external and self similarities for SR. A Stacked Denoising Convolutional Auto Encoder (SDCAE) is first pre-trained on external examples with proper data augmentations. It is then fine-tuned with multi-scale self examples from each input, where the reliability of self examples is explicitly taken into account. We also enhance the model performance by sub-model training and selection. The DJSR model is extensively evaluated and compared with state-of-the-arts, and show noticeable performance improvements both quantitatively and perceptually on a wide range of images.",
"title": ""
},
{
"docid": "fb1c9fcea2f650197b79711606d4678b",
"text": "Self-similarity based super-resolution (SR) algorithms are able to produce visually pleasing results without extensive training on external databases. Such algorithms exploit the statistical prior that patches in a natural image tend to recur within and across scales of the same image. However, the internal dictionary obtained from the given image may not always be sufficiently expressive to cover the textural appearance variations in the scene. In this paper, we extend self-similarity based SR to overcome this drawback. We expand the internal patch search space by allowing geometric variations. We do so by explicitly localizing planes in the scene and using the detected perspective geometry to guide the patch search process. We also incorporate additional affine transformations to accommodate local shape variations. We propose a compositional model to simultaneously handle both types of transformations. We extensively evaluate the performance in both urban and natural scenes. Even without using any external training databases, we achieve significantly superior results on urban scenes, while maintaining comparable performance on natural scenes as other state-of-the-art SR algorithms.",
"title": ""
}
] |
[
{
"docid": "9aa24f6e014ac5104c5b9ff68dc45576",
"text": "The development of social networks has led the public in general to find easy accessibility for communication with respect to rapid communication to each other at any time. Such services provide the quick transmission of information which is its positive side but its negative side needs to be kept in mind thereby misinformation can spread. Nowadays, in this era of digitalization, the validation of such information has become a real challenge, due to lack of information authentication method. In this paper, we design a framework for the rumors detection from the Facebook events data, which is based on inquiry comments. The proposed Inquiry Comments Detection Model (ICDM) identifies inquiry comments utilizing a rule-based approach which entails regular expressions to categorize the sentences as an inquiry into those starting with an intransitive verb (like is, am, was, will, would and so on) and also those sentences ending with a question mark. We set the threshold value to compare with the ratio of Inquiry to English comments and identify the rumors. We verified the proposed ICDM on labeled data, collected from snopes.com. Our experiments revealed that the proposed method achieved considerably well in comparison to the existing machine learning techniques. The proposed ICDM approach attained better results of 89% precision, 77% recall, and 82% F-measure. We are of the opinion that our experimental findings of this study will be useful for the worldwide adoption. Keywords—Social networks; rumors; inquiry comments; question identification",
"title": ""
},
{
"docid": "6101b3c76db195a68fc46cb99c0cda1c",
"text": "We review two clustering algorithms (hard c-means and single linkage) and three indexes of crisp cluster validity (Hubert's statistics, the Davies-Bouldin index, and Dunn's index). We illustrate two deficiencies of Dunn's index which make it overly sensitive to noisy clusters and propose several generalizations of it that are not as brittle to outliers in the clusters. Our numerical examples show that the standard measure of interset distance (the minimum distance between points in a pair of sets) is the worst (least reliable) measure upon which to base cluster validation indexes when the clusters are expected to form volumetric clouds. Experimental results also suggest that intercluster separation plays a more important role in cluster validation than cluster diameter. Our simulations show that while Dunn's original index has operational flaws, the concept it embodies provides a rich paradigm for validation of partitions that have cloud-like clusters. Five of our generalized Dunn's indexes provide the best validation results for the simulations presented.",
"title": ""
},
{
"docid": "b25cfcd6ceefffe3039bb5a6a53e216c",
"text": "With the increasing applications in the domains of ubiquitous and context-aware computing, Internet of Things (IoT) are gaining importance. In IoTs, literally anything can be part of it, whether it is sensor nodes or dumb objects, so very diverse types of services can be produced. In this regard, resource management, service creation, service management, service discovery, data storage, and power management would require much better infrastructure and sophisticated mechanism. The amount of data IoTs are going to generate would not be possible for standalone power-constrained IoTs to handle. Cloud computing comes into play here. Integration of IoTs with cloud computing, termed as Cloud of Things (CoT) can help achieve the goals of envisioned IoT and future Internet. This IoT-Cloud computing integration is not straight-forward. It involves many challenges. One of those challenges is data trimming. Because unnecessary communication not only burdens the core network, but also the data center in the cloud. For this purpose, data can be preprocessed and trimmed before sending to the cloud. This can be done through a Smart Gateway, accompanied with a Smart Network or Fog Computing. In this paper, we have discussed this concept in detail and present the architecture of Smart Gateway with Fog Computing. We have tested this concept on the basis of Upload Delay, Synchronization Delay, Jitter, Bulk-data Upload Delay, and Bulk-data Synchronization Delay.",
"title": ""
},
{
"docid": "6669f61c302d79553a3e49a4f738c933",
"text": "Imagining urban space as being comfortable or fearful is studied as an effect of people’s connections to their residential area communication infrastructure. Geographic Information System (GIS) modeling and spatial-statistical methods are used to process 215 mental maps obtained from respondents to a multilingual survey of seven ethnically marked residential communities of Los Angeles. Spatial-statistical analyses reveal that fear perceptions of Los Angeles urban space are not associated with commonly expected causes of fear, such as high crime victimization likelihood. The main source of discomfort seems to be presence of non-White and non-Asian populations. Respondents more strongly connected to television and interpersonal communication channels are relatively more fearful of these populations than those less strongly connected. Theoretical, methodological, and community-building policy implications are discussed.",
"title": ""
},
{
"docid": "8185da1a497e25f0c50e789847b6bd52",
"text": "We address numerical versus experimental design and testing of miniature implantable antennas for biomedical telemetry in the medical implant communications service band (402-405 MHz). A model of a novel miniature antenna is initially proposed for skin implantation, which includes varying parameters to deal with fabrication-specific details. An iterative design-and-testing methodology is further suggested to determine the parameter values that minimize deviations between numerical and experimental results. To assist in vitro testing, a low-cost technique is proposed for reliably measuring the electric properties of liquids without requiring commercial equipment. Validation is performed within a specific prototype fabrication/testing approach for miniature antennas. To speed up design while providing an antenna for generic skin implantation, investigations are performed inside a canonical skin-tissue model. Resonance, radiation, and safety performance of the proposed antenna is finally evaluated inside an anatomical head model. This study provides valuable insight into the design of implantable antennas, assessing the significance of fabrication-specific details in numerical simulations and uncertainties in experimental testing for miniature structures. The proposed methodology can be applied to optimize antennas for several fabrication/testing approaches and biotelemetry applications.",
"title": ""
},
{
"docid": "bdbb97522eea6cb9f8e11f07c2e83282",
"text": "Middle ear surgery is strongly influenced by anatomical and functional characteristics of the middle ear. The complex anatomy means a challenge for the otosurgeon who moves between preservation or improvement of highly important functions (hearing, balance, facial motion) and eradication of diseases. Of these, perforations of the tympanic membrane, chronic otitis media, tympanosclerosis and cholesteatoma are encountered most often in clinical practice. Modern techniques for reconstruction of the ossicular chain aim for best possible hearing improvement using delicate alloplastic titanium prostheses, but a number of prosthesis-unrelated factors work against this intent. Surgery is always individualized to the case and there is no one-fits-all strategy. Above all, both middle ear diseases and surgery can be associated with a number of complications; the most important ones being hearing deterioration or deafness, dizziness, facial palsy and life-threatening intracranial complications. To minimize risks, a solid knowledge of and respect for neurootologic structures is essential for an otosurgeon who must train him- or herself intensively on temporal bones before performing surgery on a patient.",
"title": ""
},
{
"docid": "6955b9d5e986913ff0f100930ab49775",
"text": "Although pedestrians have individual preferences, aims, and destinations, the dynamics of pedestrian crowds is surprisingly predictable. Pedestrians can move freely only at small pedestrian densities. Otherwise their motion is affected by repulsive interactions with other pedestrians, giving rise to self-organization phenomena. Examples of the resulting patterns of motion are separate lanes of uniform walking direction in crowds of oppositely moving pedestrians or oscillations of the passing direction at bottlenecks. If pedestrians leave footprints on deformable ground (for example, in green spaces such as public parks) this additionally causes attractive interactions which are mediated by modifications of their environment. In such cases, systems of pedestrian trails will evolve over time. The corresponding computer simulations are a valuable tool for developing optimized pedestrian facilities and way systems. DOI:10.1068/b2697",
"title": ""
},
{
"docid": "dd2819d0413a1d41c602aef4830888a4",
"text": "Presented here is a fast method that combines curve matching techniques with a surface matching algorithm to estimate the positioning and respective matching error for the joining of three-dimensional fragmented objects. Furthermore, this paper describes how multiple joints are evaluated and how the broken artefacts are clustered and transformed to form potential solutions of the assemblage problem. q 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "97f73a46f4bc44cdcd1f36dbb1d17197",
"text": "The purpose of this study is to investigate the relationship between the total quality management (TQM) practice and the continuous improvement of international project management (CIIPM) practice. Based on a literature review and qualitative interviews with TQM and project management experts, four hypotheses are posed on how TQM elements affect CIIPM. A cross-sectional survey collected from over 100 mid to senior level international managers is used to validate these hypotheses. The study suggests that the relationship between ‘soft’ TQM elements and CIIPM is more significant than the relationship between ‘hard’ TQM elements and CIIPM. q 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a8ff2ea9e15569de375c34ef252d0dad",
"text": "BIM (Building Information Modeling) has been recently implemented by many Architecture, Engineering, and Construction firms due to its productivity gains and long term benefits. This paper presents the development and implementation of a sustainability assessment framework for an architectural design using BIM technology in extracting data from the digital building model needed for determining the level of sustainability. The sustainability assessment is based on the LEED (Leadership in Energy and Environmental Design) Green Building Rating System, a widely accepted national standards for sustainable building design in the United States. The architectural design of a hotel project is used as a case study to verify the applicability of the framework.",
"title": ""
},
{
"docid": "ad60e181edbf2500da6f78b96fd513d1",
"text": "While vendors on the Internet may have enjoyed an increase in the number of clicks on their Web sites, they have also faced disappointments in converting these clicks into purchases. Lack of trust is identified as one of the greatest barriers inhibiting Internet transactions. Thus, it is essential to understand how trust is created and how it evolves in the Electronic Commerce (EC) context throughout a customer’s purchase experience with an Internet store. As the first step in studying the dynamics of online trust building, this research aims to compare online trust-building factors between potential customers and repeat customers. For this purpose, we classify trust in an Internet store into potential customer trust and repeat customer trust, depending on the customer’s purchase experience with the store. We find that trust building differs between potential customers and repeat customers in terms of antecedents. We also compare the effects of shared antecedents on trust between potential customers and repeat customers. We find that customer satisfaction has a stronger effect on trust building for repeat ∗ Soon Ang was the accepting senior editor for this paper. Harrison McKnight and Suzanne Rivard were reviewers for this paper. Kim, Xu, and Koh/A Comparison of Online Trust Building Factors Journal of the Association for Information Systems Vol. 5 No. 10, pp.392-420/October 2004 393 customers than other antecedents. We discuss the theoretical reasons for the differences and the implications of our research.",
"title": ""
},
{
"docid": "72a01822f817e238812f9722629cf4dc",
"text": "Machine learning is increasingly used in high impact applications such as prediction of hospital re-admission, cancer screening or bio-medical research applications. As predictions become increasingly accurate, practitioners may be interested in identifying actionable changes to inputs in order to alter their class membership. For example, a doctor might want to know what changes to a patient’s status would predict him/her to not be re-admitted to the hospital soon. Szegedy et al. (2013b) demonstrated that identifying such changes can be very hard in image classification tasks. In fact, tiny, imperceptible changes can result in completely different predictions without any change to the true class label of the input. In this paper we ask the question if we can make small but meaningful changes in order to truly alter the class membership of images from a source class to a target class. To this end we propose deep manifold traversal, a method that learns the manifold of natural images and provides an effective mechanism to move images from one area (dominated by the source class) to another (dominated by the target class).The resulting algorithm is surprisingly effective and versatile. It allows unrestricted movements along the image manifold and only requires few images from source and target to identify meaningful changes. We demonstrate that the exact same procedure can be used to change an individual’s appearance of age, facial expressions or even recolor black and white images.",
"title": ""
},
{
"docid": "ccf6f5d7b73054752b45d753454130f7",
"text": "Emerging non-volatile memories such as phase-change RAM (PCRAM) offer significant advantages but suffer from write endurance problems. However, prior solutions are oblivious to soft errors (recently raised as a potential issue even for PCRAM) and are incompatible with high-level fault tolerance techniques such as chipkill. To additionally address such failures requires unnecessarily high costs for techniques that focus singularly on wear-out tolerance. In this paper, we propose fine-grained remapping with ECC and embedded pointers (FREE-p). FREE-p remaps fine-grained worn-out NVRAM blocks without requiring large dedicated storage. We discuss how FREE-p protects against both hard and soft errors and can be extended to chipkill. Further, FREE-p can be implemented purely in the memory controller, avoiding custom NVRAM devices. In addition to these benefits, FREE-p increases NVRAM lifetime by up to 26% over the state-of-the-art even with severe process variation while performance degradation is less than 2% for the initial 7 years.",
"title": ""
},
{
"docid": "5e6bc57a2888ac77b93470518d96dda3",
"text": "Strategies for hybrid locomotion such as jumping and gliding are used in nature by many different animals for traveling over rough terrain. This combination of locomotion modes also allows small robots to overcome relatively large obstacles at a minimal energetic cost compared to wheeled or flying robots. In this chapter we describe the development of a novel palm sized robot of 10 g that is able to autonomously deploy itself from ground or walls, open its wings, recover in midair and subsequently perform goal-directed gliding. In particular, we focus on the subsystems that will in the future be integrated such as a 1.5 g microglider that can perform phototaxis; a 4.5 g, bat-inspired, wing folding mechanism that can unfold in only 50 ms; and a locust-inspired, 7 g robot that can jump more than 27 times its own height. We also review the relevance of jumping and gliding for living and robotic systems and we highlight future directions for the realization of a fully integrated robot.",
"title": ""
},
{
"docid": "f3590467f740bc575e995389c9cc3684",
"text": "Action recognition has become a very important topic in computer vision, with many fundamental applications, in robotics, video surveillance, human–computer interaction, and multimedia retrieval among others and a large variety of approaches have been described. The purpose of this survey is to give an overview and categorization of the approaches used. We concentrate on approaches that aim on classification of full-body motions, such as kicking, punching, and waving, and we categorize them according to how they represent the spatial and temporal structure of actions; how they segment actions from an input stream of visual data; and how they learn a view-invariant representation of actions. 2010 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "aebf00f667b9e0aa23bf8484fc9e2cfd",
"text": "Patients' medical conditions often evolve in complex and seemingly unpredictable ways. Even within a relatively narrow and well-defined episode of care, variations between patients in both their progression and eventual outcome can be dramatic. Understanding the patterns of events observed within a population that most correlate with differences in outcome is therefore an important task in many types of studies using retrospective electronic health data. In this paper, we present a method for interactive pattern mining and analysis that supports ad hoc visual exploration of patterns mined from retrospective clinical patient data. Our approach combines (1) visual query capabilities to interactively specify episode definitions, (2) pattern mining techniques to help discover important intermediate events within an episode, and (3) interactive visualization techniques that help uncover event patterns that most impact outcome and how those associations change over time. In addition to presenting our methodology, we describe a prototype implementation and present use cases highlighting the types of insights or hypotheses that our approach can help uncover.",
"title": ""
},
{
"docid": "c1645ba8b221bf6e15fdfa1842ef8017",
"text": "In this paper a scalable and flexible Architecture for real-time mission planning and dynamic agent-to-task assignment for a swarm of Unmanned Aerial Vehicles (UAV) is presented. The proposed mission planning architecture consists of a Global Mission Planner (GMP) which is responsible of assigning and monitoring different high-level missions through an Agent Mission Planner (AMP), which is in charge of providing and monitoring each task of the mission to each UAV in the swarm. The objective of the proposed architecture is to carry out high-level missions such as autonomous multi-agent exploration, automatic target detection and recognition, search and rescue, and other different missions with the ability of dynamically re-adapt the mission in real-time. The proposed architecture has been evaluated in simulation and real indoor flights demonstrating its robustness in different scenarios and its flexibility for real-time mission re-planning and dynamic agent-to-task assignment.",
"title": ""
},
{
"docid": "ce82b53bc47ea8ca9c6bdfb5421a5210",
"text": "Max Planck Institute for Biogeochemistry, Hans-Knöll-Strasse 10, 07745 Jena, Germany, German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Deutscher Platz 5, 04103 Leipzig, Germany, Department of Forest Resources, University of Minnesota, St Paul, MN 55108, USA, Department of Computer Science and Engineering, University of Minnesota, Twin Cities, USA, Microsoft Corporation, One Microsoft Way, Redmond, WA 98052, USA, Instituto Multidisciplinario de Biología Vegetal (IMBIV – CONICET) and Departamento de Diversidad Biológica y Ecología, FCEFyN, Universidad Nacional de Córdoba, CC 495, 5000, Córdoba, Argentina, Royal Botanic Gardens Kew, Wakehurst Place, RH17 6TN, UK, Center for Biodiversity Management, Yungaburra 4884, Queensland, Australia, Centre National de la Recherche Scientifique, Grenoble, France, Laboratoire ESE, Université Paris-Sud, UMR 8079 CNRS, UOS, AgroParisTech, 91405 Orsay, France, University of Leipzig, Leipzig, Germany, Department of Biological Sciences, Macquarie University, NSW 2109, Australia, Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Republic of Panama, Hawkesbury Institute for the Environment, University of Western Sydney, Locked Bag 1797, Penrith, NSW 2751 Australia ABSTRACT",
"title": ""
},
{
"docid": "b407a0459cc6e280fef1023fe6a2010d",
"text": "An animal’s ability to navigate through its natural environment is critical to its survival. Navigation can be slow and methodical such as an annual migration, or purely reactive such as an escape response. How sensory input is translated into a fast behavioral output to execute goal oriented locomotion remains elusive. In this dissertation, I aimed to investigate escape response behavior in the nematode C. elegans. It has been shown that the biogenic amine tyramine is essential for the escape response. A tyramine-gated chloride channel, LGC-55, has been revealed to modulate suppression of head oscillations and reversal behavior in response to touch. Here, I discovered key modulators of the tyraminergic signaling pathway through forward and reverse genetic screens using exogenous tyramine drug plates. ser-2, a tyramine activated G proteincoupled receptor mutant, was partially resistant to the paralytic effects of exogenous tyramine on body movements, indicating a role in locomotion behavior. Further analysis revealed that ser-2 is asymmetrically expressed in the VD GABAergic motor neurons, and that SER-2 inhibits neurotransmitter release along the ventral nerve cord. Although overall locomotion was normal in ser-2 mutants, they failed to execute omega turns by fully contracting the ventral musculature. Omega turns allow the animal to reverse and completely change directions away from a predator during the escape response. Furthermore, my studies developed an assay to investigate instantaneous velocity changes during",
"title": ""
},
{
"docid": "a36d019f5016d0e86ac8d7c412a3c9fd",
"text": "Increasing population density in urban centers demands adequate provision of services and infrastructure to meet the needs of city inhabitants, encompassing residents, workers, and visitors. The utilization of information and communications technologies to achieve this objective presents an opportunity for the development of smart cities, where city management and citizens are given access to a wealth of real-time information about the urban environment upon which to base decisions, actions, and future planning. This paper presents a framework for the realization of smart cities through the Internet of Things (IoT). The framework encompasses the complete urban information system, from the sensory level and networking support structure through to data management and Cloud-based integration of respective systems and services, and forms a transformational part of the existing cyber-physical system. This IoT vision for a smart city is applied to a noise mapping case study to illustrate a new method for existing operations that can be adapted for the enhancement and delivery of important city services.",
"title": ""
}
] |
scidocsrr
|
68beb6be387e815698d5e6d4bb9d0d96
|
Multi-sensor self-localization based on Maximally Stable Extremal Regions
|
[
{
"docid": "5e286453dfe55de305b045eaebd5f8fd",
"text": "Target tracking is an important element of surveillance, guidance or obstacle avoidance, whose role is to determine the number, position and movement of targets. The fundamental building block of a tracking system is a filter for recursive state estimation. The Kalman filter has been flogged to death as the work-horse of tracking systems since its formulation in the 60's. In this talk we look beyond the Kalman filter at sequential Monte Carlo methods, collectively referred to as particle filters. Particle filters have become a popular method for stochastic dynamic estimation problems. This popularity can be explained by a wave of optimism among practitioners that traditionally difficult nonlinear/non-Gaussian dynamic estimation problems can now be solved accurately and reliably using this methodology. The computational cost of particle filters have often been considered their main disadvantage, but with ever faster computers and more efficient particle filter algorithms, this argument is becoming less relevant. The talk is organized in two parts. First we review the historical development and current status of particle filtering and its relevance to target tracking. We then consider in detail several tracking applications where conventional (Kalman based) methods appear inappropriate (unreliable or inaccurate) and where we instead need the potential benefits of particle filters. 1 The paper was written together with David Salmond, QinetiQ, UK.",
"title": ""
}
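The abstract above contrasts Kalman filtering with sequential Monte Carlo (particle) filtering for target tracking. As a purely illustrative, minimal sketch (not taken from the talk), the following bootstrap particle filter tracks a 1-D random-walk target from noisy position measurements; the noise levels, prior spread, and particle count are assumed values chosen only for the example.

```python
import numpy as np

def bootstrap_particle_filter(measurements, n_particles=500,
                              process_std=1.0, meas_std=2.0, rng=None):
    """Minimal bootstrap (SIR) particle filter for a 1-D random-walk target."""
    rng = np.random.default_rng(rng)
    particles = rng.normal(0.0, 10.0, n_particles)        # assumed initial prior spread
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in measurements:
        # 1. Propagate particles through the motion model (random walk).
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # 2. Re-weight particles by the Gaussian measurement likelihood.
        weights *= np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        weights += 1e-300                                  # guard against all-zero weights
        weights /= weights.sum()
        # 3. Point estimate: posterior mean.
        estimates.append(np.sum(weights * particles))
        # 4. Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# Example: noisy observations of a slowly drifting target.
true_path = np.cumsum(np.random.default_rng(0).normal(0, 1.0, 50))
observed = true_path + np.random.default_rng(1).normal(0, 2.0, 50)
print(bootstrap_particle_filter(observed)[:5])
```

The same propagate/weight/resample loop generalizes to the nonlinear, non-Gaussian tracking problems the talk argues Kalman-based methods handle poorly.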
] |
[
{
"docid": "861e2a3c19dafdd3273dc718416309c2",
"text": "For the last 40 years high - capacity Unmanned Air Vehicles have been use mostly for military services such as tracking, surveillance, engagement with active weapon or in the simplest term for data acquisition purpose. Unmanned Air Vehicles are also demanded commercially because of their advantages in comparison to manned vehicles such as their low manufacturing and operating cost, configuration flexibility depending on customer request, not risking pilot in the difficult missions. Nevertheless, they have still open issues such as integration to the manned flight air space, reliability and airworthiness. Although Civil Unmanned Air Vehicles comprise 3% of the UAV market, it is estimated that they will reach 10% level within the next 5 years. UAV systems with their useful equipment (camera, hyper spectral imager, air data sensors and with similar equipment) have been in use more and more for civil applications: Tracking and monitoring in the event of agriculture / forest / marine pollution / waste / emergency and disaster situations; Mapping for land registry and cadastre; Wildlife and ecologic monitoring; Traffic Monitoring and; Geology and mine researches. They can bring minimal risk and cost advantage to many civil applications, in which it was risky and costly to use manned air vehicles before. When the cost of Unmanned Air Vehicles designed and produced for military service is taken into account, civil market demands lower cost and original products which are suitable for civil applications. Most of civil applications which are mentioned above require UAVs that are able to take off and land on limited runway, and moreover move quickly in the operation region for mobile applications but hover for immobile measurement and tracking when necessary. This points to a hybrid unmanned vehicle concept optimally, namely the Vertical Take Off and Landing (VTOL) UAVs. At the same time, this system requires an efficient cost solution for applicability / convertibility for different civil applications. It means an Air Vehicle having easily portability of payload depending on application concept and programmability of operation (hover and cruise flight time) specific to the application. The main topic of this project is designing, producing and testing the TURAC VTOL UAV that have the following features : Vertical takeoff and landing, and hovering like helicopter ; High cruise speed and fixed-wing ; Multi-functional and designed for civil purpose ; The project involves two different variants ; The TURAC A variant is a fully electrical platform which includes 2 tilt electric motors in the front, and a fixed electric motor and ducted fan in the rear ; The TURAC B variant uses fuel cells.",
"title": ""
},
{
"docid": "e729c06c5a4153af05740a01509ee5d5",
"text": "Understanding large-scale document collections in an efficient manner is an important problem. Usually, document data are associated with other information (e.g., an author's gender, age, and location) and their links to other entities (e.g., co-authorship and citation networks). For the analysis of such data, we often have to reveal common as well as discriminative characteristics of documents with respect to their associated information, e.g., male- vs. female-authored documents, old vs. new documents, etc. To address such needs, this paper presents a novel topic modeling method based on joint nonnegative matrix factorization, which simultaneously discovers common as well as discriminative topics given multiple document sets. Our approach is based on a block-coordinate descent framework and is capable of utilizing only the most representative, thus meaningful, keywords in each topic through a novel pseudo-deflation approach. We perform both quantitative and qualitative evaluations using synthetic as well as real-world document data sets such as research paper collections and nonprofit micro-finance data. We show our method has a great potential for providing in-depth analyses by clearly identifying common and discriminative topics among multiple document sets.",
"title": ""
},
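The abstract above discovers common and discriminative topics via a joint nonnegative matrix factorization solved by block-coordinate descent. The paper's joint, discriminative formulation is not reproduced here; as a rough illustration of the underlying building block only, this sketch runs plain NMF with multiplicative updates on a toy term-document matrix (the matrix size, topic count, and iteration budget are assumptions).

```python
import numpy as np

def nmf(V, n_topics=3, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF (V approximately equals W @ H) via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n_terms, n_docs = V.shape
    W = rng.random((n_terms, n_topics))   # term-topic weights
    H = rng.random((n_topics, n_docs))    # topic-document weights
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy term-document matrix: rows are terms, columns are documents.
V = np.abs(np.random.default_rng(1).random((20, 10)))
W, H = nmf(V)
print("reconstruction error:", np.linalg.norm(V - W @ H))
```

A joint variant in the spirit of the paper would factorize several document sets at once while sharing some topic columns of W and keeping others set-specific; that coupling is omitted here.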
{
"docid": "73877d224b5bbbde7ea8185284da3c2d",
"text": "With the advancement of web technology and its growth, there is a huge volume of data present in the web for internet users and a lot of data is generated too. Internet has become a platform for online learning, exchanging ideas and sharing opinions. Social networking sites like Twitter, Facebook, Google+ are rapidly gaining popularity as they allow people to share and express their views about topics, have discussion with different communities, or post messages across the world. There has been lot of work in the field of sentiment analysis of twitter data. This survey focuses mainly on sentiment analysis of twitter data which is helpful to analyze the information in the tweets where opinions are highly unstructured, heterogeneous and are either positive or negative, or neutral in some cases. In this paper, we provide a survey and a comparative analyses of existing techniques for opinion mining like machine learning and lexicon-based approaches, together with evaluation metrics. Using various machine learning algorithms like Naive Bayes, Max Entropy, and Support Vector Machine, we provide research on twitter data streams.We have also discussed general challenges and applications of Sentiment Analysis on Twitter.",
"title": ""
},
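The survey above compares lexicon-based methods with supervised classifiers such as Naive Bayes and SVM for tweet sentiment. The snippet below is a minimal, hedged sketch of that kind of supervised baseline using scikit-learn; the tiny inline "tweet" dataset is invented purely for illustration and stands in for a properly labeled corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labeled tweets (illustrative only; replace with a real labeled corpus).
tweets = ["I love this phone", "worst service ever", "pretty good movie",
          "I hate waiting in line", "such a great day", "this update is terrible"]
labels = ["pos", "neg", "pos", "neg", "pos", "neg"]

for clf in (MultinomialNB(), LinearSVC()):
    # Bag-of-ngrams features feed either a Naive Bayes or a linear SVM classifier.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    model.fit(tweets, labels)
    print(type(clf).__name__, model.predict(["what a great phone", "terrible movie"]))
```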
{
"docid": "c1ca3f495400a898da846bdf20d23833",
"text": "It is very useful to integrate human knowledge and experience into traditional neural networks for faster learning speed, fewer training samples and better interpretability. However, due to the obscured and indescribable black box model of neural networks, it is very difficult to design its architecture, interpret its features and predict its performance. Inspired by human visual cognition process, we propose a knowledge-guided semantic computing network which includes two modules: a knowledge-guided semantic tree and a data-driven neural network. The semantic tree is pre-defined to describe the spatial structural relations of different semantics, which just corresponds to the tree-like description of objects based on human knowledge. The object recognition process through the semantic tree only needs simple forward computing without training. Besides, to enhance the recognition ability of the semantic tree in aspects of the diversity, randomicity and variability, we use the traditional neural network to aid the semantic tree to learn some indescribable features. Only in this case, the training process is needed. The experimental results on MNIST and GTSRB datasets show that compared with the traditional data-driven network, our proposed semantic computing network can achieve better performance with fewer training samples and lower computational complexity. Especially, Our model also has better adversarial robustness than traditional neural network with the help of human knowledge.",
"title": ""
},
{
"docid": "9404d1fd58dbd1d83c2d503e54ffd040",
"text": "This work examines the association between the Big Five personality dimensions, the most relevant demographic factors (sex, age and relationship status), and subjective well-being. A total of 236 nursing professionals completed the NEO Five Factor Inventory (NEO-FFI) and the Affect-Balance Scale (ABS). Regression analysis showed personality as one of the most important correlates of subjective well-being, especially through Extraversion and Neuroticism. There was a positive association between Openness to experience and the positive and negative components of affect. Likewise, the most basic demographic variables (sex, age and relationship status) are found to be differentially associated with the different elements of subjective well-being, and the explanation for these associations is highly likely to be found in the links between demographic variables and personality. In the same way as control of the effect of demographic variables is necessary for isolating the effect of personality on subjective well-being, control of personality should permit more accurate analysis of the role of demographic variables in relation to the subjective well-being construct. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "85b1fe5c3d6d68791345d32eda99055b",
"text": "Surgery and other invasive therapies are complex interventions, the assessment of which is challenged by factors that depend on operator, team, and setting, such as learning curves, quality variations, and perception of equipoise. We propose recommendations for the assessment of surgery based on a five-stage description of the surgical development process. We also encourage the widespread use of prospective databases and registries. Reports of new techniques should be registered as a professional duty, anonymously if necessary when outcomes are adverse. Case series studies should be replaced by prospective development studies for early technical modifications and by prospective research databases for later pre-trial evaluation. Protocols for these studies should be registered publicly. Statistical process control techniques can be useful in both early and late assessment. Randomised trials should be used whenever possible to investigate efficacy, but adequate pre-trial data are essential to allow power calculations, clarify the definition and indications of the intervention, and develop quality measures. Difficulties in doing randomised clinical trials should be addressed by measures to evaluate learning curves and alleviate equipoise problems. Alternative prospective designs, such as interrupted time series studies, should be used when randomised trials are not feasible. Established procedures should be monitored with prospective databases to analyse outcome variations and to identify late and rare events. Achievement of improved design, conduct, and reporting of surgical research will need concerted action by editors, funders of health care and research, regulatory bodies, and professional societies.",
"title": ""
},
{
"docid": "080f76412f283fb236c28678bf9dada8",
"text": "We describe a new algorithm for robot localization, efficient both in terms of memory and processing time. It transforms a stream of laser range sensor data into a probabilistic calculation of the robot’s position, using a bidirectional Long Short-Term Memory (LSTM) recurrent neural network (RNN) to learn the structure of the environment and to answer queries such as: in which room is the robot? To achieve this, the RNN builds an implicit map of the environment.",
"title": ""
},
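The abstract above frames robot localization as mapping a stream of laser range scans to a room label with a bidirectional LSTM. Below is a minimal PyTorch sketch of that kind of sequence classifier; the scan dimensionality, hidden size, and number of rooms are assumed values for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BiLSTMLocalizer(nn.Module):
    """Classify which room a robot is in from a sequence of laser range scans."""
    def __init__(self, scan_dim=180, hidden=64, n_rooms=5):
        super().__init__()
        self.lstm = nn.LSTM(scan_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_rooms)   # 2x: forward + backward hidden states

    def forward(self, scans):                        # scans: (batch, time, scan_dim)
        features, _ = self.lstm(scans)               # (batch, time, 2 * hidden)
        return self.head(features[:, -1])            # room logits from the last time step

model = BiLSTMLocalizer()
dummy_scans = torch.randn(8, 30, 180)                # 8 sequences of 30 laser scans each
logits = model(dummy_scans)
print(logits.shape)                                  # torch.Size([8, 5])
```

Training with a cross-entropy loss over room labels would give the probabilistic "which room is the robot in?" output described in the abstract.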
{
"docid": "fdfc8d10002769f2dd2943a4d76fe27d",
"text": "This paper has as a major objective to present a unified overview and derivation of mixedinteger nonlinear programming (MINLP) techniques, Branch and Bound, Outer-Approximation, Generalized Benders and Extended Cutting Plane methods, as applied to nonlinear discrete optimization problems that are expressed in algebraic form. The solution of MINLP problems with convex functions is presented first, followed by a brief discussion on extensions for the nonconvex case. The solution of logic based representations, known as generalized disjunctive programs, is also described. Theoretical properties are presented, and numerical comparisons on a small process network problem.",
"title": ""
},
{
"docid": "18969bed489bb9fa7196634a8086449e",
"text": "A speech recognition model is proposed in which the transformation from an input speech signal into a sequence of phonemes is carried out largely through an active or feedback process. In this process, patterns are generated internally in the analyzer according to an adaptable sequence of instructions until a best match with the input signal is obtained. Details of the process are given, and the areas where further research is needed are indicated.",
"title": ""
},
{
"docid": "9a79af1c226073cc129087695295a4e5",
"text": "This paper presents an effective approach for resume information extraction to support automatic resume management and routing. A cascaded information extraction (IE) framework is designed. In the first pass, a resume is segmented into a consecutive blocks attached with labels indicating the information types. Then in the second pass, the detailed information, such as Name and Address, are identified in certain blocks (e.g. blocks labelled with Personal Information), instead of searching globally in the entire resume. The most appropriate model is selected through experiments for each IE task in different passes. The experimental results show that this cascaded hybrid model achieves better F-score than flat models that do not apply the hierarchical structure of resumes. It also shows that applying different IE models in different passes according to the contextual structure is effective.",
"title": ""
},
{
"docid": "ceb42399b7cd30b15d27c30d7c4b57b6",
"text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated from an informationtheoretic perspective. The relationships among the capacity r egion of broadcast channels and two rate regions achieved by NOMA and time-division multiple access (TDMA) are illustrated first. Then, the performance of NOMA is evaluated by considering TDMA as the benchmark, where both the sum rate and the individual use r rates are used as the criteria. In a wireless downlink scenar io with user pairing, the developed analytical results show that NOMA can outperform TDMA not only for the sum rate but also for each user’s individual rate, particularly when the difference between the users’ channels is large. I. I NTRODUCTION Because of its superior spectral efficiency, non-orthogona l multiple access (NOMA) has been recognized as a promising technique to be used in the fifth generation (5G) networks [1] – [4]. NOMA utilizes the power domain for achieving multiple access, i.e., different users are served at different power levels. Unlike conventional orthogonal MA, such as timedivision multiple access (TDMA), NOMA faces strong cochannel interference between different users, and success ive interference cancellation (SIC) is used by the NOMA users with better channel conditions for interference managemen t. The concept of NOMA is essentially a special case of superposition coding developed for broadcast channels (BC ). Cover first found the capacity region of a degraded discrete memoryless BC by using superposition coding [5]. Then, the capacity region of the Gaussian BC with single-antenna terminals was established in [6]. Moreover, the capacity re gion of the multiple-input multiple-output (MIMO) Gaussian BC was found in [7], by applying dirty paper coding (DPC) instea d of superposition coding. This paper mainly focuses on the single-antenna scenario. Specifically, consider a Gaussian BC with a single-antenna transmitter and two single-antenna receivers, where each r eceiver is corrupted by additive Gaussian noise with unit var iance. Denote the ordered channel gains from the transmitter to the two receivers byhw andhb, i.e., |hw| < |hb|. For a given channel pair(hw, hb), the capacity region is given by [6] C , ⋃ a1+a2=1, a1, a2 ≥ 0 { (R1, R2) : R1, R2 ≥ 0, R1≤ log2 ( 1+ a1x 1+a2x ) , R2≤ log2 (1+a2y) }",
"title": ""
},
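The abstract above compares NOMA with TDMA for a two-user downlink, using the superposition-coding rate expressions in its closing formula. The short sketch below simply evaluates those two rate expressions numerically: x and y are taken as the weak and strong users' received SNRs as in the abstract, the power split a1 is an assumed value, and TDMA is modeled as an equal time split with each user transmitting alone; none of these numbers come from the paper.

```python
import numpy as np

def noma_rates(x, y, a1):
    """Two-user NOMA: weak user (SNR x) gets power share a1, strong user gets a2 = 1 - a1."""
    a2 = 1.0 - a1
    r_weak = np.log2(1.0 + a1 * x / (1.0 + a2 * x))   # weak user treats the other signal as noise
    r_strong = np.log2(1.0 + a2 * y)                  # strong user applies SIC first
    return r_weak, r_strong

def tdma_rates(x, y, tau=0.5):
    """Orthogonal baseline: each user transmits alone for a fraction of the time."""
    return tau * np.log2(1.0 + x), (1.0 - tau) * np.log2(1.0 + y)

x, y = 2.0, 40.0          # assumed received SNRs; a large gap between users favors NOMA
print("NOMA :", noma_rates(x, y, a1=0.8))
print("TDMA :", tdma_rates(x, y))
```

With these assumed numbers both NOMA rates exceed the corresponding TDMA rates, which is the kind of per-user gain the letter analyzes.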
{
"docid": "bae9c30552536348b9a871a2a49555b9",
"text": "Background\nDiastasis recti abdominis affects a significant number of women during the prenatal and postnatal period.\n\n\nObjective\nThe objective was to evaluate the effect of a postpartum training program on the prevalence of diastasis recti abdominis.\n\n\nDesign\nThe design was a secondary analysis of an assessor-masked randomized controlled trial.\n\n\nMethods\nOne hundred seventy-five primiparous women (mean age = 29.8 ± 4.1 years) were randomized to an exercise or control group. The interrectus distance was palpated using finger widths, with a cutoff point for diastasis as ≥2 finger widths. Measures were taken 4.5 cm above, at, and 4.5 cm below the umbilicus. The 4-month intervention started 6 weeks postpartum and consisted of a weekly, supervised exercise class focusing on strength training of the pelvic floor muscles. In addition, the women were asked to perform daily pelvic floor muscle training at home. The control group received no intervention. Analyses were based on intention to treat. The Mantel-Haenszel test (relative risk [RR] ratio) and the chi-square test for independence were used to evaluate between-group differences on categorical data.\n\n\nResults\nAt 6 weeks postpartum, 55.2% and 54.5% of the participants were diagnosed with diastasis in the intervention and control groups, respectively. No significant differences between groups in prevalence were found at baseline (RR: 1.01 [0.77-1.32]), at 6 months postpartum (RR: 0.99 [0.71-1.38]), or at 12 months postpartum (RR: 1.04 [0.73-1.49]).\n\n\nLimitations\nThe interrecti distance was palpated using finger widths, and the sample included women with and without diastasis.\n\n\nConclusions\nA weekly, postpartum, supervised exercise program, including strength training of the pelvic floor and abdominal muscles, in addition to daily home training of the pelvic floor muscles, did not reduce the prevalence of diastasis.",
"title": ""
},
{
"docid": "eaa3284dbe2bbd5c72df99d76d4909a7",
"text": "BACKGROUND\nWorldwide, depression is rated as the fourth leading cause of disease burden and is projected to be the second leading cause of disability by 2020. Annual depression-related costs in the United States are estimated at US $210.5 billion, with employers bearing over 50% of these costs in productivity loss, absenteeism, and disability. Because most adults with depression never receive treatment, there is a need to develop effective interventions that can be more widely disseminated through new channels, such as employee assistance programs (EAPs), and directly to individuals who will not seek face-to-face care.\n\n\nOBJECTIVE\nThis study evaluated a self-guided intervention, using the MoodHacker mobile Web app to activate the use of cognitive behavioral therapy (CBT) skills in working adults with mild-to-moderate depression. It was hypothesized that MoodHacker users would experience reduced depression symptoms and negative cognitions, and increased behavioral activation, knowledge of depression, and functioning in the workplace.\n\n\nMETHODS\nA parallel two-group randomized controlled trial was conducted with 300 employed adults exhibiting mild-to-moderate depression. Participants were recruited from August 2012 through April 2013 in partnership with an EAP and with outreach through a variety of additional non-EAP organizations. Participants were blocked on race/ethnicity and then randomly assigned within each block to receive, without clinical support, either the MoodHacker intervention (n=150) or alternative care consisting of links to vetted websites on depression (n=150). Participants in both groups completed online self-assessment surveys at baseline, 6 weeks after baseline, and 10 weeks after baseline. Surveys assessed (1) depression symptoms, (2) behavioral activation, (3) negative thoughts, (4) worksite outcomes, (5) depression knowledge, and (6) user satisfaction and usability. After randomization, all interactions with subjects were automated with the exception of safety-related follow-up calls to subjects reporting current suicidal ideation and/or severe depression symptoms.\n\n\nRESULTS\nAt 6-week follow-up, significant effects were found on depression, behavioral activation, negative thoughts, knowledge, work productivity, work absence, and workplace distress. MoodHacker yielded significant effects on depression symptoms, work productivity, work absence, and workplace distress for those who reported access to an EAP, but no significant effects on these outcome measures for those without EAP access. Participants in the treatment arm used the MoodHacker app an average of 16.0 times (SD 13.3), totaling an average of 1.3 hours (SD 1.3) of use between pretest and 6-week follow-up. Significant effects on work absence in those with EAP access persisted at 10-week follow-up.\n\n\nCONCLUSIONS\nThis randomized effectiveness trial found that the MoodHacker app produced significant effects on depression symptoms (partial eta(2) = .021) among employed adults at 6-week follow-up when compared to subjects with access to relevant depression Internet sites. The app had stronger effects for individuals with access to an EAP (partial eta(2) = .093). For all users, the MoodHacker program also yielded greater improvement on work absence, as well as the mediating factors of behavioral activation, negative thoughts, and knowledge of depression self-care. Significant effects were maintained at 10-week follow-up for work absence. 
General attenuation of effects at 10-week follow-up underscores the importance of extending program contacts to maintain user engagement. This study suggests that light-touch, CBT-based mobile interventions like MoodHacker may be appropriate for implementation within EAPs and similar environments. In addition, it seems likely that supporting MoodHacker users with guidance from counselors may improve effectiveness for those who seek in-person support.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02335554; https://clinicaltrials.gov/ct2/show/NCT02335554 (Archived by WebCite at http://www.webcitation.org/6dGXKWjWE).",
"title": ""
},
{
"docid": "616749e7918accb48e46a13d6d1a36c2",
"text": "Achieving long battery lives or even self sustainability has been a long standing challenge for designing mobile devices. This paper presents a novel solution that seamlessly integrates two technologies, mobile cloud computing and microwave power transfer (MPT), to enable computation in passive low-complexity devices such as sensors and wearable computing devices. Specifically, considering a single-user system, a base station (BS) either transfers power to or offloads computation from a mobile to the cloud; the mobile uses harvested energy to compute given data either locally or by offloading. A framework for energy efficient computing is proposed that comprises a set of policies for controlling CPU cycles for the mode of local computing, time division between MPT and offloading for the other mode of offloading, and mode selection. Given the CPU-cycle statistics information and channel state information (CSI), the policies aim at maximizing the probability of successfully computing given data, called computing probability, under the energy harvesting and deadline constraints. The policy optimization is translated into the equivalent problems of minimizing the mobile energy consumption for local computing and maximizing the mobile energy savings for offloading which are solved using convex optimization theory. The structures of the resultant policies are characterized in closed form. Furthermore, given non-causal CSI, the said analytical framework is further developed to support computation load allocation over multiple channel realizations, which further increases the computing probability. Last, simulation demonstrates the feasibility of wirelessly powered mobile cloud computing and the gain of its optimal control.",
"title": ""
},
{
"docid": "a1915a869616b9c8c2547f66ec89de13",
"text": "The harvest yield in vineyards can vary significantly from year to year and also spatially within plots due to variations in climate, soil conditions and pests. Fine grained knowledge of crop yields can allow viticulturists to better manage their vineyards. The current industry practice for yield prediction is destructive, expensive and spatially sparse - during the growing season sparse samples are taken and extrapolated to determine overall yield. We present an automated method that uses computer vision to detect and count grape berries. The method could potentially be deployed across large vineyards taking measurements at every vine in a non-destructive manner. Our berry detection uses both shape and visual texture and we can demonstrate detection of green berries against a green leaf background. Berry detections are counted and the eventual harvest yield is predicted. Results are presented for 224 vines (over 450 meters) of two different grape varieties and compared against the actual harvest yield as groundtruth. We calibrate our berry count to yield and find that we can predict yield of individual vineyard rows to within 9.8% of actual crop weight.",
"title": ""
},
{
"docid": "b0bb9c4bcf666dca927d4f747bfb1ca1",
"text": "Remote monitoring of animal behaviour in the environment can assist in managing both the animal and its environmental impact. GPS collars which record animal locations with high temporal frequency allow researchers to monitor both animal behaviour and interactions with the environment. These ground-based sensors can be combined with remotely-sensed satellite images to understand animal-landscape interactions. The key to combining these technologies is communication methods such as wireless sensor networks (WSNs). We explore this concept using a case-study from an extensive cattle enterprise in northern Australia and demonstrate the potential for combining GPS collars and satellite images in a WSN to monitor behavioural preferences and social behaviour of cattle.",
"title": ""
},
{
"docid": "b8702cb8d18ae53664f3dfff95152764",
"text": "Word Sense Disambiguation is a longstanding task in Natural Language Processing, lying at the core of human language understanding. However, the evaluation of automatic systems has been problematic, mainly due to the lack of a reliable evaluation framework. In this paper we develop a unified evaluation framework and analyze the performance of various Word Sense Disambiguation systems in a fair setup. The results show that supervised systems clearly outperform knowledge-based models. Among the supervised systems, a linear classifier trained on conventional local features still proves to be a hard baseline to beat. Nonetheless, recent approaches exploiting neural networks on unlabeled corpora achieve promising results, surpassing this hard baseline in most test sets.",
"title": ""
},
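The evaluation framework above compares supervised WSD systems against knowledge-based models. As a small illustration of the knowledge-based side only (not the paper's framework or its supervised baselines), the snippet below runs NLTK's simplified Lesk algorithm, which picks the WordNet sense whose gloss overlaps most with the sentence context; it assumes the NLTK WordNet data has been downloaded, and the example sentence is invented.

```python
# Requires: pip install nltk, then nltk.download("wordnet") and nltk.download("omw-1.4").
from nltk.wsd import lesk

sentence = "I went to the bank to deposit my paycheck".split()
sense = lesk(sentence, "bank", pos="n")   # simplified Lesk over WordNet glosses
print(sense, ":", sense.definition() if sense else "no sense found")
```

Gloss-overlap heuristics like this are exactly the kind of knowledge-based model that the unified evaluation finds weaker than supervised classifiers trained on sense-annotated corpora.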
{
"docid": "a37493c6cde320091c1baf7eaa57b982",
"text": "The pervasiveness of cell phones and mobile social media applications is generating vast amounts of geolocalized user-generated content. Since the addition of geotagging information, Twitter has become a valuable source for the study of human dynamics. Its analysis is shedding new light not only on understanding human behavior but also on modeling the way people live and interact in their urban environments. In this paper, we evaluate the use of geolocated tweets as a complementary source of information for urban planning applications. Our contributions are focussed in two urban planing areas: (1) a technique to automatically determine land uses in a specific urban area based on tweeting patterns, and (2) a technique to automatically identify urban points of interest as places with high activity of tweets. We apply our techniques in Manhattan (NYC) using 49 days of geolocated tweets and validate them using land use and landmark information provided by various NYC departments. Our results indicate that geolocated tweets are a powerful and dynamic data source to characterize urban environments.",
"title": ""
},
{
"docid": "72e9ed1d81f8dfce9492f5bb30fc91a1",
"text": "A key component to the success of deep learning is the availability of massive amounts of training data. Building and annotating large datasets for solving medical image classification problems is today a bottleneck for many applications. Recently, capsule networks were proposed to deal with shortcomings of Convolutional Neural Networks (ConvNets). In this work, we compare the behavior of capsule networks against ConvNets under typical datasets constraints of medical image analysis, namely, small amounts of annotated data and class-imbalance. We evaluate our experiments on MNIST, Fashion-MNIST and medical (histological and retina images) publicly available datasets. Our results suggest that capsule networks can be trained with less amount of data for the same or better performance and are more robust to an imbalanced class distribution, which makes our approach very promising for the medical imaging community.",
"title": ""
},
{
"docid": "6fc870c703611e07519ce5fe956c15d1",
"text": "Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.",
"title": ""
}
] |
scidocsrr
|
49adc2d524cc70df9bb48d6c53938f59
|
Grid-Based Crime Prediction Using Geographical Features
|
[
{
"docid": "c39fe902027ba5cb5f0fa98005596178",
"text": "Twitter is used extensively in the United States as well as globally, creating many opportunities to augment decision support systems with Twitterdriven predictive analytics. Twitter is an ideal data source for decision support: its users, who number in the millions, publicly discuss events, emotions, and innumerable other topics; its content is authored and distributed in real time at no charge; and individual messages (also known as tweets) are often tagged with precise spatial and temporal coordinates. This article presents research investigating the use of spatiotemporally tagged tweets for crime prediction. We use Twitter-specific linguistic analysis and statistical topic modeling to automatically identify discussion topics across a major city in the United States. We then incorporate these topics into a crime prediction model and show that, for 19 of the 25 crime types we studied, the addition of Twitter data improves crime prediction performance versus a standard approach based on kernel density estimation. We identify a number of performance bottlenecks that could impact the use of Twitter in an actual decision support system. We also point out important areas of future work for this research, including deeper semantic analysis of message con∗Email address: msg8u@virginia.edu; Tel.: 1+ 434 924 5397; Fax: 1+ 434 982 2972 Preprint submitted to Decision Support Systems January 14, 2014 tent, temporal modeling, and incorporation of auxiliary data sources. This research has implications specifically for criminal justice decision makers in charge of resource allocation for crime prevention. More generally, this research has implications for decision makers concerned with geographic spaces occupied by Twitter-using individuals.",
"title": ""
},
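The abstract above uses kernel density estimation over historical crime locations as the standard baseline against which the Twitter-topic model is compared. Below is a minimal sketch of such a KDE hotspot surface evaluated on a regular grid (which also matches the grid-based framing of this query); the coordinates, bandwidth choice, and hotspot cutoff are synthetic assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic historical crime locations (longitude, latitude), illustrative only.
rng = np.random.default_rng(0)
crimes = np.vstack([rng.normal(-73.99, 0.01, 300), rng.normal(40.73, 0.01, 300)])

# Fit a 2-D KDE and evaluate the estimated crime intensity on a regular grid of cells.
kde = gaussian_kde(crimes)
lon = np.linspace(-74.02, -73.96, 50)
lat = np.linspace(40.70, 40.76, 50)
grid_lon, grid_lat = np.meshgrid(lon, lat)
density = kde(np.vstack([grid_lon.ravel(), grid_lat.ravel()])).reshape(50, 50)

# Grid cells in the top 5% of estimated density are flagged as predicted hotspots.
hotspots = density >= np.quantile(density, 0.95)
print("hotspot cells:", int(hotspots.sum()), "of", hotspots.size)
```

The paper's contribution is to add Twitter-derived topic features on top of a density baseline of this kind; that feature fusion step is not shown here.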
{
"docid": "33a770eb60ef024128ba784a5c5b9491",
"text": "Word embeddings that provide continuous low-dimensional vector representations of words have been extensively used for various natural language processing tasks. However, existing context-based word embeddings such as Word2vec and GloVe typically fail to capture sufficient sentiment information, which may result in words with similar vector representations having an opposite sentiment polarity e.g., good and bad, thus degrading sentiment analysis performance. To tackle this problem, recent studies have suggested learning sentiment embeddings to incorporate the sentiment polarity positive and negative information from labeled corpora. This study adopts another strategy to learn sentiment embeddings. Instead of creating a new word embedding from labeled corpora, we propose a word vector refinement model to refine existing pretrained word vectors using real-valued sentiment intensity scores provided by sentiment lexicons. The idea of the refinement model is to improve each word vector such that it can be closer in the lexicon to both semantically and sentimentally similar words i.e., those with similar intensity scores and further away from sentimentally dissimilar words i.e., those with dissimilar intensity scores. An obvious advantage of the proposed method is that it can be applied to any pretrained word embeddings. In addition, the intensity scores can provide more fine-grained real-valued sentiment information than binary polarity labels to guide the refinement process. Experimental results show that the proposed refinement model can improve both conventional word embeddings and previously proposed sentiment embeddings for binary, ternary, and fine-grained sentiment classification on the SemEval and Stanford Sentiment Treebank datasets.",
"title": ""
}
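The refinement model described above nudges each pretrained word vector toward lexicon neighbors with similar sentiment intensity and away from dissimilar ones. The exact objective from the paper is not reproduced; the sketch below is a simplified, assumption-laden variant that repeatedly averages each word's vector with its top-k most intensity-similar lexicon neighbors (the step size, k, and the toy lexicon are all illustrative).

```python
import numpy as np

def refine_embeddings(vectors, intensity, k=3, alpha=0.1, n_iter=10):
    """Pull each word vector toward the k lexicon words with the closest sentiment intensity.

    vectors:   dict word -> np.ndarray (pretrained embedding)
    intensity: dict word -> float (real-valued sentiment score from a lexicon)
    """
    words = [w for w in vectors if w in intensity]
    for _ in range(n_iter):
        updated = {}
        for w in words:
            # Neighbors ranked by how close their lexicon intensity is to this word's.
            nbrs = sorted((v for v in words if v != w),
                          key=lambda v: abs(intensity[v] - intensity[w]))[:k]
            target = np.mean([vectors[v] for v in nbrs], axis=0)
            updated[w] = (1 - alpha) * vectors[w] + alpha * target
        vectors.update(updated)
    return vectors

# Tiny illustrative lexicon and 2-D "embeddings".
vecs = {"good": np.array([1.0, 0.2]), "great": np.array([0.4, 0.9]),
        "bad": np.array([-0.8, 0.1]), "awful": np.array([-0.2, -0.9])}
scores = {"good": 7.9, "great": 8.2, "bad": 2.5, "awful": 1.9}
print(refine_embeddings(vecs, scores, k=1)["good"])
```

In this toy run "good" drifts toward "great" and away from the negative words, which is the qualitative behavior the refinement model aims for.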
] |
[
{
"docid": "d4b1513319396aedab8f9d78bb19c9bf",
"text": "CONTEXT\nSolid-pseudopapillary tumor of the pancreas is a rare tumor which usually affects young females in their second and third decade of life. Metastasis is very rare after a resection of curative intent.\n\n\nCASE REPORT\nWe report a case of a 65-year-old white female who presented with metastasis to the liver four years after Whipple's resection for a solid-pseudopapillary tumor of the pancreas.\n\n\nCONCLUSIONS\nSolid-pseudopapillary tumors of the pancreas can present with metastasis a long time after resection of the primary tumor. Long term close follow up of these patients should be done. The survival rate even after liver metastasis is good.",
"title": ""
},
{
"docid": "8e66f052f71059827995d466dd60566d",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Techno-economic analysis of PV/H2 systems C Darras, G Bastien, M Muselli, P Poggi, B Champel, P Serre-Combe",
"title": ""
},
{
"docid": "e9bf278fd48cc437796f12530d352d3c",
"text": "This paper investigates the transportation and vehicular modes classification by using big data from smartphone sensors. The three types of sensors used in this paper include the accelerometer, magnetometer, and gyroscope. This study proposes improved features and uses three machine learning algorithms including decision trees, K-nearest neighbor, and support vector machine to classify the user's transportation and vehicular modes. In the experiments, we discussed and compared the performance from different perspectives including the accuracy for both modes, the executive time, and the model size. Results show that the proposed features enhance the accuracy, in which the support vector machine provides the best performance in classification accuracy whereas it consumes the largest prediction time. This paper also investigates the vehicle classification mode and compares the results with that of the transportation modes.",
"title": ""
},
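The study above compares decision trees, k-nearest neighbors, and support vector machines on features derived from the accelerometer, magnetometer, and gyroscope. The sketch below shows that comparison pattern on a synthetic feature matrix; the feature construction, class labels, and data are placeholders rather than the paper's pipeline, so the reported accuracies are chance-level until real windowed sensor features are substituted.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder feature matrix: e.g. mean/std/energy of accelerometer, gyroscope,
# and magnetometer windows would go here (9 features assumed for the sketch).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 9))
y = rng.integers(0, 4, size=600)          # 4 assumed modes, e.g. walk / bike / bus / car

for name, clf in [("DecisionTree", DecisionTreeClassifier(max_depth=8)),
                  ("KNN", KNeighborsClassifier(n_neighbors=7)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```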
{
"docid": "98a3216257c9c2358d2a70247b185cb9",
"text": "Deep Neural Networks (DNNs) have achieved impressive accuracy in many application domains including im-age classification. Training of DNNs is an extremely compute-intensive process and is solved using variants of the stochastic gradient descent (SGD) algorithm. A lot of recent research has focused on improving the performance of DNN training. In this paper, we present optimization techniques to improve the performance of the data parallel synchronous SGD algorithm using the Torch framework: (i) we maintain data in-memory to avoid file I/O overheads, (ii) we propose optimizations to the Torch data parallel table framework that handles multi-threading, and (iii) we present MPI optimization to minimize communication overheads. We evaluate the performance of our optimizations on a Power 8 Minsky cluster with 64 nodes and 256 NVidia Pascal P100 GPUs. With our optimizations, we are able to train 90 epochs of the ResNet-50 model on the Imagenet-1k dataset using 256 GPUs in just 48 minutes. This significantly improves on the previously best known performance of training 90 epochs of the ResNet-50 model on the same dataset using the same number of GPUs in 65 minutes. To the best of our knowledge, this is the best known training performance demonstrated for the Imagenet-1k dataset using 256 GPUs.",
"title": ""
},
{
"docid": "3df0c9350c620f8432fa47c1c6d37f7a",
"text": "Micropropagation of Ilex dumosa var. dumosa R. (\"yerba señorita\") from nodal segments containing one axillary bud was investigated. Shoot regeneration from explants of six-year-old plants was readily achieved in 1/4 strength Murashige and Skoog medium (1/4 MS) plus 30 gr x L(-1) sucrose and supplemented with 4.4 microM BA. Further multiplication and elongation of the regenerated shoots were obtained by subculture in a fresh medium of similar composition with 1.5 gr x L(-1) sucrose. Rooting induction from shoots were achieved in two steps: 1) 7 days in 1/4 MS (30 gr x L(-1) sucrose, 0.25% Phytagel) with 7.3 microM IBA and 2) 21 days in the same medium without IBA and 20 microM of cadaverine added. Regenerated plants were successfully transferred to soil. This micropropagation schedule can be implemented in breeding programs of Ilex dumosa.",
"title": ""
},
{
"docid": "3baec781f7b5aaab8598c3628ea0af3b",
"text": "Article history: Received 15 November 2010 Received in revised form 9 February 2012 Accepted 15 February 2012 Information professionals performing business activity related investigative analysis must routinely associate data from a diverse range of Web based general-interest business and financial information sources. XBRL has become an integral part of the financial data landscape. At the same time, Open Data initiatives have contributed relevant financial, economic, and business data to the pool of publicly available information on the Web but the use of XBRL in combination with Open Data remains at an early state of realisation. In this paper we argue that Linked Data technology, created for Web scale information integration, can accommodate XBRL data and make it easier to combine it with open datasets. This can provide the foundations for a global data ecosystem of interlinked and interoperable financial and business information with the potential to leverage XBRL beyond its current regulatory and disclosure role. We outline the uses of Linked Data technologies to facilitate XBRL consumption in conjunction with non-XBRL Open Data, report on current activities and highlight remaining challenges in terms of information consolidation faced by both XBRL and Web technologies. © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "92fb73e03b487d5fbda44e54cf59640d",
"text": "The eyes and periocular area are the central aesthetic unit of the face. Facial aging is a dynamic process that involves skin, subcutaneous soft tissues, and bony structures. An understanding of what is perceived as youthful and beautiful is critical for success. Knowledge of the functional aspects of the eyelid and periocular area can identify pre-preoperative red flags.",
"title": ""
},
{
"docid": "1c692d403481da01d9e752584b00afd8",
"text": "BACKGROUND\nWhite matter abnormalities have been associated with both behavioral variant frontotemporal dementia (bvFTD) and Alzheimer's disease (AD).\n\n\nOBJECTIVE\nUsing MRI diffusion tensor imaging (DTI) measures, we compared white matter integrity between patients with bvFTD and those with early-onset AD and correlated these biomarkers with behavioral symptoms involving emotional blunting.\n\n\nMETHODS\nWe studied 8 bvFTD and 12 AD patients as well as 12 demographically-matched healthy controls (NCs). Using four DTI metrics (fractional anisotropy, axial diffusivity, radial diffusivity, and mean diffusivity), we assessed the frontal lobes (FWM) and genu of the corpus callosum (GWM), which are vulnerable late-myelinating regions, and a contrasting early-myelinating region (splenium of the corpus callosum). The Scale for Emotional Blunting Scale (SEB) was used to assess emotional functioning of the study participants.\n\n\nRESULTS\nCompared to AD patients and NCs, the bvFTD subjects exhibited significantly worse FWM and GWM integrity on all four DTI metrics sensitive to myelin and axonal integrity. In contrast, AD patients showed a numerical trend toward worse splenium of the corpus callosum integrity than bvFTD and NC groups. Significant associations between SEB ratings and GWM DTI measures were demonstrated in the combined bvFTD and AD sample. When examined separately, these relationships remained robust for the bvFTD group but not the AD group.\n\n\nCONCLUSIONS\nThe regional DTI alterations suggest that FTD and AD are each associated with a characteristic distribution of white matter degradation. White matter breakdown in late-myelinating regions was associated with symptoms of emotional blunting, particularly within the bvFTD group.",
"title": ""
},
{
"docid": "19d35c0f4e3f0b90d0b6e4d925a188e4",
"text": "This paper presents a new approach to the computer aided diagnosis (CAD) of diabetic retinopathy (DR)—a common and severe complication of long-term diabetes which damages the retina and cause blindness. Since microaneurysms are regarded as the first signs of DR, there has been extensive research on effective detection and localization of these abnormalities in retinal images. In contrast to existing algorithms, a new approach based on multi-scale correlation filtering (MSCF) and dynamic thresholding is developed. This consists of two levels, microaneurysm candidate detection (coarse level) and true microaneurysm classification (fine level). The approach was evaluated based on two public datasets—ROC (retinopathy on-line challenge, http://roc.healthcare.uiowa.edu) and DIARETDB1 (standard diabetic retinopathy database, http://www.it.lut.fi/project/imageret/diaretdb1). We conclude our method to be effective and efficient. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
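The approach above detects microaneurysm candidates by multi-scale correlation filtering followed by dynamic thresholding. The sketch below approximates only the coarse candidate stage, correlating (here, smoothing) the inverted green channel with Gaussian kernels at several scales and thresholding the maximum response; the kernel scales, the quantile threshold, and the synthetic image are assumptions, and the fine classification stage is omitted entirely.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def candidate_map(green_channel, sigmas=(1.0, 1.5, 2.0, 3.0), quantile=0.995):
    """Coarse candidate detection: max multi-scale Gaussian response on the inverted green channel."""
    inverted = green_channel.max() - green_channel.astype(float)   # microaneurysms appear dark
    responses = np.stack([gaussian_filter(inverted, s) for s in sigmas])
    score = responses.max(axis=0)                                  # best-matching scale per pixel
    threshold = np.quantile(score, quantile)                       # image-dependent (dynamic) cutoff
    return score > threshold

# Synthetic "retina" image with a few dark blobs standing in for microaneurysms.
img = np.full((128, 128), 200.0)
for cy, cx in [(30, 40), (90, 100)]:
    yy, xx = np.ogrid[:128, :128]
    img -= 80.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)
candidates = candidate_map(img)
print("candidate pixels:", int(candidates.sum()))
```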
{
"docid": "06909d0ffbc52e14e0f6f1c9ffe29147",
"text": "DistributedLog is a high performance, strictly ordered, durably replicated log. It is multi-tenant, designed with a layered architecture that allows reads and writes to be scaled independently and supports OLTP, stream processing and batch workloads. It also supports a globally synchronous consistent replicated log spanning multiple geographically separated regions. This paper describes how DistributedLog is structured, its components and the rationale underlying various design decisions. We have been using DistributedLog in production for several years, supporting applications ranging from transactional database journaling, real-time data ingestion, and analytics to general publish-subscribe messaging.",
"title": ""
},
{
"docid": "aed264522ed7ee1d3559fe4863760986",
"text": "A wireless network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The data collected by each sensor is communicated through the network to a single processing center that uses all reported data to determine characteristics of the environment or detect an event. The communication or message passing process must be designed to conserve the limited energy resources of the sensors. Clustering sensors into groups, so that sensors communicate information only to clusterheads and then the clusterheads communicate the aggregated information to the processing center, may save energy. In this paper, we propose a distributed, randomized clustering algorithm to organize the sensors in a wireless sensor network into clusters. We then extend this algorithm to generate a hierarchy of clusterheads and observe that the energy savings increase with the number of levels in the hierarchy. Results in stochastic geometry are used to derive solutions for the values of parameters of our algorithm that minimize the total energy spent in the network when all sensors report data through the clusterheads to the processing center. KeywordsSensor Networks; Clustering Methods; Voronoi Tessellations; Algorithms.",
"title": ""
},
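The clustering scheme above elects clusterheads at random and has the remaining sensors report to their nearest clusterhead, which forwards aggregated data to the processing center. Below is a minimal sketch of that election-and-assignment step; the election probability and node layout are assumed, and the multi-level hierarchy and the stochastic-geometry energy analysis are not included.

```python
import numpy as np

def elect_and_assign(positions, p_head=0.1, seed=0):
    """One round of randomized clusterhead election and nearest-head assignment."""
    rng = np.random.default_rng(seed)
    is_head = rng.random(len(positions)) < p_head
    if not is_head.any():                       # guarantee at least one clusterhead
        is_head[rng.integers(len(positions))] = True
    heads = np.flatnonzero(is_head)
    # Each non-head sensor joins the closest clusterhead (single-hop reporting).
    dists = np.linalg.norm(positions[:, None, :] - positions[heads][None, :, :], axis=2)
    assignment = heads[np.argmin(dists, axis=1)]
    assignment[heads] = heads                   # clusterheads report directly to the sink
    return heads, assignment

positions = np.random.default_rng(1).random((100, 2))   # 100 sensors in a unit square
heads, assignment = elect_and_assign(positions)
print("clusterheads:", heads)
```

Repeating the election on the set of clusterheads themselves would give the multi-level hierarchy the paper analyzes for additional energy savings.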
{
"docid": "5abfafc228a99bef6cc0491f80ae9483",
"text": "OBJECTIVES\nOperational definitions of cognitive impairment have varied widely in diagnosing mild cognitive impairment (MCI). Identifying clinical subtypes of MCI has further challenged diagnostic approaches because varying the components of the objective cognitive assessment can significantly impact diagnosis. Therefore, the authors investigated the applicability of diagnostic criteria for clinical subtypes of MCI in a naturalistic research sample of community elders and quantified the variability in diagnostic outcomes that results from modifying the neuropsychological definition of objective cognitive impairment.\n\n\nDESIGN\nCross-sectional and longitudinal study.\n\n\nSETTING\nSan Diego, CA, Veterans Administration Hospital.\n\n\nPARTICIPANTS\nNinety nondemented, neurologically normal, community-dwelling older adults were initially assessed and 73 were seen for follow-up approximately 17 months later.\n\n\nMEASUREMENTS\nParticipants were classified via consensus diagnosis as either normally aging or having MCI via each of the five diagnostic strategies, which varied the cutoff for objective impairment and the number of neuropsychological tests considered in the diagnostic process.\n\n\nRESULTS\nA range of differences in the percentages identified as MCI versus cognitively normal were demonstrated, ranging from 10-74%, depending on the classification criteria used. A substantial minority of individuals demonstrated diagnostic instability over time and across diagnostic approaches. The single domain nonamnestic subtype diagnosis was particularly unstable (e.g., prone to reclassification as normal at follow up).\n\n\nCONCLUSION\nOur findings provide empirical support for a neuropsychologically derived operational definition of clinical subtypes of MCI and point to the importance of using comprehensive neuropsychological assessments. Diagnoses, particularly involving nonamnestic MCI, were variable over time. The applicability and utility of this particular MCI subtype warrants further investigation.",
"title": ""
},
{
"docid": "a78782e389313600620bfb68fc57a81f",
"text": "Online consumer reviews reflect the testimonials of real people, unlike advertisements. As such, they have critical impact on potential consumers, and indirectly on businesses. According to a Harvard study (Luca 2011), +1 rise in star-rating increases revenue by 5–9%. Problematically, such financial incentives have created a market for spammers to fabricate reviews, to unjustly promote or demote businesses, activities known as opinion spam (Jindal and Liu 2008). A vast majority of existing work on this problem have formulations based on static review data, with respective techniques operating in an offline fashion. Spam campaigns, however, are intended to make most impact during their course. Abnormal events triggered by spammers’ activities could be masked in the load of future events, which static analysis would fail to identify. In this work, we approach the opinion spam problem with a temporal formulation. Specifically, we monitor a list of carefully selected indicative signals of opinion spam over time and design efficient techniques to both detect and characterize abnormal events in real-time. Experiments on datasets from two different review sites show that our approach is fast, effective, and practical to be deployed in real-world systems.",
"title": ""
},
{
"docid": "fcdf27ea2841b6b4259df3cd12e45390",
"text": "With the development of deep learning and artificial intelligence, deep neural networks are increasingly being applied for natural language processing tasks. However, the majority of research on natural language processing focuses on alphabetic languages. Few studies have paid attention to the characteristics of ideographic languages, such as the Chinese language. In addition, the existing Chinese processing algorithms typically regard Chinese words or Chinese characters as the basic units while ignoring the information contained within the deeper architecture of Chinese characters. In the Chinese language, each Chinese character can be split into several components, or strokes. This means that strokes are the basic units of a Chinese character, in a manner similar to the letters of an English word. Inspired by the success of character-level neural networks, we delve deeper into Chinese writing at the stroke level for Chinese language processing. We extract the basic features of strokes by considering similar Chinese characters to learn a continuous representation of Chinese characters. Furthermore, word embeddings trained at different granularities are not exactly the same. In this paper, we propose an algorithm for combining different representations of Chinese words within a single neural network to obtain a better word representation. We develop a Chinese word representation service for several natural language processing tasks, and cloud computing is introduced to deal with preprocessing challenges and the training of basic representations from different dimensions.",
"title": ""
},
{
"docid": "fc3c4f6c413719bbcf7d13add8c3d214",
"text": "Disentangling the effects of selection and influence is one of social science's greatest unsolved puzzles: Do people befriend others who are similar to them, or do they become more similar to their friends over time? Recent advances in stochastic actor-based modeling, combined with self-reported data on a popular online social network site, allow us to address this question with a greater degree of precision than has heretofore been possible. Using data on the Facebook activity of a cohort of college students over 4 years, we find that students who share certain tastes in music and in movies, but not in books, are significantly likely to befriend one another. Meanwhile, we find little evidence for the diffusion of tastes among Facebook friends-except for tastes in classical/jazz music. These findings shed light on the mechanisms responsible for observed network homogeneity; provide a statistically rigorous assessment of the coevolution of cultural tastes and social relationships; and suggest important qualifications to our understanding of both homophily and contagion as generic social processes.",
"title": ""
},
{
"docid": "56d4f50a6e30ea213e08221374081282",
"text": "Over the past few years, a large family of manifold learning algorithms have been proposed, and applied to various applications. While designing new manifold learning algorithms has attracted much research attention, fewer research efforts have been focused on out-ofsample extrapolation of learned manifold. In this paper, we propose a novel algorithm of manifold learning. The proposed algorithm, namely Local and Global Regressive Mapping (LGRM), employs local regression models to grasp the manifold structure. We additionally impose a global regression term as regularization to learn a model for out-of-sample data extrapolation. Based on the algorithm, we propose a new manifold learning framework. Our framework can be applied to any manifold learning algorithms to simultaneously learn the low dimensional embedding of the training data and a model which provides explicit mapping of the outof-sample data to the learned manifold. Experiments demonstrate that the proposed framework uncover the manifold structure precisely and can be freely applied to unseen data. Introduction & Related Works Unsupervised dimension reduction plays an important role in many applications. Among them, manifold learning, a family of non-linear dimension reduction algorithms, has attracted much attention. During recent decade, researchers have developed various manifold learning algorithms, such as ISOMap (Tenenbaum, Silva, & Langford 2000), Local Linear Embedding (LLE) (Roweis & Saul 2000), Laplacian Eigenmap (LE) (Belkin & Niyogi 2003), Local Tangent Space Alignment (LTSA) (Zhang & Zha 2004), Local Spline Embedding (LSE) (Xiang et al. 2009), etc . Manifold learning has been applied to different applications, particularly in the field of computer vision, where it has been experimentally demonstrated that linear dimension reduction methods are not capable to cope with the data sampled from non-linear manifold (Chin & Suter 2008). Suppose there are n training data X = {x1, ..., xn} densely sampled from smooth manifold, where xi ∈ R for 1 ≤ i ≤ n. Denote Y = {y1, ..., yn}, where yi ∈ R(m < d) Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. is the low dimensional embedding of xi. We define Y = [y1, ..., yn] as the low dimensional embedding matrix. Although the motivation of manifold learning algorithm differs from one to another, the objective function of ISOMap, LLE and LE can be uniformly formulated as follows (Yan et al. 2005). min Y T BY =I tr(Y T LY ), (1) where tr(·) is the trace operator, B is a constraint matrix, and L is the Laplacian matrix computed according to different criterions. It is also easy to see that (1) generalizes the objective function of other manifold learning algorithms, such as LTSA. Clearly, the Laplacian matrix plays a key role in manifold learning. Different from linear dimension reduction approaches, most of the manifold learning algorithms do not provide explicit mapping of the unseen data. As a compromise, Locality Preserving Projection (LPP) (He & Niyogi 2003) and Spectral Regression (SR) (Cai, He, & Han 2007) were proposed, which introduce linear projection matrix to LE. However, because a linear constraint is imposed, both algorithms fail in preserving the intrinsical non-linear structure of the data manifold. Manifold learning algorithms can be described as Kernel Principal Component Analysis (KPCA) (Schölkopf, Smola, & Müller 1998) on specially constructed Gram matrices (Ham et al. 2004). 
According to the specific algorithmic procedures of manifold learning algorithms, Bengio et al. have defined a data dependent kernel matrix K for ISOMap, LLE and LE, respectively (Bengio et al. 2003). Given the data dependent kernel matrix K, out-of-sample data can be extrapolated by employing Nyström formula. The framework proposed in (Bengio et al. 2003) generalizes Landmark ISOMap (Silva & Tenenbaum 2003). Similar algorithm was also proposed in (Chin & Suter 2008) for Maximum Variance Unfolding (MVU). Note that Semi-Definite Programming is conducted in MVU. It is very time consuming and thus less practical. One limitation of this family of algorithms is that the design of data dependent kernel matrices for various manifold learning algorithms is a nontrivial task. For example, compared with LE, it is not that straightforward to define the data dependent kernel matrix for LLE (Bengio et al. 2003) and it still remains unclear how to define the kernel matrices for other manifold learning algorithms, i.e., LTSA. In (Saul & Roweis 2003), a nonparametric approach was proposed for out-of-sample extrapolation of LLE. Let xo ∈ R be the novel data to be extrapolated and Xo = {xo1, ..., xok} ⊂ X be a set of data which are k nearest neighbor set of xo in R. The low dimensional embedding yo of xo is given by ∑k i=1 wiyoi, in which yoi is the low dimensional embedding of xoi and wi (1 ≤ i ≤ k) can be obtained by minimizing the following objective function.",
"title": ""
},
{
"docid": "4c63c1a3d9323af8b57ce746fff1c246",
"text": "OBJECTIVES\nIn this paper we aim to characterise the critical mass of linked data, methods and expertise required for health systems to adapt to the needs of the populations they serve - more recently known as learning health systems. The objectives are to: 1) identify opportunities to combine separate uses of common data sources in order to reduce duplication of data processing and improve information quality; 2) identify challenges in scaling-up the reuse of health data sufficiently to support health system learning.\n\n\nMETHODS\nThe challenges and opportunities were identified through a series of e-health stakeholder consultations and workshops in Northern England from 2011 to 2014. From 2013 the concepts presented here have been refined through feedback to collaborators, including patient/citizen representatives, in a regional health informatics research network (www.herc.ac.uk).\n\n\nRESULTS\nHealth systems typically have separate information pipelines for: 1) commissioning services; 2) auditing service performance; 3) managing finances; 4) monitoring public health; and 5) research. These pipelines share common data sources but usually duplicate data extraction, aggregation, cleaning/preparation and analytics. Suboptimal analyses may be performed due to a lack of expertise, which may exist elsewhere in the health system but is fully committed to a different pipeline. Contextual knowledge that is essential for proper data analysis and interpretation may be needed in one pipeline but accessible only in another. The lack of capable health and care intelligence systems for populations can be attributed to a legacy of three flawed assumptions: 1) universality: the generalizability of evidence across populations; 2) time-invariance: the stability of evidence over time; and 3) reducibility: the reduction of evidence into specialised sub-systems that may be recombined.\n\n\nCONCLUSIONS\nWe conceptualize a population health and care intelligence system capable of supporting health system learning and we put forward a set of maturity tests of progress toward such a system. A factor common to each test is data-action latency; a mature system spawns timely actions proportionate to the information that can be derived from the data, and in doing so creates meaningful measurement about system learning. We illustrate, using future scenarios, some major opportunities to improve health systems by exchanging conventional intelligence pipelines for networked critical masses of data, methods and expertise that minimise data-action latency and ignite system-learning.",
"title": ""
},
{
"docid": "d43c578f8aaa51cd593fc3c9f2b12665",
"text": "One of the most impressive feats in robotics was the 2005 victory by a driverless Volkswagen Touareg in the DARPA Grand Challenge. This paper discusses what can be learned about the nature of representation from the car’s successful attempt to navigate the world. We review the hardware and software that it uses to interact with its environment, and describe how these techniques enable it to represent the world. We discuss robosemantics, the meaning of computational structures in robots. We argue that the car constitutes a refutation of semantic arguments against the possibility of strong artificial intelligence.",
"title": ""
},
{
"docid": "84c6f828c4a86b8a0ab14ca84d294e52",
"text": "In sectorless air traffic management (ATM) concept, air traffic controllers are no longer in charge of a certain sector. Instead, the sectorless airspace is considered as a single unit and controllers are assigned certain aircraft, which might be located anywhere in the sectorless airspace. The air traffic controllers are responsible for these geographically independent aircraft all the way from their entry into the airspace to the exit. In order to support the controllers with this task, they are provided with one radar display for each assigned aircraft. This means, only one aircraft on each of these radar displays is under their control as the surrounding traffic is under control of other controllers. Each air traffic controller has to keep track of several traffic situations at the same time. In order to optimally support controllers with this task, a color-coding of the information is necessary. For example, the aircraft under control can be distinguished from the surrounding traffic by displaying them in a certain color. Furthermore, conflict detection and resolution information can be color-coded, such that it is straightforward which controller is in charge of solving a conflict. We conducted a human-in-the-loop simulation in order to compare different color schemes for a sectorless ATM controller working position. Three different color schemes were tested: a positive contrast polarity scheme that follows the current look of the P1/VAFORIT (P1/very advanced flight-data processing operational requirement implementation) display used by the German air navigation service provider DFS in the Karlsruhe upper airspace control center, a newly designed negative contrast polarity color scheme and a modified positive contrast polarity scheme. An analysis of the collected data showed no significant evidence for an impact of the color schemes on controller task performance. However, results suggest that a positive contrast polarity should be preferred and that the newly designed positive contrast polarity color scheme has advantages over the P1/VAFORIT color scheme when used for sectorless ATM.",
"title": ""
},
{
"docid": "5862e294bdb2b001256ada3387212866",
"text": "We investigate the problem of representing an entire video using CNN features for human action recognition. End-to-end learning of CNN/RNNs is currently not possible for whole videos due to GPU memory limitations and so a common practice is to use sampled frames as inputs along with the video labels as supervision. However, the global video labels might not be suitable for all of the temporally local samples as the videos often contain content besides the action of interest. We therefore propose to instead treat the deep networks trained on local inputs as local feature extractors. The local features are then aggregated to form global features which are used to assign video-level labels through a second classification stage. We investigate a number of design choices for this local feature approach. Experimental results on the HMDB51 and UCF101 datasets show that a simple maximum pooling on the sparsely sampled local features leads to significant performance improvement.",
"title": ""
}
] |
scidocsrr
|
8f69f8fda236439cea819b00b2aa924e
|
Table-to-Text: Describing Table Region With Natural Language
|
[
{
"docid": "3d32f7037ee239fe2939526559eb67d5",
"text": "We propose an end-to-end, domainindependent neural encoder-aligner-decoder model for selective generation, i.e., the joint task of content selection and surface realization. Our model first encodes a full set of over-determined database event records via an LSTM-based recurrent neural network, then utilizes a novel coarse-to-fine aligner to identify the small subset of salient records to talk about, and finally employs a decoder to generate free-form descriptions of the aligned, selected records. Our model achieves the best selection and generation results reported to-date (with 59% relative improvement in generation) on the benchmark WEATHERGOV dataset, despite using no specialized features or linguistic resources. Using an improved k-nearest neighbor beam filter helps further. We also perform a series of ablations and visualizations to elucidate the contributions of our key model components. Lastly, we evaluate the generalizability of our model on the ROBOCUP dataset, and get results that are competitive with or better than the state-of-the-art, despite being severely data-starved.",
"title": ""
}
] |
[
{
"docid": "ef89fd9b1e748280e988210c663b406f",
"text": "Better life of human is a central goal of information technology. To make a useful technology, in sensor network area, activity recognition (AR) is becoming a key feature. Using the AR technology it is now possible to know peoples behaviors like what they do, how they do and when they do etc. In recent years, there have been frequent accidental reports of aged dementia patients, and social cost has been increasing to take care of them. AR can be utilized to take care of these patients. In this paper, we present an efficient method that converts sensor’s raw data to readable patterns in order to classify their current activities and then compare these patterns with previously stored patterns to detect several abnormal patterns like wandering which is one of the early symptoms of dementia and so on. In this way, we digitalize human activities and can detect wandering and so can infer dementia through activity pattern matching. Here, we present a novel algorithm about activity digitalization using acceleration sensors as well as a wandering estimation algorithm in order to overcome limitations of existing models to detect/infer dementia.",
"title": ""
},
{
"docid": "97353be7c54dd2ded69815bf93545793",
"text": "In recent years, with the rapid development of deep learning, it has achieved great success in the field of image recognition. In this paper, we applied the convolution neural network (CNN) on supermarket commodity identification, contributing to the study of supermarket commodity identification. Different from the QR code identification of supermarket commodity, our work applied the CNN using the collected images of commodity as input. This method has the characteristics of fast and non-contact. In this paper, we mainly did the following works: 1. Collected a small dataset of supermarket goods. 2. Built Different convolutional neural network frameworks in caffe and trained the dataset using the built networks. 3. Improved train methods by finetuning the trained model.",
"title": ""
},
{
"docid": "a9cf03d67702e87b3efbf4e0602ff4e7",
"text": "We propose a real-time finger writing character recognition system using depth information. This system allows user to input characters by writing freely in the air with the Kinect. During the writing process, it is reasonable to assume that the finger and hand are always holding in front of torso. Firstly, we compute the depth histogram of human body and use a switch mixture Gaussian model to characterize it. Since the hand is closer to camera, a model-based threshold can segment the hand-related region out. Then, we employ an unsupervised clustering algorithm, K-means, to classify the segmented region into two parts, the finger-hand part and hand-arm part. By identifying the arm direction, we can determine the finger-hand cluster and locate the fingertip as the farthest point from the other cluster. We collected over 8000 frames writing-in-the-air sequences including two different subjects writing numbers, strokes, pattern, English and Chinese characters from two different distances. From our experiments, the proposed algorithm can provide robust and accurate fingertip detection, and achieve encouraging character recognition result.",
"title": ""
},
{
"docid": "8b1b0ee79538a1f445636b0798a0c7ca",
"text": "Much of the current activity in the area of intelligent vehicle-highway systems (IVHS) focuses on one simple objective: to collect more data. Clearly, improvements in sensor technology and communication systems will allow transportation agencies to more closely monitor the condition of the surface transportation system. However, monitoring alone cannot improve the safety or efficiency of the system. It is imperative that surveillance data be used to manage the system in a proactive rather than a reactive manner. 'Proactive traffic management will require the ability to predict traffic conditions. Previous predictive modeling approaches can be grouped into three categories: (a) historical, data-based algorithms; (b) time-series models; and (c) simulations. A relatively new mathematical model, the neural network, offers an attractive alternative because neural networks can model undefined, complex nonlinear surfaces. In a comparison of a backpropagation neural network model with the more traditional approaches of an historical, data-based algorithm and a time-series model, the backpropagation model· was clearly superior, although all three models did an adequate job of predicting future traffic volumes. The backpropagation model was more responsive to dynamic conditions than the historical, data-based algorithm, and it did not experience the lag and overprediction characteristics of the time-series model. Given these advantages and the backpropagation model's ability to run in a parallel computing environment, it appears that such neural network prediction models hold considerable potential for use in real-time IVHS applications.",
"title": ""
},
{
"docid": "5cce25e685265deba47f25a4fbeadfe0",
"text": "A number of recent studies in the economics literature have focused on the usefulness of factor models in the context of prediction using ‘‘big data’’ (see Bai and Ng, 2008; Dufour and Stevanovic, 2010; Forni, Hallin, Lippi, & Reichlin, 2000; Forni et al., 2005; Kim and Swanson, 2014a; Stock and Watson, 2002b, 2006, 2012, and the references cited therein). We add to this literature by analyzing whether ‘‘big data’’ are useful for modelling low frequency macroeconomic variables, such as unemployment, inflation and GDP. In particular, we analyze the predictive benefits associated with the use of principal component analysis (PCA), independent component analysis (ICA), and sparse principal component analysis (SPCA). We also evaluate machine learning, variable selection and shrinkage methods, including bagging, boosting, ridge regression, least angle regression, the elastic net, and the non-negative garotte. Our approach is to carry out a forecasting ‘‘horse-race’’ using prediction models that are constructed based on a variety of model specification approaches, factor estimation methods, and data windowing methods, in the context of predicting 11 macroeconomic variables that are relevant to monetary policy assessment. In many instances, we find that various of our benchmark models, including autoregressive (AR)models, ARmodelswith exogenous variables, and (Bayesian) model averaging, do not dominate specifications based on factor-type dimension reduction combinedwith variousmachine learning, variable selection, and shrinkagemethods (called ‘‘combination’’ models). We find that forecast combination methods are mean square forecast error (MSFE) ‘‘best’’ for only three variables out of 11 for a forecast horizon of h = 1, and for four variables when h = 3 or 12. In addition, non-PCA type factor estimation methods yield MSFE-best predictions for nine variables out of 11 for h = 1, although PCA dominates at longer horizons. Interestingly, we also find evidence of the usefulness of combinationmodels for approximately half of our variableswhen h > 1.Most importantly, we present strong new evidence of the usefulness of factor-based dimension reduction when utilizing ‘‘big data’’ for macroeconometric forecasting. © 2016 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved. ∗ Corresponding author. E-mail addresses: khdouble@bok.or.kr (H.H. Kim), nswanson@econ.rutgers.edu (N.R. Swanson).",
"title": ""
},
{
"docid": "876ee0ecb1b6196a19fb2ab85b86f19d",
"text": "This paper presents new experimental data and an improved mechanistic model for the Gas-Liquid Cylindrical Cyclone (GLCC) separator. The data were acquired utilizing a 3” ID laboratory-scale GLCC, and are presented along with a limited number of field data. The data include measurements of several parameters of the flow behavior and the operational envelope of the GLCC. The operational envelope defines the conditions for which there will be no liquid carry-over or gas carry-under. The developed model enables the prediction of the hydrodynamic flow behavior in the GLCC, including the operational envelope, equilibrium liquid level, vortex shape, velocity and holdup distributions and pressure drop across the GLCC. The predictions of the model are compared with the experimental data. These provide the state-of-the-art for the design of GLCC’s for the industry. Introduction The gas-liquid separation technology currently used by the petroleum industry is mostly based on the vessel-type separator which is large, heavy and expensive to purchase and operate. This technology has not been substantially improved over the last several decades. In recent years the industry has shown interest in the development and application of alternatives to the vessel-type separator. One such alternative is the use of compact or in-line separators, such as the Gas-Liquid Cylindrical Cyclone (GLCC) separator. As can be seen in Fig. 1, the GLCC is an emerging class of vertical compact separators, as compared to the very mature technology of the vessel-type separator. D ev el op m en t GLCC’s FWKO Cyclones Emerging Gas Cyclones Conventional Horizontal and Vertical Separators Growth Finger Storage Slug Catcher Vessel Type Slug Catcher",
"title": ""
},
{
"docid": "98881e7174d495d42a0d68c0f0d7bf3b",
"text": "The design process is often characterized by and realized through the iterative steps of evaluation and refinement. When the process is based on a single creative domain such as visual art or audio production, designers primarily take inspiration from work within their domain and refine it based on their own intuitions or feedback from an audience of experts from within the same domain. What happens, however, when the creative process involves more than one creative domain such as in a digital game? How should the different domains influence each other so that the final outcome achieves a harmonized and fruitful communication across domains? How can a computational process orchestrate the various computational creators of the corresponding domains so that the final game has the desired functional and aesthetic characteristics? To address these questions, this paper identifies game facet orchestration as the central challenge for artificial-intelligence-based game generation, discusses its dimensions, and reviews research in automated game generation that has aimed to tackle it. In particular, we identify the different creative facets of games, propose how orchestration can be facilitated in a top-down or bottom-up fashion, review indicative preliminary examples of orchestration, and conclude by discussing the open questions and challenges ahead.",
"title": ""
},
{
"docid": "e2950089f76e1509ad2aa74ea5c738eb",
"text": "In this review the knowledge status of and future research options on a green gas supply based on biogas production by co-digestion is explored. Applications and developments of the (bio)gas supply in The Netherlands have been considered, whereafter literature research has been done into the several stages from production of dairy cattle manure and biomass to green gas injection into the gas grid. An overview of a green gas supply chain has not been made before. In this study it is concluded that on installation level (micro-level) much practical knowledge is available and on macro-level knowledge about availability of biomass. But on meso-level (operations level of a green gas supply) very little research has been done until now. Future research should include the modeling of a green gas supply chain on an operations level, i.e. questions must be answered as where to build digesters based on availability of biomass. Such a model should also advise on technology of upgrading depending on scale factors. Future research might also give insight in the usability of mixing (partly upgraded) biogas with natural gas. The preconditions for mixing would depend on composition of the gas, the ratio of gases to be mixed and the requirements on the mixture.",
"title": ""
},
{
"docid": "a049d8375465cadb67a796c52bf42f79",
"text": "We extend continuous assurance research by proposing a novel continuous assurance architecture grounded in information fusion research. Existing continuous assurance architectures focus primarily on methods of monitoring assurance clients’ systems to detect anomalous activities and have not addressed the question of how to process the detected anomalies. Consequently, actual implementations of these systems typically detect a large number of anomalies, with the resulting information overload leading to suboptimal decision making due to human information processing limitations. The proposed architecture addresses these issues by performing anomaly detection, aggregation and evaluation. Within the proposed architecture, artifacts developed in prior continuous assurance, ontology, and artificial intelligence research are used to perform the detection, aggregation and evaluation information fusion tasks. The architecture contributes to the academic continuous assurance literature and has implications for practitioners involved in the development of more robust and useful continuous assurance systems.",
"title": ""
},
{
"docid": "30941e0bc8575047d1adc8c20983823b",
"text": "The world has changed dramatically for wind farm operators and service providers in the last decade. Organizations whose turbine portfolios was counted in 10-100s ten years ago are now managing large scale operation and service programs for fleet sizes well above one thousand turbines. A big challenge such organizations now face is the question of how the massive amount of operational data that are generated by large fleets are effectively managed and how value is gained from the data. A particular hard challenge is the handling of data streams collected from advanced condition monitoring systems. These data are highly complex and typically require expert knowledge to interpret correctly resulting in poor scalability when moving to large Operation and Maintenance (O&M) platforms.",
"title": ""
},
{
"docid": "88a41637c732aae49503bb8d94f1790a",
"text": "Different demographics, e.g., gender or age, can demonstrate substantial variation in their language use, particularly in informal contexts such as social media. In this paper we focus on learning gender differences in the use of subjective language in English, Spanish, and Russian Twitter data, and explore cross-cultural differences in emoticon and hashtag use for male and female users. We show that gender differences in subjective language can effectively be used to improve sentiment analysis, and in particular, polarity classification for Spanish and Russian. Our results show statistically significant relative F-measure improvement over the gender-independent baseline 1.5% and 1% for Russian, 2% and 0.5% for Spanish, and 2.5% and 5% for English for polarity and subjectivity classification.",
"title": ""
},
{
"docid": "860be6329845de071654aadaa0d45e5a",
"text": "This report demonstrates our solution for the Open Images 2018 Challenge. Based on our detailed analysis on the Open Images Datasets (OID), it is found that there are four typical features: large-scale, hierarchical tag system, severe annotation incompleteness and data imbalance. Considering these characteristics, an amount of strategies are employed, including SNIPER, soft sampling, class-aware sampling (CAS), hierarchical non-maximumsuppression (HNMS) and so on. In virtue of these effective strategies, and further using the powerful SENet154 armed with feature pyramid module and deformable ROIalign as the backbone, our best single model could achieve a mAP of 56.9%. After a further ensemble with 9 models, the final mAP is boosted to 62.2% in the public leaderboard (ranked the 2nd place) and 58.6% in the private leaderboard (ranked the 3rd place, slightly inferior to the 1st place by only 0.04 point).",
"title": ""
},
{
"docid": "201f576423ed88ee97d1505b6d5a4d3f",
"text": "The effectiveness of the treatment of breast cancer depends on its timely detection. An early step in the diagnosis is the cytological examination of breast material obtained directly from the tumor. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies to characterize these biopsies as either benign or malignant. Instead of relying on the accurate segmentation of cell nuclei, the nuclei are estimated by circles using the circular Hough transform. The resulting circles are then filtered to keep only high-quality estimations for further analysis by a support vector machine which classifies detected circles as correct or incorrect on the basis of texture features and the percentage of nuclei pixels according to a nuclei mask obtained using Otsu's thresholding method. A set of 25 features of the nuclei is used in the classification of the biopsies by four different classifiers. The complete diagnostic procedure was tested on 737 microscopic images of fine needle biopsies obtained from patients and achieved 98.51% effectiveness. The results presented in this paper demonstrate that a computerized medical diagnosis system based on our method would be effective, providing valuable, accurate diagnostic information.",
"title": ""
},
{
"docid": "06c3f32f07418575c700e2f0925f4398",
"text": "The spacing of a fixed amount of study time across multiple sessions usually increases subsequent test performance*a finding known as the spacing effect. In the spacing experiment reported here, subjects completed multiple learning trials, and each included a study phase and a test. Once a subject achieved a perfect test, the remaining learning trials within that session comprised what is known as overlearning. The number of these overlearning trials was reduced when learning trials were spaced across multiple sessions rather than massed in a single session. In addition, the degree to which spacing reduced overlearning predicted the size of the spacing effect, which is consistent with the possibility that spacing increases subsequent recall by reducing the occurrence of overlearning. By this account, overlearning is an inefficient use of study time, and the efficacy of spacing depends at least partly on the degree to which it reduces the occurrence of overlearning.",
"title": ""
},
{
"docid": "cb3e55580b47b0527ca5b46cea7af425",
"text": "Recent practise has revealed that conservation interventions that seek to achieve multiple benefits generally face significant, if under-recognized trade-offs. REDD+ policies present prospective win–win solutions for climate change mitigation, rural development and biodiversity conservation. Notably, protecting, enhancing and restoring forests for their carbon sequestration services has the potential to additionally promote the conservation of imperiled tropical biodiversity. However, it has become increasingly apparent that efforts to design a REDD+ mechanism that optimizes emissions reductions and associated co-benefits face significant environmental and economic trade-offs. We provide a framework for conceptualizing the major related policy options, presenting the associated trade-offs as a continuum and as functions of two key factors: (1) geographic targeting, and (2) the selection of specific forest management activities. Our analysis highlights the challenges of assessing trade-offs using existing data and valuation schemes, and the difficulty of paying for and legislating biodiversity co-benefits and safeguards within a future REDD+ mechanism. 2011 Elsevier Ltd. All rights reserved. 1. REDD+ as a win–win solution enhancing forests. Although generally considered ancillary to Tropical forests face a new set of win–win expectations. REDD+ policies under development through the United Nations Framework Convention on Climate Change (UNFCCC) would financially reward countries that reduce their carbon emissions through interventions to reduce deforestation and forest degradation, and conserve, sustainably manage and enhance forest carbon stocks (UNFCCC, 2010). These policies could provide large-scale carbon emissions reductions at comparatively low abatement costs (Stern, 2006), while also promoting sustainable forest sector development, enhancing rural livelihoods and protecting biodiversity— multiple objectives reaffirmed during the UNFCCC 17th Conference of Parties (2011a). A future REDD+ mechanism could transfer billions of dollars from industrialized nations to tropical developing countries each year (e.g., Ballesteros et al., 2011). Funds would be used to protect threatened forests, restore degraded forests, improve forest sector planning and governance, and incentivize sustainable management in order to reduce forest-based carbon emissions. They may further generate social co-benefits through conservation payments to landholders and sustainable development initiatives that both improve rural livelihoods and reduce pressures on forests (e.g., Palmer, this issue). REDD+ policies may also deliver significant, additional biodiversity co-benefits by better protecting, managing and ll rights reserved. helps), ted.webb@nus.edu.sg al. Win–win REDD+ approache emissions reductions, biodiversity co-benefits have proved important to early REDD+ project developers (Cerbu et al., 2011; Dickson et al., 2009). Sites where conservation priorities geographically overlap with high carbon density forests are especially likely to deliver win–win outcomes (Busch et al., 2011; Kapos et al., 2008; Strassburg et al., 2010; Venter et al., 2009b). Similarly, REDD+ investments in carbon stock enhancement through reforestation can benefit biodiversity (Kettle, this issue), and REDD+ support for sustainable forest management strategies may provide economically competitive, more biodiversity-friendly alternatives to conventional logging (e.g., CBD and GIZ, 2011). 
A number of prospective REDD+ interventions may deliver win– win solutions, generating considerable optimism (e.g., Busch et al. 2011; Christophersen and Stahl, 2011; CI, 2010; Djoghlaf, 2010; UNFCCC, 2011a; Viana, 2009). However, there is growing evidence that even where multiple benefits are possible REDD+ policy decisions face significant carbon–biodiversity trade-offs (Hirsch et al., 2010). We provide a framework for conceptualizing the major policy options currently available for a REDD+ mechanism that seeks joint emissions reduction and biodiversity conservation outcomes, and for anticipating the associated trade-offs. The framework facilitates more realistic assessments of the win–win opportunities afforded by REDD+. 2. Acknowledging conservation trade-offs Environmental management often seeks multiple benefits, of which biodiversity conservation is often implicitly or explicitly a s belie carbon–biodiversity trade-offs. Biol. Conserv. (2012), doi:10.1016/ 2 J. Phelps et al. / Biological Conservation xxx (2012) xxx–xxx desired outcome (e.g., Kareiva et al., 2008). The tension between maximizing multiple benefits and accepting trade-offs in previous initiatives is instructive to the REDD+ debate. Integrated conservation and development projects, and community-based management initiatives have traditionally linked livelihood development and biodiversity conservation (e.g., Kremen et al., 1994). Agricultural intensification programmes have jointly promoted enhanced productivity and land sparing for conservation (e.g., Avery, 1997). Similarly, sustainable forest management and reduced impact logging initiatives have sought to maintain forest-based biodiversity alongside profitable extractive industries (e.g., Gascon et al., 1998). Perhaps most recently, the evolution of payment for ecosystem services schemes has presented opportunities to jointly address biodiversity conservation and poverty alleviation (Wunder, 2008). Experience with these programmes has shown that while multiple benefits are possible in some contexts, win–win solutions remain the source of considerable debate (see McShane et al., 2011; Hirsch et al., 2010; for debates surrounding the win–win examples above, see Adams et al. 2006; Agrawal and Redford, 2006; Bowles et al., 1998; Garcia-Fernandez et al., 2008; Matson and Vitousek, 2006; Redford and Adams, 2009; Robinson and Redford, 2004). These examples offer ample precedents for REDD+ efforts that seek to both maximize carbon sequestration and biodiversity conservation. Accepting trade-offs explicitly requires a decision to forego the maximum return of one outcome in exchange for an increase in another outcome. Trade-offs present difficult decisions for policy makers, and often require a reassessment of priorities and expected outcomes (Minteer and Miller, 2011), or even a new definition of intervention success. There is increased recognition that conservation interventions suffer where trade-offs are overlooked (prompting unrealistic expectations), while honest assessments of trade-offs can facilitate problem-solving, improve planning (Hirsch et al., 2010; McShane et al., 2011) and increase conservation success. There have been recent calls to evaluate the trade-offs of REDD+ policy options (e.g., Ghazoul et al., 2010; Harvey et al., 2010; Hirsch et al., 2010), as it has become increasingly apparent that REDD+ interventions are more complex than depicted by win–win representations alone (e.g., Ebeling and Fehse, 2009; Paoli et al., 2010; Phelps et al., 2010b). 
For example, evidence suggests that a REDD+ mechanism will not automatically yield significant, geographically-distributed biodiversity co-benefits (Ebeling and Yasue, 2008; Paoli et al., 2010; Strassburg et al., 2010; Venter et al., 2009a). Moreover, in some circumstances REDD+ policies may lead to unintentional biodiversity loss, for example if REDD+ policies displace deforestation pressures into other forests (leakage), or if REDD+ redirects funds away from other conservation objectives (Grainger et al., 2009; Miles and Kapos, 2008; Putz and Redford, 2009). Thus, while most REDD+ policies have the potential to deliver multiple benefits (Fig. 1), a future mechanism (1) requires environmental regulations and safeguards in order to protect against unintended biodiversity loss, and (2) would have to be specially designed in order to maximize additional biodiversity co-benefits (Harvey et al., 2010; Pistorius et al., 2010). 3. Conceptualizing carbon–biodiversity trade-offs In general, sites where carbon and biodiversity priorities geographically overlap (e.g., Congo Basin), and where land management approaches favor both carbon and biodiversity conservation (e.g., protected areas), represent prospective win–win outcomes. However, once the obviously attractive investments are implemented, the selection of ‘second-tier’ investments will involve carPlease cite this article in press as: Phelps, J., et al. Win–win REDD+ approache j.biocon.2011.12.031 bon–biodiversity trade-offs (e.g., Miles and Kapos, 2008; Price et al., 2008). Importantly, interventions designed to maximize biodiversity co-benefits may yield less carbon benefits than interventions that prioritize maximum carbon outcomes (Fig. 1; Harvey et al., 2010; Paoli et al., 2010). The associated trade-offs are not binary (all biodiversity or all carbon benefits). Fig. 1 reveals that many of the strategies under consideration for REDD+ support have the potential to deliver win–win outcomes. However, there is a range of prospective safeguards and investment options that offer varying degrees of environmental protection, a continuum of prospective biodiversity co-benefits (Miles and Dickson, 2010) and a range of prospective carbon benefits. Fig. 1 depicts the trade-offs facing forest management interventions that seek to (1) maximize carbon emissions reductions, (2) avoid unintended biodiversity loss through the adoption of safeguards, and (3) maximize additional biodiversity co-benefits. The trade-offs between these three conservation objectives are represented as the function of two key policy dimensions: (1) geographic targeting of where REDD+ interventions are located, and (2) the planning, selection and implementation of forest management interventions. The figure is explained using six scenarios (A–F), which represent specific examples of prospective investments intended to reduce forest-based carbon emissions, although not all are necessaril",
"title": ""
},
{
"docid": "f43aef1428a2c481fc97a25c17f4bdb4",
"text": "It is thought by cognitive scientists and typographers alike, that lower-case text is more legible than upper-case. Yet lower-case letters are, on average, smaller in height and width than upper-case characters, which suggests an upper-case advantage. Using a single unaltered font and all upper-, all lower-, and mixed-case text, we assessed size thresholds for words and random strings, and reading speeds for text with normal and visually impaired participants. Lower-case thresholds were roughly 0.1 log unit higher than upper. Reading speeds were higher for upper- than for mixed-case text at sizes twice acuity size; at larger sizes, the upper-case advantage disappeared. Results suggest that upper-case is more legible than the other case styles, especially for visually-impaired readers, because smaller letter sizes can be used than with the other case styles, with no diminution of legibility.",
"title": ""
},
{
"docid": "457bcddcc1c509954c614daf2f7b9227",
"text": "Human-robot interaction (HRI) for mobile robots is still in its infancy. Most user interactions with robots have been limited to teleoperation capabilities where the most common interface provided to the user has been the video feed from the robotic platform and some way of directing the path of the robot. For mobile robots with semi-autonomous capabilities, the user is also provided with a means of setting way points. More importantly, most HRI capabilities have been developed by robotics experts for use by robotics experts. As robots increase in capabilities and are able to perform more tasks in an autonomous manner we need to think about the interactions that humans will have with robots and what software architecture and user interface designs can accommodate the human in-the-loop. We also need to design systems that can be used by domain experts but not robotics experts. This paper outlines a theory of human-robot interaction and proposes the interactions and information needed by both humans and robots for the different levels of interaction, including an evaluation methodology based on situational awareness.",
"title": ""
},
{
"docid": "2e976aa51bc5550ad14083d5df7252a8",
"text": "This paper presents a 60-dB gain bulk-driven Miller OTA operating at 0.25-V power supply in the 130-nm digital CMOS process. The amplifier operates in the weak-inversion region with input bulk-driven differential pair sporting positive feedback source degeneration for transconductance enhancement. In addition, the distributed layout configuration is used for all the transistors to mitigate the effect of halo implants for higher output impedance. Combining these two approaches, we experimentally demonstrate a high gain of over 60-dB with just 18-nW power consumption from 0.25-V power supply. The use of enhanced bulk-driven differential pair and distributed layout can help overcome some of the constraints imposed by nanometer CMOS process for high performance analog circuits in weak inversion region.",
"title": ""
},
{
"docid": "60d8839833d10b905729e3d672cafdd6",
"text": "In order to account for the phenomenon of virtual pitch, various theories assume implicitly or explicitly that each spectral component introduces a series of subharmonics. The spectral-compression method for pitch determination can be viewed as a direct implementation of this principle. The widespread application of this principle in pitch determination is, however, impeded by numerical problems with respect to accuracy and computational efficiency. A modified algorithm is described that solves these problems. Its performance is tested for normal speech and \"telephone\" speech, i.e., speech high-pass filtered at 300 Hz. The algorithm out-performs the harmonic-sieve method for pitch determination, while its computational requirements are about the same. The algorithm is described in terms of nonlinear system theory, i.c., subharmonic summation. It is argued that the favorable performance of the subharmonic-summation algorithm stems from its corresponding more closely with current pitch-perception theories than does the harmonic sieve.",
"title": ""
},
{
"docid": "d8da3a1c33ab84c06daabd708ef41c46",
"text": "OBJECTIVE\nSexual dysfunction is a common clinical symptom in women who were victims of childhood sexual abuse. The precise mechanism that mediates this association remains poorly understood. The authors evaluated the relationship between the experience of childhood abuse and neuroplastic thinning of cortical fields, depending on the nature of the abusive experience.\n\n\nMETHOD\nThe authors used MRI-based cortical thickness analysis in 51 medically healthy adult women to test whether different forms of childhood abuse were associated with cortical thinning in areas critical to the perception and processing of specific behavior implicated in the type of abuse.\n\n\nRESULTS\nExposure to childhood sexual abuse was specifically associated with pronounced cortical thinning in the genital representation field of the primary somatosensory cortex. In contrast, emotional abuse was associated with cortical thinning in regions relevant to self-awareness and self-evaluation.\n\n\nCONCLUSIONS\nNeural plasticity during development appears to result in cortical adaptation that may shield a child from the sensory processing of the specific abusive experience by altering cortical representation fields in a regionally highly specific manner. Such plastic reorganization may be protective for the child living under abusive conditions, but it may underlie the development of behavioral problems, such as sexual dysfunction, later in life.",
"title": ""
}
] |
scidocsrr
|
3aacde304008aef1c80cfe817e8354c8
|
Exploiting smart e-Health gateways at the edge of healthcare Internet-of-Things: A fog computing approach
|
[
{
"docid": "be99f6ba66d573547a09d3429536049e",
"text": "With the development of sensor, wireless mobile communication, embedded system and cloud computing, the technologies of Internet of Things have been widely used in logistics, Smart Meter, public security, intelligent building and so on. Because of its huge market prospects, Internet of Things has been paid close attention by several governments all over the world, which is regarded as the third wave of information technology after Internet and mobile communication network. Bridging between wireless sensor networks with traditional communication networks or Internet, IOT Gateway plays an important role in IOT applications, which facilitates the seamless integration of wireless sensor networks and mobile communication networks or Internet, and the management and control with wireless sensor networks. In this paper, we proposed an IOT Gateway system based on Zigbee and GPRS protocols according to the typical IOT application scenarios and requirements from telecom operators, presented the data transmission between wireless sensor networks and mobile communication networks, protocol conversion of different sensor network protocols, and control functionalities for sensor networks, and finally gave an implementation of prototyping system and system validation.",
"title": ""
},
{
"docid": "b25cfcd6ceefffe3039bb5a6a53e216c",
"text": "With the increasing applications in the domains of ubiquitous and context-aware computing, Internet of Things (IoT) are gaining importance. In IoTs, literally anything can be part of it, whether it is sensor nodes or dumb objects, so very diverse types of services can be produced. In this regard, resource management, service creation, service management, service discovery, data storage, and power management would require much better infrastructure and sophisticated mechanism. The amount of data IoTs are going to generate would not be possible for standalone power-constrained IoTs to handle. Cloud computing comes into play here. Integration of IoTs with cloud computing, termed as Cloud of Things (CoT) can help achieve the goals of envisioned IoT and future Internet. This IoT-Cloud computing integration is not straight-forward. It involves many challenges. One of those challenges is data trimming. Because unnecessary communication not only burdens the core network, but also the data center in the cloud. For this purpose, data can be preprocessed and trimmed before sending to the cloud. This can be done through a Smart Gateway, accompanied with a Smart Network or Fog Computing. In this paper, we have discussed this concept in detail and present the architecture of Smart Gateway with Fog Computing. We have tested this concept on the basis of Upload Delay, Synchronization Delay, Jitter, Bulk-data Upload Delay, and Bulk-data Synchronization Delay.",
"title": ""
}
] |
[
{
"docid": "e1a24bdf3ff1cb343b5a99391bf531b4",
"text": "OAuth 2.0 provides an open framework for the authorization of users across the web. While the standard enumerates mandatory security protections for a variety of attacks, many embodiments of this standard allow these protections to be optionally implemented. In this paper, we analyze the extent to which one particularly dangerous vulnerability, Cross Site Request Forgery, exists in realworld deployments. We crawl the Alexa Top 10,000 domains, and conservatively identify that 25% of websites using OAuth appear vulnerable to CSRF attacks. We then perform an in-depth analysis of four high-profile case studies, which reveal not only weaknesses in sample code provided in SDKs, but also inconsistent implementation of protections among services provided by the same company. From these data points, we argue that protection against known and sometimes subtle security vulnerabilities can not simply be thrust upon developers as an option, but instead must be strongly enforced by Identity Providers before allowing web applications to connect.",
"title": ""
},
{
"docid": "66e43ce62fd7e9cf78c4ff90b82afb8d",
"text": "BACKGROUND\nConcern over the frequency of unintended harm to patients has focused attention on the importance of teamwork and communication in avoiding errors. This has led to experiments with teamwork training programmes for clinical staff, mostly based on aviation models. These are widely assumed to be effective in improving patient safety, but the extent to which this assumption is justified by evidence remains unclear.\n\n\nMETHODS\nA systematic literature review on the effects of teamwork training for clinical staff was performed. Information was sought on outcomes including staff attitudes, teamwork skills, technical performance, efficiency and clinical outcomes.\n\n\nRESULTS\nOf 1036 relevant abstracts identified, 14 articles were analysed in detail: four randomized trials and ten non-randomized studies. Overall study quality was poor, with particular problems over blinding, subjective measures and Hawthorne effects. Few studies reported on every outcome category. Most reported improved staff attitudes, and six of eight reported significantly better teamwork after training. Five of eight studies reported improved technical performance, improved efficiency or reduced errors. Three studies reported evidence of clinical benefit, but this was modest or of borderline significance in each case. Studies with a stronger intervention were more likely to report benefits than those providing less training. None of the randomized trials found evidence of technical or clinical benefit.\n\n\nCONCLUSION\nThe evidence for technical or clinical benefit from teamwork training in medicine is weak. There is some evidence of benefit from studies with more intensive training programmes, but better quality research and cost-benefit analysis are needed.",
"title": ""
},
{
"docid": "14fdf8fa41d46ad265b48bbc64a2d3cc",
"text": "Preserving edge structures is a challenge to image interpolation algorithms that reconstruct a high-resolution image from a low-resolution counterpart. We propose a new edge-guided nonlinear interpolation technique through directional filtering and data fusion. For a pixel to be interpolated, two observation sets are defined in two orthogonal directions, and each set produces an estimate of the pixel value. These directional estimates, modeled as different noisy measurements of the missing pixel are fused by the linear minimum mean square-error estimation (LMMSE) technique into a more robust estimate, using the statistics of the two observation sets. We also present a simplified version of the LMMSE-based interpolation algorithm to reduce computational cost without sacrificing much the interpolation performance. Experiments show that the new interpolation techniques can preserve edge sharpness and reduce ringing artifacts",
"title": ""
},
{
"docid": "d7c0b0261547590d405e118301651b1f",
"text": "This paper reports on the Event StoryLine Corpus (ESC) v0.9, a new benchmark dataset for the temporal and causal relation detection. By developing this dataset, we also introduce a new task, the StoryLine Extraction from news data, which aims at extracting and classifying events relevant for stories, from across news documents spread in time and clustered around a single seminal event or topic. In addition to describing the dataset, we also report on three baselines systems whose results show the complexity of the task and suggest directions for the development of more robust systems.",
"title": ""
},
{
"docid": "5c05ad44ac2bf3fb26cea62d563435f8",
"text": "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.",
"title": ""
},
{
"docid": "6038975e7868b235f2b665ffbd249b68",
"text": "Existing person re-identification benchmarks and methods mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be searched from a gallery of whole scene images. To close the gap, we propose a new deep learning framework for person search. Instead of breaking it down into two separate tasks—pedestrian detection and person re-identification, we jointly handle both aspects in a single convolutional neural network. An Online Instance Matching (OIM) loss function is proposed to train the network effectively, which is scalable to datasets with numerous identities. To validate our approach, we collect and annotate a large-scale benchmark dataset for person search. It contains 18,184 images, 8,432 identities, and 96,143 pedestrian bounding boxes. Experiments show that our framework outperforms other separate approaches, and the proposed OIM loss function converges much faster and better than the conventional Softmax loss.",
"title": ""
},
{
"docid": "dcdc8c237961aa063f8fb307f2e1697b",
"text": "We collected data from Twitter posts about firms in the S&P 500 and analyzed their cumulative emotional valence (i.e., whether the posts contained an overall positive or negative emotional sentiment). We compared this to the average daily stock market returns of firms in the S&P 500. Our results show that the cumulative emotional valence (positive or negative) of Twitter tweets about a specific firm was significantly related to that firm's stock returns. The emotional valence of tweets from users with many followers (more than the median) had a stronger impact on same day returns, as emotion was quickly disseminated and incorporated into stock prices. In contrast, the emotional valence of tweets from users with few followers had a stronger impact on future stock returns (10-day returns).",
"title": ""
},
{
"docid": "d2cefbafb0d0ab30daa17630bc800026",
"text": "To assess the feasibility, technical success, and effectiveness of high-resolution magnetic resonance (MR)-guided posterior femoral cutaneous nerve (PFCN) blocks. A retrospective analysis of 12 posterior femoral cutaneous nerve blocks in 8 patients [6 (75 %) female, 2 (25 %) male; mean age, 47 years; range, 42–84 years] with chronic perineal pain suggesting PFCN neuropathy was performed. Procedures were performed with a clinical wide-bore 1.5-T MR imaging system. High-resolution MR imaging was utilized for visualization and targeting of the PFCN. Commercially available, MR-compatible 20-G needles were used for drug delivery. Variables assessed were technical success (defined as injectant surrounding the targeted PFCN on post-intervention MR images) effectiveness, (defined as post-interventional regional anesthesia of the target area innervation downstream from the posterior femoral cutaneous nerve block), rate of complications, and length of procedure time. MR-guided PFCN injections were technically successful in 12/12 cases (100 %) with uniform perineural distribution of the injectant. All blocks were effective and resulted in post-interventional regional anesthesia of the expected areas (12/12, 100 %). No complications occurred during the procedure or during follow-up. The average total procedure time was 45 min (30–70) min. Our initial results demonstrate that this technique of selective MR-guided PFCN blocks is feasible and suggest high technical success and effectiveness. Larger studies are needed to confirm our initial results.",
"title": ""
},
{
"docid": "1ebaa8de358a160024c07470dd48943a",
"text": "This study introduces and evaluates the robustness of different volumetric, sentiment, and social network approaches to predict the elections in three Asian countries – Malaysia, India, and Pakistan from Twitter posts. We find that predictive power of social media performs well for India and Pakistan but is not effective for Malaysia. Overall, we find that it is useful to consider the recency of Twitter posts while using it to predict a real outcome, such as an election result. Sentiment information mined using machine learning models was the most accurate predictor of election outcomes. Social network information is stable despite sudden surges in political discussions, for e.g. around electionsrelated news events. Methods combining sentiment and volume information, or sentiment and social network information, are effective at predicting smaller vote shares, for e.g. vote shares in the case of independent candidates and regional parties. We conclude with a detailed discussion on the caveats of social media analysis for predicting real-world outcomes and recommendations for future work. ARTICLE HISTORY Received 1 August 2017 Revised 12 February 2018 Accepted 12 March 2018",
"title": ""
},
{
"docid": "ea01ef46670d4bb8244df0d6ab08a3d5",
"text": "In this paper, statics model of an underactuated wire-driven flexible robotic arm is introduced. The robotic arm is composed of a serpentine backbone and a set of controlling wires. It has decoupled bending rigidity and axial rigidity, which enables the robot large axial payload capacity. Statics model of the robotic arm is developed using the Newton-Euler method. Combined with the kinematics model, the robotic arm deformation as well as the wire motion needed to control the robotic arm can be obtained. The model is validated by experiments. Results show that, the proposed model can well predict the robotic arm bending curve. Also, the bending curve is not affected by the wire pre-tension. This enables the wire-driven robotic arm with potential applications in minimally invasive surgical operations.",
"title": ""
},
{
"docid": "fcc94c9c9f388386b7eadc42c432f273",
"text": "Thanks to the growing availability of spoofing databases and rapid advances in using them, systems for detecting voice spoofing attacks are becoming more and more capable, and error rates close to zero are being reached for the ASVspoof2015 database. However, speech synthesis and voice conversion paradigms that are not considered in the ASVspoof2015 database are appearing. Such examples include direct waveform modelling and generative adversarial networks. We also need to investigate the feasibility of training spoofing systems using only low-quality found data. For that purpose, we developed a generative adversarial networkbased speech enhancement system that improves the quality of speech data found in publicly available sources. Using the enhanced data, we trained state-of-the-art text-to-speech and voice conversion models and evaluated them in terms of perceptual speech quality and speaker similarity. The results show that the enhancement models significantly improved the SNR of low-quality degraded data found in publicly available sources and that they significantly improved the perceptual cleanliness of the source speech without significantly degrading the naturalness of the voice. However, the results also show limitations when generating speech with the low-quality found data.",
"title": ""
},
{
"docid": "5af83f822ac3d9379c7b477ff1d32a97",
"text": "Sprout is an end-to-end transport protocol for interactive applications that desire high throughput and low delay. Sprout works well over cellular wireless networks, where link speeds change dramatically with time, and current protocols build up multi-second queues in network gateways. Sprout does not use TCP-style reactive congestion control; instead the receiver observes the packet arrival times to infer the uncertain dynamics of the network path. This inference is used to forecast how many bytes may be sent by the sender, while bounding the risk that packets will be delayed inside the network for too long. In evaluations on traces from four commercial LTE and 3G networks, Sprout, compared with Skype, reduced self-inflicted end-to-end delay by a factor of 7.9 and achieved 2.2× the transmitted bit rate on average. Compared with Google’s Hangout, Sprout reduced delay by a factor of 7.2 while achieving 4.4× the bit rate, and compared with Apple’s Facetime, Sprout reduced delay by a factor of 8.7 with 1.9× the bit rate. Although it is end-to-end, Sprout matched or outperformed TCP Cubic running over the CoDel active queue management algorithm, which requires changes to cellular carrier equipment to deploy. We also tested Sprout as a tunnel to carry competing interactive and bulk traffic (Skype and TCP Cubic), and found that Sprout was able to isolate client application flows from one another.",
"title": ""
},
{
"docid": "776726fe88c24dff0b726a71f0f94d67",
"text": "The application of remote sensing technology and precision agriculture in the oil palm industry is in development. This study investigated the potential of high resolution QuickBird satellite imagery, which has a synoptic overview, for detecting oil palms infected by basal stem rot disease and for mapping the disease. Basal stem rot disease poses a major threat to the oil palm industry, especially in Indonesia. It is caused by Ganoderma boninense and the symptoms can be seen on the leaf and basal stem. At present there is no effective control for this disease and early detection of the infection is essential. A detailed, accurate and rapid method of monitoring the disease is needed urgently. This study used QuickBird imagery to detect the disease and its spatial pattern. Initially, oil palm and non oil palm object segmentation based on the red band was used to map the spatial pattern of the disease. Secondly, six vegetation indices derived from visible and near infrared bands (NIR) were used for to identify palms infected by the disease. Finally, ground truth from field sampling in four fields with different ages of plant and degrees of infection was used to assess the accuracy of the remote sensing approach. The results show that image segmentation effectively delineated areas infected by the disease with a mapping accuracy of 84%. The resulting maps showed two patterns of the disease; a sporadic pattern in fields with older palms and a dendritic pattern in younger palms with medium to low infection. Ground truth data showed that oil palms infected by basal stem rot had a higher reflectance in the visible bands and a lower reflectance in the near infrared band. Different vegetation indices performed differently in each field. The atmospheric resistant vegetation index and green blue normalized difference vegetation index identified the disease with an accuracy of 67% in a field with 21 year old palms and high infection rates. In the field of 10 year old palms with medium rates of infection, the simple ratio (NIR/red) was effective with an accuracy of 62% for identifying the disease. The green blue normalized difference vegetation index was effective in the field of 10 years old palms with low infection rates with an accuracy of 59%. In the field of 15 and 18 years old palms with low infection rates, all the indices showed low levels of accuracy for identifying the disease. This study suggests that high resolution QuickBird imagery offers a quick, detailed and accurate way of estimating the location and extent of basal stem rot disease infections in oil palm plantations.",
"title": ""
},
{
"docid": "a8fa56dcb8524cc31feb946cf6d88e02",
"text": "We propose a fraud detection method based on the user accounts visualization and threshold-type detection. The visualization technique employed in our approach is the Self-Organizing Map (SOM). Since the SOM technique in its original form visualizes only the vectors, and the user accounts are represented in our work as the matrices storing a collection of records reflecting the user sequential activities, we propose a method of the matrices visualization on the SOM grid, which constitutes the main contribution of this paper. Furthermore, we propose a method of the detection threshold setting on the basis of the SOM U-matrix. The results of the conducted experimental study on real data in three different research fields confirm the advantages and effectiveness of the proposed approach. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5e83743d4a3997afdbf6898e8b5d54b5",
"text": "I would like to devote this review to my teachers and colleagues, Nadine Brisson and Gilbert Saint, who passed away too early. I am also grateful to a number of people who contributed directly and indirectly to this paper: Antonio Formaggio and Yosio Shimabokuro and their team from INPE (Sao Jose dos Campos) for shared drafting of a related research proposal for a Brazilian monitoring system; Felix Rembold (JRC Ispra) for discussing the original manuscript; Zoltan Balint, Peris Muchiri and SWALIM at FAO Somalia (Nairobi, Kenya) for contributions regarding the CDI drought index; and Anja Klisch, Francesco Vuolo and Matteo Mattiuzzi (BOKU Vienna) for providing inputs related to vegetation phenology and the MODIS processing chain.",
"title": ""
},
{
"docid": "03cd6ef0cc0dab9f33b88dd7ae4227c2",
"text": "The dopaminergic system plays a pivotal role in the central nervous system via its five diverse receptors (D1–D5). Dysfunction of dopaminergic system is implicated in many neuropsychological diseases, including attention deficit hyperactivity disorder (ADHD), a common mental disorder that prevalent in childhood. Understanding the relationship of five different dopamine (DA) receptors with ADHD will help us to elucidate different roles of these receptors and to develop therapeutic approaches of ADHD. This review summarized the ongoing research of DA receptor genes in ADHD pathogenesis and gathered the past published data with meta-analysis and revealed the high risk of DRD5, DRD2, and DRD4 polymorphisms in ADHD.",
"title": ""
},
{
"docid": "2ffd4537f9adff88434c8a2b5860b6a5",
"text": "free download the design of rijndael: aes the advanced the design of rijndael aes the advanced encryption publication moved: fips 197, advanced encryption standard rijndael aes paper nist computer security resource the design of rijndael toc beck-shop design and implementation of advanced encryption standard lecture note 4 the advanced encryption standard (aes) selecting the advanced encryption standard implementation of advanced encryption standard (aes implementation of advanced encryption standard algorithm cryptographic algorithms aes cryptography the advanced encryption the successor of des computational and algebraic aspects of the advanced advanced encryption standard security forum 2017 advanced encryption standard 123seminarsonly design of high speed 128 bit aes algorithm for data encryption fpga based implementation of aes encryption and decryption effective comparison and evaluation of des and rijndael advanced encryption standard (aes) and it’s working the long road to the advanced encryption standard fpga implementations of advanced encryption standard a survey a reconfigurable cryptography coprocessor rcc for advanced vlsi design and implementation of pipelined advanced information security and cryptography springer cryptographic algorithms (aes, rsa) polynomials in the nation’s service: using algebra to chapter 19: rijndael: a successor to the data encryption a vlsi architecture for rijndael, the advanced encryption a study of encryption algorithms (rsa, des, 3des and aes design an aes algorithm using s.r & m.c technique alook at the advanced encr yption standard (aes) aes-512: 512-bit advanced encryption standard algorithm some algebraic aspects of the advanced encryption standard global information assurance certification paper design of parallel advanced encryption standard (ae s shared architecture for encryption/decryption of aes iceec2015sp06.pdf an enhanced advanced encryption standard a vhdl implementation of the advanced encryption standard advanced encryption standard ijcset vlsi implementation of enhanced aes cryptography",
"title": ""
},
{
"docid": "c283549416aa70e2d97cd5996b0217b5",
"text": "The importance of vascular contributions to cognitive impairment and dementia (VCID) associated with Alzheimer's disease (AD) and related neurodegenerative diseases is increasingly recognized, however, the underlying mechanisms remain obscure. There is growing evidence that in addition to Aβ deposition, accumulation of hyperphosphorylated oligomeric tau contributes significantly to AD etiology. Tau oligomers are toxic and it has been suggested that they propagate in a \"prion-like\" fashion, inducing endogenous tau misfolding in cells. Their role in VCID, however, is not yet understood. The present study was designed to determine the severity of vascular deposition of oligomeric tau in the brain in patients with AD and related tauopathies, including dementia with Lewy bodies (DLB) and progressive supranuclear palsy (PSP). Further, we examined a potential link between vascular deposition of fibrillar Aβ and that of tau oligomers in the Tg2576 mouse model. We found that tau oligomers accumulate in cerebral microvasculature of human patients with AD and PSP, in association with vascular endothelial and smooth muscle cells. Cerebrovascular deposition of tau oligomers was also found in DLB patients. We also show that tau oligomers accumulate in cerebral microvasculature of Tg2576 mice, partially in association with cerebrovascular Aβ deposits. Thus, our findings add to the growing evidence for multifaceted microvascular involvement in the pathogenesis of AD and other neurodegenerative diseases. Accumulation of tau oligomers may represent a potential novel mechanism by which functional and structural integrity of the cerebral microvessels is compromised.",
"title": ""
},
{
"docid": "4084849efe6159dd4164effc43d4c017",
"text": "A novel hybrid spectroscopic technique is proposed, combining surface plasmon resonance (SPR) with surface-enhanced Raman scattering (SERS) microscopy. A standard Raman microscope is modified to accommodate the excitation of surface plasmon-polaritons (SPPs) on flat metallic surfaces in the Kretschmann configuration, while retaining the capabilities of Raman microscopy. The excitation of SPPs is performed as in standard SPR-microscopy; namely, a beam with TM-polarization traverses off-axis a high numerical aperture oil immersion objective, illuminating at an angle the metallic film from the (glass) substrate side. The same objective is used to collect the full Kretschmann cone containing the SERS emission on the substrate side. The angular dispersion of the plasmon resonance is measured in reflectivity for different coupling conditions and, simultaneously, SERS spectra are recorded from Nile Blue (NB) molecules adsorbed onto the surface. A trade-off is identified between the conditions of optimum coupling to SPPs and the spot size (which is related to the spatial resolution). This technique opens new horizons for SERS microscopy with uniform enhancement on flat surfaces.",
"title": ""
}
] |
scidocsrr
|
dd781aaa6582e0d82e2ac688ca27176e
|
Innovative public-private partnership to support Smart City : the case of “ Chaire REVES ”
|
[
{
"docid": "aa32bff910ce6c7b438dc709b28eefe3",
"text": "Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for a e-mail: m.batty@ucl.ac.uk 482 The European Physical Journal Special Topics urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science",
"title": ""
}
] |
[
{
"docid": "766102021469d6fc046369d8f9feeee6",
"text": "From last few years, mobile technology has been received much more attention since it is most popular and basic need of today's world. Due to the popularity, mobiles are major target for malicious applications. Key challenge is to detect and remove malicious apps from mobiles. Numerous amounts of mobile apps are generated daily so ranking fraud is the one of the major aspects in front of the mobile App market. Ranking fraud refers to fraudulent or vulnerable activities. Main aim of the fraudulent is to knock the fraud mobile apps in the popularity list. Most App developer generates the ranking fraud apps by tricky means like enhancing the apps sales or by simply rating fake apps. Thus, there is need to have novel system to effectively",
"title": ""
},
{
"docid": "303e7cfb73f6db763aa9dbe4418aaf91",
"text": "This paper presents a summary of the main types of snubber circuits; generally classified as dissipative and non-dissipative or active and passive snubbers. This type of circuits are commonly used because of they can suppress electrical spikes, allowing a better performance on the main electrical circuit. This article intent to describe the currently snubber circuits and its applications without getting into their design.",
"title": ""
},
{
"docid": "d9947d2a6b6e184cf27515ad72cc7f98",
"text": "This study examined the role of a social network site (SNS) in the lives of 11 high school teenagers from low-income families in the U.S. We conducted interviews, talk-alouds and content analysis of MySpace profiles. Qualitative analysis of these data revealed three themes. First, SNSs facilitated emotional support, helped maintain relationships, and provided a platform for self-presentation. Second, students used their online social network to fulfill essential social learning functions. Third, within their SNS, students engaged in a complex array of communicative and creative endeavors. In several instances, students’ use of social network sites demonstrated the new literacy practices currently being discussed within education reform efforts. Based on our findings, we suggest additional directions for related research and educational practices.",
"title": ""
},
{
"docid": "93628f77c0a55b2fca843499e2e5bef8",
"text": "This PhD thesis investigates the role of representations that generalize values in autonomous reinforcement learning (RL). I employ the mathematical framework of function analysis to examine inductive and deductive RL approaches in continuous (or hybrid) state and action spaces. My analysis reveals the importance of the representationŠs metric for inductive generalization and the need for structural assumptions to generalize values deductively to entirely new situations. The thesis contributes to two related Ąelds of research: representation learning and deductive value estimation in large state-action spaces. I emphasize here the agentŠs autonomy by demanding little to no knowledge about (or restrictions to) possible tasks and environments. In the following I summarize my contributions in more detail. I argue that all isomorphic state spaces (and fully observable observation spaces) difer only in their metric (Böhmer et al., 2015), and show that values can be generalized optimally by a difusion metric. I prove that when the value is estimated by a linear algorithm like least squares policy iteration (LSPI), slow feature analysis (SFA) approximates an optimal representation for all tasks in the same environment (Böhmer et al., 2013). I demonstrate this claim by deriving a novel regularized sparse kernel SFA algorithm (RSK-SFA, Böhmer et al., 2012) and compare the learned representations with others, for example, in a real LSPI robot-navigation task and in extensive simulations thereof. I also extend the deĄnition of SFA to γ-SFA, which represents only a speciĄed subset of ŞanticipatedŤ tasks. Autonomous inductive learning sufers the curse of insufficient samples in many realistic tasks, as environments consisting of many variables can have exponentially many states. This precludes inductive representation learning and both inductive and deductive value estimation for these environments, all of which need training samples suiciently ŞcloseŤ to every state. I propose structural assumptions on the state dynamics to break the curse. I investigate state spaces with sparse conditional independent transitions, called Bayesian dynamic networks (DBN). In diference to value functions, sparse DBN transition models can be learned inductively without sufering the above curse. To this end, I deĄne the new class of linear factored functions (LFF, Böhmer and Obermayer, 2015), which can compute the operations in a DBN, marginalization and point-wise multiplication, for an entire function analytically. I derive compression algorithms to keep LFF compact and three the inductive LFF algorithms density estimation, regression and value estimation. As inductive value estimation sufers the curse of insuicient samples, I derive a deductive variant of LSPI (FAPI, Böhmer and Obermayer, 2013). Like LSPI, FAPI requires predeĄned basis functions and can thus not estimate values autonomously in large state-action spaces. I develop therefore a second algorithm to estimate values deductively for a DBN (represented by LFF, e.g., learned by LFF regression) directly in the function space of LFF. As most environments can not be perfectly modeled by DBN, I discuss an importance sampling technique to combine inductive and deductive value estimation. Deduction can be furthermore improved by mixture-of-expert DBN, where a set of conditions determines for each state which expert describes the dynamics best. The conditions can be constructed as LFF from grounded relational rules. 
This allows to generate a transition model for each conĄguration of the environment. Ultimately, the framework derived in this thesis could generalize inductively learned models to other environments with similar objects.",
"title": ""
},
{
"docid": "0a981845153607465efb91acec05e9d0",
"text": "The performance of memory-bound commercial applicationssuch as databases is limited by increasing memory latencies. Inthis paper, we show that exploiting memory-level parallelism(MLP) is an effective approach for improving the performance ofthese applications and that microarchitecture has a profound impacton achievable MLP. Using the epoch model of MLP, we reasonhow traditional microarchitecture features such as out-of-orderissue and state-of-the-art microarchitecture techniques suchas runahead execution affect MLP. Simulation results show that amoderately aggressive out-of-order issue processor improvesMLP over an in-order issue processor by 12-30%, and that aggressivehandling of loads, branches and serializing instructionsis needed to attain the full benefits of large out-of-order instructionwindows. The results also show that a processor's issue windowand reorder buffer should be decoupled to exploit MLP more efficiently.In addition, we demonstrate that runahead execution ishighly effective in enhancing MLP, potentially improving the MLPof the database workload by 82% and its overall performance by60%. Finally, our limit study shows that there is considerableheadroom in improving MLP and overall performance by implementingeffective instruction prefetching, more accurate branchprediction and better value prediction in addition to runahead execution.",
"title": ""
},
{
"docid": "18285ee4096c50691b9949315abb4d21",
"text": "Automated visual inspection (AVI) is becoming an integral part of modern surface mount technology (SMT) assembly process. This high technology assembly, produces printed circuit boards (PCB) with tiny and delicate electronic components. With the increase in demand for such PCBs, high-volume production has to cater for both the quantity and zero defect quality assurance. The ever changing technology in fabrication, placement and soldering of SMT electronic components have caused an increase in PCB defects both in terms of numbers and types. Consequently, a wide range of defect detecting techniques and algorithms have been reported and implemented in AVI systems in the past decade. Unfortunately, the turn-over rate for PCB inspection is very crucial in the electronic industry. Current AVI systems spend too much time inspecting PCBs on a component-bycomponent basis. In this paper, we focus on providing a solution that can cover a larger inspection area of a PCB at any one time. This will reduce inspection time and increase the throughput of PCB production. Our solution is targeted for missing and misalignment defects of SMT devices in a PCB. An alternative visual inspection approach using color background subtraction is presented to address the stated defect. Experimental results of various defect PCBs are also presented. Key–Words: PCB Inspection, Background Subtraction, Automated Visual Inspection.",
"title": ""
},
{
"docid": "23bf81699add38814461d5ac3e6e33db",
"text": "This paper examined a steering behavior based fatigue monitoring system. The advantages of using steering behavior for detecting fatigue are that these systems measure continuously, cheaply, non-intrusively, and robustly even under extremely demanding environmental conditions. The expected fatigue induced changes in steering behavior are a pattern of slow drifting and fast corrective counter steering. Using advanced signal processing procedures for feature extraction, we computed 3 feature set in the time, frequency and state space domain (a total number of 1251 features) to capture fatigue impaired steering patterns. Each feature set was separately fed into 5 machine learning methods (e.g. Support Vector Machine, K-Nearest Neighbor). The outputs of each single classifier were combined to an ensemble classification value. Finally we combined the ensemble values of 3 feature subsets to a of meta-ensemble classification value. To validate the steering behavior analysis, driving samples are taken from a driving simulator during a sleep deprivation study (N=12). We yielded a recognition rate of 86.1% in classifying slight from strong fatigue.",
"title": ""
},
{
"docid": "79beaf249c8772ee1cbd535df0bf5a13",
"text": "Accurate vessel detection in retinal images is an important and difficult task. Detection is made more challenging in pathological images with the presence of exudates and other abnormalities. In this paper, we present a new unsupervised vessel segmentation approach to address this problem. A novel inpainting filter, called neighborhood estimator before filling, is proposed to inpaint exudates in a way that nearby false positives are significantly reduced during vessel enhancement. Retinal vascular enhancement is achieved with a multiple-scale Hessian approach. Experimental results show that the proposed vessel segmentation method outperforms state-of-the-art algorithms reported in the recent literature, both visually and in terms of quantitative measurements, with overall mean accuracy of 95.62% on the STARE dataset and 95.81% on the HRF dataset.",
"title": ""
},
{
"docid": "cca9b3cb4a0d6fb8a690f2243cf7abce",
"text": "In this paper, we propose to predict immediacy for interacting persons from still images. A complete immediacy set includes interactions, relative distance, body leaning direction and standing orientation. These measures are found to be related to the attitude, social relationship, social interaction, action, nationality, and religion of the communicators. A large-scale dataset with 10,000 images is constructed, in which all the immediacy measures and the human poses are annotated. We propose a rich set of immediacy representations that help to predict immediacy from imperfect 1-person and 2-person pose estimation results. A multi-task deep recurrent neural network is constructed to take the proposed rich immediacy representation as input and learn the complex relationship among immediacy predictions multiple steps of refinement. The effectiveness of the proposed approach is proved through extensive experiments on the large scale dataset.",
"title": ""
},
{
"docid": "08353c7d40a0df4909b09f2d3e5ab4fe",
"text": "Object detection has made great progress in the past few years along with the development of deep learning. However, most current object detection methods are resource hungry, which hinders their wide deployment to many resource restricted usages such as usages on always-on devices, battery-powered low-end devices, etc. This paper considers the resource and accuracy trade-off for resource-restricted usages during designing the whole object detection framework. Based on the deeply supervised object detection (DSOD) framework, we propose Tiny-DSOD dedicating to resource-restricted usages. Tiny-DSOD introduces two innovative and ultra-efficient architecture blocks: depthwise dense block (DDB) based backbone and depthwise feature-pyramid-network (D-FPN) based front-end. We conduct extensive experiments on three famous benchmarks (PASCAL VOC 2007, KITTI, and COCO), and compare Tiny-DSOD to the state-of-the-art ultra-efficient object detection solutions such as Tiny-YOLO, MobileNet-SSD (v1 & v2), SqueezeDet, Pelee, etc. Results show that Tiny-DSOD outperforms these solutions in all the three metrics (parameter-size, FLOPs, accuracy) in each comparison. For instance, Tiny-DSOD achieves 72.1% mAP with only 0.95M parameters and 1.06B FLOPs, which is by far the state-of-the-arts result with such a low resource requirement.∗",
"title": ""
},
{
"docid": "c9a28a3d90f6d716643c45ed2c0b47bb",
"text": "A fast, completely automated method to create 3D watertight building models from airborne LiDAR point clouds is presented. The proposed method analyzes the scene content and produces multi-layer rooftops with complex boundaries and vertical walls that connect rooftops to the ground. A graph cuts based method is used to segment vegetative areas from the rest of scene content. The ground terrain and building rooftop patches are then extracted utilizing our technique, the hierarchical Euclidean clustering. Our method adopts a “divide-and-conquer” strategy. Once potential points on rooftops are segmented from terrain and vegetative areas, the whole scene is divided into individual pendent processing units which represent potential building footprints. For each individual building region, significant features on the rooftop are further detected using a specifically designed region growing algorithm with smoothness constraint. Boundaries for all of these features are refined in order to produce strict description. After this refinement, mesh models could be generated using an existing robust dual contouring method.",
"title": ""
},
{
"docid": "7fdedef4608078d70e3f488790240ce0",
"text": "This paper investigates hardware/software (Hw/Sw) partitioning, a key problem in embedded co-design system. An efficient algorithm are proposed to optimally solve the problem in which the communication overhead is taken into account. The proposed algorithm constructs an efficient branch-and-bound approach to partition the hot path selected by path profiling techniques. The techniques for generation of good initial solution and the efficient lower bound for the feasible solution are customized in branch and bound search. Experimental results show that the partition result proposed by the new algorithm produces 10% increase in speedup as compared with the traditional approximate algorithm in most of the cases.",
"title": ""
},
{
"docid": "125655821a44bbce2646157c8465e345",
"text": "Due to its wide applicability, the problem of semi-supervised classification is attracting increasing attention in machine learning. Semi-Supervised Support Vector Machines (S3VMs) are based on applying the margin maximization principle to both labeled and unlabeled examples. Unlike SVMs, their formulation leads to a non-convex optimization problem. A suite of algorithms have recently been proposed for solving S3VMs. This paper reviews key ideas in this literature. The performance and behavior of various S3VM algorithms is studied together, under a common experimental setting.",
"title": ""
},
{
"docid": "3cecab28e0cffd99338545dd2b633445",
"text": "After two decades of intensive development, 3-D integration has proven invaluable for allowing integrated circuits to adhere to Moore's Law without needing to continuously shrink feature sizes. The 3-D integration is also an enabling technology for hetero-integration of microelectromechanical systems (MEMS)/microsensors with different technologies, such as CMOS and optoelectronics. This 3-D hetero-integration allows for the development of highly integrated multifunctional microsystems with small footprints, low cost, and high performance demanded by emerging applications. This paper reviews the following aspects of the MEMS/microsensor-centered 3-D integration: fabrication technologies and processes, processing considerations and strategies for 3-D integration, integrated device configurations and wafer-level packaging, and applications and commercial MEMS/microsensor products using 3-D integration technologies. Of particular interest throughout this paper is the hetero-integration of the MEMS and CMOS technologies.",
"title": ""
},
{
"docid": "c7a8e9fa0e1782d4918717b2d20b1b8e",
"text": "It is recognised that randomised controlled trials are not feasible for capturing rare adverse events. There is an increasing trend towards observational research methodologies using large population-based health databases. These databases offer more scope for adequate sample sizes, allowing for comprehensive patient characterisation and assessment of the associated factors. While direct causality cannot be established and confounders cannot be ignored, databases present an opportunity to explore and quantify rare events. The use of databases for the detection of rare adverse events in the following conditions, sudden death associated with attention deficit hyperactivity disorder (ADHD) treatment, retinal detachment associated with the use of fluoroquinolones and toxic epidermal necrolysis associated with drug exposure, are discussed as examples. In general, rare adverse events tend to have immediate and important clinical implications and may be life-threatening. An understanding of the causative factors is therefore important, in addition to the research methodologies and database platforms that enable the undertaking of the research.",
"title": ""
},
{
"docid": "43dfbf378a47cadf6868eb9bac22a4cd",
"text": "Maximum power point tracking (MPPT) techniques are employed in photovoltaic (PV) systems to make full utilization of the PV array output power which depends on solar irradiation and ambient temperature. Among all the MPPT strategies, perturbation and observation (P&O) and hill climbing methods are widely applied in the MPPT controllers due to their simplicity and easy implementation. In this paper, both P&O and hill climbing methods are adopted to implement a grid-connected PV system. Their performance is evaluated and compared through theoretical analysis and digital simulation. P&O MPPT method exhibits fast dynamic performance and well regulated PV output voltage, which is more suitable than hill climbing method for grid-connected PV system.",
"title": ""
},
{
"docid": "e88ce80671544f3077fb466029fabfe7",
"text": "Driver distraction is a major cause of traffic accidents, with mobile telephones as a key source of distraction. In two studies, we examined distraction of pedestrians associated with mobile phone use. The first had 60 participants walk along a prescribed route, with half of them conversing on a mobile phone, and the other half holding the phone awaiting a potential call, which never came. Comparison of the performance of the groups in recalling objects planted along the route revealed that pedestrians conversing recalled fewer objects than did those not conversing. The second study had three observers record pedestrian behavior of mobile phone users, i-pod users, and pedestrians with neither one at three crosswalks. Mobile phone users crossed unsafely into oncoming traffic significantly more than did either of the other groups. For pedestrians as with drivers, cognitive distraction from mobile phone use reduces situation awareness, increases unsafe behavior, putting pedestrians at greater risk for accidents, and crime victimization.",
"title": ""
},
{
"docid": "0950d606153e4e634f4bb5633562aa69",
"text": "The approach that one chooses to evolve software-intensive systems depends on the organization, the system, and the technology. We believe that significant progress in system architecture, system understanding, object technology, and net-centric computing make it possible to economically evolve software systems to a state in which they exhibit greater functionality and maintainability. In particular, interface technology, wrapping technology, and network technology are opening many opportunities to leverage existing software assets instead of scrapping them and starting over. But these promising technologies cannot be applied in a vacuum or without management understanding and control. There must be a framework in which to motivate the organization to understand its business opportunities, its application systems, and its road to an improved target system. This report outlines a comprehensive system evolution approach that incorporates an enterprise framework for the application of the promising technologies in the context of legacy systems.",
"title": ""
},
{
"docid": "6f1f7ceba3d347977866689c09e6a51e",
"text": "This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In- House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. Using the Jericho Forum’s ‘Cloud Cube Model’ (CCM), the paper presents a summary of the eight business models. We discuss how the CCM fits into each business model, and then based on this discuss each business model’s strengths and weaknesses. We hope adopting an appropriate cloud computing business model will help organisations investing in this technology to stand firm in the economic downturn.",
"title": ""
}
] |
scidocsrr
|
eafc5fc44d8a42a374e804d84230a20f
|
THE MCKINSEY 7S MODEL FRAMEWORK FOR E-LEARNING SYSTEM READINESS ASSESSMENT
|
[
{
"docid": "570fcf7ba739ffb6ea07e5c58c8154c7",
"text": "E-learning is emerging as the new paradigm of modern education. Worldwide, the e-learning market has a growth rate of 35.6%, but failures exist. Little is known about why many users stop their online learning after their initial experience. Previous research done under different task environments has suggested a variety of factors affecting user satisfaction with e-Learning. This study developed an integrated model with six dimensions: learners, instructors, courses, technology, design, and environment. A survey was conducted to investigate the critical factors affecting learners’ satisfaction in e-Learning. The results revealed that learner computer anxiety, instructor attitude toward e-Learning, e-Learning course flexibility, e-Learning course quality, perceived usefulness, perceived ease of use, and diversity in assessments are the critical factors affecting learners’ perceived satisfaction. The results show institutions how to improve learner satisfaction and further strengthen their e-Learning implementation. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "ed20c75915c3c0c7d7e32f2ec0334a65",
"text": "Who is likely to view materials online maligning groups based on race, nationality, ethnicity, sexual orientation, gender, political views, immigration status, or religion? We use an online survey (N 1⁄4 1034) of youth and young adults recruited from a demographically balanced sample of Americans to address this question. By studying demographic characteristics and online habits of individuals who are exposed to online extremist groups and their messaging, this study serves as a precursor to a larger research endeavor examining the online contexts of extremism. Descriptive results indicate that a sizable majority of respondents were exposed to negative materials online. The materials were most commonly used to stereotype groups. Nearly half of negative material centered on race or ethnicity, and respondents were likely to encounter such material on social media sites. Regression results demonstrate African-Americans and foreign-born respondents were significantly less likely to be exposed to negative material online, as are younger respondents. Additionally, individuals expressing greater levels of trust in the federal government report significantly less exposure to such materials. Higher levels of education result in increased exposure to negative materials, as does a proclivity towards risk-taking. © 2016 Elsevier Ltd. All rights reserved. While the Internet has obvious benefits, its uncensored nature also exposes users to extremist ideas some find vile, offensive, and disturbing. Authorities consider online extremism a threat to national security and note the need for research on it (e.g. Hussain & Saltman, 2014; Levin, 2015; The White House, 2015). While scholars are discussing strategies for countering the effects of extremism (Helmus, York, & Chalk, 2013; Neumann, 2013), few have investigated who is exposed to extremist materials (for exceptions, see Hawdon, Oksanen, & R€ as€anen, 2014; R€ as€anen et al., 2015). Yet, we must understand who sees extremist materials if we are to effectively limit exposure or disseminate countermessages. Moreover, since youth appear to be most vulnerable to extremist messages (Oksanen, Hawdon, Holkeri, N€ asi, & R€ as€ anen, 2014; Onuoha, 2014; Torok, 2016), there is an enhanced need to investigate what online behaviors place them at risk for exposure. To help guide efforts in combatting online extremism by ). understanding who sees these materials, we use a sample of youth and young adults to investigate the behavioral and attitudinal factors that lead to exposure. We frame the analysis using routine activity theory (RAT), which argues that crimes occur when a motivated offender, a suitable target, and a lack of capable guardians converge in time and space (Cohen & Felson, 1979). The theory explains how victims’ activities can expose them to dangerous people, places, and situations. We also extend RAT by incorporating insights from social learning theory (Akers, 1977). Specifically, we consider if those who distrust government are more likely to view extremist messages because their ideology leads them to frequent online environments where extremist opinions are posted. Therefore, we focus on two research questions: R1: What behaviors place youth and young adults at risk of being virtually proximate to extremist materials? R2: Does the lack of trust in the government increase exposure to extremist materials, all else being equal? 
By identifying behaviors and attitudes that lead to extremism, our research will help authorities design strategies to counter its effects. M. Costello et al. / Computers in Human Behavior 63 (2016) 311e320 312 The current study, which was approved by the Intuitional Review Boards (IRBs) of the universities involved in the project aswell as the National Institute of Justice, begins with a discussion of online extremism. We then review RAT and extend it by considering insights from social learning theory. We then predict exposure to online hate materials among a sample of 1029 youth and young adults. We conclude by considering the implications of our research. 1. Online extremism: its nature, types, and dangers The phenomenon we consider is a type of cyberviolence (see Wall, 2001) and goes bymany names: online extremism, online hate, or cyberhate. We consider online hate or extremism to be the use of information computer technology (ICT) to profess attitudes devaluating others because of their religion, race, ethnicity, gender, sexual orientation, national origin, or some other characteristic. As such, online hate material is a distinct form of cyberviolence as abuse is aimed at a collective identity rather than a specific individual (Hawdon et al., 2014). Contrasting exposure to online hate with cyberbullying, R€ as€ anen andhis colleagues (forthcoming) argue, that: exposure to online hate material does not attack the individual in isolation; instead, this form of violence occurs when individuals are unwittingly exposed to materials against their will that express hatred or degrading attitudes toward a collective to which they belong. That is, hate materials denigrate groups; it is not an attack that focuses on individuals. Extremists, both individuals and organized groups, champion their cause, recruit members, advocate violence, and create international extremist communities through websites, blogs, chat rooms, file archives, listservers, news groups, internet communities, online video games, and web rings (Amster, 2009; Burris, Smith, & Strahm, 2000; Franklin 2010; Hussain & Saltman, 2014). Organized hate groups such as the Ku Klux Klan have used the web since its public inception (Amster, 2009; Gerstenfeld, Grant, & Chiang, 2003), but individuals maintaining sites or commenting online have surpassed organized groups as the main perpetrators (Potok, 2015). Given the nature of our analysis (self-reported exposure to online hate materials), we cannot determine if the material to which the respondents refer was posted by a formal group or an individual; nevertheless, the respondents claim the site expressed hatred toward some collectivedthe essence of our definition of online hate materials. It is important to realize exposure to online hate material may not be victimizing, per se. Some people actively seek such materials, and they would not be “victimized” in the traditional sense of the word. Others, however, come upon this material inadvertently. Even when the material is found accidently, we should not overstate the dangers these materials pose. Many people view hate materials without experiencing negative consequences, and most hate messages do not directly advocate violence (Douglas, McGarty, Bliuc, & Lala, 2005; Gerstenfeld et al., 2003; Glaser, Dixit, & Green, 2002; McNamee, Peterson, & Pe~ na, 2010). Nevertheless, exposure to hate materials correlates with several problematic behaviors and attitudes (Subrahmanyam & Smahel, 2011). 
For example, members of targeted groups can experience mood swings, anger, and fear after exposure (Tynes, 2006; Tynes, Reynolds, & Greenfield, 2004). In addition, exposure to online hate materials is inversely related to social trust (Nasi et al., 2015). Long-term exposure to hate materials can reinforce discrimination against vulnerable groups (Cowan & Mettrick, 2002; Foxman & Wolf, 2013) and lead to an inter-generational perpetuation of extremist ideologies (Perry, 2000; Tynes, 2006). In some cases, exposure to online hate materials is directly linked to violence, including acts of mass violence and terror (Federal Bureau of Investigation 2011a; for a list of deadly attacks see Freilich, Belli, & Chermak., 2011; The New America Foundation International Security Program 2015). Recently, exposure to extremist ideology has been implicated in recruiting youth to extremist causes, including terrorist organizations such as the Islamic State of Iraq and the Levant (ISIL). It is therefore important to understand who is likely to be exposed to these materials. 2. Correlates of exposure The limited number of existing studies analyzing exposure to online hate and extremism rely on Cohen and Felson’s (1979) routine activity theory (RAT) and its recent revisions. RAT argues that crimes occur when a motivated offender, a suitable target, and a lack of capable guardians converge in time and space (Cohen & Felson, 1979). Individuals’ activities can place them in danger by bringing them into contact with potential offenders and into environments that lack guardians who could confront those offenders (see Cohen & Felson, 1979; Miethe & Meier, 1990). In addition, individuals’ routines influence how attractive they are to offenders, and the probability of victimization increases as target attractiveness increases (Cohen & Felson, 1979). While there are complicating factors for applying RAT to the online world (see Tillyer & Eck, 2009; Yar, 2005, 2013), the cyberlifestyle-routine activities perspective (Eck & Clarke, 2003; Reyns, 2013; Reyns, Henson, & Fisher, 2011) overcomes some of these problems. Most notably, while cybervictims and offenders do not converge in time and space as victims and offenders do in the offline world, they nevertheless come into virtual contact through their networked devices (Reyns et al., 2011). The asynchronous nature of cyberviolence is clearly seen with exposure to hate material. Those posting hate materials can offend people across spaces and time because, once materials are posted, people can become exposed to them without ever directly interacting with offenders. The primary factor likely resulting in hate material exposure is proximity to “offenders.” More precisely, given the asynchronous nature of the Internet, proximity to the virtual places where offenders have been is the primary determinant of exposure. In the language of RAT, victimization should be related to factors leading one into dangerous places. As noted above, one need not directly encounter an offender; instead, one only need a",
"title": ""
},
{
"docid": "6ce7cce9253698692d270c9bd584d703",
"text": "The fast decrease in cost of DNA sequencing has resulted in an enormous growth in available genome data, and hence led to an increasing demand for fast DNA analysis algorithms used for diagnostics of genetic disorders, such as cancer. One of the most computationally intensive steps in the analysis is represented by the DNA read alignment. In this paper, we present an accelerated version of BWA-MEM, one of the most popular read alignment algorithms, by implementing a heterogeneous hardware/software optimized version on the Convey HC2ex platform. A challenging factor of the BWA-MEM algorithm is the fact that it consists of not one, but three computationally intensive kernels: SMEM generation, suffix array lookup and local Smith-Waterman. Obtaining substantial speedup is hence contingent on accelerating all of these three kernels at once. The paper shows an architecture containing two hardware-accelerated kernels and one kernel optimized in software. The two hardware kernels of suffix array lookup and local Smith-Waterman are able to reach speedups of 2.8x and 5.7x, respectively. The software optimization of the SMEM generation kernel is able to achieve a speedup of 1.7x. This enables a total application acceleration of 2.6x compared to the original software version.",
"title": ""
},
{
"docid": "e0c52b0fdf2d67bca4687b8060565288",
"text": "Large graph databases are commonly collected and analyzed in numerous domains. For reasons related to either space efficiency or for privacy protection (e.g., in the case of social network graphs), it sometimes makes sense to replace the original graph with a summary, which removes certain details about the original graph topology. However, this summarization process leaves the database owner with the challenge of processing queries that are expressed in terms of the original graph, but are answered using the summary. In this paper, we propose a formal semantics for answering queries on summaries of graph structures. At its core, our formulation is based on a random worlds model. We show that important graph-structure queries (e.g., adjacency, degree, and eigenvector centrality) can be answered efficiently and in closed form using these semantics. Further, based on this approach to query answering, we formulate three novel graph partitioning/compression problems. We develop algorithms for finding a graph summary that least affects the accuracy of query results, and we evaluate our proposed algorithms using both real and synthetic data.",
"title": ""
},
{
"docid": "08606c417ec49d44c4d2715ae96c0c43",
"text": "Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels thereby partly becoming the gatekeepers of our society. To deal with the growing amount of information on the social web and the burden it brings on the average user, these gatekeepers recently started to introduce personalization features, algorithms that filter information per individual. In this paper we show that these online services that filter information are not merely algorithms. Humans not only affect the design of the algorithms, but they also can manually influence the filtering process even when the algorithm is operational. We further analyze filtering processes in detail, show how personalization connects to other filtering techniques, and show that both human and technical biases are present in today’s emergent gatekeepers. We use the existing literature on gatekeeping and search engine bias and provide a model of algorithmic gatekeeping.",
"title": ""
},
{
"docid": "55a5f8aedabc298ca5c4135b62978dcb",
"text": "We study the set of all decompositions (clusterings) of a graph through its characterization as a set of lifted multicuts. This leads us to practically relevant insights related to the definition of classes of decompositions by must-join and must-cut constraints and related to the comparison of clusterings by metrics. To find optimal decompositions defined by minimum cost lifted multicuts, we establish some properties of some facets of lifted multicut polytopes, define efficient separation procedures and apply these in a branchand-cut algorithm.",
"title": ""
},
{
"docid": "6dc14a47dccb7ec071d8b1e4b399c50c",
"text": "Applying detailed knowledge of the customer to a larger business domain.",
"title": ""
},
{
"docid": "5c5c0b0a391240c17ee899290f5e4a93",
"text": "We present Paged Graph Visualization (PGV), a new semi-autonomous tool for RDF data exploration and visualization. PGV consists of two main components: a) the \"PGV explorer\" and b) the \"RDF pager\" module utilizing BRAHMS, our high per-formance main-memory RDF storage system. Unlike existing graph visualization techniques which attempt to display the entire graph and then filter out irrelevant data, PGV begins with a small graph and provides the tools to incrementally explore and visualize relevant data of very large RDF ontologies. We implemented several techniques to visualize and explore hot spots in the graph, i.e. nodes with large numbers of immediate neighbors. In response to the user-controlled, semantics-driven direction of the exploration, the PGV explorer obtains the necessary sub-graphs from the RDF pager and enables their incremental visualization leaving the previously laid out sub-graphs intact. We outline the problem of visualizing large RDF data sets, discuss our interface and its implementation, and through a controlled experiment we show the benefits of PGV.",
"title": ""
},
{
"docid": "fd35019f37ea3b05b7b6a14bf74d5ad1",
"text": "Given the tremendous growth of sport fans, the “Intelligent Arena”, which can greatly improve the fun of traditional sports, becomes one of the new-emerging applications and research topics. The development of multimedia computing and artificial intelligence technologies support intelligent sport video analysis to add live video broadcast, score detection, highlight video generation, and online sharing functions to the intelligent arena applications. In this paper, we have proposed a deep learning based video analysis scheme for intelligent basketball arena applications. First of all, with multiple cameras or mobile devices capturing the activities in arena, the proposed scheme can automatically select the camera to give high-quality broadcast in real-time. Furthermore, with basketball energy image based deep conventional neural network, we can detect the scoring clips as the highlight video reels to support the wonderful actions replay and online sharing functions. Finally, evaluations on a built real-world basketball match dataset demonstrate that the proposed system can obtain 94.59% accuracy with only less than 45m s processing time (i.e., 10m s broadcast camera selection, and 35m s for scoring detection) for each frame. As the outstanding performance, the proposed deep learning based basketball video analysis scheme is implemented into a commercial intelligent basketball arena application named “Standz Basketball”. Although the application had been only released for one month, it achieves the 85t h day download ranking place in the sport category of Chinese iTunes market.",
"title": ""
},
{
"docid": "49575576bc5a0b949c81b0275cbc5f41",
"text": "From email to online banking, passwords are an essential component of modern internet use. Yet, users do not always have good password security practices, leaving their accounts vulnerable to attack. We conducted a study which combines self-report survey responses with measures of actual online behavior gathered from 134 participants over the course of six weeks. We find that people do tend to re-use each password on 1.7–3.4 different websites, they reuse passwords that are more complex, and mostly they tend to re-use passwords that they have to enter frequently. We also investigated whether self-report measures are accurate indicators of actual behavior, finding that though people understand password security, their self-reported intentions have only a weak correlation with reality. These findings suggest that users manage the challenge of having many passwords by choosing a complex password on a website where they have to enter it frequently in order to memorize that password, and then re-using that strong password across other websites.",
"title": ""
},
{
"docid": "099d292857e6e363f06eb606b0ce5b36",
"text": "The blockchain technology has evolved beyond traditional payment solutions in the finance sector and offers a potential for transforming many sectors including the public sector. The novel integration of technology and economy that open public block-chains have brought represents both challenges to and opportunities for enhancing digital public services. So far, the public sector has lagged behind other sectors in both research and exploration of this technology, but pilot cases show that there is a great potential for reforming and even transforming public service delivery.\n We argue that the open blockchain technology is best understood as a possible information infrastructure, given its universal, evolving, open and transparent nature. A comparison with Internet is meaningful despite obvious differences between the two. Based on some case studies, we have developed an analytical framework for better understanding the potential benefits as well as the existing challenges when introducing blockchain technology in the public sector.",
"title": ""
},
{
"docid": "fb1d84d15fd4a531a3a81c254ad3cab0",
"text": "Word embeddings have recently gained considerable popularity for modeling words in different Natural Language Processing (NLP) tasks including semantic similarity measurement. However, notwithstanding their success, word embeddings are by their very nature unable to capture polysemy, as different meanings of a word are conflated into a single representation. In addition, their learning process usually relies on massive corpora only, preventing them from taking advantage of structured knowledge. We address both issues by proposing a multifaceted approach that transforms word embeddings to the sense level and leverages knowledge from a large semantic network for effective semantic similarity measurement. We evaluate our approach on word similarity and relational similarity frameworks, reporting state-of-the-art performance on multiple datasets.",
"title": ""
},
{
"docid": "6f3931bf36c98642ee89284c6d6d7b7e",
"text": "Despite rapidly increasing numbers of diverse online shoppers the relationship of website design to trust, satisfaction, and loyalty has not previously been modeled across cultures. In the current investigation three components of website design (Information Design, Navigation Design, and Visual Design) are considered for their impact on trust and satisfaction. In turn, relationships of trust and satisfaction to online loyalty are evaluated. Utilizing data collected from 571 participants in Canada, Germany, and China various relationships in the research model are tested using PLS analysis for each country separately. In addition the overall model is tested for all countries combined as a control and verification of earlier research findings, although this time with a mixed country sample. All paths in the overall model are confirmed. Differences are determined for separate country samples concerning whether Navigation Design, Visual Design, and Information Design result in trust, satisfaction, and ultimately loyalty suggesting design characteristics should be a central consideration in website design across cultures.",
"title": ""
},
{
"docid": "f21e0b6062b88a14e3e9076cdfd02ad5",
"text": "Beyond being facilitators of human interactions, social networks have become an interesting target of research, providing rich information for studying and modeling user’s behavior. Identification of personality-related indicators encrypted in Facebook profiles and activities are of special concern in our current research efforts. This paper explores the feasibility of modeling user personality based on a proposed set of features extracted from the Facebook data. The encouraging results of our study, exploring the suitability and performance of several classification techniques, will also be presented.",
"title": ""
},
{
"docid": "5c935db4a010bc26d93dd436c5e2f978",
"text": "A taxonomic revision of Australian Macrobrachium identified three species new to the Australian fauna – two undescribed species and one new record, viz. M. auratumsp. nov., M. koombooloombasp. nov., and M. mammillodactylus(Thallwitz, 1892). Eight taxa previously described by Riek (1951) are recognised as new junior subjective synonyms, viz. M. adscitum adscitum, M. atactum atactum, M. atactum ischnomorphum, M. atactum sobrinum, M. australiense crassum, M. australiense cristatum, M. australiense eupharum of M. australienseHolthuis, 1950, and M. glypticumof M. handschiniRoux, 1933. Apart from an erroneous type locality for a junior subjective synonym, there were no records to confirm the presence of M. australe(Guérin-Méneville, 1838) on the Australian continent. In total, 13 species of Macrobrachiumare recorded from the Australian continent. Keys to male developmental stages and Australian species are provided. A revised diagnosis is given for the genus. A list of 31 atypical species which do not appear to be based on fully developed males or which require re-evaluation of their generic status is provided. Terminology applied to spines and setae is revised.",
"title": ""
},
{
"docid": "ce7175f868e2805e9e08e96a1c9738f4",
"text": "The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its use. In A Semantic Web Primer Grigoris Antoniou and Frank van Harmelen provide an introduction and guide to this emerging field, describing its key ideas, languages, and technologies. Suitable for use as a textbook or for self-study by professionals, the book concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own and includes exercises, project descriptions, and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL, and rules) and technologies (explicit metadata, ontologies, and logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processible semantics; and OWL, the W3C-approved standard for a Web ontology language that is more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.",
"title": ""
},
{
"docid": "cc77d2ed2e79dd9381005060ba5c2a0e",
"text": "This paper provides a detailed description of the IBM SiGe BiCMOS and rf CMOS technologies. The technologies provide high-performance SiGe heterojunction bipolar transistors (HBTs) combined with advanced CMOS technology and a variety of passive devices critical for realizing an integrated mixed-signal system-on-a-chip (SoC). The paper reviews the process development and integration methodology, presents the device characteristics, and shows how the development and device selection were geared toward usage in mixed-signal IC development.",
"title": ""
},
{
"docid": "cbc9437811bff9a1d96dd5d5f886c598",
"text": "Weakly supervised learning for object detection has been gaining significant attention in the recent past. Visually similar objects are extracted automatically from weakly labelled videos hence bypassing the tedious process of manually annotating training data. However, the problem as applied to small or medium sized objects is still largely unexplored. Our observation is that weakly labelled information can be derived from videos involving human-object interactions. Since the object is characterized neither by its appearance nor its motion in such videos, we propose a robust framework that taps valuable human context and models similarity of objects based on appearance and functionality. Furthermore, the framework is designed such that it maximizes the utility of the data by detecting possibly multiple instances of an object from each video. We show that object models trained in this fashion perform between 86% and 92% of their fully supervised counterparts on three challenging RGB and RGB-D datasets.",
"title": ""
},
{
"docid": "41353a12a579f72816f1adf3cba154dd",
"text": "The crux of our initialization technique is n-gram selection, which assists neural networks to extract important n-gram features at the beginning of the training process. In the following tables, we illustrate those selected n-grams of different classes and datasets to understand our technique intuitively. Since all of MR, SST-1, SST-2, CR, and MPQA are sentiment classification datasets, we only report the selected n-grams of SST-1 (Table 1). N-grams selected by our method in SUBJ and TREC are shown in Table 2 and Table 3.",
"title": ""
},
{
"docid": "40a74168484667d58ff46c69c378a38b",
"text": "In this supplementary material of [6] we provide additional derivations and implementation details. Section 1 provides details about the derivation of eq. (5) in the paper. Section 2 contains detailed derivations of the Fourier coefficients of the desired convolution output yj and the interpolation functions bd, described in section 3.4 in the paper. Details about the numerical optimization is given in section 3. We provide detailed results on the OTB-2015 and Temple-Color datasets in sections 4 and 5 respectively. In our object tracking experiments, we use the same parameter settings for our method in all state-of-the-art comparisons (sections 5.2, 5.3 and 5.4), i.e. for all datasets and videos. Further, we use the same parameter settings for all feature point tracking experiments. Code, raw result files and a video of qualitative feature point tracking results on the MPI Sintel dataset are available at the project webpage http://www.cvl.isy. liu.se/research/objrec/visualtracking/conttrack/index.html.",
"title": ""
},
{
"docid": "ac7318e7ccedc07b853d0958142b691a",
"text": "There are two distinct approaches to solving reinforcement learning problems, namely, searching in value function space and searching in policy space. Temporal di erence methods and evolutionary algorithms are well-known examples of these approaches. Kaelbling, Littman and Moore recently provided an informative survey of temporal di erence methods. This article focuses on the application of evolutionary algorithms to the reinforcement learning problem, emphasizing alternative policy representations, credit assignment methods, and problem-speci c genetic operators. Strengths and weaknesses of the evolutionary approach to reinforcement learning are presented, along with a survey of representative applications.",
"title": ""
}
] |
scidocsrr
|
81e40f3c479f6469c08bd608b8bb8869
|
Web search query privacy: Evaluating query obfuscation and anonymizing networks
|
[
{
"docid": "46ea713c4206d57144350a7871433392",
"text": "In this paper, we use a blog corpus to demonstrate that we can often identify the author of an anonymous text even where there are many thousands of candidate authors. Our approach combines standard information retrieval methods with a text categorization meta-learning scheme that determines when to even venture a guess.",
"title": ""
}
] |
[
{
"docid": "88804f285f4d608b81a1cd741dbf2b7e",
"text": "Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates.\n We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.",
"title": ""
},
{
"docid": "93d74028598d9d654ce198df606ba0ef",
"text": "Continually advancing technology has made it feasible to capture data online for onward transmission as a steady flow of newly generated data points, termed as data stream. Continuity and unboundedness of data streams make storage of data and multiple scans of data an impractical proposition for the purpose of knowledge discovery. Need to learn structures from data in streaming environment has been a driving force for making clustering a popular technique for knowledge discovery from data streams. Continuous nature of streaming data makes it infeasible to look for point membership among the clusters discovered so far, necessitating employment of a synopsis structure to consolidate incoming data points. This synopsis is exploited for building clustering scheme to meet subsequent user demands. The proposed Exclusive and Complete Clustering (ExCC) algorithm captures non-overlapping clusters in data streams with mixed attributes, such that each point either belongs to some cluster or is an outlier/noise. The algorithm is robust, adaptive to changes in data distribution and detects succinct outliers on-the-fly. It deploys a fixed granularity grid structure as synopsis and performs clustering by coalescing dense regions in grid. Speed-based pruning is applied to synopsis prior to clustering to ensure currency of discovered clusters. Extensive experimentation demonstrates that the algorithm is robust, identifies succinct outliers on-the-fly and is adaptive to change in the data distribution. ExCC algorithm is further evaluated for performance and compared with other contemporary algorithms.",
"title": ""
},
{
"docid": "2752c235aea735a04b70272deb042ea6",
"text": "Psychophysiological studies with music have not examined what exactly in the music might be responsible for the observed physiological phenomena. The authors explored the relationships between 11 structural features of 16 musical excerpts and both self-reports of felt pleasantness and arousal and different physiological measures (respiration, skin conductance, heart rate). Overall, the relationships between musical features and experienced emotions corresponded well with those known between musical structure and perceived emotions. This suggests that the internal structure of the music played a primary role in the induction of the emotions in comparison to extramusical factors. Mode, harmonic complexity, and rhythmic articulation best differentiated between negative and positive valence, whereas tempo, accentuation, and rhythmic articulation best discriminated high arousal from low arousal. Tempo, accentuation, and rhythmic articulation were the features that most strongly correlated with physiological measures. Music that induced faster breathing and higher minute ventilation, skin conductance, and heart rate was fast, accentuated, and staccato. This finding corroborates the contention that rhythmic aspects are the major determinants of physiological responses to music.",
"title": ""
},
{
"docid": "2908fc6673d28a519c26bd97b2045090",
"text": "Early sport specialization (ESS) refers to intense year round training in a specific sport with the exclusion of other sports at a young age. This approach to training is heavily debated and there are claims both in support and against ESS. ESS is considered to be more common in the modern day youth athlete and could be a source of overuse injuries and burnout. This case describes a 16 year old elite level baseball pitcher who engaged in high volume, intense training at a young age which lead to several significant throwing related injuries. The case highlights the historical context of ESS, the potential risk and benefits as well as the evidence for its effectiveness. It is important for health care professionals to be informed on the topic of ESS in order to educate athletes, parents, coaches and organizations of the potential risks and benefits.",
"title": ""
},
{
"docid": "ce3ebda58ece035bbc52695b229cb413",
"text": "Robust sensing of the environment is fundamental for driver assistance systems performing safe maneuvers. While approaches to object detection have experienced tremendous improvements since the introduction and combination of region proposal and convolutional neural networks in one framework, the detection of distant objects occupying just a few pixels in images can be challenging though. The convolutional and pooling layers reduce the image information to feature maps; yet, relevant information may be lost through pooling and convolution for small objects. In order to address this challenge, a new approach to proposing regions is presented that extends the architecture of a region proposal network by incorporating priors to guide the proposals towards regions containing potential target objects. Moreover, inspired by the concept of saliency, a saliency-based prior is chosen to guide the RPN towards important regions in order to make efficient use of differences between objects and background in an unsupervised fashion. This allows the network not only to consider local information provided by the convolutional layers, but also to take into account global information provided by the saliency priors. Experimental results based on a distant vehicle dataset and different configurations including three priors show that the incorporation of saliency-inspired priors into a region proposal network can improve its performance significantly.",
"title": ""
},
{
"docid": "d311bfc22c30e860c529b2aeb16b6d40",
"text": "We study the emergence of communication in multiagent adversarial settings inspired by the classic Imitation game. A class of three player games is used to explore how agents based on sequence to sequence (Seq2Seq) models can learn to communicate information in adversarial settings. We propose a modeling approach, an initial set of experiments and use signaling theory to support our analysis. In addition, we describe how we operationalize the learning process of actor-critic Seq2Seq based agents in these communicational games.",
"title": ""
},
{
"docid": "c2ed6ac38a6014db73ba81dd898edb97",
"text": "The ability of personality traits to predict important life outcomes has traditionally been questioned because of the putative small effects of personality. In this article, we compare the predictive validity of personality traits with that of socioeconomic status (SES) and cognitive ability to test the relative contribution of personality traits to predictions of three critical outcomes: mortality, divorce, and occupational attainment. Only evidence from prospective longitudinal studies was considered. In addition, an attempt was made to limit the review to studies that controlled for important background factors. Results showed that the magnitude of the effects of personality traits on mortality, divorce, and occupational attainment was indistinguishable from the effects of SES and cognitive ability on these outcomes. These results demonstrate the influence of personality traits on important life outcomes, highlight the need to more routinely incorporate measures of personality into quality of life surveys, and encourage further research about the developmental origins of personality traits and the processes by which these traits influence diverse life outcomes.",
"title": ""
},
{
"docid": "c9275012c275a0288849e6eb8e7156c4",
"text": "Evaluation of patients with shoulder disorders often presents challenges. Among the most troublesome are revision surgery in patients with massive rotator cuff tear, atraumatic shoulder instability, revision arthroscopic stabilization surgery, adhesive capsulitis, and bicipital and subscapularis injuries. Determining functional status is critical before considering surgical options in the patient with massive rotator cuff tear. When nonsurgical treatment of atraumatic shoulder stability is not effective, inferior capsular shift is the treatment of choice. Arthroscopic revision of failed arthroscopic shoulder stabilization procedures may be undertaken when bone and tissue quality are good. Arthroscopic release is indicated when idiopathic adhesive capsulitis does not respond to nonsurgical treatment; however, results of both nonsurgical and surgical treatment of posttraumatic and postoperative adhesive capsulitis are often disappointing. Patients not motivated to perform the necessary postoperative therapy following subscapularis repair are best treated with arthroscopic débridement and biceps tenotomy.",
"title": ""
},
{
"docid": "f97244b3ca9641b43dc4f4592e30f48b",
"text": "In many real applications of machine learning and data mining, we are often confronted with high-dimensional data. How to cluster high-dimensional data is still a challenging problem due to the curse of dimensionality. In this paper, we try to address this problem using joint dimensionality reduction and clustering. Different from traditional approaches that conduct dimensionality reduction and clustering in sequence, we propose a novel framework referred to as discriminative embedded clustering which alternates them iteratively. Within this framework, we are able not only to view several traditional approaches and reveal their intrinsic relationships, but also to be stimulated to develop a new method. We also propose an effective approach for solving the formulated nonconvex optimization problem. Comprehensive analyses, including convergence behavior, parameter determination, and computational complexity, together with the relationship to other related approaches, are also presented. Plenty of experimental results on benchmark data sets illustrate that the proposed method outperforms related state-of-the-art clustering approaches and existing joint dimensionality reduction and clustering methods.",
"title": ""
},
{
"docid": "50df746279b25baa3ecae2b75abd169e",
"text": "BACKGROUND\nThere is currently a gap between the rich and expressive collection of published biomedical ontologies, and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges.\n\n\nMETHODS\nInitial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating results of biomedical text mining. Analysis of strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools were then developed along with a metadata model in OWL, and deployed for feedback and additional requirements the ontology to users at a major pharmaceutical company and a major academic center. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work.\n\n\nRESULTS\nThis paper presents Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables \"stand-off\" or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO's Google Code page: http://code.google.com/p/annotation-ontology/ .\n\n\nCONCLUSIONS\nThe Annotation Ontology meets critical requirements for an open, freely shareable model in OWL, of annotation metadata created against scientific documents on the Web. We believe AO can become a very useful common model for annotation metadata on Web documents, and will enable biomedical domain ontologies to be used quite widely to annotate the scientific literature. Potential collaborators and those with new relevant use cases are invited to contact the authors.",
"title": ""
},
{
"docid": "2d3bcc3c6759c584c2deaa8b99fcbfb0",
"text": "We develop a dynamic programming algorithm for haplotype block partitioning to minimize the number of representative single nucleotide polymorphisms (SNPs) required to account for most of the common haplotypes in each block. Any measure of haplotype quality can be used in the algorithm and of course the measure should depend on the specific application. The dynamic programming algorithm is applied to analyze the chromosome 21 haplotype data of Patil et al. [Patil, N., Berno, A. J., Hinds, D. A., Barrett, W. A., Doshi, J. M., Hacker, C. R., Kautzer, C. R., Lee, D. H., Marjoribanks, C., McDonough, D. P., et al. (2001) Science 294, 1719-1723], who searched for blocks of limited haplotype diversity. Using the same criteria as in Patil et al., we identify a total of 3,582 representative SNPs and 2,575 blocks that are 21.5% and 37.7% smaller, respectively, than those identified using a greedy algorithm of Patil et al. We also apply the dynamic programming algorithm to the same data set based on haplotype diversity. A total of 3,982 representative SNPs and 1,884 blocks are identified to account for 95% of the haplotype diversity in each block.",
"title": ""
},
{
"docid": "2b19c55c2d69158361e27ce459c3112d",
"text": "In many domains, classes have highly regular internal structure. For example, so-called business objects often contain boilerplate code for mapping database fields to class members. The boilerplate code must be repeated per-field for every class, because existing mechanisms for constructing classes do not provide a way to capture and reuse such member-level structure. As a result, programmers often resort to ad hoc code generation. This paper presents a lightweight mechanism for specifying and reusing member-level structure in Java programs. The proposal is based on a modest extension to traits that we have termed trait-based metaprogramming. Although the semantics of the mechanism are straightforward, its type theory is difficult to reconcile with nominal subtyping. We achieve reconciliation by introducing a hybrid structural/nominal type system that extends Java’s type system. The paper includes a formal calculus defined by translation to Featherweight Generic Java.",
"title": ""
},
{
"docid": "4964e7b4054d7f0a9449e7abc97c41d0",
"text": "Studies on Simultaneous Saccharification and Fermentation (SSF) of corn flour, a major agricultural product as the substrate using starch digesting glucoamylase enzyme derived from Aspergillus niger and non starch digesting and sugar fermenting Saccharomyces cerevisiae in a batch fermentation. Experiments based on Central Composite Design (CCD) were conducted to study the effect of substrate concentration, pH, temperature, enzyme concentration on Ethanol Concentration and the above parameters were optimized using Response Surface Methodology (RSM). The optimum values of substrate concentration, pH, temperature and enzyme concentration were found to be 160 g/l, 5.5, 30°C and 50 IU respectively. The effect of inoculums age on ethanol concentration was also investigated. The corn flour solution equivalent to 16% initial starch concentration gave the highest ethanol concentration of 63.04 g/l after 48 h of fermentation at optimum conditions of pH and temperature. Monod model and Logistic model were used for growth kinetics and Leudeking – Piret model was used for product formation kinetics. Keywords—Simultaneous Saccharification and Fermentation (SSF), Corn Starch, Ethanol, Logisitic Model.",
"title": ""
},
{
"docid": "572348e4389acd63ea7c0667e87bbe04",
"text": "Through the analysis of collective upvotes and downvotes in multiple social media, we discover the bimodal regime of collective evaluations. When online content surpasses the local social context by reaching a threshold of collective attention, negativity grows faster with positivity, which serves as a trace of the burst of a filter bubble. To attain a global audience, we show that emotions expressed in online content has a significant effect and also play a key role in creating polarized opinions.",
"title": ""
},
{
"docid": "0344917c6b44b85946313957a329bc9c",
"text": "Recently, Haas and Hellerstein proposed the hash ripple join algorithm in the context of online aggregation. Although the algorithm rapidly gives a good estimate for many join-aggregate problem instances, the convergence can be slow if the number of tuples that satisfy the join predicate is small or if there are many groups in the output. Furthermore, if memory overflows (for example, because the user allows the algorithm to run to completion for an exact answer), the algorithm degenerates to block ripple join and performance suffers. In this paper, we build on the work of Haas and Hellerstein and propose a new algorithm that (a) combines parallelism with sampling to speed convergence, and (b) maintains good performance in the presence of memory overflow. Results from a prototype implementation in a parallel DBMS show that its rate of convergence scales with the number of processors, and that when allowed to run to completion, even in the presence of memory overflow, it is competitive with the traditional parallel hybrid hash join algorithm.",
"title": ""
},
{
"docid": "dc23db7027a8abd982ce2532601ded72",
"text": "This paper presents TiNA, a scheme for minimizing energy consumption in sensor networks by exploiting end-user tolerance to temporal coherency. TiNA utilizes temporal coherency tolerances to both reduce the amount of information transmitted by individual nodes (communication cost dominates power usage in sensor networks), and to improve quality of data when not all sensor readings can be propagated up the network within a given time constraint. TiNA was evaluated against a traditional in-network aggregation scheme with respect to power savings as well as the quality of data for aggregate queries. Preliminary results show that TiNA can reduce power consumption by up to 50% without any loss in the quality of data.",
"title": ""
},
{
"docid": "1bc001dc0e4adb2f1c7bc736e2d105f7",
"text": "Web personalization has quickly moved from an added value feature to a necessity, particularly for large information services and sites that generate revenue by selling products. Web personalization can be viewed as using user preferences profiles to dynamically serve customized content to particular users. User preferences may be obtained explicitly, or by passive observation of users over time as they interact with the system. Principal elements of Web personalization include modeling of Web objects (pages, etc.) and subjects (users), matching between and across objects and/or subjects, and determination of the set of actions to be recommended for personalization. Existing approaches used by many Web-based companies, as well as approaches based on collaborative filtering (e.g., GroupLens [HKBR99] and Firefly [SM95]), rely heavily on human input for determining the personalization actions. This type of input is often a subjective description of the users by the users themselves, and thus prone to biases. Furthermore, the profile is static, and its performance degrades over time as the profile ages. Recently, a number of approaches have been developed dealing with specific aspects of Web usage mining for the purpose of automatically discovering user profiles. For example, Perkowitz and Etzioni [PE98] proposed the idea of optimizing the structure of Web sites based co-occurrence patterns of pages within usage data for the site. Schechter et al [SKS98] have developed techniques for using path profiles of users to predict future HTTP requests, which can be used for network and proxy caching. Spiliopoulou et al [SF99], Cooley et al [CMS99], and Buchner and Mulvenna [BM99] have applied data mining techniques to extract usage patterns from Web logs, for the purpose of deriving marketing intelligence. Shahabi et al [SZA97], Yan et al [YJGD96], and Nasraoui et al [NFJK99] have proposed clustering of user sessions to predict future user behavior. In this paper we describe an approach to usage-based Web personalization taking into account both the offline tasks related to the mining of usage data, and the online process of automatic Web page customization based on the mined knowledge. Specifically, we propose an effective technique for capturing common user profiles based on association-rule discovery and usage-based clustering. We also propose techniques for combining this knowledge with the current status of an ongoing Web activity to perform realtime personalization. Finally, we provide an experimental evaluation of the proposed techniques using real Web usage data.",
"title": ""
},
{
"docid": "32b1e18aad03bc753a63c39ad36eb58f",
"text": "Classification of Web page content is essential to many tasks in Web information retrieval such as maintaining Web directories and focused crawling. The uncontrolled nature of Web content presents additional challenges to Web page classification as compared to traditional text classification, but the interconnected nature of hypertext also provides features that can assist the process.\n As we review work in Web page classification, we note the importance of these Web-specific features and algorithms, describe state-of-the-art practices, and track the underlying assumptions behind the use of information from neighboring pages.",
"title": ""
},
{
"docid": "781ef0722d8a03024924a556aa1dc61e",
"text": "Digital 3D mosaics generation is a current trend of NPR (Non Photorealistic Rendering) field; in this demo we present an interactive system realized in JAVA where the user can simulate ancient mosaic in a 3D environment starting for any input image. Different simulation engines able to render the so-called \"Opus Musivum\"and \"Opus Vermiculatum\" are employed. Different parameters can be dynamically adjusted to obtain very impressive results.",
"title": ""
}
] |
scidocsrr
|
b37209970c7f4962108591489d54fbed
|
Hierarchical LSTMs with Adaptive Attention for Visual Captioning
|
[
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
},
{
"docid": "e2d8da3d28f560c4199991dbdffb8c2c",
"text": "Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as the and of. Other words that may seem visual can often be predicted reliably just from the language model e.g., sign after behind a red stop or phone following talking on a cell. In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets the new state-of-the-art by a significant margin.",
"title": ""
}
] |
[
{
"docid": "bad378dceb9e4c060fa52acdf328d845",
"text": "Autonomous robot execution of surgical sub-tasks has the potential to reduce surgeon fatigue and facilitate supervised tele-surgery. This paper considers the sub-task of surgical debridement: removing dead or damaged tissue fragments to allow the remaining healthy tissue to heal. We present an autonomous multilateral surgical debridement system using the Raven, an open-architecture surgical robot with two cable-driven 7 DOF arms. Our system combines stereo vision for 3D perception with trajopt, an optimization-based motion planner, and model predictive control (MPC). Laboratory experiments involving sensing, grasping, and removal of 120 fragments suggest that an autonomous surgical robot can achieve robustness comparable to human performance. Our robot system demonstrated the advantage of multilateral systems, as the autonomous execution was 1.5× faster with two arms than with one; however, it was two to three times slower than a human. Execution speed could be improved with better state estimation that would allow more travel between MPC steps and fewer MPC replanning cycles. The three primary contributions of this paper are: (1) introducing debridement as a sub-task of interest for surgical robotics, (2) demonstrating the first reliable autonomous robot performance of a surgical sub-task using the Raven, and (3) reporting experiments that highlight the importance of accurate state estimation for future research. Further information including code, photos, and video is available at: http://rll.berkeley.edu/raven.",
"title": ""
},
{
"docid": "6b81fe23d8c2cb7ad7d296546a3cdadf",
"text": "Please cite this article in press as: H.J. Oh Vis. Comput. (2008), doi:10.1016/j.imavis In this paper, we propose a novel occlusion invariant face recognition algorithm based on Selective Local Non-negative Matrix Factorization (S-LNMF) technique. The proposed algorithm is composed of two phases; the occlusion detection phase and the selective LNMF-based recognition phase. We use a local approach to effectively detect partial occlusions in an input face image. A face image is first divided into a finite number of disjointed local patches, and then each patch is represented by PCA (Principal Component Analysis), obtained by corresponding occlusion-free patches of training images. And the 1-NN threshold classifier is used for occlusion detection for each patch in the corresponding PCA space. In the recognition phase, by employing the LNMF-based face representation, we exclusively use the LNMF bases of occlusion-free image patches for face recognition. Euclidean nearest neighbor rule is applied for the matching. We have performed experiments on AR face database that includes many occluded face images by sunglasses and scarves. The experimental results demonstrate that the proposed local patch-based occlusion detection technique works well and the S-LNMF method shows superior performance to other conventional approaches. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8f174607776cd7dc8c69739183121fcc",
"text": "We empirically evaluate several state-of-the-art methods for constructing ensembles of heterogeneous classifiers with stacking and show that they perform (at best) comparably to selecting the best classifier from the ensemble by cross validation. Among state-of-the-art stacking methods, stacking with probability distributions and multi-response linear regression performs best. We propose two extensions of this method, one using an extended set of meta-level features and the other using multi-response model trees to learn at the meta-level. We show that the latter extension performs better than existing stacking approaches and better than selecting the best classifier by cross validation.",
"title": ""
},
{
"docid": "10a0f370ad3e9c3d652e397860114f90",
"text": "Statistical data associated with geographic regions is nowadays globally available in large amounts and hence automated methods to visually display these data are in high demand. There are several well-established thematic map types for quantitative data on the ratio-scale associated with regions: choropleth maps, cartograms, and proportional symbol maps. However, all these maps suffer from limitations, especially if large data values are associated with small regions. To overcome these limitations, we propose a novel type of quantitative thematic map, the necklace map. In a necklace map, the regions of the underlying two-dimensional map are projected onto intervals on a one-dimensional curve (the necklace) that surrounds the map regions. Symbols are scaled such that their area corresponds to the data of their region and placed without overlap inside the corresponding interval on the necklace. Necklace maps appear clear and uncluttered and allow for comparatively large symbol sizes. They visualize data sets well which are not proportional to region sizes. The linear ordering of the symbols along the necklace facilitates an easy comparison of symbol sizes. One map can contain several nested or disjoint necklaces to visualize clustered data. The advantages of necklace maps come at a price: the association between a symbol and its region is weaker than with other types of maps. Interactivity can help to strengthen this association if necessary. We present an automated approach to generate necklace maps which allows the user to interactively control the final symbol placement. We validate our approach with experiments using various data sets and maps.",
"title": ""
},
{
"docid": "0e0b0b6b0fdab06fa9d3ebf6a8aefd6b",
"text": "Hippocampal place fields have been shown to reflect behaviorally relevant aspects of space. For instance, place fields tend to be skewed along commonly traveled directions, they cluster around rewarded locations, and they are constrained by the geometric structure of the environment. We hypothesize a set of design principles for the hippocampal cognitive map that explain how place fields represent space in a way that facilitates navigation and reinforcement learning. In particular, we suggest that place fields encode not just information about the current location, but also predictions about future locations under the current transition distribution. Under this model, a variety of place field phenomena arise naturally from the structure of rewards, barriers, and directional biases as reflected in the transition policy. Furthermore, we demonstrate that this representation of space can support efficient reinforcement learning. We also propose that grid cells compute the eigendecomposition of place fields in part because is useful for segmenting an enclosure along natural boundaries. When applied recursively, this segmentation can be used to discover a hierarchical decomposition of space. Thus, grid cells might be involved in computing subgoals for hierarchical reinforcement learning.",
"title": ""
},
{
"docid": "4eff2dc30d4b0031dec8be5dda3157d8",
"text": "We introduce a scheme for molecular simulations, the deep potential molecular dynamics (DPMD) method, based on a many-body potential and interatomic forces generated by a carefully crafted deep neural network trained with ab initio data. The neural network model preserves all the natural symmetries in the problem. It is first-principles based in the sense that there are no ad hoc components aside from the network model. We show that the proposed scheme provides an efficient and accurate protocol in a variety of systems, including bulk materials and molecules. In all these cases, DPMD gives results that are essentially indistinguishable from the original data, at a cost that scales linearly with system size.",
"title": ""
},
{
"docid": "69d14f179f71c33fdd3140f21f0511a2",
"text": "Image segmentation is one of the substantial techniques in the field of image processing. It is excessively used in the field of medicine provides visual means for identification, inspection and tracking of diseases for surgical planning and simulation. Active contours or snakes are used extensively for image segmentation and processing applications, particularly to locate object boundaries. Active contours are regarded as promising and vigorously researched model-based approach to computer assisted medical image analysis. However, its utility is limited due to poor convergence of concavities and small capture range. Many subsequent models have been introduced in order to overcome these problems. This paper reviews the traditional model, the Gradient vector flow (GVF) model and the balloon model for different images and proposes a model which can provide the most accurate segmentation.",
"title": ""
},
{
"docid": "4fb6e2a74562e0442fb7bce743ccd95a",
"text": "Multiple-group confirmatory factor analysis (MG-CFA) is among the most productive extensions of structural equation modeling. Many researchers conducting cross-cultural or longitudinal studies are interested in testing for measurement and structural invariance. The aim of the present paper is to provide a tutorial in MG-CFA using the freely available R-packages lavaan, semTools, and semPlot. The combination of these packages enable a highly efficient analysis of the measurement models both for normally distributed as well as ordinal data. Data from two freely available datasets – the first with continuous the second with ordered indicators will be used to provide a walk-through the individual steps.",
"title": ""
},
{
"docid": "0a389a9823a6cb060ba0263710cfc7f1",
"text": "Generative adversarial networks (GANs) can be interpreted as an adversarial game between two players, a discriminator D and a generator G, in which D learns to classify real from fake data and G learns to generate realistic data by \"fooling\" D into thinking that fake data is actually real data. Currently, a dominating view is that G actually learns by minimizing a divergence given that the general objective function is a divergence whenD is optimal. However, this view has been challenged due to inconsistencies between theory and practice. In this paper, we discuss of the properties associated with most loss functions for G (e.g., saturating/nonsaturating f -GAN, LSGAN, WGAN, etc.). We show that these loss functions are not divergences and do not have the same equilibrium as expected of divergences. This suggests that G does not need to minimize the same objective function as D maximize, nor maximize the objective of D after swapping real data with fake data (non-saturating GAN) but can instead use a wide range of possible loss functions to learn to generate realistic data. We define GANs through two separate and independent D maximization and G minimization steps. We generalize the generator step to four new classes of loss functions, most of which are actual divergences (while traditional G loss functions are not). We test a wide variety of loss functions from these four classes on a synthetic dataset and on CIFAR-10. We observe that most loss functions converge well and provide comparable data generation quality to non-saturating GAN, LSGAN, and WGAN-GP generator loss functions, whether we use divergences or non-divergences. These results suggest that GANs do not conform well to the divergence minimization theory and form a much broader range of models than previously assumed.",
"title": ""
},
{
"docid": "9f7bb80631e6aa2b13d0045580af15d1",
"text": "This paper presents an extensive study of the software implementation on workstations of the NIST-recommended elliptic curves over prime fields. We present the results of our implementation in C and assembler on a Pentium II 400 MHz workstation. We also provide a comparison with the NIST-recommended curves over binary fields.",
"title": ""
},
{
"docid": "8e8c77d09990588aac87198c81d68bf0",
"text": "Recurrent neural networks are powerful models for processing sequential data, but they are generally plagued by vanishing and exploding gradient problems. Unitary recurrent neural networks (uRNNs), which use unitary recurrence matrices, have recently been proposed as a means to avoid these issues. However, in previous experiments, the recurrence matrices were restricted to be a product of parameterized unitary matrices, and an open question remains: when does such a parameterization fail to represent all unitary matrices, and how does this restricted representational capacity limit what can be learned? To address this question, we propose full-capacity uRNNs that optimize their recurrence matrix over all unitary matrices, leading to significantly improved performance over uRNNs that use a restricted-capacity recurrence matrix. Our contribution consists of two main components. First, we provide a theoretical argument to determine if a unitary parameterization has restricted capacity. Using this argument, we show that a recently proposed unitary parameterization has restricted capacity for hidden state dimension greater than 7. Second, we show how a complete, fullcapacity unitary recurrence matrix can be optimized over the differentiable manifold of unitary matrices. The resulting multiplicative gradient step is very simple and does not require gradient clipping or learning rate adaptation. We confirm the utility of our claims by empirically evaluating our new full-capacity uRNNs on both synthetic and natural data, achieving superior performance compared to both LSTMs and the original restricted-capacity uRNNs. Advances in Neural Information Processing Systems (NIPS) This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2016 201 Broadway, Cambridge, Massachusetts 02139",
"title": ""
},
{
"docid": "96718ecc3de9cc1b719a49cc2092f6f7",
"text": "n-gram statistical language model has been successfully applied to capture programming patterns to support code completion and suggestion. However, the approaches using n-gram face challenges in capturing the patterns at higher levels of abstraction due to the mismatch between the sequence nature in n-grams and the structure nature of syntax and semantics in source code. This paper presents GraLan, a graph-based statistical language model and its application in code suggestion. GraLan can learn from a source code corpus and compute the appearance probabilities of any graphs given the observed (sub)graphs. We use GraLan to develop an API suggestion engine and an AST-based language model, ASTLan. ASTLan supports the suggestion of the next valid syntactic template and the detection of common syntactic templates. Our empirical evaluation on a large corpus of open-source projects has shown that our engine is more accurate in API code suggestion than the state-of-the-art approaches, and in 75% of the cases, it can correctly suggest the API with only five candidates. ASTLan also has high accuracy in suggesting the next syntactic template and is able to detect many useful and common syntactic templates.",
"title": ""
},
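For context on the n-gram baseline that GraLan is compared against, the following is a small, self-contained sketch of trigram-based next-token suggestion over token sequences. It is not the GraLan graph model; the function names and the toy token sequences are illustrative only.

```python
from collections import Counter, defaultdict

def train_trigrams(token_sequences):
    # Count how often each token follows a given pair of preceding tokens.
    counts = defaultdict(Counter)
    for toks in token_sequences:
        for a, b, c in zip(toks, toks[1:], toks[2:]):
            counts[(a, b)][c] += 1
    return counts

def suggest_next(counts, context, k=5):
    # Rank candidate next tokens by trigram frequency given the last two tokens.
    return [tok for tok, _ in counts[tuple(context[-2:])].most_common(k)]

# Example: suggest an API member after "fileReader ."
model = train_trigrams([["fileReader", ".", "readLine", "(", ")"],
                        ["fileReader", ".", "close", "(", ")"]])
print(suggest_next(model, ["fileReader", "."]))
```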
{
"docid": "479089fb59b5b810f95272d04743f571",
"text": "We address offensive tactic recognition in broadcast basketball videos. As a crucial component towards basketball video content understanding, tactic recognition is quite challenging because it involves multiple independent players, each of which has respective spatial and temporal variations. Motivated by the observation that most intra-class variations are caused by non-key players, we present an approach that integrates key player detection into tactic recognition. To save the annotation cost, our approach can work on training data with only video-level tactic annotation, instead of key players labeling. Specifically, this task is formulated as an MIL (multiple instance learning) problem where a video is treated as a bag with its instances corresponding to subsets of the five players. We also propose a representation to encode the spatio-temporal interaction among multiple players. It turns out that our approach not only effectively recognizes the tactics but also precisely detects the key players.",
"title": ""
},
{
"docid": "c6d6d5eb5fe80a9a54df948a2483a255",
"text": "Image Steganography is the process of embedding text in images such that its existence cannot be detected by Human Visual System (HVS) and is known only to sender and receiver. This paper presents a novel approach for image steganography using Hue-Saturation-Intensity (HSI) color space based on Least Significant Bit (LSB). The proposed method transforms the image from RGB color space to Hue-Saturation-Intensity (HSI) color space and then embeds secret data inside the Intensity Plane (I-Plane) and transforms it back to RGB color model after embedding. The said technique is evaluated by both subjective and Objective Analysis. Experimentally it is found that the proposed method have larger Peak Signal-to Noise Ratio (PSNR) values, good imperceptibility and multiple security levels which shows its superiority as compared to several existing methods.",
"title": ""
},
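A minimal sketch of the LSB embedding step described above, operating on an 8-bit intensity plane. The RGB-to-HSI conversion and back is omitted for brevity, and the bit layout and function names are assumptions rather than the paper's exact scheme.

```python
import numpy as np

def embed_lsb(i_plane: np.ndarray, message: bytes) -> np.ndarray:
    """Hide message bits in the least significant bit of an 8-bit intensity plane."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = i_plane.astype(np.uint8).flatten()
    if bits.size > flat.size:
        raise ValueError("message does not fit in this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # clear LSB, then write the message bit
    return flat.reshape(i_plane.shape)

def extract_lsb(i_plane: np.ndarray, n_bytes: int) -> bytes:
    bits = i_plane.astype(np.uint8).flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()
```

Because only the least significant bit of the intensity channel changes, the perturbation per pixel is at most one gray level, which is the source of the high PSNR the abstract reports.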
{
"docid": "ee0d858955c3c45ac3d990d3ad9d56ed",
"text": "Survival analysis is a subfield of statistics where the goal is to analyze and model data where the outcome is the time until an event of interest occurs. One of the main challenges in this context is the presence of instances whose event outcomes become unobservable after a certain time point or when some instances do not experience any event during the monitoring period. This so-called censoring can be handled most effectively using survival analysis techniques. Traditionally, statistical approaches have been widely developed in the literature to overcome the issue of censoring. In addition, many machine learning algorithms have been adapted to deal with such censored data and tackle other challenging problems that arise in real-world data. In this survey, we provide a comprehensive and structured review of the statistical methods typically used and the machine learning techniques developed for survival analysis, along with a detailed taxonomy of the existing methods. We also discuss several topics that are closely related to survival analysis and describe several successful applications in a variety of real-world application domains. We hope that this article will give readers a more comprehensive understanding of recent advances in survival analysis and offer some guidelines for applying these approaches to solve new problems arising in applications involving censored data.",
"title": ""
},
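To make the censoring issue concrete, here is a small NumPy implementation of the Kaplan-Meier estimator, the most common non-parametric survival-curve method covered by such surveys. It handles right-censored observations only and is a sketch, not a reference implementation.

```python
import numpy as np

def kaplan_meier(durations, event_observed):
    """Survival curve S(t) from right-censored data; event_observed[i] is False if censored."""
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(event_observed, dtype=bool)
    event_times = np.unique(durations[events])
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(durations >= t)          # subjects still under observation at t
        d = np.sum((durations == t) & events)     # events occurring exactly at t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return event_times, np.array(surv)

times, s = kaplan_meier([5, 6, 6, 2, 4, 4], [1, 0, 1, 1, 1, 0])
```

Censored subjects contribute to the at-risk counts up to their censoring time but never to the event counts, which is exactly how the estimator avoids discarding incomplete observations.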
{
"docid": "d23c5fc626d0f7b1d9c6c080def550b8",
"text": "Gamification of education is a developing approach for increasing learners’ motivation and engagement by incorporating game design elements in educational environments. With the growing popularity of gamification and yet mixed success of its application in educational contexts, the current review is aiming to shed a more realistic light on the research in this field by focusing on empirical evidence rather than on potentialities, beliefs or preferences. Accordingly, it critically examines the advancement in gamifying education. The discussion is structured around the used gamification mechanisms, the gamified subjects, the type of gamified learning activities, and the study goals, with an emphasis on the reliability and validity of the reported outcomes. To improve our understanding and offer a more realistic picture of the progress of gamification in education, consistent with the presented evidence, we examine both the outcomes reported in the papers and how they have been obtained. While the gamification in education is still a growing phenomenon, the review reveals that (i) insufficient evidence exists to support the long-term benefits of gamification in educational contexts; (ii) the practice of gamifying learning has outpaced researchers’ understanding of its mechanisms and methods; (iii) the knowledge of how to gamify an activity in accordance with the specifics of the educational context is still limited. The review highlights the need for systematically designed studies and rigorously tested approaches confirming the educational benefits of gamification, if gamified learning is to become a recognized instructional approach.",
"title": ""
},
{
"docid": "c26caff761092bc5b6af9f1c66986715",
"text": "The mechanisms used by DNN accelerators to leverage datareuse and perform data staging are known as dataflow, and they directly impact the performance and energy efficiency of DNN accelerator designs. Co-optimizing the accelerator microarchitecture and its internal dataflow is crucial for accelerator designers, but there is a severe lack of tools and methodologies to help them explore the co-optimization design space. In this work, we first introduce a set of datacentric directives to concisely specify DNN dataflows in a compiler-friendly form. Next, we present an analytical model, MAESTRO, that estimates various cost-benefit tradeoffs of a dataflow including execution time and energy efficiency for a DNN model and hardware configuration. Finally, we demonstrate the use of MAESTRO to drive a hardware design space exploration (DSE) engine. The DSE engine searched 480M designs and identified 2.5M valid designs at an average rate of 0.17M designs per second, and also identified throughputand energy-optimized designs among this set.",
"title": ""
},
{
"docid": "d961d5b1e310513cb3a70376cb65e5e4",
"text": "Defect prediction models help software quality assurance teams to effectively allocate their limited resources to the most defect-prone software modules. A variety of classification techniques have been used to build defect prediction models ranging from simple (e.g., logistic regression) to advanced techniques (e.g., Multivariate Adaptive Regression Splines (MARS)). Surprisingly, recent research on the NASA dataset suggests that the performance of a defect prediction model is not significantly impacted by the classification technique that is used to train it. However, the dataset that is used in the prior study is both: (a) noisy, i.e., contains erroneous entries and (b) biased, i.e., only contains software developed in one setting. Hence, we set out to replicate this prior study in two experimental settings. First, we apply the replicated procedure to the same (known-to-be noisy) NASA dataset, where we derive similar results to the prior study, i.e., the impact that classification techniques have appear to be minimal. Next, we apply the replicated procedure to two new datasets: (a) the cleaned version of the NASA dataset and (b) the PROMISE dataset, which contains open source software developed in a variety of settings (e.g., Apache, GNU). The results in these new datasets show a clear, statistically distinct separation of groups of techniques, i.e., the choice of classification technique has an impact on the performance of defect prediction models. Indeed, contrary to earlier research, our results suggest that some classification techniques tend to produce defect prediction models that outperform others.",
"title": ""
},
{
"docid": "60ed46346d2992789e4ecd34e1936cc7",
"text": "The aim of this study was to differentiate the effects of body load and joint movements on the leg muscle activation pattern during assisted locomotion in spinal man. Stepping movements were induced by a driven gait orthosis (DGO) on a treadmill in patients with complete para-/tetraplegia and, for comparison, in healthy subjects. All subjects were unloaded by 70% of their body weight. EMG of upper and lower leg muscles and joint movements of the DGO of both legs were recorded. In the patients, normal stepping movements and those mainly restricted to the hips (blocked knees) were associated with a pattern of leg muscle EMG activity that corresponded to that of the healthy subjects, but the amplitude was smaller. Locomotor movements restricted to imposed ankle joint movements were followed by no, or only focal EMG responses in the stretched muscles. Unilateral locomotion in the patients was associated with a normal pattern of leg muscle EMG activity restricted to the moving side, while in the healthy subjects a bilateral activation occurred. This indicates that interlimb coordination depends on a supraspinal input. During locomotion with 100% body unloading in healthy subjects and patients, no EMG activity was present. Thus, it can be concluded that afferent input from hip joints, in combination with that from load receptors, plays a crucial role in the generation of locomotor activity in the isolated human spinal cord. This is in line with observations from infant stepping experiments and experiments in cats. Afferent feedback from knee and ankle joints may be involved largely in the control of focal movements.",
"title": ""
},
{
"docid": "5429dc7fb5f5c5e1b16c0718ffc3be7f",
"text": "1 College of Geographical Sciences, Fujian Normal University, Fuzhou 350007, China 2 Fujian Provincial Engineering Research Center for Monitoring and Assessing Terrestrial Disasters, Fuzhou 350007, China 3Department of Geography, Ludwig-Maximilians-Universität München, 80333 Munich, Germany 4 Institute of Water Management, Hydrology and Hydraulic Engineering, University of Natural Resources and Life Sciences, 1190 Vienna, Austria",
"title": ""
}
] |
scidocsrr
|
e21e473351de83d3919873b25ce7ea1b
|
Computational methods for identifying miRNA sponge interactions
|
[
{
"docid": "b324860905b6d8c4b4a8429d53f2543d",
"text": "MicroRNAs (miRNAs) are endogenous approximately 22 nt RNAs that can play important regulatory roles in animals and plants by targeting mRNAs for cleavage or translational repression. Although they escaped notice until relatively recently, miRNAs comprise one of the more abundant classes of gene regulatory molecules in multicellular organisms and likely influence the output of many protein-coding genes.",
"title": ""
}
] |
[
{
"docid": "750846bc27dc013bd0d392959caf3ecc",
"text": "Analysis of the WinZip en ryption method Tadayoshi Kohno May 8, 2004 Abstra t WinZip is a popular ompression utility for Mi rosoft Windows omputers, the latest version of whi h is advertised as having \\easy-to-use AES en ryption to prote t your sensitive data.\" We exhibit several atta ks against WinZip's new en ryption method, dubbed \\AE-2\" or \\Advan ed En ryption, version two.\" We then dis uss se ure alternatives. Sin e at a high level the underlying WinZip en ryption method appears se ure (the ore is exa tly En ryptthen-Authenti ate using AES-CTR and HMAC-SHA1), and sin e one of our atta ks was made possible be ause of the way that WinZip Computing, In . de ided to x a di erent se urity problem with its previous en ryption method AE-1, our atta ks further unders ore the subtlety of designing ryptographi ally se ure software.",
"title": ""
},
{
"docid": "600673953f89f29f2f9c3fe73cac1d13",
"text": "The multivariate regression model is considered with p regressors. A latent vector with p binary entries serves to identify one of two types of regression coef®cients: those close to 0 and those not. Specializing our general distributional setting to the linear model with Gaussian errors and using natural conjugate prior distributions, we derive the marginal posterior distribution of the binary latent vector. Fast algorithms aid its direct computation, and in high dimensions these are supplemented by a Markov chain Monte Carlo approach to sampling from the known posterior distribution. Problems with hundreds of regressor variables become quite feasible. We give a simple method of assigning the hyperparameters of the prior distribution. The posterior predictive distribution is derived and the approach illustrated on compositional analysis of data involving three sugars with 160 near infra-red absorbances as regressors.",
"title": ""
},
{
"docid": "917287666755fe4b1832f5b6025414bb",
"text": "The Piver classification of radical hysterectomy for the treatment of cervical cancer is outdated and misused. The Surgery Committee of the Gynecological Cancer Group of the European Organization for Research and Treatment of Cancer (EORTC) produced, approved, and adopted a revised classification. It is hoped that at least within the EORTC participating centers, a standardization of procedures is achieved. The clinical indications of the new classification are discussed.",
"title": ""
},
{
"docid": "4e071e10b9263d98061b87a7c7ceee02",
"text": "Seeking more common ground between data scientists and their critics.",
"title": ""
},
{
"docid": "485aba813ad5587a6acb91bb3ad5ced9",
"text": "Nowadays, the transformerless inverters have become a widespread trend in the single-phase grid-connected photovoltaic (PV) systems because of the low cost and high efficiency concerns. Unfortunately, due to the non-galvanic isolation configuration, the ground leakage current would appear through the PV parasitic capacitance into the ground, which induces the physical danger and serious EMI problems. A novel transformerless single-phase inverter with two unipolar SPWM control strategies is proposed in this paper. The inverter can guarantee no ground leakage current and high reliability by applying either of the SPWM strategies. Meanwhile, the low total harmonic distortion (THD) of the grid-connected current is achieved thanks to the alleviation of the dead time effect. Besides, the required input DC voltage is the same low as that of the full-bridge inverter. Furthermore, the output filter inductance is reduced greatly due to the three-level output voltage, which leads to the high power density and high efficiency. At last, a 1kW prototype has been built and tested to verify the theoretical analysis of the paper.",
"title": ""
},
{
"docid": "b9f5810b47ca3099c2d82230c6db1d04",
"text": "Illegal logging has been identified as a major problem in the world, which may be minimized through effective monitoring of forest covered areas. In this paper, we propose and describe the initial steps to build a new three-tier architecture for Forest Monitoring based on Wireless Sensor Network and Chainsaw Noise Identification using a Neural Network. In addition to detection of chainsaw noises, we also propose methodologies to localize the origin of the chainsaw noise.",
"title": ""
},
{
"docid": "37c528b7491ff73e9c03de388db56483",
"text": "......................................................................................................................... 2 Acknowledgements ....................................................................................................... 3",
"title": ""
},
{
"docid": "7381d61eea849ecdf74c962042d0c5ff",
"text": "Unmanned aerial vehicle (UAV) synthetic aperture radar (SAR) is very important for battlefield awareness. For SAR systems mounted on a UAV, the motion errors can be considerably high due to atmospheric turbulence and aircraft properties, such as its small size, which makes motion compensation (MOCO) in UAV SAR more urgent than other SAR systems. In this paper, based on 3-D motion error analysis, a novel 3-D MOCO method is proposed. The main idea is to extract necessary motion parameters, i.e., forward velocity and displacement in line-of-sight direction, from radar raw data, based on an instantaneous Doppler rate estimate. Experimental results show that the proposed method is suitable for low- or medium-altitude UAV SAR systems equipped with a low-accuracy inertial navigation system.",
"title": ""
},
{
"docid": "1d61e1eb5275444c6a2a3f8ad5c2865a",
"text": "We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore,we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance fetures is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix. European Conference on Computer Vision (ECCV) This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2006 201 Broadway, Cambridge, Massachusetts 02139 Region Covariance: A Fast Descriptor for Detection and Classification Oncel Tuzel, Fatih Porikli, and Peter Meer 1 Computer Science Department, 2 Electrical and Computer Engineering Department, Rutgers University, Piscataway, NJ 08854 {otuzel, meer}@caip.rutgers.edu 3 Mitsubishi Electric Research Laboratories, Cambridge, MA 02139 {fatih}@merl.com Abstract. We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix. We describe a new region descriptor and apply it to two problems, object detection and texture classification. 
The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix.",
"title": ""
},
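A compact sketch of the region covariance descriptor and the generalized-eigenvalue distance from the abstract above. For clarity it computes the covariance directly with NumPy rather than through the integral-image trick the paper uses for speed; the particular feature set (pixel coordinates, RGB, gradient magnitudes) is one common choice, not the only one, and both matrices are assumed positive definite.

```python
import numpy as np
from scipy.linalg import eigh

def region_covariance(region_rgb):
    # Per-pixel features: (x, y, R, G, B, |Ix|, |Iy|); their covariance characterizes the region.
    h, w, _ = region_rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gray = region_rgb.astype(float).mean(axis=2)
    gx = np.abs(np.gradient(gray, axis=1))
    gy = np.abs(np.gradient(gray, axis=0))
    feats = np.stack([xs, ys,
                      region_rgb[..., 0], region_rgb[..., 1], region_rgb[..., 2],
                      gx, gy], axis=-1).reshape(-1, 7)
    return np.cov(feats.astype(float), rowvar=False)

def covariance_distance(c1, c2):
    # Metric based on generalized eigenvalues of (C1, C2): sqrt(sum_i ln^2 lambda_i).
    lam = eigh(c1, c2, eigvals_only=True)
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

Matching a candidate region to a model region then reduces to a nearest-neighbor search under covariance_distance.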
{
"docid": "dc2c952b5864a167c19b34be6db52389",
"text": "Data mining is popularly used to combat frauds because of its effectiveness. It is a well-defined procedure that takes data as input and produces models or patterns as output. Neural network, a data mining technique was used in this study. The design of the neural network (NN) architecture for the credit card detection system was based on unsupervised method, which was applied to the transactions data to generate four clusters of low, high, risky and high-risk clusters. The self-organizing map neural network (SOMNN) technique was used for solving the problem of carrying out optimal classification of each transaction into its associated group, since a prior output is unknown. The receiver-operating curve (ROC) for credit card fraud (CCF) detection watch detected over 95% of fraud cases without causing false alarms unlike other statistical models and the two-stage clusters. This shows that the performance of CCF detection watch is in agreement with other detection software, but performs better.",
"title": ""
},
{
"docid": "c5b9053b1b22d56dd827009ef529004d",
"text": "An integrated receiver with high sensitivity and low walk error for a military purpose pulsed time-of-flight (TOF) LADAR system is proposed. The proposed receiver adopts a dual-gain capacitive-feedback TIA (C-TIA) instead of widely used resistive-feedback TIA (R-TIA) to increase the sensitivity. In addition, a new walk-error improvement circuit based on a constant-delay detection method is proposed. Implemented in 0.35 μm CMOS technology, the receiver achieves an input-referred noise current of 1.36 pA/√Hz with bandwidth of 140 MHz and minimum detectable signal (MDS) of 10 nW with a 5 ns pulse at SNR=3.3, maximum walk-error of 2.8 ns, and a dynamic range of 1:12,000 over the operating temperature range of -40 °C to +85 °C.",
"title": ""
},
{
"docid": "cfd01fa97733c0df6e07b3b7ddebb4e2",
"text": "Radio frequency identification (RFID) is an emerging technology in the building industry. Many researchers have demonstrated how to enhance material control or production management with RFID. However, there is a lack of integrated understanding of lifecycle management. This paper develops and demonstrates a framework to Information Lifecycle Management (ILM) with RFID for material control. The ILM framework includes key RFID checkpoints and material types to facilitate material control on construction sites. In addition, this paper presents a context-aware scenario to examine multiple on-site context and RFID parameters. From tagging nodes at the factory to reading nodes at each lifecycle stage, this paper demonstrates how to manage complex construction materials with RFID and how to construct integrated information flows at different lifecycle stages. To validate key material types and the scenario, the study reports on two on-site trials: read distance test and on-site simulation. Finally, the research provides discussion and recommended approaches to implementing ILM. The results show that the ILM framework has the potential for a variety of stakeholders to adopt RFID in the building industry. This paper provides the understanding about the effectiveness of ILM with RFID for material control, which can serve as a base for adopting other IT technologies in the building industry. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4a0756bffc50e11a0bcc2ab88502e1a2",
"text": "The interest in attribute weighting for soft subspace clustering have been increasing in the last years. However, most of the proposed approaches are designed for dealing only with numeric data. In this paper, our focus is on soft subspace clustering for categorical data. In soft subspace clustering, the attribute weighting approach plays a crucial role. Due to this, we propose an entropy-based approach for measuring the relevance of each categorical attribute in each cluster. Besides that, we propose the EBK-modes (entropy-based k-modes), an extension of the basic k-modes that uses our approach for attribute weighting. We performed experiments on five real-world datasets, comparing the performance of our algorithms with four state-of-the-art algorithms, using three well-known evaluation metrics: accuracy, f-measure and adjusted Rand index. According to the experiments, the EBK-modes outperforms the algorithms that were considered in the evaluation, regarding the considered metrics.",
"title": ""
},
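The core idea, an entropy-based relevance weight per categorical attribute within each cluster, can be sketched as follows. The inverse-entropy normalization used here is an illustrative choice; the exact weighting formula in EBK-modes may differ.

```python
import numpy as np
from collections import Counter

def attribute_weights(cluster_rows):
    """Weight each categorical attribute of one cluster: lower entropy -> higher weight."""
    rows = np.asarray(cluster_rows, dtype=object)
    n, m = rows.shape
    entropies = []
    for j in range(m):
        counts = np.array(list(Counter(rows[:, j]).values()), dtype=float)
        p = counts / counts.sum()
        entropies.append(-np.sum(p * np.log(p)))
    inv = 1.0 / (1.0 + np.array(entropies))   # invert so homogeneous attributes dominate
    return inv / inv.sum()                    # normalize weights to sum to 1

# The first attribute is nearly constant in this cluster, so it receives the larger weight.
print(attribute_weights([["red", "A"], ["red", "B"], ["red", "C"], ["blue", "A"]]))
```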
{
"docid": "99f62da011921c0ff51daf0c928c865a",
"text": "The Health Belief Model, social learning theory (recently relabelled social cognitive theory), self-efficacy, and locus of control have all been applied with varying success to problems of explaining, predicting, and influencing behavior. Yet, there is conceptual confusion among researchers and practitioners about the interrelationships of these theories and variables. This article attempts to show how these explanatory factors may be related, and in so doing, posits a revised explanatory model which incorporates self-efficacy into the Health Belief Model. Specifically, self-efficacy is proposed as a separate independent variable along with the traditional health belief variables of perceived susceptibility, severity, benefits, and barriers. Incentive to behave (health motivation) is also a component of the model. Locus of control is not included explicitly because it is believed to be incorporated within other elements of the model. It is predicted that the new formulation will more fully account for health-related behavior than did earlier formulations, and will suggest more effective behavioral interventions than have hitherto been available to health educators.",
"title": ""
},
{
"docid": "ef48977d9bd479152e245a431ad4df57",
"text": "The Modicon Communication Bus (Modbus) protocol is one of the most commonly used protocols in industrial control systems. Modbus was not designed to provide security. This paper confirms that the Modbus protocol is vulnerable to flooding attacks. These attacks involve injection of commands that result in disrupting the normal operation of the control system. This paper describes a set of experiments that shows that an anomaly-based change detection algorithm and signature-based Snort threshold module are capable of detecting Modbus flooding attacks. In comparing these intrusion detection techniques, we find that the signature-based detection requires a carefully selected threshold value, and that the anomaly-based change detection algorithm may have a short delay before detecting the attacks depending on the parameters used. In addition, we also generate a network traffic dataset of flooding attacks on the Modbus control system protocol.",
"title": ""
},
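A minimal example of the anomaly-based change-detection side of such an experiment: a one-sided CUSUM over per-second Modbus request counts that flags sustained upward shifts. The baseline window, drift, and threshold values are arbitrary placeholders and would need tuning, in line with the abstract's point about careful threshold selection.

```python
import numpy as np

def cusum_alerts(counts_per_sec, baseline_window=30, drift=5.0, threshold=50.0):
    """Return one boolean alert per second; assumes the first baseline_window seconds are attack-free."""
    counts = np.asarray(counts_per_sec, dtype=float)
    baseline = counts[:baseline_window].mean()
    s, alerts = 0.0, []
    for c in counts:
        s = max(0.0, s + (c - baseline - drift))  # accumulate only positive deviations
        alerts.append(s > threshold)
    return np.array(alerts)
```

Because the statistic has to accumulate before crossing the threshold, detection lags the start of a flood slightly, which matches the short delay the abstract attributes to the change-detection approach.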
{
"docid": "df95cb79af584d73db300e811c7a2348",
"text": "We present a neural network-based face detection system. A retinally connected neural network examines small windows of an image, and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We use a bootstrap algorithm for training the networks, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting non-face training examples, which must be chosen to span the entire space of non-face images. Comparisons with other state-of-the-art face detection systems a re presented; our system has better performance in terms of detection and false-positive rates.",
"title": ""
},
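The bootstrap (hard-negative mining) loop described above can be sketched generically: train, score windows from face-free scenery images, add the most face-like false positives to the negative set, and retrain. The classifier choice and all sizes below are illustrative, not the original retinally connected network; scenery_windows is assumed to hold at least seed_negs rows.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def bootstrap_train(face_windows, scenery_windows, rounds=3, seed_negs=200, fp_per_round=200):
    """face_windows, scenery_windows: 2-D arrays of flattened, preprocessed image windows."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    rng = np.random.default_rng(0)
    negatives = scenery_windows[rng.choice(len(scenery_windows), seed_negs, replace=False)]
    for _ in range(rounds):
        X = np.vstack([face_windows, negatives])
        y = np.concatenate([np.ones(len(face_windows)), np.zeros(len(negatives))])
        clf.fit(X, y)
        scores = clf.predict_proba(scenery_windows)[:, 1]
        hard = scenery_windows[np.argsort(scores)[-fp_per_round:]]  # most face-like non-faces
        negatives = np.vstack([negatives, hard])
    return clf
```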
{
"docid": "51e932bdc612ea54522b65783185ed49",
"text": "One of the major challenges in using electrical energy is the efficiency in its storage. Current methods, such as chemical batteries, hydraulic pumping, and water splitting, suffer from low energy density or incompatibility with current transportation infrastructure. Here, we report a method to store electrical energy as chemical energy in higher alcohols, which can be used as liquid transportation fuels. We genetically engineered a lithoautotrophic microorganism, Ralstonia eutropha H16, to produce isobutanol and 3-methyl-1-butanol in an electro-bioreactor using CO(2) as the sole carbon source and electricity as the sole energy input. The process integrates electrochemical formate production and biological CO(2) fixation and higher alcohol synthesis, opening the possibility of electricity-driven bioconversion of CO(2) to commercial chemicals.",
"title": ""
},
{
"docid": "9fa8cb7c5e73cea3409c610f0b0a80ed",
"text": "Emerging health problems require rapid advice. We describe the development and pilot testing of a systematic, transparent approach used by the World Health Organization (WHO) to develop rapid advice guidelines in response to requests from member states confronted with uncertainty about the pharmacological management of avian influenza A (H5N1) virus infection. We first searched for systematic reviews of randomized trials of treatment and prevention of seasonal influenza and for non-trial evidence on H5N1 infection, including case reports and animal and in vitro studies. A panel of clinical experts, clinicians with experience in treating patients with H5N1, influenza researchers, and methodologists was convened for a two-day meeting. Panel members reviewed the evidence prior to the meeting and agreed on the process. It took one month to put together a team to prepare the evidence profiles (i.e., summaries of the evidence on important clinical and policy questions), and it took the team only five weeks to prepare and revise the evidence profiles and to prepare draft guidelines prior to the panel meeting. A draft manuscript for publication was prepared within 10 days following the panel meeting. Strengths of the process include its transparency and the short amount of time used to prepare these WHO guidelines. The process could be improved by shortening the time required to commission evidence profiles. Further development is needed to facilitate stakeholder involvement, and evaluate and ensure the guideline's usefulness.",
"title": ""
},
{
"docid": "0950d606153e4e634f4bb5633562aa69",
"text": "The approach that one chooses to evolve software-intensive systems depends on the organization, the system, and the technology. We believe that significant progress in system architecture, system understanding, object technology, and net-centric computing make it possible to economically evolve software systems to a state in which they exhibit greater functionality and maintainability. In particular, interface technology, wrapping technology, and network technology are opening many opportunities to leverage existing software assets instead of scrapping them and starting over. But these promising technologies cannot be applied in a vacuum or without management understanding and control. There must be a framework in which to motivate the organization to understand its business opportunities, its application systems, and its road to an improved target system. This report outlines a comprehensive system evolution approach that incorporates an enterprise framework for the application of the promising technologies in the context of legacy systems.",
"title": ""
},
{
"docid": "2b245f211394279886c0f8d8778c039f",
"text": "We construct the type-IIB AdS4nK supergravity solutions which are dual to the three-dimensional N = 4 superconformal field theories that arise as infrared fixed points of circular-quiver gauge theories. These superconformal field theories are labeled by a triple (ρ, ρ̂, L) subject to constraints, where ρ and ρ̂ are two partitions of a number N , and L is a positive integer. We show that in the limit of large L the localized five-branes in our solutions are effectively smeared, and these type-IIB solutions are dual to the near-horizon geometry of M-theory M2-branes at a C/(Zk × Zk̂) orbifold singularity. There is no known M-theory description, on the other hand, that captures the dependence on the full generic data (ρ, ρ̂, L) . The constraints satisfied by this data, together with the enhanced non-abelian flavour symmetries of the superconformal field theories are precisely reproduced by the type-IIB supergravity solutions. As a bonus, we uncover a novel type of “orbifold equivalence” between different quantum field theories and provide quantitative evidence for this equivalence. 1 ha l-0 07 41 25 2, v er si on 1 12 O ct 2 01 2",
"title": ""
}
] |
scidocsrr
|
eaef8a0c725fa1a67f706c823466848c
|
Energy-efficient work-stealing language runtimes
|
[
{
"docid": "d5064c6c337a2d92b745970fe2bbb0bb",
"text": "With power-related concerns becoming dominant aspects of hardware and software design, significant research effort has been devoted towards system power minimization. Among run-time power-management techniques, dynamic voltage scaling (DVS) has emerged as an important approach, with the ability to provide significant power savings. DVS exploits the ability to control the power consumption by varying a processor's supply voltage (V) and clock frequency (f). DVS controls energy by scheduling different parts of the computation to different (V, f) pairs; the goal is to minimize energy while meeting performance needs. Although processors like the Intel XScale and Transmeta Crusoe allow software DVS control, such control has thus far largely been used at the process/task level under operating system control. This is mainly because the energy and time overhead for switching DVS modes is considered too large and difficult to manage within a single program.In this paper we explore the opportunities and limits of compile-time DVS scheduling. We derive an analytical model for the maximum energy savings that can be obtained using DVS given a few known program and processor parameters. We use this model to determine scenarios where energy consumption benefits from compile-time DVS and those where there is no benefit. The model helps us extrapolate the benefits of compile-time DVS into the future as processor parameters change. We then examine how much of these predicted benefits can actually be achieved through optimal settings of DVS modes. This is done by extending the existing Mixed-integer Linear Program (MILP) formulation for this problem by accurately accounting for DVS energy switching overhead, by providing finer-grained control on settings and by considering multiple data categories in the optimization. Overall, this research provides a comprehensive view of compile-time DVS management, providing both practical techniques for its immediate deployment as well theoretical bounds for use into the future.",
"title": ""
}
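As a toy illustration of the trade-off DVS scheduling exploits, the sketch below picks the lowest voltage/frequency pair that still meets a region's deadline and estimates dynamic energy with the usual C·V²·cycles model. The capacitance value and the (f, V) table are hypothetical placeholders, and real compile-time DVS additionally accounts for mode-switch overhead, which this sketch ignores.

```python
def choose_dvs_setting(cycles, deadline_s, freq_voltage_pairs, switched_cap=1e-9):
    """Pick the cheapest (frequency_hz, voltage) pair that still meets the deadline."""
    feasible = [(f, v) for f, v in freq_voltage_pairs if cycles / f <= deadline_s]
    if not feasible:
        raise ValueError("deadline cannot be met at any available setting")
    f, v = min(feasible, key=lambda fv: fv[1])    # lowest voltage among feasible settings
    energy_j = switched_cap * v ** 2 * cycles     # dynamic energy ~ C * V^2 * cycles
    return f, v, energy_j

# Hypothetical XScale-like operating points: (Hz, V)
points = [(150e6, 0.75), (400e6, 1.0), (600e6, 1.3), (800e6, 1.6)]
print(choose_dvs_setting(cycles=2e8, deadline_s=0.6, freq_voltage_pairs=points))
```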
] |
[
{
"docid": "35ae9dc4e87dd06a4d76bd44d0295b64",
"text": "Current Chinese event extract ion systems suffer much from the low recall due to unknown triggers. To resolve this problem, this paper firstly introduces morphological structures to better represent the compositional semantics inside Chinese triggers and then proposes a mechanism to automatically identify the head morpheme (either verb or noun) as the governing sememe of a trigger. Finally, it proposes a mechanism of combining the morphological structures and sememes of Chinese words to infer unknown triggers to improve the recall of the Chinese event extraction system. Evaluation on the ACE 2005 Chinese corpus justifies the effectiveness of our approach over a state-of-the-art system.",
"title": ""
},
{
"docid": "9b086872cad65b92237696ec3a48550f",
"text": "Memory-augmented neural networks (MANNs) refer to a class of neural network models equipped with external memory (such as neural Turing machines and memory networks). These neural networks outperform conventional recurrent neural networks (RNNs) in terms of learning long-term dependency, allowing them to solve intriguing AI tasks that would otherwise be hard to address. This paper concerns the problem of quantizing MANNs. Quantization is known to be effective when we deploy deep models on embedded systems with limited resources. Furthermore, quantization can substantially reduce the energy consumption of the inference procedure. These benefits justify recent developments of quantized multilayer perceptrons, convolutional networks, and RNNs. However, no prior work has reported the successful quantization of MANNs. The in-depth analysis presented here reveals various challenges that do not appear in the quantization of the other networks. Without addressing them properly, quantized MANNs would normally suffer from excessive quantization error which leads to degraded performance. In this paper, we identify memory addressing (specifically, content-based addressing) as the main reason for the performance degradation and propose a robust quantization method for MANNs to address the challenge. In our experiments, we achieved a computation-energy gain of 22× with 8-bit fixed-point and binary quantization compared to the floating-point implementation. Measured on the bAbI dataset, the resulting model, named the quantized MANN (Q-MANN), improved the error rate by 46% and 30% with 8-bit fixed-point and binary quantization, respectively, compared to the MANN quantized using conventional techniques.",
"title": ""
},
{
"docid": "a3da533f428b101c8f8cb0de04546e48",
"text": "In this paper we investigate the challenging problem of cursive text recognition in natural scene images. In particular, we have focused on isolated Urdu character recognition in natural scenes that could not be handled by tradition Optical Character Recognition (OCR) techniques developed for Arabic and Urdu scanned documents. We also present a dataset of Urdu characters segmented from images of signboards, street scenes, shop scenes and advertisement banners containing Urdu text. A variety of deep learning techniques have been proposed by researchers for natural scene text detection and recognition. In this work, a Convolutional Neural Network (CNN) is applied as a classifier, as CNN approaches have been reported to provide high accuracy for natural scene text detection and recognition. A dataset of manually segmented characters was developed and deep learning based data augmentation techniques were applied to further increase the size of the dataset. The training is formulated using filter sizes of 3x3, 5x5 and mixed 3x3 and 5x5 with a stride value of 1 and 2. The CNN model is trained with various learning rates and state-of-the-art results are achieved.",
"title": ""
},
{
"docid": "cab673895969ded614a4063d19777f4d",
"text": "Functional magnetic resonance imaging was used to assess the cortical areas active during the observation of mouth actions performed by humans and by individuals belonging to other species (monkey and dog). Two types of actions were presented: biting and oral communicative actions (speech reading, lip-smacking, barking). As a control, static images of the same actions were shown. Observation of biting, regardless of the species of the individual performing the action, determined two activation foci (one rostral and one caudal) in the inferior parietal lobule and an activation of the pars opercularis of the inferior frontal gyrus and the adjacent ventral premotor cortex. The left rostral parietal focus (possibly BA 40) and the left premotor focus were very similar in all three conditions, while the right side foci were stronger during the observation of actions made by conspecifics. The observation of speech reading activated the left pars opercularis of the inferior frontal gyrus, the observation of lip-smacking activated a small focus in the pars opercularis bilaterally, and the observation of barking did not produce any activation in the frontal lobe. Observation of all types of mouth actions induced activation of extrastriate occipital areas. These results suggest that actions made by other individuals may be recognized through different mechanisms. Actions belonging to the motor repertoire of the observer (e.g., biting and speech reading) are mapped on the observer's motor system. Actions that do not belong to this repertoire (e.g., barking) are essentially recognized based on their visual properties. We propose that when the motor representation of the observed action is activated, the observer gains knowledge of the observed action in a personal perspective, while this perspective is lacking when there is no motor activation.",
"title": ""
},
{
"docid": "1b844eb4aeaac878ebffaaf5b4d6e3ab",
"text": "Recently, deep residual networks have been successfully applied in many computer vision and natural language processing tasks, pushing the state-of-the-art performance with deeper and wider architectures. In this work, we interpret deep residual networks as ordinary differential equations (ODEs), which have long been studied in mathematics and physics with rich theoretical and empirical success. From this interpretation, we develop a theoretical framework on stability and reversibility of deep neural networks, and derive three reversible neural network architectures that can go arbitrarily deep in theory. The reversibility property allows a memoryefficient implementation, which does not need to store the activations for most hidden layers. Together with the stability of our architectures, this enables training deeper networks using only modest computational resources. We provide both theoretical analyses and empirical results. Experimental results demonstrate the efficacy of our architectures against several strong baselines on CIFAR-10, CIFAR-100 and STL-10 with superior or on-par state-of-the-art performance. Furthermore, we show our architectures yield superior results when trained using fewer training data.",
"title": ""
},
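The reversibility property that enables the memory savings can be shown with an additive coupling block: the inverse recomputes activations exactly, so they need not be stored for backpropagation. The residual function below is a stand-in dense layer, not one of the paper's architectures.

```python
import numpy as np

def residual(x, w):
    # Stand-in residual function; the paper's architectures use (stability-constrained) conv layers here.
    return np.tanh(x @ w)

def rev_forward(x1, x2, wf, wg):
    y1 = x1 + residual(x2, wf)
    y2 = x2 + residual(y1, wg)
    return y1, y2

def rev_inverse(y1, y2, wf, wg):
    # Recover the inputs exactly from the outputs, without stored activations.
    x2 = y2 - residual(y1, wg)
    x1 = y1 - residual(x2, wf)
    return x1, x2

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
wf, wg = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
assert np.allclose((x1, x2), rev_inverse(*rev_forward(x1, x2, wf, wg), wf, wg))
```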
{
"docid": "c6b24c3a310bee0e9dd2c0052e4e2edb",
"text": "Several companies have recently announced plans to build \"green\" datacenters, i.e. datacenters partially or completely powered by renewable energy. These datacenters will either generate their own renewable energy or draw it directly from an existing nearby plant. Besides reducing carbon footprints, renewable energy can potentially reduce energy costs, reduce peak power costs, or both. However, certain renewable fuels are intermittent, which requires approaches for tackling the energy supply variability. One approach is to use batteries and/or the electrical grid as a backup for the renewable energy. It may also be possible to adapt the workload to match the renewable energy supply. For highest benefits, green datacenter operators must intelligently manage their workloads and the sources of energy at their disposal.\n In this paper, we first discuss the tradeoffs involved in building green datacenters today and in the future. Second, we present Parasol, a prototype green datacenter that we have built as a research platform. Parasol comprises a small container, a set of solar panels, a battery bank, and a grid-tie. Third, we describe GreenSwitch, our model-based approach for dynamically scheduling the workload and selecting the source of energy to use. Our real experiments with Parasol, GreenSwitch, and MapReduce workloads demonstrate that intelligent workload and energy source management can produce significant cost reductions. Our results also isolate the cost implications of peak power management, storing energy on the grid, and the ability to delay the MapReduce jobs. Finally, our results demonstrate that careful workload and energy source management can minimize the negative impact of electrical grid outages.",
"title": ""
},
{
"docid": "e19e0c8457bd0e17baeca389d311b775",
"text": "Magic syndrome is a very uncommon disease, and vascular involvement is exceptional; only one case has been reported in the literature associated to a true aortic aneurysm. The treatment of aneurysms recommended in these patients is based on isolated cases and includes corticosteroids, other immunosuppressant drugs, and surgery. We report a case of a patient with Magic syndrome who developed aneurysm at the end of the aorta during treatment with infliximab, corticosteroids, and cyclosporine and who needed endovascular prosthesis implantation. After 12 months, she suffered an aneurysm of the ascending aorta, dilatation of the sinotubular junction, and severe aortic insufficiency, which forced surgery. During this time, the patient finally died.",
"title": ""
},
{
"docid": "4d1599b190f476d0a2830657365cfc44",
"text": "While Electronic Medical Records (EMR) contain detailed records of the patient-clinician encounter - vital signs, laboratory tests, symptoms, caregivers' notes, interventions prescribed and outcomes - developing predictive models from this data is not straightforward. These data contain systematic biases that violate assumptions made by off-the-shelf machine learning algorithms, commonly used in the literature to train predictive models. In this paper, we discuss key issues and subtle pitfalls specific to building predictive models from EMR. We highlight the importance of carefully considering both the special characteristics of EMR as well as the intended clinical use of the predictive model and show that failure to do so could lead to developing models that are less useful in practice. Finally, we describe approaches for training and evaluating models on EMR using early prediction of septic shock as our example application.",
"title": ""
},
{
"docid": "4c5d12c3b1254c83819eac53dd57ce40",
"text": "traditional topic detection method can not be applied to the microblog topic detection directly, because the microblog text is a kind of the short, fractional and grass-roots text. In order to detect the hot topic in the microblog text effectively, we propose a microblog topic detection method based on the combination of the latent semantic analysis and the structural property. According to the dialogic property of the microblog, our proposed method firstly creates semantic space based on the replies to the thread, with the aim to solve the data sparseness problem; secondly, create the microblog model based on the latent semantic analysis; finally, propose a semantic computation method combined with the time information. We then adopt the agglomerative hierarchical clustering method as the microblog topic detection method. Experimental results show that our proposed methods improve the performances of the microblog topic detection greatly.",
"title": ""
},
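A condensed sketch of the pipeline described above using scikit-learn: build thread-plus-replies documents, project them into a latent semantic space, and cluster agglomeratively. The time-weighted similarity from the abstract is omitted here, and the vocabulary size, number of LSA components, and cluster count are arbitrary example values.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import AgglomerativeClustering

def detect_topics(thread_docs, n_topics=5, n_components=50):
    """thread_docs: one string per thread, concatenating the post and its replies."""
    tfidf = TfidfVectorizer(max_features=5000).fit_transform(thread_docs)
    lsa = TruncatedSVD(n_components=min(n_components, tfidf.shape[1] - 1)).fit_transform(tfidf)
    return AgglomerativeClustering(n_clusters=n_topics).fit_predict(lsa)
```

Concatenating each post with its replies is what densifies the otherwise very short microblog texts before the LSA projection.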
{
"docid": "a56f1d8fb72393cbafb92df51dfe3239",
"text": "Approximately 2,800 years ago, a blind poet wandered from city to city in Greece telling a tall tale—that of a nobleman, war hero, and daring adventurer who did not catch sight of his homeland for 20 long years. The poet was Homer and the larger-than-life character was Odysseus. Homer sang the epic adventures of cunning Odysseus who fought the Trojan War for 10 years and labored for another 10 on a die-hard mission to return to his homeland, the island of Ithaca, and reunite with his loyal wife, Penelope, and their son, Telemachus. Three of the 10 return years were spent on sea, facing the wrath of Gods, monsters, and assorted evil-doers. The other 7 were spent on the island of Ogygia, in the seducing arms of a nymph, the beautiful and possessive Calypso. Yet, despite this dolce vita, Odysseus never took his mind off Ithaca, refusing Calypso’s offer to make him immortal. On the edge of ungratefulness, he confided to his mistress, “Full well I acknowledge Prudent Penelope cannot compare with your stature or beauty, for she is only a mortal, and you are immortal and ageless. Nevertheless it is she whom I daily desire and pine for. Therefore I long for my home and to see the day of returning” (Homer, The Odyssey, trans. 1921, Book V, pp. 78–79). 1 Return was continually on Odysseus’ mind, and the Greek word for it is nostos. His burning wish for nostos afflicted unbearable suffering on Odysseus, and the Greek word for it is algos. Nostalgia, then, is the psychological suffering caused by unrelenting yearning to",
"title": ""
},
{
"docid": "4dfd564948e2250fa304f769e44f1a8c",
"text": "The automatic extraction of breaking news events from natural language text is a valuable capability for decision support systems. Traditional systems tend to focus on extracting events from a single media source and often ignore cross-media references. Here, we describe a large-scale automated system for extracting natural disasters and critical events from both newswire text and social media. We outline a comprehensive architecture that can identify, categorize and summarize seven different event types - namely floods, storms, fires, armed conflict, terrorism, infrastructure breakdown, and labour unavailability. The system comprises fourteen modules and is equipped with a novel coreference mechanism, capable of linking events extracted from the two complementary data sources. Additionally, the system is easily extensible to accommodate new event types. Our experimental evaluation demonstrates the effectiveness of the system.",
"title": ""
},
{
"docid": "e0fc5dabbc57100a1c726703e82be706",
"text": "In this paper, we examined the effects of financial news on Ho Chi Minh Stock Exchange (HoSE) and we tried to predict the direction of VN30 Index after the news articles were published. In order to do this study, we got news articles from three big financial websites and we represented them as feature vectors. Recently, researchers have used machine learning technique to integrate with financial news in their prediction model. Actually, news articles are important factor that influences investors in a quick way so it is worth considering the news impact on predicting the stock market trends. Previous works focused only on market news or on the analysis of the stock quotes in the past to predict the stock market behavior in the future. We aim to build a stock trend prediction model using both stock news and stock prices of VN30 index that will be applied in Vietnam stock market while there has been a little focus on using news articles to predict the stock direction. Experiment results show that our proposed method achieved high accuracy in VN30 index trend prediction.",
"title": ""
},
{
"docid": "a3386199b44e3164fafe8a8ae096b130",
"text": "Diehl Aerospace GmbH (DAs) is currently involved in national German Research & Technology (R&T) projects (e.g. SYSTAVIO, SESAM) and in European R&T projects like ASHLEY to extend and to improve the Integrated Modular Avionics (IMA) technology. Diehl Aerospace is investing to expand its current IMA technology to enable further integration of systems including hardware modules, associated software, tools and processes while increasing the level of standardization. An additional objective is to integrate more systems on a common computing platform which uses the same toolchain, processes and integration experiences. New IMA components enable integration of high integrity fast loop system applications such as control applications. Distributed architectures which provide new types of interfaces allow integration of secondary power distribution systems along with other IMA functions. Cross A/C type usage is also a future emphasis to increase standardization and decrease development and operating costs as well as improvements on time to market and affordability of systems.",
"title": ""
},
{
"docid": "0fdaac9c71730b1933ecd5918bb22431",
"text": "A simple design for circularly-polarized (CP) annular slot antennas is first described. The antenna is fed by a V-shaped coupling strip loaded with a small resistance, and it can generate CP radiation as long as the inclined angle of the V-shaped coupling strip is properly adjusted. Numerical analyses to the effects of varying the angle on CP axial ratio are performed. Several CP prototypes are fabricated. Both simulated and measured results demonstrate that the proposed feeding mechanism can give good CP performances for the case that the slot width is varied from 0.008 to 0.09 λ0. Then, a design for polarization reconfigurable antennas is developed from the feeding mechanism. Only two PIN diodes are involved in the reconfigurable design that can offer the switching among three different polarizations, including one linear polarization and dual orthogonal circular polarizations. Details of the designs and experimental results are shown.",
"title": ""
},
{
"docid": "e769b1eab6d5ebf78bc5d2bb12f05607",
"text": "This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.",
"title": ""
},
{
"docid": "020a05131d33614ba6b2ab1ece6e394b",
"text": "A crossed dipole antenna is incorporated with an AMC surface to achieve low-profile and broadband characteristics. Interactions between the crossed dipole and the AMC surface are meticulously considered for optimum design. The antenna yields an impedance bandwidth of 18.8% for |S11| <; -10 dB and a 3-dB AR bandwidth of 10.9%.",
"title": ""
},
{
"docid": "a413ebf5cc18d8c423db7ad82f207379",
"text": "Kernel Approximation Methods for Speech Recognition",
"title": ""
},
{
"docid": "61ae61d0950610ee2ad5e07f64f9b983",
"text": "We present Searn, an algorithm for integrating search and learning to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision. Searn is a meta-algorithm that transforms these complex problems into simple classification problems to which any binary classifier may be applied. Unlike current algorithms for structured learning that require decomposition of both the loss function and the feature functions over the predicted structure, Searn is able to learn prediction functions for any loss function and any class of features. Moreover, Searn comes with a strong, natural theoretical guarantee: good performance on the derived classification problems implies good performance on the structured prediction problem.",
"title": ""
},
{
"docid": "030f53c532e6989b12c8c2199d6bd5ac",
"text": "Instagram is a popular social networking application that allows users to express themselves through the uploaded content and the different filters they can apply. In this study we look at personality prediction from Instagram picture features. We explore two different features that can be extracted from pictures: 1) visual features (e.g., hue, valence, saturation), and 2) content features (i.e., the content of the pictures). To collect data, we conducted an online survey where we asked participants to fill in a personality questionnaire and grant us access to their Instagram account through the Instagram API. We gathered 54,962 pictures of 193 Instagram users. With our results we show that visual and content features can be used to predict personality from and perform in general equally well. Combining the two however does not result in an increased predictive power. Seemingly, they are not adding more value than they already consist of independently.",
"title": ""
},
{
"docid": "367d49d63f0c79906b50cfb9943c8d3a",
"text": "This article develops a conceptual framework for advancing theories of environmentally significant individual behavior and reports on the attempts of the author’s research group and others to develop such a theory. It discusses definitions of environmentally significant behavior; classifies the behaviors and their causes; assesses theories of environmentalism, focusing especially on value-belief-norm theory; evaluates the relationship between environmental concern and behavior; and summarizes evidence on the factors that determine environmentally significant behaviors and that can effectively alter them. The article concludes by presenting some major propositions supported by available research and some principles for guiding future research and informing the design of behavioral programs for environmental protection.",
"title": ""
}
] |
scidocsrr
|
c5ccd2bc1df154da4d9a66b109375695
|
Collaborative filtering recommender systems
|
[
{
"docid": "a72932cd98f425eafc19b9786da4319d",
"text": "Recommender systems are changing from novelties used by a few E-commerce sites, to serious business tools that are re-shaping the world of E-commerce. Many of the largest commerce Web sites are already using recommender systems to help their customers find products to purchase. A recommender system learns from a customer and recommends products that she will find most valuable from among the available products. In this paper we present an explanation of how recommender systems help E-commerce sites increase sales, and analyze six sites that use recommender systems including several sites that use more than one recommender system. Based on the examples, we create a taxonomy of recommender systems, including the interfaces they present to customers, the technologies used to create the recommendations, and the inputs they need from customers. We conclude with ideas for new applications of recommender systems to E-commerce.",
"title": ""
},
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
},
{
"docid": "dd0f335262aab9aa5adb0ad7d25b80bf",
"text": "We present a framework for adaptive news access, based on machine learning techniques specifically designed for this task. First, we focus on the system's general functionality and system architecture. We then describe the interface and design of two deployed news agents that are part of the described architecture. While the first agent provides personalized news through a web-based interface, the second system is geared towards wireless information devices such as PDAs (personal digital assistants) and cell phones. Based on implicit and explicit user feedback, our agents use a machine learning algorithm to induce individual user models. Motivated by general shortcomings of other user modeling systems for Information Retrieval applications, as well as the specific requirements of news classification, we propose the induction of hybrid user models that consist of separate models for short-term and long-term interests. Furthermore, we illustrate how the described algorithm can be used to address an important issue that has thus far received little attention in the Information Retrieval community: a user's information need changes as a direct result of interaction with information. We empirically evaluate the system's performance based on data collected from regular system users. The goal of the evaluation is not only to understand the performance contributions of the algorithm's individual components, but also to assess the overall utility of the proposed user modeling techniques from a user perspective. Our results provide empirical evidence for the utility of the hybrid user model, and suggest that effective personalization can be achieved without requiring any extra effort from the user.",
"title": ""
}
] |
[
{
"docid": "17ac85242f7ee4bc4991e54403e827c4",
"text": "Over the last two decades, an impressive progress has been made in the identification of novel factors in the translocation machineries of the mitochondrial protein import and their possible roles. The role of lipids and possible protein-lipids interactions remains a relatively unexplored territory. Investigating the role of potential lipid-binding regions in the sub-units of the mitochondrial motor might help to shed some more light in our understanding of protein-lipid interactions mechanistically. Bioinformatics results seem to indicate multiple potential lipid-binding regions in each of the sub-units. The subsequent characterization of some of those regions in silico provides insight into the mechanistic functioning of this intriguing and essential part of the protein translocation machinery. Details about the way the regions interact with phospholipids were found by the use of Monte Carlo simulations. For example, Pam18 contains one possible transmembrane region and two tilted surface bound conformations upon interaction with phospholipids. The results demonstrate that the presented bioinformatics approach might be useful in an attempt to expand the knowledge of the possible role of protein-lipid interactions in the mitochondrial protein translocation process.",
"title": ""
},
{
"docid": "915544d06496a34d4c7101236e24368d",
"text": "1569-190X/$ see front matter 2010 Elsevier B.V doi:10.1016/j.simpat.2010.03.004 * Corresponding author. Tel.: +34 91 3089469. E-mail addresses: vsanz@dia.uned.es (V. Sanz) (S. Dormido). The analysis and identification of the requirements needed to describe P-DEVS models using the Modelica language are discussed in this manuscript. A new free Modelica package, named DEVSLib, is presented. It facilitates the description of discrete-event models according to the Parallel DEVS formalism and provides components to interface with continuous-time models, which can be composed using other Modelica libraries. In addition, DEVSLib contains models implementing Quantized State System (QSS) integration methods. The model definition capabilities provided by DEVSLib are similar to the ones in the simulation environments specifically designed for supporting the DEVS formalism. The main additional advantage of DEVSLib is that it can be used together with other Modelica libraries in order to compose multi-domain and multi-formalism hybrid models. DEVSLib is included in the DESLib Modelica library, which is freely available for download at http:// www.euclides.dia.uned.es. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f2640838cfc3938d1a717229e77b3afc",
"text": "Defenders of enterprise networks have a critical need to quickly identify the root causes of malware and data leakage. Increasingly, USB storage devices are the media of choice for data exfiltration, malware propagation, and even cyber-warfare. We observe that a critical aspect of explaining and preventing such attacks is understanding the provenance of data (i.e., the lineage of data from its creation to current state) on USB devices as a means of ensuring their safe usage. Unfortunately, provenance tracking is not offered by even sophisticated modern devices. This work presents ProvUSB, an architecture for fine-grained provenance collection and tracking on smart USB devices. ProvUSB maintains data provenance by recording reads and writes at the block layer and reliably identifying hosts editing those blocks through attestation over the USB channel. Our evaluation finds that ProvUSB imposes a one-time 850 ms overhead during USB enumeration, but approaches nearly-bare-metal runtime performance (90% of throughput) on larger files during normal execution, and less than 0.1% storage overhead for provenance in real-world workloads. ProvUSB thus provides essential new techniques in the defense of computer systems and USB storage devices.",
"title": ""
},
{
"docid": "987de36823c8dbb9ff13aec4fecd6c9a",
"text": "Previous research has been done on mindfulness and nursing stress but no review has been done to highlight the most up-to-date findings, to justify the recommendation of mindfulness training for the nursing field. The present paper aims to review the relevant studies, derive conclusions, and discuss future direction of research in this field.A total of 19 research papers were reviewed. The majority was intervention studies on the effects of mindfulness-training programs on nursing stress. Higher mindfulness is correlated with lower nursing stress. Mindfulness-based training programs were found to have significant positive effects on nursing stress and psychological well-being. The studies were found to have non-standardized intervention methods, inadequate research designs, small sample size, and lack of systematic follow-up on the sustainability of treatment effects, limiting the generalizability of the results. There is also a lack of research investigation into the underlying mechanism of action of mindfulness on nursing stress. Future research that addresses these limitations is indicated.",
"title": ""
},
{
"docid": "3770720cff3a36596df097835f4f10a9",
"text": "As mobile computing technologies have been more powerful and inclusive in people’s daily life, the issue of mobile assisted language learning (MALL) has also been widely explored in CALL research. Many researches on MALL consider the emerging mobile technologies have considerable potentials for the effective language learning. This review study focuses on the investigation of newly emerging mobile technologies and their pedagogical applications for language teachers and learners. Recent research or review on mobile assisted language learning tends to focus on more detailed applications of newly emerging mobile technology, rather than has given a broader point focusing on types of mobile device itself. In this paper, I thus reviewed recent research and conference papers for the last decade, which utilized newly emerging and integrated mobile technology. Its pedagogical benefits and challenges are discussed.",
"title": ""
},
{
"docid": "bf156a97587b55e8afe255fe1b1a8ac0",
"text": "In recent years researches are focused towards mining infrequent patterns rather than frequent patterns. Mining infrequent pattern plays vital role in detecting any abnormal event. In this paper, an algorithm named Infrequent Pattern Miner for Data Streams (IPM-DS) is proposed for mining nonzero infrequent patterns from data streams. The proposed algorithm adopts the FP-growth based approach for generating all infrequent patterns. The proposed algorithm (IPM-DS) is evaluated using health data set collected from wearable physiological sensors that measure vital parameters such as Heart Rate (HR), Breathing Rate (BR), Oxygen Saturation (SPO2) and Blood pressure (BP) and also with two publically available data sets such as e-coli and Wine from UCI repository. The experimental results show that the proposed algorithm generates all possible infrequent patterns in less time.",
"title": ""
},
{
"docid": "67411bb40671a8c1dafe328c379b0cd4",
"text": "Continuous EEG Monitoring is becoming a commonly used tool in assessing brain function in critically ill patients. However, there is no uniformly accepted nomenclature for EEG patterns frequently encountered in these patients such as periodic discharges, fluctuating rhythmic patterns, and combinations thereof. Similarly, there is no consensus on which patterns are associated with ongoing neuronal injury, which patterns need to be treated, or how aggressively to treat them. The first step in addressing these issues is to standardize terminology to allow multicenter research projects and to facilitate communication. To this end, we gathered a group of electroencephalographers with particular expertise or interest in this area in order to develop standardized terminology to be used primarily in the research setting. One of the main goals was to eliminate terms with clinical connotations, intended or not, such as “triphasic waves,” a term that implies a metabolic encephalopathy with no relationship to seizures for many clinicians. We also avoid the use of “ictal,” “interictal” and “epileptiform” for the equivocal patterns that are the primary focus of this report. A standardized method of quantifying interictal discharges is also included for the same reasons, with no attempt to alter the existing definition of epileptiform discharges (sharp waves and spikes [Noachtar et al 1999]). Finally, we suggest here a scheme for categorizing background EEG activity. The revisions proposed here were based on solicited feedback on the initial version of the Report [Hirsch LJ et al 2005], from within and outside this committee and society, including public presentations and discussion at many venues. Interand intraobserver agreement between expert EEG readers using the initial version of the terminology was found to be moderate for major terms but only slight to fair for modifiers. [Gerber PA et al 2008] A second assessment was performed on an interim version after extensive changes were introduced. This assessment showed significant improvement with an inter-rater agreement almost perfect for main terms (k = 0.87, 0.92) and substantial agreement for the modifiers of amplitude (93%) and frequency (80%) (Mani R, et al, 2012). Last, after official posting on the ACNS Website and solicitation of comment from ACNS members and others, additional minor additions and revisions were enacted. To standardize terminology of periodic and rhythmic EEG patterns in the critically ill in order to aid communication and future research involving such patterns. Our goal is to avoid terms with clinical connotations and to define terms thoroughly enough to maximize inter-rater reliability. Not included in this nomenclature: Unequivocal electrographic seizures including the following: Generalized spike-wave discharges at 3/s or faster; and clearly evolving discharges of any type that reach a frequency .4/s, whether focal or generalized. These would still be referred to as electrographic seizures. However, their prevalence, duration, frequency and relation to stimulation should be stated as described below when being used for research purposes. Corollary: The following patterns are included in this nomenclature and would not be termed electrographic seizures for research purposes (whether or not these patterns are determined to represent seizures clinically in a given patient): Generalized spike and wave patterns slower than 3/s; and evolving discharges that remain slower than or equal to 4/s. 
This does not imply that these patterns are not ictal, but simply that they may or may not be. Clinical correlation, including response to treatment, may be necessary to make this determination. N.B.: This terminology can be applied to all ages, but is not intended for use in neonates.",
"title": ""
},
{
"docid": "4b4a3eb0e24f48bab61d348f61b31f32",
"text": "In recent years, gesture recognition has received much attention from research communities. Computer vision-based gesture recognition has many potential applications in the area of human-computer interaction as well as sign language recognition. Sign languages use a combination of hand shapes, motion and locations as well as facial expressions. Finger-spelling is a manual representation of alphabet letters, which is often used where there is no sign word to correspond to a spoken word. In Australia, a sign language called Auslan is used by the deaf community and and the finger-spelling letters use two handed motion, unlike the well known finger-spelling of American Sign Language (ASL) that uses static shapes. This thesis presents the Auslan Finger-spelling Recognizer (AFR) that is a real-time system capable of recognizing signs that consists of Auslan manual alphabet letters from video sequences. The AFR system has two components: the first is the feature extraction process that extracts a combination of spatial and motion features from the images. Which classifies a sequence of features using Hidden Markov Models (HMMs). Tests using a vocabulary of twenty signed words showed the system could achieve 97% accuracy at the letter level and 88% at the word level using a finite state grammar network and embedded training.",
"title": ""
},
{
"docid": "c3aa37246eeb4745616790dde605ec9d",
"text": "The paper describes the use of Conditional Random Fields(CRF) utilizing contextual information in automatically labeling extracted segments of scanned documents as Machine-print, Handwriting and Noise. The result of such a labeling can serve as an indexing step for a context-based image retrieval system or a bio-metric signature verification system. A simple region growing algorithm is first used to segment the document into a number of patches. A label for each such segmented patch is inferred using a CRF model. The model is flexible enough to include signatures as a type of handwriting and isolate it from machine-print and noise. The robustness of the model is due to the inherent nature of modeling neighboring spatial dependencies in the labels as well as the observed data using CRF. Maximum pseudo-likelihood estimates for the parameters of the CRF model are learnt using conjugate gradient descent. Inference of labels is done by computing the probability of the labels under the model with Gibbs sampling. Experimental results show that this approach provides for 95.75% of the data being assigned correct labels. The CRF based model is shown to be superior to Neural Networks and Naive Bayes.",
"title": ""
},
{
"docid": "ec7f20169de673cc14b31e8516937df2",
"text": "Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However, use of a template does not certify that the paper has been accepted for publication in the named journal. INFORMS journal templates are for the exclusive purpose of submitting to an INFORMS journal and should not be used to distribute the papers in print or online or to submit the papers to another publication.",
"title": ""
},
{
"docid": "1e4d9d451b3713c9a06a7b0b8cb4e471",
"text": "The Web 3.0 is approaching fast and the Online Social Networks (OSNs) are becoming more and more pervasive in today daily activities. A subsequent consequence is that criminals are running at the same speed as technology and most of the time highly sophisticated technological machineries are used by them. Images are often involved in illicit or illegal activities, with it now being fundamental to try to ascertain as much as information on a given image as possible. Today, most of the images coming from the Internet flow through OSNs. The paper analyzes the characteristics of images published on some OSNs. The analysis mainly focuses on how the OSN processes the uploaded images and what changes are made to some of the characteristics, such as JPEG quantization table, pixel resolution and related metadata. The experimental analysis was carried out in June-July 2011 on Facebook, Badoo and Google+. It also has a forensic value: it can be used to establish whether an image has been downloaded from an OSN or not.",
"title": ""
},
{
"docid": "78eb6a1734891815adeeb2b1132b3f8e",
"text": "Portable systems demand energy efficiency in order to maximize battery life. IRAM architectures, which combine DRAM and a processor on the same chip in a DRAM process, are more energy efficient than conventional systems. The high density of DRAM permits a much larger amount of memory on-chip than a traditional SRAM cache design in a logic process. This allows most or all IRAM memory accesses to be satisfied on-chip. Thus there is much less need to drive high-capacitance off-chip buses, which contribute significantly to the energy consumption of a system. To quantify this advantage we apply models of energy consumption in DRAM and SRAM memories to results from cache simulations of applications reflective of personal productivity tasks on low power systems. We find that IRAM memory hierarchies consume as little as 22% of the energy consumed by a conventional memory hierarchy for memory-intensive applications, while delivering comparable performance. Furthermore, the energy consumed by a system consisting of an IRAM memory hierarchy combined with an energy efficient CPU core is as little as 40% of that of the same CPU core with a traditional memory hierarchy.",
"title": ""
},
{
"docid": "9dfcba284d0bf3320d893d4379042225",
"text": "Botnet is a hybrid of previous threats integrated with a command and control system and hundreds of millions of computers are infected. Although botnets are widespread development, the research and solutions for botnets are not mature. In this paper, we present an overview of research on botnets. We discuss in detail the botnet and related research including infection mechanism, botnet malicious behavior, command and control models, communication protocols, botnet detection, and botnet defense. We also present a simple case study of IRC-based SpyBot.",
"title": ""
},
{
"docid": "c6f9c8ee92acfd02e49253b1e065ca46",
"text": "The majority of penile carcinoma is squamous cell carcinoma. Although uncommon in the United States, it represents a larger proportion of cancers in the underdeveloped world. Invasive squamous cell carcinoma may arise from precursor lesions or de novo , and has been associated with lack of circumcision and HPV infection. Early diagnosis is imperative as lymphatic spread is associated with a poor prognosis. Radical surgical treatment is no longer the mainstay, and penile sparing treatments now are often used, including Mohs micrographic surgery. Therapeutic decisions should be made with regard to the size and location of the tumor, as well as the functional desires of the patient. It is critical for the dermatologist to be familiar with the evaluation, grading/staging, and treatment advances of penile squamous cell carcinoma. Herein, we present a review of the literature regarding penile squamous cell carcinoma, as well as a case report of invasive squamous cell carcinoma treated with Mohs micrographic surgery.",
"title": ""
},
{
"docid": "3f5e8ac89e893d3166f5e3c50f91b8cc",
"text": "Biosequences typically have a small alphabet, a long length, and patterns containing gaps (i.e., \"don't care\") of arbitrary size. Mining frequent patterns in such sequences faces a different type of explosion than in transaction sequences primarily motivated in market-basket analysis. In this paper, we study how this explosion affects the classic sequential pattern mining, and present a scalable two-phase algorithm to deal with this new explosion. The <i>Segment Phase</i> first searches for short patterns containing no gaps, called <i>segments</i>. This phase is efficient. The <i>Pattern Phase</i> searches for long patterns containing multiple segments separated by variable length gaps. This phase is time consuming. The purpose of two phases is to exploit the information obtained from the first phase to speed up the pattern growth and matching and to prune the search space in the second phase. We evaluate this approach on synthetic and real life data sets.",
"title": ""
},
{
"docid": "391fb9de39cb2d0635f2329362db846e",
"text": "In recent years, there has been an explosion of interest in mining time series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature.",
"title": ""
},
{
"docid": "844c75292441af560ed2d2abc1d175f6",
"text": "Completion rates for massive open online classes (MOOCs) are notoriously low, but learner intent is an important factor. By studying students who drop out despite their intent to complete the MOOC, it may be possible to develop interventions to improve retention and learning outcomes. Previous research into predicting MOOC completion has focused on click-streams, demographics, and sentiment analysis. This study uses natural language processing (NLP) to examine if the language in the discussion forum of an educational data mining MOOC is predictive of successful class completion. The analysis is applied to a subsample of 320 students who completed at least one graded assignment and produced at least 50 words in discussion forums. The findings indicate that the language produced by students can predict with substantial accuracy (67.8 %) whether students complete the MOOC. This predictive power suggests that NLP can help us both to understand student retention in MOOCs and to develop automated signals of student success.",
"title": ""
},
{
"docid": "04ed876237214c1366f966b80ebb7fd4",
"text": "Load Balancing is essential for efficient operations indistributed environments. As Cloud Computing is growingrapidly and clients are demanding more services and betterresults, load balancing for the Cloud has become a veryinteresting and important research area. Many algorithms weresuggested to provide efficient mechanisms and algorithms forassigning the client's requests to available Cloud nodes. Theseapproaches aim to enhance the overall performance of the Cloudand provide the user more satisfying and efficient services. Inthis paper, we investigate the different algorithms proposed toresolve the issue of load balancing and task scheduling in CloudComputing. We discuss and compare these algorithms to providean overview of the latest approaches in the field.",
"title": ""
},
{
"docid": "65a9813786554ede5e3c36f62b345ad8",
"text": "Web search queries provide a surprisingly large amount of information, which can be potentially organized and converted into a knowledgebase. In this paper, we focus on the problem of automatically identifying brand and product entities from a large collection of web queries in online shopping domain. We propose an unsupervised approach based on adaptor grammars that does not require any human annotation efforts nor rely on any external resources. To reduce the noise and normalize the query patterns, we introduce a query standardization step, which groups multiple search patterns and word orderings together into their most frequent ones. We present three different sets of grammar rules used to infer query structures and extract brand and product entities. To give an objective assessment of the performance of our approach, we conduct experiments on a large collection of online shopping queries and intrinsically evaluate the knowledgebase generated by our method qualitatively and quantitatively. In addition, we also evaluate our framework on extrinsic tasks on query tagging and chunking. Our empirical studies show that the knowledgebase discovered by our approach is highly accurate, has good coverage and significantly improves the performance on the external tasks.",
"title": ""
},
{
"docid": "38bdf0690a8409808cc337475ccf8347",
"text": "Network Traffic Matrix (TM) prediction is defined as the problem of estimating future network traffic from the previous and achieved network traffic data. It is widely used in network planning, resource management and network security. Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that is well-suited to learn from experience to classify, process and predict time series with time lags of unknown size. LSTMs have been shown to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we propose a LSTM RNN framework for predicting Traffic Matrix (TM) in large networks. By validating our framework on real-world data from GÉANT network, we show that our LSTM models converge quickly and give state of the art TM prediction performance for relatively small sized models. keywords Traffic Matrix, Prediction, Neural Networks, Long Short-Term Mermory",
"title": ""
}
] |
scidocsrr
|
971d8256500dcd83689aa9541ef38db1
|
Learning Memory Access Patterns
|
[
{
"docid": "75a1c22e950ccb135c054353acb8571a",
"text": "We study the problem of building generative models of natural source code (NSC); that is, source code written and understood by humans. Our primary contribution is to describe a family of generative models for NSC that have three key properties: First, they incorporate both sequential and hierarchical structure. Second, we learn a distributed representation of source code elements. Finally, they integrate closely with a compiler, which allows leveraging compiler logic and abstractions when building structure into the model. We also develop an extension that includes more complex structure, refining how the model generates identifier tokens based on what variables are currently in scope. Our models can be learned efficiently, and we show empirically that including appropriate structure greatly improves the models, measured by the probability of generating test programs.",
"title": ""
},
{
"docid": "0c1cd807339481f3a0b6da1fbe96950c",
"text": "Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the quality of learned models, as they have very sparse training data for what are often high-dimensional feature spaces. What is needed is a way to generate an unbounded number of training programs that finely cover the feature space. At the same time the generated programs must be similar to the types of programs that human developers actually write, otherwise the learning will target the wrong parts of the feature space. We mine open source repositories for program fragments and apply deep learning techniques to automatically construct models for how humans write programs. We sample these models to generate an unbounded number of runnable training programs. The quality of the programs is such that even human developers struggle to distinguish our generated programs from hand-written code. We use our generator for OpenCL programs, CLgen, to automatically synthesize thousands of programs and show that learning over these improves the performance of a state of the art predictive model by 1.27x. In addition, the fine covering of the feature space automatically exposes weaknesses in the feature design which are invisible with the sparse training examples from existing benchmark suites. Correcting these weaknesses further increases performance by 4.30x.",
"title": ""
}
] |
[
{
"docid": "81cd34302bf028a444019e228a5148d7",
"text": "Since the release of the large discourse-level annotation of the Penn Discourse Treebank (PDTB), research work has been carried out on certain subtasks of this annotation, such as disambiguating discourse connectives and classifying Explicit or Implicit relations. We see a need to construct a full parser on top of these subtasks and propose a way to evaluate the parser. In this work, we have designed and developed an end-to-end discourse parser-to-parse free texts in the PDTB style in a fully data-driven approach. The parser consists of multiple components joined in a sequential pipeline architecture, which includes a connective classifier, argument labeler, explicit classifier, non-explicit classifier, and attribution span labeler. Our trained parser first identifies all discourse and non-discourse relations, locates and labels their arguments, and then classifies the sense of the relation between each pair of arguments. For the identified relations, the parser also determines the attribution spans, if any, associated with them. We introduce novel approaches to locate and label arguments, and to identify attribution spans. We also significantly improve on the current state-of-the-art connective classifier. We propose and present a comprehensive evaluation from both component-wise and error-cascading perspectives, in which we illustrate how each component performs in isolation, as well as how the pipeline performs with errors propagated forward. The parser gives an overall system F1 score of 46.80 percent for partial matching utilizing gold standard parses, and 38.18 percent with full automation.",
"title": ""
},
{
"docid": "9245a5a3daad7fbce9416b1dedb9e9ab",
"text": "BACKGROUND\nDespite the growing epidemic of heart failure with preserved ejection fraction (HFpEF), no valid measure of patients' health status (symptoms, function, and quality of life) exists. We evaluated the Kansas City Cardiomyopathy Questionnaire (KCCQ), a validated measure of HF with reduced EF, in patients with HFpEF.\n\n\nMETHODS AND RESULTS\nUsing a prospective HF registry, we dichotomized patients into HF with reduced EF (EF≤ 40) and HFpEF (EF≥50). The associations between New York Heart Association class, a commonly used criterion standard, and KCCQ Overall Summary and Total Symptom domains were evaluated using Spearman correlations and 2-way ANOVA with differences between patients with HF with reduced EF and HFpEF tested with interaction terms. Predictive validity of the KCCQ Overall Summary scores was assessed with Kaplan-Meier curves for death and all-cause hospitalization. Covariate adjustment was made using Cox proportional hazards models. Internal reliability was assessed with Cronbach's α. Among 849 patients, 200 (24%) had HFpEF. KCCQ summary scores were strongly associated with New York Heart Association class in both patients with HFpEF (r=-0.62; P<0.001) and HF with reduced EF (r=-0.55; P=0.27 for interaction). One-year event-free rates by KCCQ category among patients with HFpEF were 0 to 25=13.8%, 26 to 50=59.1%, 51 to 75=73.8%, and 76 to 100=77.8% (log rank P<0.001), with no significant interaction by EF (P=0.37). The KCCQ domains demonstrated high internal consistency among patients with HFpEF (Cronbach's α=0.96 for overall summary and ≥0.69 in all subdomains).\n\n\nCONCLUSIONS\nAmong patients with HFpEF, the KCCQ seems to be a valid and reliable measure of health status and offers excellent prognostic ability. Future studies should extend and replicate our findings, including the establishment of its responsiveness to clinical change.",
"title": ""
},
{
"docid": "f20e0b50b72b4b2796b77757ff20210e",
"text": "The dominant neural architectures in question answer retrieval are based on recurrent or convolutional encoders configured with complex word matching layers. Given that recent architectural innovations are mostly new word interaction layers or attention-based matching mechanisms, it seems to be a well-established fact that these components are mandatory for good performance. Unfortunately, the memory and computation cost incurred by these complex mechanisms are undesirable for practical applications. As such, this paper tackles the question of whether it is possible to achieve competitive performance with simple neural architectures. We propose a simple but novel deep learning architecture for fast and efficient question-answer ranking and retrieval. More specifically, our proposed model, HyperQA, is a parameter efficient neural network that outperforms other parameter intensive models such as Attentive Pooling BiLSTMs and Multi-Perspective CNNs on multiple QA benchmarks. The novelty behind HyperQA is a pairwise ranking objective that models the relationship between question and answer embeddings in Hyperbolic space instead of Euclidean space. This empowers our model with a self-organizing ability and enables automatic discovery of latent hierarchies while learning embeddings of questions and answers. Our model requires no feature engineering, no similarity matrix matching, no complicated attention mechanisms nor over-parameterized layers and yet outperforms and remains competitive to many models that have these functionalities on multiple benchmarks.",
"title": ""
},
{
"docid": "33b37422ace8a300d53d4896de6bbb6f",
"text": "Digital investigations of the real world through point clouds and derivatives are changing how curators, cultural heritage researchers and archaeologists work and collaborate. To progressively aggregate expertise and enhance the working proficiency of all professionals, virtual reconstructions demand adapted tools to facilitate knowledge dissemination. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. In this paper, we review the state of the art of point cloud integration within archaeological applications, giving an overview of 3D technologies for heritage, digital exploitation and case studies showing the assimilation status within 3D GIS. Identified issues and new perspectives are addressed through a knowledge-based point cloud processing framework for multi-sensory data, and illustrated on mosaics and quasi-planar objects. A new acquisition, pre-processing, segmentation and ontology-based classification method on hybrid point clouds from both terrestrial laser scanning and dense image matching is proposed to enable reasoning for information extraction. Experiments in detection and semantic enrichment show promising results of 94% correct semantization. Then, we integrate the metadata in an archaeological smart point cloud data structure allowing spatio-semantic queries related to CIDOC-CRM. Finally, a WebGL prototype is presented that leads to efficient communication between actors by proposing optimal 3D data visualizations as a basis on which interaction can grow.",
"title": ""
},
{
"docid": "309e14c07a3a340f7da15abeb527231d",
"text": "The random forest algorithm, proposed by L. Breiman in 2001, has been extremely successful as a general-purpose classification and regression method. The approach, which combines several randomized decision trees and aggregates their predictions by averaging, has shown excellent performance in settings where the number of variables is much larger than the number of observations. Moreover, it is versatile enough to be applied to large-scale problems, is easily adapted to various ad-hoc learning tasks, and returns measures of variable importance. The present article reviews the most recent theoretical and methodological developments for random forests. Emphasis is placed on the mathematical forces driving the algorithm, with special attention given to the selection of parameters, the resampling mechanism, and variable importance measures. This review is intended to provide non-experts easy access to the main ideas.",
"title": ""
},
{
"docid": "db215a998da127466bcb5e80b750cbbb",
"text": "to design and build computing systems capable of running themselves, adjusting to varying circumstances, and preparing their resources to handle most efficiently the workloads we put upon them. These autonomic systems must anticipate needs and allow users to concentrate on what they want to accomplish rather than figuring how to rig the computing systems to get them there. Abtract The performance of current shared-memory multiprocessor systems depends on both the efficient utilization of all the architectural elements in the system (processors, memory, etc), and the workload characteristics. This Thesis has the main goal of improving the execution of workloads of parallel applications in shared-memory multiprocessor systems by using real performance information in the processor scheduling. In multiprocessor systems, users request for resources (processors) to execute their parallel applications. The Operating System is responsible to distribute the available physical resources among parallel applications in the more convenient way for both the system and the application performance. It is a typical practice of users in multiprocessor systems to request for a high number of processors assuming that the higher the processor request, the higher the number of processors allocated, and the higher the speedup achieved by their applications. However, this is not true. Parallel applications have different characteristics with respect to their scalability. Their speedup also depends on run-time parameters such as the influence of the rest of running applications. This Thesis proposes that the system should not base its decisions on the users requests only, but the system must decide, or adjust, its decisions based on real performance information calculated at run-time. The performance of parallel applications is an information that the system can dynamically measure without introducing a significant penalty in the application execution time. Using this information, the processor allocation can be decided, or modified, being robust to incorrect processor requests given by users. We also propose that the system use a target efficiency to ensure the efficient use of processors. This target efficiency is a system parameter and can be dynamically decided as a function of the characteristics of running applications or the number of queued applications. We also propose to coordinate the different scheduling levels that operate in the processor scheduling: the run-time scheduler, the processor scheduler, and the queueing system. We propose to establish an interface between levels to send and receive information, and to take scheduling decisions considering the information provided by the rest of …",
"title": ""
},
{
"docid": "20db149230db9df2a30f5cd788db1d89",
"text": "IP flows have heavy-tailed packet and byte size distributions. This make them poor candidates for uniform sampling---i.e. selecting 1 in N flows---since omission or inclusion of a large flow can have a large effect on estimated total traffic. Flows selected in this manner are thus unsuitable for use in usage sensitive billing. We propose instead using a size-dependent sampling scheme which gives priority to the larger contributions to customer usage. This turns the heavy tails to our advantage; we can obtain accurate estimates of customer usage from a relatively small number of important samples.The sampling scheme allows us to control error when charging is sensitive to estimated usage only above a given base level. A refinement allows us to strictly limit the chance that a customers estimated usage will exceed their actual usage. Furthermore, we show that a secondary goal, that of controlling the rate at which samples are produced, can be fulfilled provided the billing cycle is sufficiently long. All these claims are supported by experiments on flow traces gathered from a commercial network.",
"title": ""
},
{
"docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c",
"text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.",
"title": ""
},
{
"docid": "7eec1e737523dc3b78de135fc71b058f",
"text": "Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences epsivnerally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This \"pyramid match\" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches",
"title": ""
},
{
"docid": "dca2900c2b002e3119435bcf983c5aac",
"text": "Substantial evidence suggests that the accumulation of beta-amyloid (Abeta)-derived peptides contributes to the aetiology of Alzheimer's disease (AD) by stimulating formation of free radicals. Thus, the antioxidant alpha-lipoate, which is able to cross the blood-brain barrier, would seem an ideal substance in the treatment of AD. We have investigated the potential effectiveness of alpha-lipoic acid (LA) against cytotoxicity induced by Abeta peptide (31-35) (30 microM) and hydrogen peroxide (H(2)O(2)) (100 microM) with the cellular 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) reduction and fluorescence dye propidium iodide assays in primary neurons of rat cerebral cortex. We found that treatment with LA protected cortical neurons against cytotoxicity induced by Abeta or H(2)O(2). In addition, LA-induced increase in the level of Akt in the neurons was observed by Western blot. The LA-induced neuroprotection and Akt increase were attenuated by pre-treatment with the phosphatidylinositol 3-kinase inhibitor, LY294002 (50 microM). Our data suggest that the neuroprotective effects of the antioxidant LA are partly mediated through activation of the PKB/Akt signaling pathway.",
"title": ""
},
{
"docid": "e1651c1f329b8caa53e5322be5bf700b",
"text": "Personalized curriculum sequencing is an important research issue for web-based learning systems because no fixed learning paths will be appropriate for all learners. Therefore, many researchers focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and adaptively provide learning paths in order to promote the learning performance of individual learners. However, most personalized e-learning systems usually neglect to consider if learner ability and the difficulty level of the recommended courseware are matched to each other while performing personalized learning services. Moreover, the problem of concept continuity of learning paths also needs to be considered while implementing personalized curriculum sequencing because smooth learning paths enhance the linked strength between learning concepts. Generally, inappropriate courseware leads to learner cognitive overload or disorientation during learning processes, thus reducing learning performance. Therefore, compared to the freely browsing learning mode without any personalized learning path guidance used in most web-based learning systems, this paper assesses whether the proposed genetic-based personalized e-learning system, which can generate appropriate learning paths according to the incorrect testing responses of an individual learner in a pre-test, provides benefits in terms of learning performance promotion while learning. Based on the results of pre-test, the proposed genetic-based personalized e-learning system can conduct personalized curriculum sequencing through simultaneously considering courseware difficulty level and the concept continuity of learning paths to support web-based learning. Experimental results indicated that applying the proposed genetic-based personalized e-learning system for web-based learning is superior to the freely browsing learning mode because of high quality and concise learning path for individual learners. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2946ab78f387ee6263759a3cc1fbef24",
"text": "We propose a multi-layer structure to mediate essential components in sound spatialization. This approach will facilitate artistic work with spatialization systems, a process which currently lacks structure, flexibility, and interoperability.",
"title": ""
},
{
"docid": "01202e09e54a1fc9f5b36d67fbbf3870",
"text": "This paper is intended to investigate the copper-graphene surface plasmon resonance (SPR)-based biosensor by considering the high adsorption efficiency of graphene. Copper (Cu) is used as a plasmonic material whereas graphene is used to prevent Cu from oxidation and enhance the reflectance intensity. Numerical investigation is performed using finite-difference-time-domain (FDTD) method by comparing the sensing performance such as reflectance intensity that explains the sensor sensitivity and the full-width-at-half-maximum (FWHM) of the spectrum for detection accuracy. The measurements were observed with various Cu thin film thicknesses ranging from 20nm to 80nm with 785nm operating wavelength. The proposed sensor shows that the 40nm-thick Cu-graphene (1 layer) SPR-based sensor gave better performance with narrower plasmonic spectrum line width (reflectance intensity of 91.2%) and better FWHM of 3.08°. The measured results also indicate that the Cu-graphene SPR-based sensor is suitable for detecting urea with refractive index of 1.49 in dielectric medium.",
"title": ""
},
{
"docid": "74f148aaf1dd6ee1fbfb4338aded64bf",
"text": "Complexity is crucial to characterize tasks performed by humans through computer systems. Yet, the theory and practice of crowdsourcing currently lacks a clear understanding of task complexity, hindering the design of effective and efficient execution interfaces or fair monetary rewards. To understand how complexity is perceived and distributed over crowdsourcing tasks, we instrumented an experiment where we asked workers to evaluate the complexity of 61 real-world re-instantiated crowdsourcing tasks. We show that task complexity, while being subjective, is coherently perceived across workers; on the other hand, it is significantly influenced by task type. Next, we develop a high-dimensional regression model, to assess the influence of three classes of structural features (metadata, content, and visual) on task complexity, and ultimately use them to measure task complexity. Results show that both the appearance and the language used in task description can accurately predict task complexity. Finally, we apply the same feature set to predict task performance, based on a set of 5 years-worth tasks in Amazon MTurk. Results show that features related to task complexity can improve the quality of task performance prediction, thus demonstrating the utility of complexity as a task modeling property.",
"title": ""
},
{
"docid": "7b35fd3b03da392ecdd997be16ed9040",
"text": "Sampling based planners have become increasingly efficient in solving the problems of classical motion planning and its applications. In particular, techniques based on the rapidly-exploring random trees (RRTs) have generated highly successful single-query planners. Recently, a variant of this planner called dynamic-domain RRT was introduced by Yershova et al. (2005). It relies on a new sampling scheme that improves the performance of the RRT approach on many motion planning problems. One of the drawbacks of this method is that it introduces a new parameter that requires careful tuning. In this paper we analyze the influence of this parameter and propose a new variant of the dynamic-domain RRT, which iteratively adapts the sampling domain for the Voronoi region of each node during the search process. This allows automatic tuning of the parameter and significantly increases the robustness of the algorithm. The resulting variant of the algorithm has been tested on several path planning problems.",
"title": ""
},
{
"docid": "a3f06bfcc2034483cac3ee200803878c",
"text": "This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full details (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques.",
"title": ""
},
{
"docid": "c724ba8456a0e19fc440ff4d7297faee",
"text": "Digital camera sensors are sensitive to wavelengths ranging from the ultraviolet (200-400nm) to the near-infrared (700-100nm) bands. This range is, however, reduced because the aim of photographic cameras is to capture and reproduce the visible spectrum (400-700nm) only. Ultraviolet radiation is filtered out by the optical elements of the camera, while a specifically designed “hot-mirror” is placed in front of the sensor to prevent near-infrared contamination of the visible image. We propose that near-infrared data can actually prove remarkably useful in colour constancy, to estimate the incident illumination as well as providing to detect the location of different illuminants in a multiply lit scene. Looking at common illuminants spectral power distribution show that very strong differences exist between the near-infrared and visible bands, e.g., incandescent illumination peaks in the near-infrared while fluorescent sources are mostly confined to the visible band. We show that illuminants can be estimated by simply looking at the ratios of two images: a standard RGB image and a near-infrared only image. As the differences between illuminants are amplified in the near-infrared, this estimation proves to be more reliable than using only the visible band. Furthermore, in most multiple illumination situations one of the light will be predominantly near-infrared emitting (e.g., flash, incandescent) while the other will be mostly visible emitting (e.g., fluorescent, skylight). Using near-infrared and RGB image ratios allow us to accurately pinpoint the location of diverse illuminant and recover a lighting map.",
"title": ""
},
{
"docid": "eb72c4bfa65b25785b9a23ca9cd56cc0",
"text": "The cortical anatomy of the conscious resting state (REST) was investigated using a meta-analysis of nine positron emission tomography (PET) activation protocols that dealt with different cognitive tasks but shared REST as a common control state. During REST, subjects were in darkness and silence, and were instructed to relax, refrain from moving, and avoid systematic thoughts. Each protocol contrasted REST to a different cognitive task consisting either of language, mental imagery, mental calculation, reasoning, finger movement, or spatial working memory, using either auditory, visual or no stimulus delivery, and requiring either vocal, motor or no output. A total of 63 subjects and 370 spatially normalized PET scans were entered in the meta-analysis. Conjunction analysis revealed a network of brain areas jointly activated during conscious REST as compared to the nine cognitive tasks, including the bilateral angular gyrus, the left anterior precuneus and posterior cingulate cortex, the left medial frontal and anterior cingulate cortex, the left superior and medial frontal sulcus, and the left inferior frontal cortex. These results suggest that brain activity during conscious REST is sustained by a large scale network of heteromodal associative parietal and frontal cortical areas, that can be further hierarchically organized in an episodic working memory parieto-frontal network, driven in part by emotions, working under the supervision of an executive left prefrontal network.",
"title": ""
},
{
"docid": "ba457819a7375c5dfee9ab870c56cc55",
"text": "A biometric system is vulnerable to a variety of attacks aimed at undermining the integrity of the authentication process. These attacks are intended to either circumvent the security afforded by the system or to deter the normal functioning of the system. We describe the various threats that can be encountered by a biometric system. We specifically focus on attacks designed to elicit information about the original biometric data of an individual from the stored template. A few algorithms presented in the literature are discussed in this regard. We also examine techniques that can be used to deter or detect these attacks. Furthermore, we provide experimental results pertaining to a hybrid system combining biometrics with cryptography, that converts traditional fingerprint templates into novel cryptographic structures.",
"title": ""
},
{
"docid": "e26d52cdc3636e3034d76bc684b9dc95",
"text": "The problem of cross-modal retrieval from multimedia repositories is considered. This problem addresses the design of retrieval systems that support queries across content modalities, for example, using an image to search for texts. A mathematical formulation is proposed, equating the design of cross-modal retrieval systems to that of isomorphic feature spaces for different content modalities. Two hypotheses are then investigated regarding the fundamental attributes of these spaces. The first is that low-level cross-modal correlations should be accounted for. The second is that the space should enable semantic abstraction. Three new solutions to the cross-modal retrieval problem are then derived from these hypotheses: correlation matching (CM), an unsupervised method which models cross-modal correlations, semantic matching (SM), a supervised technique that relies on semantic representation, and semantic correlation matching (SCM), which combines both. An extensive evaluation of retrieval performance is conducted to test the validity of the hypotheses. All approaches are shown successful for text retrieval in response to image queries and vice versa. It is concluded that both hypotheses hold, in a complementary form, although evidence in favor of the abstraction hypothesis is stronger than that for correlation.",
"title": ""
}
] |
scidocsrr
|
4a6c6c1cf3752da9122e92529d027554
|
Multiview Cross-supervision for Semantic Segmentation
|
[
{
"docid": "10c357d046dbf27cab92b1c3f91affb1",
"text": "We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling 1. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels. We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models.",
"title": ""
},
{
"docid": "98e557f291de3b305a91e47f59a9ed34",
"text": "We propose SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frameto-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates. The model can be trained with various degrees of supervision: 1) self-supervised by the reprojection photometric error (completely unsupervised), 2) supervised by ego-motion (camera motion), or 3) supervised by depth (e.g., as provided by RGBD sensors). SfMNet extracts meaningful depth estimates and successfully estimates frame-to-frame camera rotations and translations. It often successfully segments the moving objects in the scene, even though such supervision is never provided.",
"title": ""
}
] |
[
{
"docid": "7dde24346f2df846b9dbbe45cd9a99d6",
"text": "The Pemberton Happiness Index (PHI) is a recently developed integrative measure of well-being that includes components of hedonic, eudaimonic, social, and experienced well-being. The PHI has been validated in several languages, but not in Portuguese. Our aim was to cross-culturally adapt the Universal Portuguese version of the PHI and to assess its psychometric properties in a sample of the Brazilian population using online surveys.An expert committee evaluated 2 versions of the PHI previously translated into Portuguese by the original authors using a standardized form for assessment of semantic/idiomatic, cultural, and conceptual equivalence. A pretesting was conducted employing cognitive debriefing methods. In sequence, the expert committee evaluated all the documents and reached a final Universal Portuguese PHI version. For the evaluation of the psychometric properties, the data were collected using online surveys in a cross-sectional study. The study population included healthcare professionals and users of the social network site Facebook from several Brazilian geographic areas. In addition to the PHI, participants completed the Satisfaction with Life Scale (SWLS), Diener and Emmons' Positive and Negative Experience Scale (PNES), Psychological Well-being Scale (PWS), and the Subjective Happiness Scale (SHS). Internal consistency, convergent validity, known-group validity, and test-retest reliability were evaluated. Satisfaction with the previous day was correlated with the 10 items assessing experienced well-being using the Cramer V test. Additionally, a cut-off value of PHI to identify a \"happy individual\" was defined using receiver-operating characteristic (ROC) curve methodology.Data from 1035 Brazilian participants were analyzed (health professionals = 180; Facebook users = 855). Regarding reliability results, the internal consistency (Cronbach alpha = 0.890 and 0.914) and test-retest (intraclass correlation coefficient = 0.814) were both considered adequate. Most of the validity hypotheses formulated a priori (convergent and know-group) was further confirmed. The cut-off value of higher than 7 in remembered PHI was identified (AUC = 0.780, sensitivity = 69.2%, specificity = 78.2%) as the best one to identify a happy individual.We concluded that the Universal Portuguese version of the PHI is valid and reliable for use in the Brazilian population using online surveys.",
"title": ""
},
{
"docid": "beb8cb3566af719308c9ec249c955ff0",
"text": " Abstract—This article presents the review of the computing models applied for solving problems of midterm load forecasting. The load forecasting results can be used in electricity generation such as energy reservation and maintenance scheduling. Principle, strategy and results of short term, midterm, and long term load forecasting using statistic methods and artificial intelligence technology (AI) are summaried, Which, comparison between each method and the articles have difference feature input and strategy. The last, will get the idea or literature review conclusion to solve the problem of mid term load forecasting (MTLF).",
"title": ""
},
{
"docid": "6e5c74562ed54f068217fe98cdba946d",
"text": "Consumer behavior is essentially a decision-making processes by consumers either individuals, groups, and organizations that includes the process of choosing, buying, obtaining, use of goods or services. The main question in consumer behavior research is how consumers make a purchase decision. This study indentifies factors that are statistically significant to impulsive buying to Kacang Garuda (Peanut) product of each gender in Surabaya. By using primary data with the population of people ages 18–40, collected with purposive sampling, by spreading questionnaire. Selected object is Garuda Peanut because peanut products are low involvement products that trigger the occurrence of impulsive buying behavior, as well as Garuda Peanut is the market leader for peanut products in Indonesia. This research limits the factors into three, product attractiveness attributed by unique and interesting package, attractive package color, and package size availability, word of mouth attributed by convincing salesman, info from relatives and info from friends, and quality attributed by reliability, conformance quality and durability. The data shows that product attractiveness and quality are significant in increasing the degree of impulsive buying to both gender, but word of mouth applies only to female gender.",
"title": ""
},
{
"docid": "b6821edd0b9a4912ace0896d516cde95",
"text": "In this paper we propose a trajectory planning approach for autonomous vehicles on structured road maps. Therefore we are using the well-known A∗ optimal path planning algorithm. We generate a safe optimal trajectory through a three-dimensional graph, considering the two-dimensional position and time. (1) The graph is generated dynamically with fixed time differences and flexible distances between nodes, based on the vehicle's velocity, using a structured road map. (2) Furthermore the position of dynamic obstacles is predicted over time along the road lanes. The proposed Flexible Unit A∗ (FU-A∗) algorithm was tested for real-time applications with execution times of less than 50 ms on the car's main computer. The feasibility and reliability of FU-A∗ is validated by implementing on simulated autonomous car of Freie university “MadeInGermany” using the roadmap of Tempelhof, Berlin.",
"title": ""
},
{
"docid": "d25a3d1a921d78c4e447c8e010647351",
"text": "In the TREC 2005 Spam Evaluation Track, a number of popular spam filters – all owing their heritage to Graham’s A Plan for Spam – did quite well. Machine learning techniques reported elsewhere to perform well were hardly represented in the participating filters, and not represented at all in the better results. A non-traditional technique Prediction by Partial Matching (PPM) – performed exceptionally well, at or near the top of every test. Are the TREC results an anomaly? Is PPM really the best method for spam filtering? How are these results to be reconciled with others showing that methods like Support Vector Machines (SVM) are superior? We address these issues by testing implementations of five different classification methods on the TREC public corpus using the online evaluation methodology introduced in TREC. These results are complemented with cross validation experiments, which facilitate a comparison of the methods considered in the study under different evaluation schemes, and also give insight into the nature and utility of the evaluation regimens themselves. For comparison with previously published results, we also conducted cross validation experiments on the Ling-Spam and PU1 datasets. These tests reveal substantial differences attributable to different test assumptions, in particular batch vs. on-line training and testing, the order of classification, and the method of tokenization. Notwithstanding these differences, the methods that perform well at TREC also perform well using established test methods and corpora. Two previously untested methods – one based on Dynamic Markov Compression and one using logistic regression – compare favorably with competing approaches.",
"title": ""
},
{
"docid": "88afb98c0406d7c711b112fbe2a6f25e",
"text": "This paper provides a new metric, knowledge management performance index (KMPI), for assessing the performance of a firm in its knowledge management (KM) at a point in time. Firms are assumed to have always been oriented toward accumulating and applying knowledge to create economic value and competitive advantage. We therefore suggest the need for a KMPI which we have defined as a logistic function having five components that can be used to determine the knowledge circulation process (KCP): knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. When KCP efficiency increases, KMPI will also expand, enabling firms to become knowledgeintensive. To prove KMPI’s contribution, a questionnaire survey was conducted on 101 firms listed in the KOSDAQ market in Korea. We associated KMPI with three financial measures: stock price, price earnings ratio (PER), and R&D expenditure. Statistical results show that the proposed KMPI can represent KCP efficiency, while the three financial performance measures are also useful. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a8a8656f2f7cdcab79662cb150c8effa",
"text": "As networks grow both in importance and size, there is an increasing need for effective security monitors such as Network Intrusion Detection System to prevent such illicit accesses. Intrusion Detection Systems technology is an effective approach in dealing with the problems of network security. In this paper, we present an intrusion detection model based on hybrid fuzzy logic and neural network. The key idea is to take advantage of different classification abilities of fuzzy logic and neural network for intrusion detection system. The new model has ability to recognize an attack, to differentiate one attack from another i.e. classifying attack, and the most important, to detect new attacks with high detection rate and low false negative. Training and testing data were obtained from the Defense Advanced Research Projects Agency (DARPA) intrusion detection evaluation data set.",
"title": ""
},
{
"docid": "834c8c425ce231a50c307df056fe7b7f",
"text": "We introduce a new model for building conditional generative models in a semisupervised setting to conditionally generate data given attributes by adapting the GAN framework. The proposed semi-supervised GAN (SS-GAN) model uses a pair of stacked discriminators to learn the marginal distribution of the data, and the conditional distribution of the attributes given the data respectively. In the semi-supervised setting, the marginal distribution (which is often harder to learn) is learned from the labeled + unlabeled data, and the conditional distribution is learned purely from the labeled data. Our experimental results demonstrate that this model performs significantly better compared to existing semi-supervised conditional GAN models.",
"title": ""
},
{
"docid": "5518e4814b9eb0cd90b3563bd33c0ddc",
"text": "Most machine-learning methods focus on classifying instances whose classes have already been seen in training. In practice, many applications require classifying instances whose classes have not been seen previously. Zero-shot learning is a powerful and promising learning paradigm, in which the classes covered by training instances and the classes we aim to classify are disjoint. In this paper, we provide a comprehensive survey of zero-shot learning. First of all, we provide an overview of zero-shot learning. According to the data utilized in model optimization, we classify zero-shot learning into three learning settings. Second, we describe different semantic spaces adopted in existing zero-shot learning works. Third, we categorize existing zero-shot learning methods and introduce representative methods under each category. Fourth, we discuss different applications of zero-shot learning. Finally, we highlight promising future research directions of zero-shot learning.",
"title": ""
},
{
"docid": "7c41dcf173b9873992bd3add37800ac7",
"text": "Community question answering (CQA) represents the type of Web applications where people can exchange knowledge via asking and answering questions. One significant challenge of most real-world CQA systems is the lack of effective matching between questions and the potential good answerers, which adversely affects the efficient knowledge acquisition and circulation. On the one hand, a requester might experience many low-quality answers without receiving a quality response in a brief time; on the other hand, an answerer might face numerous new questions without being able to identify the questions of interest quickly. Under this situation, expert recommendation emerges as a promising technique to address the above issues. Instead of passively waiting for users to browse and find their questions of interest, an expert recommendation method raises the attention of users to the appropriate questions actively and promptly. The past few years have witnessed considerable efforts that address the expert recommendation problem from different perspectives. These methods all have their issues that need to be resolved before the advantages of expert recommendation can be fully embraced. In this survey, we first present an overview of the research efforts and state-of-the-art techniques for the expert recommendation in CQA. We next summarize and compare the existing methods concerning their advantages and shortcomings, followed by discussing the open issues and future research directions.",
"title": ""
},
{
"docid": "ea8685f27096f3e3e589ea8af90e78f5",
"text": "Acoustic data transmission is a technique to embed the data in a sound wave imperceptibly and to detect it at the receiver. This letter proposes a novel acoustic data transmission system designed based on the modulated complex lapped transform (MCLT). In the proposed system, data is embedded in an audio file by modifying the phases of the original MCLT coefficients. The data can be transmitted by playing the embedded audio and extracting it from the received audio. By embedding the data in the MCLT domain, the perceived quality of the resulting audio could be kept almost similar as the original audio. The system can transmit data at several hundreds of bits per second (bps), which is sufficient to deliver some useful short messages.",
"title": ""
},
{
"docid": "82d7a2b6045e90731d510ce7cce1a93c",
"text": "INTRODUCTION\nExtracellular vesicles (EVs) are critical mediators of intercellular communication, capable of regulating the transcriptional landscape of target cells through horizontal transmission of biological information, such as proteins, lipids, and RNA species. This capability highlights their potential as novel targets for disease intervention. Areas covered: This review focuses on the emerging importance of discovery proteomics (high-throughput, unbiased quantitative protein identification) and targeted proteomics (hypothesis-driven quantitative protein subset analysis) mass spectrometry (MS)-based strategies in EV biology, especially exosomes and shed microvesicles. Expert commentary: Recent advances in MS hardware, workflows, and informatics provide comprehensive, quantitative protein profiling of EVs and EV-treated target cells. This information is seminal to understanding the role of EV subtypes in cellular crosstalk, especially when integrated with other 'omics disciplines, such as RNA analysis (e.g., mRNA, ncRNA). Moreover, high-throughput MS-based proteomics promises to provide new avenues in identifying novel markers for detection, monitoring, and therapeutic intervention of disease.",
"title": ""
},
{
"docid": "688b702425c53e844d28758182306ce1",
"text": "DRAM is a precious resource in extreme-scale machines and is increasingly becoming scarce, mainly due to the growing number of cores per node. On future multi-petaflop and exaflop machines, the memory pressure is likely to be so severe that we need to rethink our memory usage models. Fortunately, the advent of non-volatile memory (NVM) offers a unique opportunity in this space. Current NVM offerings possess several desirable properties, such as low cost and power efficiency, but suffer from high latency and lifetime issues. We need rich techniques to be able to use them alongside DRAM. In this paper, we propose a novel approach for exploiting NVM as a secondary memory partition so that applications can explicitly allocate and manipulate memory regions therein. More specifically, we propose an NVMalloc library with a suite of services that enables applications to access a distributed NVM storage system. We have devised ways within NVMalloc so that the storage system, built from compute node-local NVM devices, can be accessed in a byte-addressable fashion using the memory mapped I/O interface. Our approach has the potential to re-energize out-of-core computations on large-scale machines by having applications allocate certain variables through NVMalloc, thereby increasing the overall memory capacity available. Our evaluation on a 128-core cluster shows that NVMalloc enables applications to compute problem sizes larger than the physical memory in a cost-effective manner. It can bring more performance/efficiency gain with increased computation time between NVM memory accesses or increased data access locality. In addition, our results suggest that while NVMalloc enables transparent access to NVM-resident variables, the explicit control it provides is crucial to optimize application performance.",
"title": ""
},
{
"docid": "d669dfcdc2486314bd7234e1f42357de",
"text": "The Luneburg lens (LL) represents a very attractive candidate for many applications such as multibeam antennas, multifrequency scanning, and spatial scanning, due to its focusing properties. Indeed, it is a dielectric sphere on which each surface point is a frequency-independent perfect focusing point. This is produced by its index governing law n, which follows the radial distribution n/sup 2/=2-r/sup 2/, where r is the normalized radial position. Practically, an LL is manufactured as a finite number of concentric homogeneous dielectric shells - this is called a discrete LL. The inaccuracies in the curved shell manufacturing process produce intershell air gaps, which degrade the performance of the lens. Furthermore, this requires different materials whose relative dielectric constant covers the range 1-2. The paper proposes a new LL manufacturing process to avoid these drawbacks. The paper describe the theoretical background and the performance of the obtained lens.",
"title": ""
},
{
"docid": "2fec6840021460bc629572f4fae6fc35",
"text": "It is anticipated that SDN coupled with NFV and cloud computing, will become a critical enabling technology to radically revolutionize the way network operators will architect and monetize their infrastructure. On the other hand, the Internet of Things (IoT) is transforming the interaction between cyberspace and the physical space with a tremendous impact on everyday life. The effectiveness of these technologies will require new methodological and engineering approaches due to the impressive scale of the problem and the new challenging requests in terms of performance, security and reliability. This paper presents a simple and general SDN-IoT architecture with NFV implementation with specific choices on where and how to adopt SDN and NFV approaches to address the new challenges of the Internet of Things. The architecture will accelerate innovations in the IoT sector, thanks to its flexibility opening new perspectives for fast deployment of software-enabled worldwide services. The paper also look at the business perspective by considering SDN and NFV as enablers of new added value services on top to the existing infrastructure providing more opportunities for revenues leveraging fast deployed services in the value chain.",
"title": ""
},
{
"docid": "240d47115c8bbf98e15ca4acae13ee62",
"text": "A trusted and active community aided and supported by the Internet of Things (IoT) is a key factor in food waste reduction and management. This paper proposes an IoT based context aware framework which can capture real-time dynamic requirements of both vendors and consumers and perform real-time match-making based on captured data. We describe our proposed reference framework and the notion of smart food sharing containers as enabling technology in our framework. A prototype system demonstrates the feasibility of a proposed approach using a smart container with embedded sensors.",
"title": ""
},
{
"docid": "386428ddfca099e7d1d2cbb88085ee83",
"text": "We tested the predictions of 2 explanations for retrieval-based learning; while the elaborative retrieval hypothesis assumes that the retrieval of studied information promotes the generation of semantically related information, which aids in later retrieval (Carpenter, 2009), the episodic context account proposed by Karpicke, Lehman, and Aue (in press) assumes that retrieval alters the representation of episodic context and improves one's ability to guide memory search on future tests. Subjects studied multiple word lists and either recalled each list (retrieval practice), did a math task (control), or generated associates for each word (elaboration) after each list. After studying the last list, all subjects recalled the list and, after a 5-min delay, recalled all lists. Analyses of correct recall, intrusions, response times, and temporal clustering dissociate retrieval practice from elaboration, supporting the episodic context account.",
"title": ""
},
{
"docid": "c0dd3979344c5f327fe447f46c13cffc",
"text": "Clinicians and researchers often ask patients to remember their past pain. They also use patient's reports of relief from pain as evidence of treatment efficacy, assuming that relief represents the difference between pretreatment pain and present pain. We have estimated the accuracy of remembering pain and described the relationship between remembered pain, changes in pain levels and reports of relief during treatment. During a 10-week randomized controlled clinical trial on the effectiveness of oral appliances for the management of chronic myalgia of the jaw muscles, subjects recalled their pretreatment pain and rated their present pain and perceived relief. Multiple regression analysis and repeated measures analyses of variance (ANOVA) were used for data analysis. Memory of the pretreatment pain was inaccurate and the errors in recall got significantly worse with the passage of time (P < 0.001). Accuracy of recall for pretreatment pain depended on the level of pain before treatment (P < 0.001): subjects with low pretreatment pain exaggerated its intensity afterwards, while it was underestimated by those with the highest pretreatment pain. Memory of pretreatment pain was also dependent on the level of pain at the moment of recall (P < 0.001). Ratings of relief increased over time (P < 0.001), and were dependent on both present and remembered pain (Ps < 0.001). However, true changes in pain were not significantly related to relief scores (P = 0.41). Finally, almost all patients reported relief, even those whose pain had increased. These results suggest that reports of perceived relief do not necessarily reflect true changes in pain.",
"title": ""
},
{
"docid": "2711d38ab9d5bcc8cd4a123630344fbf",
"text": "Using CMOS-MEMS micromachining techniques we have constructed a prototype earphone that is audible from 1 to 15 kHz. The fabrication of the acoustic membrane consists of only two steps in addition to the prior post-CMOS micromachining steps developed at CMU. The ability to build a membrane directly on a standard CMOS chip, integrating mechanical structures with signal processing electronics will enable a variety of applications including economical earphones, microphones, hearing aids, high-fidelity earphones, cellular phones and noise cancellation. The large compliance of the CMOS-MEMS membrane also promises application as a sensitive microphone and pressure sensor.",
"title": ""
}
] |
scidocsrr
|
737ee7b27327ca4c466a3b51f888e88d
|
Machine Learning and Knowledge Discovery in Databases
|
[
{
"docid": "d0bb1b3fc36016b166eb9ed25cb7ee61",
"text": "Informed driving is increasingly becoming a key feature for increasing the sustainability of taxi companies. The sensors that are installed in each vehicle are providing new opportunities for automatically discovering knowledge, which, in return, delivers information for real-time decision making. Intelligent transportation systems for taxi dispatching and for finding time-saving routes are already exploring these sensing data. This paper introduces a novel methodology for predicting the spatial distribution of taxi-passengers for a short-term time horizon using streaming data. First, the information was aggregated into a histogram time series. Then, three time-series forecasting techniques were combined to originate a prediction. Experimental tests were conducted using the online data that are transmitted by 441 vehicles of a fleet running in the city of Porto, Portugal. The results demonstrated that the proposed framework can provide effective insight into the spatiotemporal distribution of taxi-passenger demand for a 30-min horizon.",
"title": ""
},
{
"docid": "55694b963cde47e9aecbeb21fb0e79cf",
"text": "The rise of Uber as the global alternative taxi operator has attracted a lot of interest recently. Aside from the media headlines which discuss the new phenomenon, e.g. on how it has disrupted the traditional transportation industry, policy makers, economists, citizens and scientists have engaged in a discussion that is centred around the means to integrate the new generation of the sharing economy services in urban ecosystems. In this work, we aim to shed new light on the discussion, by taking advantage of a publicly available longitudinal dataset that describes the mobility of yellow taxis in New York City. In addition to movement, this data contains information on the fares paid by the taxi customers for each trip. As a result we are given the opportunity to provide a first head to head comparison between the iconic yellow taxi and its modern competitor, Uber, in one of the world’s largest metropolitan centres. We identify situations when Uber X, the cheapest version of the Uber taxi service, tends to be more expensive than yellow taxis for the same journey. We also demonstrate how Uber’s economic model effectively takes advantage of well known patterns in human movement. Finally, we take our analysis a step further by proposing a new mobile application that compares taxi prices in the city to facilitate traveller’s taxi choices, hoping to ultimately to lead to a reduction of commuter costs. Our study provides a case on how big datasets that become public can improve urban services for consumers by offering the opportunity for transparency in economic sectors that lack up to date regulations.",
"title": ""
}
] |
[
{
"docid": "7d3c07b505e27fdfea4ada999a233169",
"text": "Discriminatively trained undirected graphical models have had wide empirical success, and there has been increasing interest in toolkits that ease their application to complex relational data. The power in relational models is in their repeated structure and tied parameters; at issue is how to define these structures in a powerful and flexible way. Rather than using a declarative language, such as SQL or first-order logic, we advocate using an imperative language to express various aspects of model structure, inference, and learning. By combining the traditional, declarative, statistical semantics of factor graphs with imperative definitions of their construction and operation, we allow the user to mix declarative and procedural domain knowledge, and also gain significant efficiencies. We have implemented such imperatively defined factor graphs in a system we call FACTORIE, a software library for an object-oriented, strongly-typed, functional language. In experimental comparisons to Markov Logic Networks on joint segmentation and coreference, we find our approach to be 3-15 times faster while reducing error by 20-25%—achieving a new state of the art.",
"title": ""
},
{
"docid": "d8480f49edcc9034511698d5810ad839",
"text": "Defect prediction on new projects or projects with limited historical data is an interesting problem in defect prediction studies. This is largely because it is difficult to collect defect information to label a dataset for training a prediction model. Cross-project defect prediction (CPDP) has tried to solve this problem by reusing prediction models built by other projects that have enough historical data. However, CPDP does not always build a strong prediction model because of the different distributions among datasets. Approaches for defect prediction on unlabeled datasets have also tried to address the problem by adopting unsupervised learning but it has one major limitation, the necessity for manual effort. In this study, we propose novel approaches, CLA and CLAMI, that show the potential for defect prediction on unlabeled datasets in an automated manner without need for manual effort. The key idea of the CLA and CLAMI approaches is to label an unlabeled dataset by using the magnitude of metric values. In our empirical study on seven open-source projects, the CLAMI approach led to the promising prediction performances, 0.636 and 0.723 in average f-measure and AUC, that are comparable to those of defect prediction based on supervised learning.",
"title": ""
},
{
"docid": "4c20c48a5b1d86930c7e3cc9e6d8aa11",
"text": "Although transnational corporations play the crucial role as transplanters of technology, skills and access to the world market, how they facilitate structural upgrading and economic growth in developing countries has not been adequately conceptualized in terms of a theory of economic development. This article develops a dynamic paradigm o TNC-assisted development by recognizing five key structural characteristics of the global economy as underlying determinants. The phenomena of trade augmentation through foreign direct investment, increasing factor incongruity, and localized (but increasingly transnationalized learning and technological accumulation are identified as three principles that govern the process of rapid growth in the labour-driven stage of economic development and, eventually, the emergence of TNCs from the developing countries themselves also plays a role in this process.",
"title": ""
},
{
"docid": "1dad20d7f19e20945e9ad28aa5a70d93",
"text": "Article history: Received 3 January 2016 Received in revised form 9 June 2017 Accepted 26 September 2017 Available online 16 October 2017",
"title": ""
},
{
"docid": "17d06584c35a9879b0bd4b653ff64b40",
"text": "We present a solution to the rolling shutter (RS) absolute camera pose problem with known vertical direction. Our new solver, R5Pup, is an extension of the general minimal solution R6P, which uses a double linearized RS camera model initialized by the standard perspective P3P. Here, thanks to using known vertical directions, we avoid double linearization and can get the camera absolute pose directly from the RS model without the initialization by a standard P3P. Moreover, we need only five 2D-to-3D matches while R6P needed six such matches. We demonstrate in simulated and real experiments that our new R5Pup is robust, fast and a very practical method for absolute camera pose computation for modern cameras on mobile devices. We compare our R5Pup to the state of the art RS and perspective methods and demonstrate that it outperforms them when vertical direction is known in the range of accuracy available on modern mobile devices. We also demonstrate that when using R5Pup solver in structure from motion (SfM) pipelines, it is better to transform already reconstructed scenes into the standard position, rather than using hard constraints on the verticality of up vectors.",
"title": ""
},
{
"docid": "608bf85fa593c7ddff211c5bcc7dd20a",
"text": "We introduce a composite deep neural network architecture for supervised and language independent context sensitive lemmatization. The proposed method considers the task as to identify the correct edit tree representing the transformation between a word-lemma pair. To find the lemma of a surface word, we exploit two successive bidirectional gated recurrent structures the first one is used to extract the character level dependencies and the next one captures the contextual information of the given word. The key advantages of our model compared to the state-of-the-art lemmatizers such as Lemming and Morfette are (i) it is independent of human decided features (ii) except the gold lemma, no other expensive morphological attribute is required for joint learning. We evaluate the lemmatizer on nine languages Bengali, Catalan, Dutch, Hindi, Hungarian, Italian, Latin, Romanian and Spanish. It is found that except Bengali, the proposed method outperforms Lemming and Morfette on the other languages. To train the model on Bengali, we develop a gold lemma annotated dataset1 (having 1, 702 sentences with a total of 20, 257 word tokens), which is an additional contribution of this work.",
"title": ""
},
{
"docid": "fef5bf498eb0da7a62a2bc1433e9bd5f",
"text": "The “CRC Handbook” is well-known to anyone who has taken a college chemistry course, and CRC Press has traded on this name-familiarity to greatly expand its “Handbook” series. One of the newest entries to join titles such as the Handbook of Combinatorial Designs, the Handbook of Exact Solutions to Ordinary Differential Equations and the Handbook of Edible Weeds, is the Handbook of Graph Theory. Its editors will be familiar to many as the authors of the textbook, Graph Theory and Its Applications, which is also published by CRC Press. The handbooks about mathematics typically strive for comprehensiveness in a concise style, with sections contributed by specialists within subdisciplines. This volume runs to 1167 pages with 60 contributors providing 54 sections, organized into 11 chapters. As an indication of the topics covered, the chapter titles are Introduction to Graphs; Graph Representation; Directed Graphs; Connectivity and Traversability; Colorings and Related Topics; Algebraic Graph Theory; Topological Graph Theory; Analytic Graph Theory; Graphical Measurement; Graphs in Computer Science; Networks and Flows. Each section is organized into subsections that begin with the basic definitions and ideas, provide a few key examples and conclude with a list of facts (theorems) and remarks. Each of these items is referenced with a label (e.g. 7.7.3.F29 is the 29th Fact of Section 7.7, and can be found in Subsection 7.7.3). This makes for easy crossreferencing within the volume, and provides an easy reference system for the reader’s own use. Sections conclude with references to monographs and important research articles. And on occasion there are conjectures or open problems listed too. The author of every section has provided a glossary, which the editors have coalesced into separate glossaries for each of the eleven chapters. The editors have also strived for uniform terminology and notation throughout, and where this is impossible, the distinctions, subtleties or conflicts between subdisciplines have been carefully highlighted. These types of handbooks shine when one cannot remember that the Ramsey number R(5, 14) is only known to be bounded between 221 and 1280, or one cannot recall (or never knew) what an irredundance number is. For these sorts of questions, the believable claim of 90% content coverage should guarantee frequent success when it is consulted. The listed facts never include any proofs, and many do not include any reference to the literature. Presumably some of them are trivialities, but they could all use some pointer to where one can find a proof. The editors are proud of how long the bibliographies are, but sometimes they are too short. In most every case, there could be more guidance about which elements of the bibliography are the most useful for further general investigations into a topic. An advanced graduate student or researcher of graph theory will find a book of this sort invaluable. Within their specialty the coverage might be considered skimpy. However, for those occasions when ideas or results from an allied specialty are of interest, or only if one is curious about exactly what some topic involves, or what is known about it, then consulting this volume will answer many simple questions quickly. Similarly, someone in a related discipline, such as cryptography or computer science, whose work requires some knowledge of the state-of-the-art in graph theory, will also find this a good volume to consult for quick, easily located, answers. 
Given that it summarizes a field where over 1,000 papers are published each year, it is a must-have for the well-equipped mathematics research library.",
"title": ""
},
{
"docid": "2df1087f3125f6a2f8acd67649bcc87f",
"text": "CubeSats are positioned to play a key role in Earth Science, wherein multiple copies of the same RADAR instrument are launched in desirable formations, allowing for the measurement of atmospheric processes over a short evolutionary timescale. To achieve this goal, such CubeSats require a high-gain antenna (HGA) that fits in a highly constrained volume. This paper presents a novel mesh deployable Ka-band antenna design that folds in a 1.5 U (10 × 10 × 15 cm3) stowage volume suitable for 6 U (10 × 20 × 30 cm3) class CubeSats. Considering all aspects of the deployable mesh reflector antenna including the feed, detailed simulations and measurements show that 42.6-dBi gain and 52% aperture efficiency is achievable at 35.75 GHz. The mechanical deployment mechanism and associated challenges are also described, as they are critical components of a deployable CubeSat antenna. Both solid and mesh prototype antennas have been developed and measurement results show excellent agreement with simulations.",
"title": ""
},
{
"docid": "89cbba967d51d7b057f00ae1eecfe226",
"text": "This paper examines the connection among corporate bonds, stocks, and Treasury bonds under the Merton model with stochastic interest rate, focusing in particular on the volatility of corporate bonds and its connection to the equity volatility of the same firm and the Treasury bond volatility. For a broad cross-section of corporate bonds from 2002 through 2006, empirical measures of bond volatility are constructed using bond returns over daily, weekly, and monthly horizons. Comparing the empirical volatility with its model-implied counterpart, we find an overwhelming degree of excess volatility that is difficult to be explained by a default-based model. This excess volatility is found to be the strongest at the daily and weekly horizons, indicating a more pronounced liquidity component in corporate bonds at short horizons. At the monthly horizon, the excess volatility tapers off but remains significant. Moreover, we find that variables known to be linked to bond liquidity are important in explaining the cross-sectional variations in excess volatility, providing further evidence of a liquidity problem in corporate bonds. Finally, subtracting the equity and Treasury exposures from corporate bond returns, we find a non-trivial systematic component in the bond residuals that give rise to the excess volatility. ∗Bao is at the MIT Sloan School of Management, jackbao@mit.edu. Pan is at the MIT Sloan School of Management and NBER, junpan@mit.edu. We benefit from discussions with Jiang Wang; seminar participants at the MIT Finance Lunch and Boston University (Economics). We thank Duncan Ma for assistance in gathering Bloomberg data, and financial support from the outreach program of J.P. Morgan.",
"title": ""
},
{
"docid": "ea8685f27096f3e3e589ea8af90e78f5",
"text": "Acoustic data transmission is a technique to embed the data in a sound wave imperceptibly and to detect it at the receiver. This letter proposes a novel acoustic data transmission system designed based on the modulated complex lapped transform (MCLT). In the proposed system, data is embedded in an audio file by modifying the phases of the original MCLT coefficients. The data can be transmitted by playing the embedded audio and extracting it from the received audio. By embedding the data in the MCLT domain, the perceived quality of the resulting audio could be kept almost similar as the original audio. The system can transmit data at several hundreds of bits per second (bps), which is sufficient to deliver some useful short messages.",
"title": ""
},
{
"docid": "93ae39ed7b4d6b411a2deb9967e2dc7d",
"text": "This paper presents fundamental results about how zero-curvature (paper) surfaces behave near creases and apices of cones. These entities are natural generalizations of the edges and vertices of piecewise-planar surfaces. Consequently, paper surfaces may furnish a richer and yet still tractable class of surfaces for computer-aided design and computer graphics applications than do polyhedral surfaces.",
"title": ""
},
{
"docid": "047c36e2650b8abde75cccaeb0368c88",
"text": "Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-build 3D fully convolutional network (FCN) that can process a 3D image including the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture; one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 ± 3.8 (range [79.8, 94.8])% in testing, achieving the new state-of-the-art performance in pancreas segmentation on this dataset.",
"title": ""
},
{
"docid": "ace9a52f9210102a6fa77022032b266e",
"text": "Resorption of alveolar bone subsequent to extraction resulting in loss of alveolar bone height and width is sequelae that pose difficulty in placement of an implant especially in esthetically important area like maxillary anterior region. Several approaches have been described in the literature to overcome the complications of alveolar ridge resorption and to preserve the ridge like the hard and soft tissue augmentation with GBR, bone substitutes with or without immediate implant placement. An ideal method should always be cost effective and minimally invasive. Socket shield technique is one such new method where buccal root segment is retained as a shield which prevents resorption and achieves complete alveolar ridge preservation.",
"title": ""
},
{
"docid": "c9b7ad5ce16e96d611c608b78d5549f0",
"text": "Deep Neural Networks (DNNs) thrive in recent years in which Batch Normalization (BN) plays an indispensable role. However, it has been observed that BN is costly due to the reduction operations. In this paper, we propose alleviating this problem through sampling only a small fraction of data for normalization at each iteration. Specifically, we model it as a statistical sampling problem and identify that by sampling less correlated data, we can largely reduce the requirement of the number of data for statistics estimation in BN, which directly simplifies the reduction operations. Based on this conclusion, we propose two sampling strategies, “Batch Sampling” (randomly select several samples from each batch) and “Feature Sampling” (randomly select a small patch from each feature map of all samples), that take both computational efficiency and sample correlation into consideration. Furthermore, we introduce an extremely simple variant of BN, termed as Virtual Dataset Normalization (VDN), that can normalize the activations well with few synthetical random samples. All the proposed methods are evaluated on various datasets and networks, where an overall training speedup by up to 20% on GPU is practically achieved without the support of any specialized libraries, and the loss on accuracy and convergence rate are negligible. Finally, we extend our work to the “micro-batch normalization” problem and yield comparable performance with existing approaches at the case of tiny batch size.",
"title": ""
},
{
"docid": "aa5ea6c09a86348feb684ed1f85a5b59",
"text": "Neural networks are typically designed to deal with data in tensor forms. In this paper, we propose a novel neural network architecture accepting graphs of arbitrary structure. Given a dataset containing graphs in the form of (G, y) where G is a graph and y is its class, we aim to develop neural networks that read the graphs directly and learn a classification function. There are two main challenges: 1) how to extract useful features characterizing the rich information encoded in a graph for classification purpose, and 2) how to sequentially read a graph in a meaningful and consistent order. To address the first challenge, we design a localized graph convolution model and show its connection with two graph kernels. To address the second challenge, we design a novel SortPooling layer which sorts graph vertices in a consistent order so that traditional neural networks can be trained on the graphs. Experiments on benchmark graph classification datasets demonstrate that the proposed architecture achieves highly competitive performance with state-of-the-art graph kernels and other graph neural network methods. Moreover, the architecture allows end-to-end gradient-based training with original graphs, without the need to first transform graphs into vectors.",
"title": ""
},
{
"docid": "f16fd498b692875c3bd95460feaf06ec",
"text": "Raman and Fourier Transform Infrared (FT-IR) spectroscopy was used for assessment of structural differences of celluloses of various origins. Investigated celluloses were: bacterial celluloses cultured in presence of pectin and/or xyloglucan, as well as commercial celluloses and cellulose extracted from apple parenchyma. FT-IR spectra were used to estimate of the I(β) content, whereas Raman spectra were used to evaluate the degree of crystallinity of the cellulose. The crystallinity index (X(C)(RAMAN)%) varied from -25% for apple cellulose to 53% for microcrystalline commercial cellulose. Considering bacterial cellulose, addition of xyloglucan has an impact on the percentage content of cellulose I(β). However, addition of only xyloglucan or only pectins to pure bacterial cellulose both resulted in a slight decrease of crystallinity. However, culturing bacterial cellulose in the presence of mixtures of xyloglucan and pectins results in an increase of crystallinity. The results confirmed that the higher degree of crystallinity, the broader the peak around 913 cm(-1). Among all bacterial celluloses the bacterial cellulose cultured in presence of xyloglucan and pectin (BCPX) has the most similar structure to those observed in natural primary cell walls.",
"title": ""
},
{
"docid": "7f83aa38f6f715285b757e235da04257",
"text": "In recent researches on inverter-based distributed generators, disadvantages of traditional grid-connected current control, such as no grid-forming ability and lack of inertia, have been pointed out. As a result, novel control methods like droop control and virtual synchronous generator (VSG) have been proposed. In both methods, droop characteristics are used to control active and reactive power, and the only difference between them is that VSG has virtual inertia with the emulation of swing equation, whereas droop control has no inertia. In this paper, dynamic characteristics of both control methods are studied, in both stand-alone mode and synchronous-generator-connected mode, to understand the differences caused by swing equation. Small-signal models are built to compare transient responses of frequency during a small loading transition, and state-space models are built to analyze oscillation of output active power. Effects of delays in both controls are also studied, and an inertial droop control method is proposed based on the comparison. The results are verified by simulations and experiments. It is suggested that VSG control and proposed inertial droop control inherits the advantages of droop control, and in addition, provides inertia support for the system.",
"title": ""
},
{
"docid": "08cf1e6353fa3c9969188d946874c305",
"text": "In this paper we develop, analyze, and test a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the optional search phase of pattern search we apply a particle swarm scheme to globally explore the possible nonconvexity of the objective function. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods also based on function values.",
"title": ""
},
{
"docid": "c85e5745141e64e224a5c4c61f1b1866",
"text": "Crowd-sourcing has become a popular means of acquiring labeled data for many tasks where humans are more accurate than computers, such as image tagging, entity resolution, or sentiment analysis. However, due to the time and cost of human labor, solutions that solely rely on crowd-sourcing are often limited to small datasets (i.e., a few thousand items). This paper proposes algorithms for integrating machine learning into crowd-sourced databases in order to combine the accuracy of human labeling with the speed and cost-effectiveness of machine learning classifiers. By using active learning as our optimization strategy for labeling tasks in crowdsourced databases, we can minimize the number of questions asked to the crowd, allowing crowd-sourced applications to scale (i.e, label much larger datasets at lower costs). Designing active learning algorithms for a crowd-sourced database poses many practical challenges: such algorithms need to be generic, scalable, and easy-to-use for a broad range of practitioners, even those who are not machine learning experts. We draw on the theory of nonparametric bootstrap to design, to the best of our knowledge, the first active learning algorithms that meet all these requirements. Our results, on 3 real-world datasets collected with Amazon’s Mechanical Turk, and on 15 UCI datasets, show that our methods on average ask 1–2 orders of magnitude fewer questions than the baseline, and 4.5–44× fewer than existing active learning algorithms.",
"title": ""
}
] |
scidocsrr
|
9b7de044ed3d1c1d9d370e79e1cda105
|
Using Random Forest to Learn Imbalanced Data
|
[
{
"docid": "5a3b8a2ec8df71956c10b2eb10eabb99",
"text": "During a project examining the use of machine learning techniques for oil spill detection, we encountered several essential questions that we believe deserve the attention of the research community. We use our particular case study to illustrate such issues as problem formulation, selection of evaluation measures, and data preparation. We relate these issues to properties of the oil spill application, such as its imbalanced class distribution, that are shown to be common to many applications. Our solutions to these issues are implemented in the Canadian Environmental Hazards Detection System (CEHDS), which is about to undergo field testing.",
"title": ""
}
] |
[
{
"docid": "efe73e053983ba570342a9eea03216f7",
"text": "We present an approach to synthesizing photographic images conditioned on semantic layouts. Given a semantic label map, our approach produces an image with photographic appearance that conforms to the input layout. The approach thus functions as a rendering engine that takes a two-dimensional semantic specification of the scene and produces a corresponding photographic image. Unlike recent and contemporaneous work, our approach does not rely on adversarial training. We show that photographic images can be synthesized from semantic layouts by a single feedforward network with appropriate structure, trained end-to-end with a direct regression objective. The presented approach scales seamlessly to high resolutions; we demonstrate this by synthesizing photographic images at 2-megapixel resolution, the full resolution of our training data. Extensive perceptual experiments on datasets of outdoor and indoor scenes demonstrate that images synthesized by the presented approach are considerably more realistic than alternative approaches.",
"title": ""
},
{
"docid": "cf58d2d80764a5c8446d82b1b9499c00",
"text": "Estimation is a critical component of synchronization in wireless and signal processing systems. There is a rich body of work on estimator derivation, optimization, and statistical characterization from analytic system models which are used pervasively today. We explore an alternative approach to building estimators which relies principally on approximate regression using large datasets and large computationally efficient artificial neural network models capable of learning non-linear function mappings which provide compact and accurate estimates. For single carrier PSK modulation, we explore the accuracy and computational complexity of such estimators compared with the current gold-standard analytically derived alternatives. We compare performance in various wireless operating conditions and consider the trade offs between the two different classes of systems. Our results show the learned estimators can provide improvements in areas such as short-time estimation and estimation under non-trivial real world channel conditions such as fading or other non-linear hardware or propagation effects.",
"title": ""
},
{
"docid": "9296cf518b1b28862299e4a06d895761",
"text": "Introduction:\nEtiology of dental crowding may be related to arch constriction in diverse dimensions, and an appropriate manipulation of arch perimeter by intervening in basal bone discrepancies cases, may be a key for crowding relief, especially when incisors movement is limited due to underlying pathology, periodontal issues or restrictions related to soft tissue profile.\n\n\nObjectives: \nThis case report illustrates a 24-year old woman, with maxillary transverse deficiency, upper and lower arches crowding, Class II, division 1, subdivision right relationship, previous upper incisors traumatic episode and straight profile. A non-surgical and non-extraction treatment approach was feasible due to the miniscrew-assisted rapid palatal expansion technique (MARPE).\n\n\nMethods: \nThe MARPE appliance consisted of a conventional Hyrax expander supported by four orthodontic miniscrews. A slow expansion protocol was adopted, with an overall of 40 days of activation and a 3-month retention period. Intrusive traction miniscrew-anchored mechanics were used for correcting the Class II subdivision relationship, managing lower arch perimeter and midline deviation before including the upper central incisors.\n\n\nResults: \nPost-treatment records show an intermolar width increase of 5 mm, bilateral Class I molar and canine relationships, upper and lower crowding resolution, coincident dental midlines and proper intercuspation.\n\n\nConclusions: \nThe MARPE is an effective treatment approach for managing arch-perimeter deficiencies related to maxillary transverse discrepancies in adult patients.",
"title": ""
},
{
"docid": "e85b5115a489835bc58a48eaa727447a",
"text": "State-of-the art machine learning methods such as deep learning rely on large sets of hand-labeled training data. Collecting training data is prohibitively slow and expensive, especially when technical domain expertise is required; even the largest technology companies struggle with this challenge. We address this critical bottleneck with Snorkel, a new system for quickly creating, managing, and modeling training sets. Snorkel enables users to generate large volumes of training data by writing labeling functions, which are simple functions that express heuristics and other weak supervision strategies. These user-authored labeling functions may have low accuracies and may overlap and conflict, but Snorkel automatically learns their accuracies and synthesizes their output labels. Experiments and theory show that surprisingly, by modeling the labeling process in this way, we can train high-accuracy machine learning models even using potentially lower-accuracy inputs. Snorkel is currently used in production at top technology and consulting companies, and used by researchers to extract information from electronic health records, after-action combat reports, and the scientific literature. In this demonstration, we focus on the challenging task of information extraction, a common application of Snorkel in practice. Using the task of extracting corporate employment relationships from news articles, we will demonstrate and build intuition for a radically different way of developing machine learning systems which allows us to effectively bypass the bottleneck of hand-labeling training data.",
"title": ""
},
{
"docid": "c0fc94aca86a6aded8bc14160398ddea",
"text": "THE most persistent problems of recall all concern the ways in which past experiences and past reactions are utilised when anything is remembered. From a general point of view it looks as if the simplest explanation available is to suppose that when any specific event occurs some trace, or some group of traces, is made and stored up in the organism or in the mind. Later, an immediate stimulus re-excites the trace, or group of traces, and, provided a further assumption is made to the effect that the trace somehow carries with it a temporal sign, the re-excitement appears to be equivalent to recall. There is, of course, no direct evidence for such traces, but the assumption at first sight seems to be a very simple one, and so it has commonly been made.",
"title": ""
},
{
"docid": "1adad090fbb24e5f9a59928693388a64",
"text": "We present a privacy-preserving deep learning system in which many learning participants perform neural network-based deep learning over a combined dataset of all, without revealing the participants’ local data to a central server. To that end, we revisit the previous work by Shokri and Shmatikov (ACM CCS 2015) and show that, with their method, local data information may be leaked to an honest-but-curious server. We then fix that problem by building an enhanced system with the following properties: 1) no information is leaked to the server and 2) accuracy is kept intact, compared with that of the ordinary deep learning system also over the combined dataset. Our system bridges deep learning and cryptography: we utilize asynchronous stochastic gradient descent as applied to neural networks, in combination with additively homomorphic encryption. We show that our usage of encryption adds tolerable overhead to the ordinary deep learning system.",
"title": ""
},
{
"docid": "6eb9d8f22237bdc49570e219150d50b4",
"text": "Researchers in both machine translation (e.g., Brown et a/, 1990) arm bilingual lexicography (e.g., Klavans and Tzoukermarm, 1990) have recently become interested in studying parallel texts (also known as bilingual corpora), bodies of text such as the Canadian Hansards (parliamentary debates) which are available in multiple languages (such as French and English). Much of the current excitement surrounding parallel texts was initiated by Brown et aL (1990), who outline a selforganizing method for using these parallel texts to build a machine translation system.",
"title": ""
},
{
"docid": "98b4974b118ac3c6eabbd0edd98b638e",
"text": "A system that performs text categorization aims to assign appropriate categories from a predefined classification scheme to incoming documents. These assignments might be used for varied purposes such as filtering, or retrieval. This paper introduces a new effective model for text categorization with great corpus (more or less 1 million documents). Text categorization is performed using the Kullback-Leibler distance between the probability distribution of the document to classify and the probability distribution of each category. Using the same representation of categories, experiments show a significant improvement when the above mentioned method is used. KLD method achieve substantial improvements over the tfidf performing method.",
"title": ""
},
{
"docid": "96d2a6082de66034759b521547e8c8d2",
"text": "Recent developments in deep convolutional neural networks (DCNNs) have shown impressive performance improvements on various object detection/recognition problems. This has been made possible due to the availability of large annotated data and a better understanding of the nonlinear mapping between images and class labels, as well as the affordability of powerful graphics processing units (GPUs). These developments in deep learning have also improved the capabilities of machines in understanding faces and automatically executing the tasks of face detection, pose estimation, landmark localization, and face recognition from unconstrained images and videos. In this article, we provide an overview of deep-learning methods used for face recognition. We discuss different modules involved in designing an automatic face recognition system and the role of deep learning for each of them. Some open issues regarding DCNNs for face recognition problems are then discussed. This article should prove valuable to scientists, engineers, and end users working in the fields of face recognition, security, visual surveillance, and biometrics.",
"title": ""
},
{
"docid": "edb32fcaed1fd6d2c68eef127c04bf13",
"text": "Multiple logic-based reconstruction of conceptual data modelling languages such as EER, UML Class Diagrams, and ORM exists. They mainly cover various fragments of the languages and none are formalised such that the logic applies simultaneously for all three modelling language families as unifying mechanism. This hampers interchangeability, interoperability, and tooling support. In addition, due to the lack of a systematic design process of the logic used for the formalisation, hidden choices permeate the formalisations that have rendered them incompatible. We aim to address these problems, first, by structuring the logic design process in a methodological way. We generalise and extend the DSL design process to apply to logic language design more generally and, in particular, by incorporating an ontological analysis of language features in the process. Second, availing of this extended process, of evidence gathered of language feature usage, and of computational complexity insights from Description Logics (DL), we specify logic profiles taking into account the ontological commitments embedded in the languages. The profiles characterise the minimum logic structure needed to handle the semantics of conceptual models, enabling the development of interoperability tools. There is no known DL language that matches exactly the features of those profiles and the common core is small (in the tractable ALNI). Although hardly any inconsistencies can be derived with the profiles, it is promising for scalable runtime use of conceptual data models.",
"title": ""
},
{
"docid": "3a5d37570c54347840f5d3192b1b9008",
"text": "This thesis deals with linear transformations at various stages of the automatic speech recognition process. In current state-of-the-art speech recognition systems linear transformations are widely used to care for a potential mismatch of the training and testing data and thus enhance the recognition performance. A large number of approaches has been proposed in literature, though the connections between them have been disregarded so far. By developing a unified mathematical framework, close relationships between the particular approaches are identified and analyzed in detail. Mel frequency Cepstral coefficients (MFCC) are commonly used features for automatic speech recognition systems. The traditional way of computing MFCCs suffers from a twofold smoothing, which complicates both the MFCC computation and the system optimization. An improved approach is developed that does not use any filter bank and thus avoids the twofold smoothing. This integrated approach allows a very compact implementation and needs less parameters to be optimized. Starting from this new computation scheme for MFCCs, it is proven analytically that vocal tract normalization (VTN) equals a linear transformation in the Cepstral space for arbitrary invertible warping functions. The transformation matrix for VTN is explicitly calculated exemplary for three commonly used warping functions. Based on some general characteristics of typical VTN warping functions, a common structure of the transformation matrix is derived that is almost independent of the specific functional form of the warping function. By expressing VTN as a linear transformation it is possible, for the first time, to take the Jacobian determinant of the transformation into account for any warping function. The effect of considering the Jacobian determinant on the warping factor estimation is studied in detail. The second part of this thesis deals with a special linear transformation for speaker adaptation, the Maximum Likelihood Linear Regression (MLLR) approach. Based on the close interrelationship between MLLR and VTN proven in the first part, the general structure of the VTN matrix is adopted to restrict the MLLR matrix to a band structure, which significantly improves the MLLR adaptation for the case of limited available adaptation data. Finally, several enhancements to MLLR speaker adaptation are discussed. One deals with refined definitions of regression classes, which is of special importance for fast adaptation when only limited adaptation data are available. Another enhancement makes use of confidence measures to care for recognition errors that decrease the adaptation performance in the first pass of a two-pass adaptation process.",
"title": ""
},
{
"docid": "343ed18e56e6f562fa509710e4cf8dc6",
"text": "The automated analysis of facial expressions has been widely used in different research areas, such as biometrics or emotional analysis. Special importance is attached to facial expressions in the area of sign language, since they help to form the grammatical structure of the language and allow for the creation of language disambiguation, and thus are called Grammatical Facial Expressions (GFEs). In this paper we outline the recognition of GFEs used in the Brazilian Sign Language. In order to reach this objective, we have captured nine types of GFEs using a KinectTMsensor, designed a spatial-temporal data representation, modeled the research question as a set of binary classification problems, and employed a Machine Learning technique.",
"title": ""
},
{
"docid": "c5033a414493aa367ea9af5602471f49",
"text": "We present the Height Optimized Trie (HOT), a fast and space-efficient in-memory index structure. The core algorithmic idea of HOT is to dynamically vary the number of bits considered at each node, which enables a consistently high fanout and thereby good cache efficiency. The layout of each node is carefully engineered for compactness and fast search using SIMD instructions. Our experimental results, which use a wide variety of workloads and data sets, show that HOT outperforms other state-of-the-art index structures for string keys both in terms of search performance and memory footprint, while being competitive for integer keys. We believe that these properties make HOT highly useful as a general-purpose index structure for main-memory databases.",
"title": ""
},
{
"docid": "8c70f1af7d3132ca31b0cf603b7c5939",
"text": "Much of the existing work on action recognition combines simple features (e.g., joint angle trajectories, optical flow, spatio-temporal video features) with somewhat complex classifiers or dynamical models (e.g., kernel SVMs, HMMs, LDSs, deep belief networks). Although successful, these approaches represent an action with a set of parameters that usually do not have any physical meaning. As a consequence, such approaches do not provide any qualitative insight that relates an action to the actual motion of the body or its parts. For example, it is not necessarily the case that clapping can be correlated to hand motion or that walking can be correlated to a specific combination of motions from the feet, arms and body. In this paper, we propose a new representation of human actions called Sequence of the Most Informative Joints (SMIJ), which is extremely easy to interpret. At each time instant, we automatically select a few skeletal joints that are deemed to be the most informative for performing the current action. The selection of joints is based on highly interpretable measures such as the mean or variance of joint angles, maximum angular velocity of joints, etc. We then represent an action as a sequence of these most informative joints. Our experiments on multiple databases show that the proposed representation is very discriminative for the task of human action recognition and performs better than several state-of-the-art algorithms.",
"title": ""
},
{
"docid": "c7b2ada500bf543b5f3bcc42d504d888",
"text": "This paper proposes a novel passive technique for the collection of microwave images. A compact component is developed that passively codes and sums the waves received by an antenna array to which it is connected, and produces a unique signal that contains all of the scene information. This technique of passive multiplexing simplifies the microwave reception chains for radar and beamforming systems (whose complexity and cost highly increase with the number of antennas) and does not require any active elements to achieve beamsteering. The preservation of the waveforms is ensured using orthogonal codes supplied by the propagation through the component's uncorrelated channels. Here we show a multiplexing technique in the physical layer that, besides being compact and passive, is compatible with all ultrawideband antennas, enabling its implementation in various fields.",
"title": ""
},
{
"docid": "c592a75ae5b607f04bdb383a1a04ccba",
"text": "Searching for influential spreaders in complex networks is an issue of great significance for applications across various domains, ranging from the epidemic control, innovation diffusion, viral marketing, social movement to idea propagation. In this paper, we first display some of the most important theoretical models that describe spreading processes, and then discuss the problem of locating both the individual and multiple influential spreaders respectively. Recent approaches in these two topics are presented. For the identification of privileged single spreaders, we summarize several widely used centralities, such as degree, betweenness centrality, PageRank, k-shell, etc. We investigate the empirical diffusion data in a large scale online social community – LiveJournal. With this extensive dataset, we find that various measures can convey very distinct information of nodes. Of all the users in LiveJournal social network, only a small fraction of them involve in spreading. For the spreading processes in LiveJournal, while degree can locate nodes participating in information diffusion with higher probability, k-shell is more effective in finding nodes with large influence. Our results should provide useful information for designing efficient spreading strategies in reality.",
"title": ""
},
{
"docid": "dd840a0e33da0fc4e74fb2441d22c769",
"text": "The evolution of IPv6 technology had become a worldwide trend and showed a significant increase, particularly with the near-coming era named “Internet of Things” or so-called IOT. Concomitant with the transition process from version 4 to version 6, there are open security hole that considered to be vulnerable, mainly against cyber-attacks that poses a threat to companies implements IPv6 network topology. The purpose of this research is to create a model of acceptance of the factors that influenced the behavior of individuals in providing security within IPv6 network topology and analysis of factors that affects the acceptance of individuals in anticipating security with regards to IPv6 network topology. This study was conducted using both, quantitative method focuses on statistical processing on the result of questionnaire filled by respondents using Structural Equation Modeling (SEM), as well as qualitative method to conduct Focus Group Discussion (FGD) and interviews with experts from various background such as: practitioners, academician and government representatives. The results showed ease of use provides insignificant correlation to the referred behavior of avoiding threat on IPv6 environment.",
"title": ""
},
{
"docid": "5063adc5020cacddb5a4c6fd192fc17e",
"text": "In this paper, A Novel 1 to 4 modified Wilkinson power divider operating over the frequency range of (3 GHz to 8 GHz) is proposed. The design perception of the proposed divider based on two different stages and printed on FR4 (Epoxy laminate material) with the thickness of 1.57mm and єr =4.3 respectively. The modified design of this power divider including curved corners instead of the sharp edges and some modification in the length of matching stubs. In addition, this paper contain the power divider with equal power split at all ports, reasonable insertion loss, acceptable return loss below −10 dB, good impedance matching at all ports and satisfactory isolation performance has been obtained over the mentioned frequency range. The design concept and optimization development is practicable through CST simulation software.",
"title": ""
},
{
"docid": "8ff183a1ed88a8090e4ecb4e4df41e88",
"text": "Swarm robotics is based on the characteristics displayed by the insects and their colony and is applied to solve real world problems utilizing multi-robot systems. Research in this field has demonstrated the ability of such robot systems to assemble, inspect, disperse, aggregate and follow trails. A set of mobile and self-sufficient robots which has very restricted capabilities can form intricate patterns in the environment they inhabit. The simple patterns can be used by the robots to achieve high level tasks. In the proposed work, a set of robots are coordinating to form a specific pattern around the object with step wise linear motion and are programmed to push the object from a source position to the destination in an obstacle free environment. Initially the robots are placed in known positions. ZigBee communication protocol is used for interaction among the robots. A single robot is chosen as a central coordinator and controls the movement of the rest of the robots in the swarm. Master bot decides on the path to be taken and also supplies the slave bots with the coordinates to be reached. The entire scenario has been simulated using the open source tool Player/Stage and the hardware implementation has been done using micromouse chassis set up and the Arduino Uno controller board.",
"title": ""
},
{
"docid": "56b3aee7db16697e5f36c274cdc5a95c",
"text": "Robotic challenges like the Amazon Picking Challenge (APC) or the DARPA Challenges are an established and important way to drive scientific progress. They make research comparable on a well-defined benchmark with equal test conditions for all participants. However, such challenge events occur only occasionally, are limited to a small number of contestants, and the test conditions are very difficult to replicate after the main event. We present a new physical benchmark challenge for robotic picking: the ACRV Picking Benchmark. Designed to be reproducible, it consists of a set of 42 common objects, a widely available shelf, and exact guidelines for object arrangement using stencils. A well-defined evaluation protocol enables the comparison of complete robotic systems — including perception and manipulation — instead of sub-systems only. Our paper also describes and reports results achieved by an open baseline system based on a Baxter robot.",
"title": ""
}
] |
scidocsrr
|
e74d2a52f2cf6b8621707a6d0f086261
|
A Survey of Audio-Based Music Classification and Annotation
|
[
{
"docid": "44791d65e5f5e4645a6f99c0b2cdac8f",
"text": "Electronic Music Distribution (EMD) is in demand of robust, automatically extracted music descriptors. We introduce a timbral similarity measures for comparing music titles. This measure is based on a Gaussian model of cepstrum coefficients. We describe the timbre extractor and the corresponding timbral similarity relation. We describe experiments in assessing the quality of the similarity relation, and show that the measure is able to yield interesting similarity relations, in particular when used in conjunction with other similarity relations. We illustrate the use of the descriptor in several EMD applications developed in the context of the Cuidado European project.",
"title": ""
}
] |
[
{
"docid": "ca072e97f8a5486347040aeaa7909d60",
"text": "Camera-based stereo-vision provides cost-efficient vision capabilities for robotic systems. The objective of this paper is to examine the performance of stereo-vision as means to enable a robotic inspection cell for haptic quality testing with the ability to detect relevant information related to the inspection task. This information comprises the location and 3D representation of a complex object under inspection as well as the location and type of quality features which are subject to the inspection task. Among the challenges is the low-distinctiveness of features in neighboring area, inconsistent lighting, similar colors as well as low intra-class variances impeding the retrieval of quality characteristics. The paper presents the general outline of the vision chain as well as performance analysis of various algorithms for relevant steps in the machine vision chain thus indicating the capabilities and drawbacks of a camera-based stereo-vision for flexible use in complex machine vision tasks.",
"title": ""
},
{
"docid": "3946f4fabec4295e2be13b60b0ce8625",
"text": "The present study was designed and simulated for an all optical half-adder, based on 2D photonic crystals. The proposed structure in this work contains a hexagonal lattice. The main advantages of the proposed designation can be highlighted as its small sizing as well as simplicity. Furthermore, the other improvement of this half-adder can be regarded as providing proper distinct space in output between “0” and “1” as logical states. This improvement reduces the error in the identification of logical states (i.e., 0 and 1) at output. Because of the high photonic band gap for transverse electric (TE) polarization, the TE mode calculations are done to analyze the defected lines of light. The logical values of “0” and “1” were defined according to the amount of electrical field.",
"title": ""
},
{
"docid": "1380438b5c7739a77644520ebc744002",
"text": "The present work proposes a review and comparison of different Kernel functionals and neighborhood geometry for Nonlocal Means (NLM) in the task of digital image filtering. Some different alternatives to change the classical exponential kernel function used in NLM methods are explored. Moreover, some approaches that change the geometry of the neighborhood and use dimensionality reduction of the neighborhood or patches onto principal component analysis (PCA) are also analyzed, and their performance is compared with respect to the classic NLM method. Mainly, six approaches were compared using quantitative and qualitative evaluations, to do this an homogeneous framework has been established using the same simulation platform, the same computer, and same conditions for the initializing parameters. According to the obtained comparison, one can say that the NLM filtering could be improved when changing the kernel, particularly for the case of the Tukey kernel. On the other hand, the excellent performance given by recent hybrid approaches such as NLM SAP, NLM PCA (PH), and the BM3D SAPCA lead to establish that significantly improvements to the classic NLM could be obtained. Particularly, the BM3D SAPCA approach gives the best denoising results, however, the computation times were the longest.",
"title": ""
},
{
"docid": "5af470de0bc3ea61b1812374a09793b8",
"text": "In this paper, we propose a fully convolutional network for iterative non-blind deconvolution. We decompose the non-blind deconvolution problem into image denoising and image deconvolution. We train a FCNN to remove noise in the gradient domain and use the learned gradients to guide the image deconvolution step. In contrast to the existing deep neural network based methods, we iteratively deconvolve the blurred images in a multi-stage framework. The proposed method is able to learn an adaptive image prior, which keeps both local (details) and global (structures) information. Both quantitative and qualitative evaluations on the benchmark datasets demonstrate that the proposed method performs favorably against state-of-the-art algorithms in terms of quality and speed.",
"title": ""
},
{
"docid": "0ca445eed910eacccbb9f2cc9569181b",
"text": "Nanotechnology promises new solutions for many applications in the biomedical, industrial and military fields as well as in consumer and industrial goods. The interconnection of nanoscale devices with existing communication networks and ultimately the Internet defines a new networking paradigm that is further referred to as the Internet of Nano-Things. Within this context, this paper discusses the state of the art in electromagnetic communication among nanoscale devices. An in-depth view is provided from the communication and information theoretic perspective, by highlighting the major research challenges in terms of channel modeling, information encoding and protocols for nanonetworks and the Internet of Nano-Things.",
"title": ""
},
{
"docid": "fd14310dd9a039175c075059e4ed31e4",
"text": "A new self-reconfigurable robot is presented. The robot is a hybrid chain/lattice design with several novel features. An active mechanical docking mechanism provides inter-module connection, along with optical and electrical interface. The docking mechanisms function additionally as driven wheels. Internal slip rings provide unlimited rotary motion to the wheels, allowing the modules to move independently by driving on flat surfaces, or in assemblies negotiating more complex terrain. Modules in the system are mechanically homogeneous, with three identical docking mechanisms within a module. Each mechanical dock is driven by a high torque actuator to enable movement of large segments within a multi-module structure, as well as low-speed driving. Preliminary experimental results demonstrate locomotion, mechanical docking, and lifting of a single module.",
"title": ""
},
{
"docid": "9f87424062c624bc417f848cc2f33bf3",
"text": "The sentiment mining is a fast growing topic of both academic research and commercial applications, especially with the widespread of short-text applications on the Web. A fundamental problem that confronts sentiment mining is the automatics and correctness of mined sentiment. This paper proposes an DLDA (Double Latent Dirichlet Allocation) model to analyze sentiment for short-texts based on topic model. Central to DLDA is to add sentiment to topic model and consider sentiment as equal to topic, but independent of topic. DLDA is actually two methods DLDA I and its improvement DLDA II. Compared to the single topic-word LDA, the double LDA I, i.e., DLDA I designs another sentiment-word LDA. Both LDAs are independent of each other, but they combine to influence the selected words in short-texts. DLDA II is an improvement of DLDA I. It employs entropy formula to assign weights of words in the Gibbs sampling based on the ideas that words with stronger sentiment orientation should be assigned with higher weights. Experiments show that compared with other traditional topic methods, both DLDA I and II can achieve higher accuracy with less manual needs.",
"title": ""
},
{
"docid": "f60eb31e59910e4c7ba24ab474a9f696",
"text": "We present a new architecture for storing and accessing entity mentions during online text processing. While reading the text, entity references are identified, and may be stored by either updating or overwriting a cell in a fixedlength memory. The update operation implies coreference with the other mentions that are stored in the same cell; the overwrite operations causes these mentions to be forgotten. By encoding the memory operations as differentiable gates, it is possible to train the model end-to-end, using both a supervised anaphora resolution objective as well as a supplementary language modeling objective. Evaluation on a dataset of pronoun-name anaphora demonstrates that the model achieves state-of-the-art performance with purely left-to-right processing of the text.",
"title": ""
},
{
"docid": "50c961c8b229c7a4b31ca6a67e06112c",
"text": "The emerging three-dimensional (3D) chip architectures, with their intrinsic capability of reducing the wire length, is one of the promising solutions to mitigate the interconnect problem in modern microprocessor designs. 3D memory stacking also enables much higher memory bandwidth for future chip-multiprocessor design, mitigating the ``memory wall\" problem. In addition, heterogenous integration enabled by 3D technology can also result in innovation designs for future microprocessors. This paper serves as a survey of various approaches to design future 3D microprocessors, leveraging the benefits of fast latency, higher bandwidth, and heterogeneous integration capability that are offered by 3D technology.",
"title": ""
},
{
"docid": "ed097b44837a57ad0053ae06a95f1543",
"text": "For underwater videos, the performance of object tracking is greatly affected by illumination changes, background disturbances and occlusion. Hence, there is a need to have a robust function that computes image similarity, to accurately track the moving object. In this work, a hybrid model that incorporates the Kalman Filter, a Siamese neural network and a miniature neural network has been developed for object tracking. It was observed that the usage of the Siamese network to compute image similarity significantly improved the robustness of the tracker. Although the model was developed for underwater videos, it was found that it performs well for both underwater and human surveillance videos. A metric has been defined for analyzing detections-to-tracks mapping accuracy. Tracking results have been analyzed using Multiple Object Tracking Accuracy (MOTA) and Multiple Object Tracking Precision (MOTP)metrics.",
"title": ""
},
{
"docid": "9d068f6b812272750fe8a56562d703a2",
"text": "Sustainable development, although a widely used phrase and idea, has many different meanings and therefore provokes many different responses. In broad terms, the concept of sustainable development is an attempt to combine growing concerns about a range of environmental issues with socio-economic issues. To aid understanding of these different policies this paper presents a classification and mapping of different trends of thought on sustainable development, their political and policy frameworks and their attitudes towards change and means of change. Sustainable development has the potential to address fundamental challenges for humanity, now and into the future. However, to do this, it needs more clarity of meaning, concentrating on sustainable livelihoods and well-being rather than well-having, and long term environmental sustainability, which requires a strong basis in principles that link the social and environmental to human equity. Copyright © 2005 John Wiley & Sons, Ltd and ERP Environment. Received 31 July 2002; revised 16 October 2003; accepted 3 December 2003 Sustainable Development: A Challenging and Contested Concept T HE WIDESPREAD RISE OF INTEREST IN, AND SUPPORT FOR, THE CONCEPT OF SUSTAINABLE development is potentially an important shift in understanding relationships of humanity with nature and between people. It is in contrast to the dominant outlook of the last couple of hundred years, especially in the ‘North’, that has been based on the view of the separation of the environment from socio-economic issues. For most of the last couple of hundred years the environment has been largely seen as external to humanity, mostly to be used and exploited, with a few special areas preserved as wilderness or parks. Environmental problems were viewed mainly as local. On the whole the relationship between people and the environment was conceived as humanity’s triumph over nature. This Promethean view (Dryzek, 1997) was that human knowledge and technology could overcome all obstacles including natural and environmental ones. This view was linked with the development of capitalism, the industrial revolution and modern science. As Bacon, one of the founders of modern science, put it, ‘The world is made for * Correspondence to: Bill Hopwood, Sustainable Cities Research Institute, 6 North Street East, University of Northumbria, Newcastle on Tyne NE1 8ST, UK. E-mail: william.hopwood@unn.ac.uk Sustainable Development Sust. Dev. 13, 38–52 (2005) Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/sd.244 Mapping Different Approaches 39 man, not man for the world’. Environmental management and concern amongst most businesses and governments, apart from local problems and wilderness conservation, was at best based on natural resource management. A key example was the ideas of Pinchot in the USA (Dryzek, 1997), which recognized that humans do need natural resources and that these resources should be managed, rather than rapidly exploited, in order to ensure maximum long-term use. Economics came to be the dominating issue of human relations with economic growth, defined by increasing production, as the main priority (Douthwaite, 1992). This was the seen as the key to humanity’s well-being and, through growth, poverty would be overcome: as everyone floated higher those at the bottom would be raised out of poverty. 
The concept of sustainable development is the result of the growing awareness of the global links between mounting environmental problems, socio-economic issues to do with poverty and inequality and concerns about a healthy future for humanity. It strongly links environmental and socio-economic issues. The first important use of the term was in 1980 in the World Conservation Strategy (IUCN et al., 1980). This process of bringing together environmental and socio-economic questions was most famously expressed in the Brundtland Report’s definition of sustainable development as meeting ‘the needs of the present without compromising the ability of future generations to meet their needs’ (WCED, 1987, p. 43). This defines needs from a human standpoint; as Lee (2000, p. 32) has argued, ‘sustainable development is an unashamedly anthropocentric concept’. Brundtland’s definition and the ideas expressed in the report Our Common Future recognize the dependency of humans on the environment to meet needs and well-being in a much wider sense than merely exploiting resources: ‘ecology and economy are becoming ever more interwoven – locally, regionally, nationally and globally’ (WCED, 1987, p. 5). Rather than domination over nature our lives, activities and society are nested within the environment (Giddings et al., 2002). The report stresses that humanity, whether in an industrialized or a rural subsistence society, depends for security and basic existence on the environment; the economy and our well-being now and in the future need the environment. It also points to the planetwide interconnections: environmental problems are not local but global, so that actions and impacts have to be considered internationally to avoid displacing problems from one area to another by actions such as releasing pollution that crosses boundaries, moving polluting industries to another location or using up more than an equitable share of the earth’s resources (by an ecological footprint (Wackernagel and Rees, 1996) far in excess of the area inhabited). Environmental problems threaten people’s health, livelihoods and lives and can cause wars and threaten future generations. Sustainable development raises questions about the post-war claim, that still dominates much mainstream economic policy, that international prosperity and human well-being can be achieved through increased global trade and industry (Reid, 1995; Moffat, 1996; Sachs, 1999). It recognizes that past growth models have failed to eradicate poverty globally or within countries, ‘no trends, . . . no programmes or policies offer any real hope of narrowing the growing gap between rich and poor nations’ (WCED, 1987, p. xi). This pattern of growth has also damaged the environment upon which we depend, with a ‘downward spiral of poverty and environmental degradation’ (WCED, 1987, p. xii). Brundtland, recognizing this failure, calls for a different form of growth, ‘changing the quality of growth, meeting essential needs, merging environment and economics in decision making’ (WCED, 1987, p. 49), with an emphasis on human development, participation in decisions and equity in benefits. The development proposed is a means to eradicate poverty, meet human needs and ensure that all get a fair share of resources – very different from present development. Social justice today and in the future is a crucial component of the concept of sustainable development. 
There were, and are, long standing debates about both goals and means within theories dealing with both environmental and socio-economic questions which have inevitably flowed into ideas on sustainable development. As Wackernagel and Rees (1996) have argued, the Brundtland Report attempted to bridge some of these debates by leaving a certain ambiguity, talking at the same time of the priorities of meeting the needs of the poor, protecting the environment and more rapid economic growth. The looseness of the concept and its theoretical underpinnings have enabled the use of the phrases ‘sustainable development’ and ‘sustainability’ to become de rigueur for politicians and business leaders, but as the Workshop on Urban Sustainability of the US National Science Foundation (2000, p. 1) pointed out, sustainability is ‘laden with so many definitions that it risks plunging into meaninglessness, at best, and becoming a catchphrase for demagogy, at worst. [It] is used to justify and legitimate a myriad of policies and practices ranging from communal agrarian utopianism to large-scale capital-intensive market development’. While many claim that sustainable development challenges the increased integration of the world in a capitalist economy dominated by multinationals (Middleton et al., 1993; Christie and Warburton, 2001), Brundtland’s ambiguity allows business and governments to be in favour of sustainability without any fundamental challenge to their present course, using Brundtland’s support for rapid growth to justify the phrase ‘sustainable growth’. Rees (1998) points out that this allows capitalism to continue to put forward economic growth as its ‘morally bankrupt solution’ to poverty. If the economy grows, eventually all will benefit (Dollar and Kraay, 2000): in modern parlance the trickle-down theory. Daly (1993) criticized the notion of ‘sustainable growth’ as ‘thought-stopping’ and oxymoronic in a world in which ecosystems are finite. At some point, economic growth with ever more use of resources and production of waste is unsustainable. Instead Daly argued for the term ‘sustainable development’ by which he, much more clearly than Brundtland, meant qualitative, rather than quantitative, improvements. Development is open to confusion, with some seeing it as an end in itself, so it has been suggested that greater clarity would be to speak of ‘sustainable livelihoods’, which is the aim that Brundtland outlined (Workshop on Urban Sustainability, 2000). Another area of debate is between the views of weak and strong sustainability (Haughton and Hunter, 1994). Weak sustainability sees natural and manufactured capital as interchangeable with technology able to fill human produced gaps in the natural world (Daly and Cobb, 1989) such as a lack of resources or damage to the environment. Solow put the case most strongly, stating that by substituting other factors for natural resources ‘the world can, in effect, get along without natural resources, so exhaustion is just an event, not a catastrophe’ (1974, p. 11). Strong ",
"title": ""
},
{
"docid": "ac94c03a72607f76e53ae0143349fff3",
"text": "Abrlracr-A h u l a for the cppecity et arbitrary sbgle-wer chrurwla without feedback (mot neccgdueily Wium\" stable, stationary, etc.) is proved. Capacity ie shown to e i p l the supremum, over all input processts, & the input-outpat infiqjknda QBnd as the llnainl ia praabiutJr d the normalized information density. The key to thir zbllljt is a ntw a\"c sppmrh bosed 811 a Ampie II(A Lenar trwrd eu the pralwbility of m-4v hgpothesb t#tcl UIOlls eq*rdIaN <hypotheses. A neassruy and d c i e n t coadition Eor the validity of the strong comeme is given, as well as g\"l expressions for eeapacity.",
"title": ""
},
{
"docid": "06848cf456dbbcd5891cd33522ab7b75",
"text": "Credit scoring models play a fundamental role in the risk management practice at most banks. They are used to quantify credit risk at counterparty or transaction level in the different phases of the credit cycle (e.g. application, behavioural, collection models). The credit score empowers users to make quick decisions or even to automate decisions and this is extremely desirable when banks are dealing with large volumes of clients and relatively small margin of profits at individual transaction level (i.e. consumer lending, but increasingly also small business lending). In this article, we analyze the history and new developments related to credit scoring models. We find that with the new Basel Capital Accord, credit scoring models have been remotivated and given unprecedented significance. Banks, in particular, and most financial institutions worldwide, have either recently developed or modified existing internal credit risk models to conform with the new rules and best practices recently updated in the market. Moreover, we analyze the key steps of the credit scoring model’s lifecycle (i.e. assessment, implementation, validation) highlighting the main requirement imposed by Basel II. We conclude that banks that are going to implement the most advanced approach to calculate their capital requirements under Basel II will need to increase their attention and consideration of credit scoring models in the next future. JEL classification: G17; G21",
"title": ""
},
{
"docid": "31e052aaf959a4c5d6f1f3af6587d6cd",
"text": "We introduce a learning framework called learning using privileged information (LUPI) to the computer vision field. We focus on the prototypical computer vision problem of teaching computers to recognize objects in images. We want the computers to be able to learn faster at the expense of providing extra information during training time. As additional information about the image data, we look at several scenarios that have been studied in computer vision before: attributes, bounding boxes and image tags. The information is privileged as it is available at training time but not at test time. We explore two maximum-margin techniques that are able to make use of this additional source of information, for binary and multiclass object classification. We interpret these methods as learning easiness and hardness of the objects in the privileged space and then transferring this knowledge to train a better classifier in the original space. We provide a thorough analysis and comparison of information transfer from privileged to the original data spaces for both LUPI methods. Our experiments show that incorporating privileged information can improve the classification accuracy. Finally, we conduct user studies to understand which samples are easy and which are hard for human learning, and explore how this information is related to easy and hard samples when learning a classifier.",
"title": ""
},
{
"docid": "1d3657515977304cd97a0f888cd3c5b2",
"text": "During the last fifteen years, soft computing methods have been successfully applied in building powerful and flexible credit scoring models and have been suggested to be a possible alternative to statistical methods. In this survey, the main soft computing methods applied in credit scoring models are presented and the advantages as well as the limitations of each method are outlined. The main modelling issues are discussed especially from the data mining point of view. The study concludes with a series of suggestions of other methods to be investigated for credit scoring modelling.",
"title": ""
},
{
"docid": "72d59a0605a82fc714020ac67ac1e52b",
"text": "We present an accurate stereo matching method using <italic>local expansion moves</italic> based on graph cuts. This new move-making scheme is used to efficiently infer per-pixel 3D plane labels on a pairwise Markov random field (MRF) that effectively combines recently proposed slanted patch matching and curvature regularization terms. The local expansion moves are presented as many <inline-formula><tex-math notation=\"LaTeX\">$\\alpha$</tex-math><alternatives> <inline-graphic xlink:href=\"taniai-ieq1-2766072.gif\"/></alternatives></inline-formula>-expansions defined for small grid regions. The local expansion moves extend traditional expansion moves by two ways: localization and spatial propagation. By localization, we use different candidate <inline-formula><tex-math notation=\"LaTeX\">$\\alpha$</tex-math> <alternatives><inline-graphic xlink:href=\"taniai-ieq2-2766072.gif\"/></alternatives></inline-formula>-labels according to the locations of local <inline-formula><tex-math notation=\"LaTeX\">$\\alpha$</tex-math><alternatives> <inline-graphic xlink:href=\"taniai-ieq3-2766072.gif\"/></alternatives></inline-formula>-expansions. By spatial propagation, we design our local <inline-formula><tex-math notation=\"LaTeX\">$\\alpha$</tex-math><alternatives> <inline-graphic xlink:href=\"taniai-ieq4-2766072.gif\"/></alternatives></inline-formula>-expansions to propagate currently assigned labels for nearby regions. With this localization and spatial propagation, our method can efficiently infer MRF models with a continuous label space using randomized search. Our method has several advantages over previous approaches that are based on fusion moves or belief propagation; it produces <italic>submodular moves </italic> deriving a <italic>subproblem optimality</italic>; it helps find good, smooth, piecewise linear disparity maps; it is suitable for parallelization; it can use cost-volume filtering techniques for accelerating the matching cost computations. Even using a simple pairwise MRF, our method is shown to have best performance in the Middlebury stereo benchmark V2 and V3.",
"title": ""
},
{
"docid": "ab0bcd1027d03eda0b465fb69384f2ab",
"text": "When we listen to rhythm, we often move spontaneously to the beat. This movement may result from processing of the beat by motor areas. Previous studies have shown that several motor areas respond when attending to rhythms. Here we investigate whether specific motor regions respond to beat in rhythm. We predicted that the basal ganglia and supplementary motor area (SMA) would respond in the presence of a regular beat. To establish what rhythm properties induce a beat, we asked subjects to reproduce different types of rhythmic sequences. Improved reproduction was observed for one rhythm type, which had integer ratio relationships between its intervals and regular perceptual accents. A subsequent functional magnetic resonance imaging study found that these rhythms also elicited higher activity in the basal ganglia and SMA. This finding was consistent across different levels of musical training, although musicians showed activation increases unrelated to rhythm type in the premotor cortex, cerebellum, and SMAs (pre-SMA and SMA). We conclude that, in addition to their role in movement production, the basal ganglia and SMAs may mediate beat perception.",
"title": ""
},
{
"docid": "980184e7f84cd0b285e055fbd7ec0b4a",
"text": "Gamification is the application of game mechanics and player incentives to non-game environments. When designed correctly, gamification has been found to increase engagement and encourage targeted behaviours among users. This paper presents the gamification of a university course in Computer Games Development using an online learning management tool, including how this might generalize to other courses.\n Our goal with gamification was to improve lecture attendance, content understanding, problem solving skills and general engagement. The success of this intervention was measured using course marks, lecturer evaluations, lecture attendance, and a questionnaire; all with strongly positive results. However, this must be balanced against the costs, both monetary and time, required to successfully implement gamification.",
"title": ""
},
{
"docid": "93f89a636828df50dfe48ffa3e868ea6",
"text": "The reparameterization trick enables the optimization of large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack continuous reparameterizations due to the discontinuous nature of discrete states. In this work we introduce concrete random variables – continuous relaxations of discrete random variables. The concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-likelihood of latent stochastic nodes) on the corresponding discrete graph. We demonstrate their effectiveness on density estimation and structured prediction tasks using neural networks.",
"title": ""
},
{
"docid": "69f6b21da3fa48f485fc612d385e7869",
"text": "Recurrent neural networks (RNN) have been successfully applied for recognition of cursive handwritten documents, both in English and Arabic scripts. Ability of RNNs to model context in sequence data like speech and text makes them a suitable candidate to develop OCR systems for printed Nabataean scripts (including Nastaleeq for which no OCR system is available to date). In this work, we have presented the results of applying RNN to printed Urdu text in Nastaleeq script. Bidirectional Long Short Term Memory (BLSTM) architecture with Connectionist Temporal Classification (CTC) output layer was employed to recognize printed Urdu text. We evaluated BLSTM networks for two cases: one ignoring the character's shape variations and the second is considering them. The recognition error rate at character level for first case is 5.15% and for the second is 13.6%. These results were obtained on synthetically generated UPTI dataset containing artificially degraded images to reflect some real-world scanning artifacts along with clean images. Comparison with shape-matching based method is also presented.",
"title": ""
}
] |
scidocsrr
|
b3bd0ea9c4a3a68a2c52cf938b1736be
|
Predicting the Analysis of Heart Disease Symptoms Using Medicinal Data Mining Methods
|
[
{
"docid": "424239765383edd8079d90f63b3fde1d",
"text": "The availability of huge amounts of medical data leads to the need for powerful data analysis tools to extract useful knowledge. Researchers have long been concerned with applying statistical and data mining tools to improve data analysis on large data sets. Disease diagnosis is one of the applications where data mining tools are proving successful results. Heart disease is the leading cause of death all over the world in the past ten years. Several researchers are using statistical and data mining tools to help health care professionals in the diagnosis of heart disease. Using single data mining technique in the diagnosis of heart disease has been comprehensively investigated showing acceptable levels of accuracy. Recently, researchers have been investigating the effect of hybridizing more than one technique showing enhanced results in the diagnosis of heart disease. However, using data mining techniques to identify a suitable treatment for heart disease patients has received less attention. This paper identifies gaps in the research on heart disease diagnosis and treatment and proposes a model to systematically close those gaps to discover if applying data mining techniques to heart disease treatment data can provide as reliable performance as that achieved in diagnosing heart disease.",
"title": ""
},
{
"docid": "e29774fe6bd529b769faca8e54202be1",
"text": "The main objective of this research is to develop a n Intelligent System using data mining modeling tec hnique, namely, Naive Bayes. It is implemented as web based applica tion in this user answers the predefined questions. It retrieves hidden data from stored database and compares the u er values with trained data set. It can answer com plex queries for diagnosing heart disease and thus assist healthcare practitioners to make intelligent clinical decisio ns which traditional decision support systems cannot. By providing effec tiv treatments, it also helps to reduce treatment cos s. Keyword: Data mining Naive bayes, heart disease, prediction",
"title": ""
},
{
"docid": "2a1335003528b2da0b0471096df4dade",
"text": "Data mining concerns theories, methodologies, and in particular, computer systems for knowledge extraction or mining from large amounts of data. Association rule mining is a general purpose rule discovery scheme. It has been widely used for discovering rules in medical applications. The diagnosis of diseases is a significant and tedious task in medicine. The detection of heart disease from various factors or symptoms is an issue which is not free from false presumptions often accompanied by unpredictable effects. Thus the effort to utilize knowledge and experience of numerous specialists and clinical screening data of patients collected in databases to facilitate the diagnosis process is considered a valuable option. In this paper, we presented an efficient approach for the prediction of heart attack risk levels from the heart disease database. Firstly, the heart disease database is clustered using the K-means clustering algorithm, which will extract the data relevant to heart attack from the database. This approach allows mastering the number of fragments through its k parameter. Subsequently the frequent patterns are mined from the extracted data, relevant to heart disease, using the MAFIA (Maximal Frequent Itemset Algorithm) algorithm. The machine learning algorithm is trained with the selected significant patterns for the effective prediction of heart attack. We have employed the ID3 algorithm as the training algorithm to show level of heart attack with the decision tree. The results showed that the designed prediction system is capable of predicting the heart attack effectively.",
"title": ""
}
] |
[
{
"docid": "fa05d004df469e8f83fa4fdee9909a6f",
"text": "Accurate velocity estimation is an important basis for robot control, but especially challenging for highly elastically driven robots. These robots show large swing or oscillation effects if they are not damped appropriately during the performed motion. In this letter, we consider an ultralightweight tendon-driven series elastic robot arm equipped with low-resolution joint position encoders. We propose an adaptive Kalman filter for velocity estimation that is suitable for these kinds of robots with a large range of possible velocities and oscillation frequencies. Based on an analysis of the parameter characteristics of the measurement noise variance, an update rule based on the filter position error is developed that is easy to adjust for use with different sensors. Evaluation of the filter both in simulation and in robot experiments shows a smooth and accurate performance, well suited for control purposes.",
"title": ""
},
{
"docid": "d0690dcac9bf28f1fe6e2153035f898c",
"text": "The estimation of the homography between two views is a key step in many applications involving multiple view geometry. The homography exists between two views between projections of points on a 3D plane. A homography exists also between projections of all points if the cameras have purely rotational motion. A number of algorithms have been proposed for the estimation of the homography relation between two images of a planar scene. They use features or primitives ranging from simple points to a complex ones like non-parametric curves. Different algorithms make different assumptions on the imaging setup and what is known about them. This article surveys several homography estimation techniques from the literature. The essential theory behind each method is presented briefly and compared with the others. Experiments aimed at providing a representative analysis and comparison of the methods discussed are also presented in the paper.",
"title": ""
},
{
"docid": "9b0ef1810b8fe40346460d88100d1291",
"text": "Existing real-time automatic video abstraction systems rely on local contrast only for identifying perceptually important information and abstract imagery by reducing contrast in low-contrast regions while artificially increasing contrast in higher contrast regions. These methods, however, may fail to accentuate an object against its background for the images with objects of low contrast over background of high contrast. To solve this problem, we propose a progressive abstraction method based on a region-of-interest function derived from an elaborate perception model. Visual contents in perceptually salient regions are emphasized, whereas the background is abstracted appropriately. In addition, the edge-preserving smoothing and line drawing algorithms in this paper are guided by a vector field which describes the flow of salient features of the input image. The whole pipeline can be executed automatically in real time on the GPU, without requiring any user intervention. Several experimental examples are shown to demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "5ff019e3c12f7b1c2b3518e0883e3b6f",
"text": "A novel PFC (Power Factor Corrected) Converter using Zeta DC-DC converter feeding a BLDC (Brush Less DC) motor drive using a single voltage sensor is proposed for fan applications. A single phase supply followed by an uncontrolled bridge rectifier and a Zeta DC-DC converter is used to control the voltage of a DC link capacitor which is lying between the Zeta converter and a VSI (Voltage Source Inverter). Voltage of a DC link capacitor of Zeta converter is controlled to achieve the speed control of BLDC motor. The Zeta converter is working as a front end converter operating in DICM (Discontinuous Inductor Current Mode) and thus using a voltage follower approach. The DC link capacitor of the Zeta converter is followed by a VSI which is feeding a BLDC motor. A sensorless control of BLDC motor is used to eliminate the requirement of Hall Effect position sensors. A MATLAB/Simulink environment is used to simulate the developed model to achieve a wide range of speed control with high PF (power Factor) and improved PQ (Power Quality) at the supply.",
"title": ""
},
{
"docid": "1d949b64320fce803048b981ae32ce38",
"text": "In the field of voice therapy, perceptual evaluation is widely used by expert listeners as a way to evaluate pathological and normal voice quality. This approach is understandably subjective as it is subject to listeners’ bias which high interand intra-listeners variability can be found. As such, research on automatic assessment of pathological voices using a combination of subjective and objective analyses emerged. The present study aimed to develop a complementary automatic assessment system for voice quality based on the well-known GRBAS scale by using a battery of multidimensional acoustical measures through Deep Neural Networks. A total of 44 dimensionality parameters including Mel-frequency Cepstral Coefficients, Smoothed Cepstral Peak Prominence and Long-Term Average Spectrum was adopted. In addition, the state-of-the-art automatic assessment system based on Modulation Spectrum (MS) features and GMM classifiers was used as comparison system. The classification results using the proposed method revealed a moderate correlation with subjective GRBAS scores of dysphonic severity, and yielded a better performance than MS-GMM system, with the best accuracy around 81.53%. The findings indicate that such assessment system can be used as an appropriate evaluation tool in determining the presence and severity of voice disorders.",
"title": ""
},
{
"docid": "7899f347c75c1f5be28279b5afe2884c",
"text": "In 1985 Luca Cardelli and Peter Wegner, my advisor, published an ACM Computing Surveys paper called \"On understanding types, data abstraction, and polymorphism\". Their work kicked off a flood of research on semantics and type theory for object-oriented programming, which continues to this day. Despite 25 years of research, there is still widespread confusion about the two forms of data abstraction, abstract data types and objects. This essay attempts to explain the differences and also why the differences matter.",
"title": ""
},
{
"docid": "a120d11f432017c3080bb4107dd7ea71",
"text": "Over the last decade, the zebrafish has entered the field of cardiovascular research as a new model organism. This is largely due to a number of highly successful small- and large-scale forward genetic screens, which have led to the identification of zebrafish mutants with cardiovascular defects. Genetic mapping and identification of the affected genes have resulted in novel insights into the molecular regulation of vertebrate cardiac development. More recently, the zebrafish has become an attractive model to study the effect of genetic variations identified in patients with cardiovascular defects by candidate gene or whole-genome-association studies. Thanks to an almost entirely sequenced genome and high conservation of gene function compared with humans, the zebrafish has proved highly informative to express and study human disease-related gene variants, providing novel insights into human cardiovascular disease mechanisms, and highlighting the suitability of the zebrafish as an excellent model to study human cardiovascular diseases. In this review, I discuss recent discoveries in the field of cardiac development and specific cases in which the zebrafish has been used to model human congenital and acquired cardiac diseases.",
"title": ""
},
{
"docid": "5ac4ad85b5a8e04b60c2cc9f58bf6520",
"text": "Despite much recent interest in the clinical neuroscience of music processing, the cognitive organization of music as a domain of non-verbal knowledge has been little studied. Here we addressed this issue systematically in two expert musicians with clinical diagnoses of semantic dementia and Alzheimer's disease, in comparison with a control group of healthy expert musicians. In a series of neuropsychological experiments, we investigated associative knowledge of musical compositions (musical objects), musical emotions, musical instruments (musical sources) and music notation (musical symbols). These aspects of music knowledge were assessed in relation to musical perceptual abilities and extra-musical neuropsychological functions. The patient with semantic dementia showed relatively preserved recognition of musical compositions and musical symbols despite severely impaired recognition of musical emotions and musical instruments from sound. In contrast, the patient with Alzheimer's disease showed impaired recognition of compositions, with somewhat better recognition of composer and musical era, and impaired comprehension of musical symbols, but normal recognition of musical emotions and musical instruments from sound. The findings suggest that music knowledge is fractionated, and superordinate musical knowledge is relatively more robust than knowledge of particular music. We propose that music constitutes a distinct domain of non-verbal knowledge but shares certain cognitive organizational features with other brain knowledge systems. Within the domain of music knowledge, dissociable cognitive mechanisms process knowledge derived from physical sources and the knowledge of abstract musical entities.",
"title": ""
},
{
"docid": "edba1a911c12daf7f02440e22da7decd",
"text": "Big data has been acknowledged for its enormous potential. In contrast to the potential, in a recent survey more than half of financial service organizations reported that big data has not delivered the expected value. One of the main reasons for this is related to data quality. The objective of this research is to identify the antecedents of big data quality in financial institutions. This will help to understand how data quality from big data analysis can be improved. For this, a literature review was performed and data was collected using three case studies, followed by content analysis. The overall findings indicate that there are no fundamentally new data quality issues in big data projects. Nevertheless, the complexity of the issues is higher, which makes it harder to assess and attain data quality in big data projects compared to the traditional projects. Ten antecedents of big data quality were identified encompassing data, technology, people, process and procedure, organization, and external aspects.",
"title": ""
},
{
"docid": "c68cfa9402dcc2a79e7ab2a7499cc683",
"text": "Stereo-pair images obtained from two cameras can be used to compute three-dimensional (3D) world coordinates of a point using triangulation. However, to apply this method, camera calibration parameters for each camera need to be experimentally obtained. Camera calibration is a rigorous experimental procedure in which typically 12 parameters are to be evaluated for each camera. The general camera model is often such that the system becomes nonlinear and requires good initial estimates to converge to a solution. We propose that, for stereo vision applications in which real-world coordinates are to be evaluated, arti® cial neural networks be used to train the system such that the need for camera calibration is eliminated. The training set for our neural network consists of a variety of stereo-pair images and corresponding 3D world coordinates. We present the results obtained on our prototype mobile robot that employs two cameras as its sole sensors and navigates through simple regular obstacles in a high-contrast environment. We observe that the percentage errors obtained from our set-up are comparable with those obtained through standard camera calibration techniques and that the system is accurate enough for most machine-vision applications.",
"title": ""
},
{
"docid": "15800830f8774211d48110980d08478a",
"text": "This paper surveys the problem of navigation for autonomous underwater vehicles (AUVs). Marine robotics technology has undergone a phase of dramatic increase in capability in recent years. Navigation is one of the key challenges that limits our capability to use AUVs to address problems of critical importance to society. Good navigation information is essential for safe operation and recovery of an AUV. For the data gathered by an AUV to be of value, the location from which the data has been acquired must be accurately known. The three primary methods for navigation of AUVs are (1) dead-reckoning and inertial navigation systems, (2) acoustic navigation, and (3) geophysical navigation techniques. The current state-of-the-art in each of these areas is summarized, and topics for future research are suggested.",
"title": ""
},
{
"docid": "7d6c87baff95b89d975b98bcf8a132c0",
"text": "There is precisely one complete language processing system to date: the human brain. Though there is debate on how much built-in bias human learne rs might have, we definitely acquire language in a primarily unsupervised fashio n. On the other hand, computational approaches to language processing are almost excl usively supervised, relying on hand-labeled corpora for training. This reliance is largel y due to unsupervised approaches having repeatedly exhibited discouraging performance. In particular, the problem of learning syntax (grammar) from completely unannotated text has r eceived a great deal of attention for well over a decade, with little in the way of positive results. We argue that previous methods for this task have generally underperformed becaus of the representations they used. Overly complex models are easily distracted by non-sy ntactic correlations (such as topical associations), while overly simple models aren’t r ich enough to capture important first-order properties of language (such as directionality , adjacency, and valence). In this work, we describe several syntactic representation s and associated probabilistic models which are designed to capture the basic character of natural language syntax as directly as possible. First, we examine a nested, distribut ional method which induces bracketed tree structures. Second, we examine a dependency model which induces word-to-word dependency structures. Finally, we demonstrate that these two models perform better in combination than they do alone. With these representations , high-quality analyses can be learned from surprisingly little text, with no labeled exam ples, in several languages (we show experiments with English, German, and Chinese). Our re sults show above-baseline performance in unsupervised parsing in each of these langua ges. Grammar induction methods are useful since parsed corpora e xist for only a small number of languages. More generally, most high-level NLP tasks , uch as machine translation",
"title": ""
},
{
"docid": "ee4d5fae117d6af503ceb65707814c1b",
"text": "We investigate the use of syntactically related pairs of words for the task of text classification. The set of all pairs of syntactically related words should intuitively provide a better description of what a document is about, than the set of proximity-based N-grams or selective syntactic phrases. We generate syntactically related word pairs using a dependency parser. We experimented with Support Vector Machines and Decision Tree learners on the 10 most frequent classes from the Reuters-21578 corpus. Results show that syntactically related pairs of words produce better results in terms of accuracy and precision when used alone or combined with unigrams, compared to unigrams alone.",
"title": ""
},
{
"docid": "f9474c31ca20f7374e7e1c5216a52c63",
"text": "The score function estimator is widely used for estimating gradients of stochastic objectives in Stochastic Computation Graphs (SCG), e.g., in reinforcement learning and meta-learning. While deriving the first order gradient estimators by differentiating a surrogate loss (SL) objective is computationally and conceptually simple, using the same approach for higher order gradients is more challenging. Firstly, analytically deriving and implementing such estimators is laborious and not compliant with automatic differentiation. Secondly, repeatedly applying SL to construct new objectives for each order gradient involves increasingly cumbersome graph manipulations. Lastly, to match the first order gradient under differentiation, SL treats part of the cost as a fixed sample, which we show leads to missing and wrong terms for higher order gradient estimators. To address all these shortcomings in a unified way, we introduce DICE, which provides a single objective that can be differentiated repeatedly, generating correct gradient estimators of any order in SCGs. Unlike SL, DICE relies on automatic differentiation for performing the requisite graph manipulations. We verify the correctness of DICE both through a proof and through numerical evaluation of the DICE gradient estimates. We also use DICE to propose and evaluate a novel approach for multi-agent learning. Our code is available at https://goo.gl/xkkGxN.",
"title": ""
},
{
"docid": "2fcd7e151c658e29cacda5c4f5542142",
"text": "The connection between gut microbiota and energy homeostasis and inflammation and its role in the pathogenesis of obesity-related disorders are increasingly recognized. Animals models of obesity connect an altered microbiota composition to the development of obesity, insulin resistance, and diabetes in the host through several mechanisms: increased energy harvest from the diet, altered fatty acid metabolism and composition in adipose tissue and liver, modulation of gut peptide YY and glucagon-like peptide (GLP)-1 secretion, activation of the lipopolysaccharide toll-like receptor-4 axis, and modulation of intestinal barrier integrity by GLP-2. Instrumental for gut microbiota manipulation is the understanding of mechanisms regulating gut microbiota composition. Several factors shape the gut microflora during infancy: mode of delivery, type of infant feeding, hospitalization, and prematurity. Furthermore, the key importance of antibiotic use and dietary nutrient composition are increasingly recognized. The role of the Western diet in promoting an obesogenic gut microbiota is being confirmation in subjects. Following encouraging results in animals, several short-term randomized controlled trials showed the benefit of prebiotics and probiotics on insulin sensitivity, inflammatory markers, postprandial incretins, and glucose tolerance. Future research is needed to unravel the hormonal, immunomodulatory, and metabolic mechanisms underlying microbe-microbe and microbiota-host interactions and the specific genes that determine the health benefit derived from probiotics. While awaiting further randomized trials assessing long-term safety and benefits on clinical end points, a healthy lifestyle--including breast lactation, appropriate antibiotic use, and the avoidance of excessive dietary fat intake--may ensure a friendly gut microbiota and positively affect prevention and treatment of metabolic disorders.",
"title": ""
},
{
"docid": "b19c7bb10169646a4a08c9c3cac677c7",
"text": "Metropolitan Manila, Philippines is one of the regions at high risk from flooding. Pandacan, is one of the districts in the city of Manila, located at the south of Pasig River which is prone to flooding. The purpose of this project is to provide a standalone flood water level monitoring system for the community in Kahilom Street Pandacan, Manila. The system is constructed through the use of Arduino Uno, GSM shield and sensors that will be powered by a solar panel with generator. The early warning device will be the three LED that is mounted to a PVC pipe and then the system will send an SMS notification to the people in the community. The functionality of the system was tested by the simulation of flooding. The results provided that the objectives of the design satisfied the needs of the client.",
"title": ""
},
{
"docid": "22ad9bc66f0a9274fcf76697152bab4d",
"text": "We consider the recovery of a (real- or complex-valued) signal from magnitude-only measurements, known as phase retrieval. We formulate phase retrieval as a convex optimization problem, which we call PhaseMax. Unlike other convex methods that use semidefinite relaxation and lift the phase retrieval problem to a higher dimension, PhaseMax is a “non-lifting” relaxation that operates in the original signal dimension. We show that the dual problem to PhaseMax is basis pursuit, which implies that the phase retrieval can be performed using algorithms initially designed for sparse signal recovery. We develop sharp lower bounds on the success probability of PhaseMax for a broad range of random measurement ensembles, and we analyze the impact of measurement noise on the solution accuracy. We use numerical results to demonstrate the accuracy of our recovery guarantees, and we showcase the efficacy and limits of PhaseMax in practice.",
"title": ""
},
{
"docid": "2bc379517b4acfd0cb1257e056ca414d",
"text": "Many studies of creative cognition with a neuroimaging component now exist; what do they say about where and how creativity arises in the brain? We reviewed 45 brain-imaging studies of creative cognition. We found little clear evidence of overlap in their results. Nearly as many different tests were used as there were studies; this test diversity makes it impossible to interpret the different findings across studies with any confidence. Our conclusion is that creativity research would benefit from psychometrically informed revision, and the addition of neuroimaging methods designed to provide greater spatial localization of function. Without such revision in the behavioral measures and study designs, it is hard to see the benefit of imaging. We set out eight suggestions in a manifesto for taking creativity research forward.",
"title": ""
},
{
"docid": "2b53b125dc8c79322aabb083a9c991e4",
"text": "Geographical location is vital to geospatial applications like local search and event detection. In this paper, we investigate and improve on the task of text-based geolocation prediction of Twitter users. Previous studies on this topic have typically assumed that geographical references (e.g., gazetteer terms, dialectal words) in a text are indicative of its author’s location. However, these references are often buried in informal, ungrammatical, and multilingual data, and are therefore non-trivial to identify and exploit. We present an integrated geolocation prediction framework and investigate what factors impact on prediction accuracy. First, we evaluate a range of feature selection methods to obtain “location indicative words”. We then evaluate the impact of nongeotagged tweets, language, and user-declared metadata on geolocation prediction. In addition, we evaluate the impact of temporal variance on model generalisation, and discuss how users differ in terms of their geolocatability. We achieve state-of-the-art results for the text-based Twitter user geolocation task, and also provide the most extensive exploration of the task to date. Our findings provide valuable insights into the design of robust, practical text-based geolocation prediction systems.",
"title": ""
}
] |
scidocsrr
|
38e2d692ddd15c5eefda65e88040a84f
|
Fitting the mind to the world: face adaptation and attractiveness aftereffects.
|
[
{
"docid": "d6f322f4dd7daa9525f778ead18c8b5e",
"text": "Face perception, perhaps the most highly developed visual skill in humans, is mediated by a distributed neural system in humans that is comprised of multiple, bilateral regions. We propose a model for the organization of this system that emphasizes a distinction between the representation of invariant and changeable aspects of faces. The representation of invariant aspects of faces underlies the recognition of individuals, whereas the representation of changeable aspects of faces, such as eye gaze, expression, and lip movement, underlies the perception of information that facilitates social communication. The model is also hierarchical insofar as it is divided into a core system and an extended system. The core system is comprised of occipitotemporal regions in extrastriate visual cortex that mediate the visual analysis of faces. In the core system, the representation of invariant aspects is mediated more by the face-responsive region in the fusiform gyrus, whereas the representation of changeable aspects is mediated more by the face-responsive region in the superior temporal sulcus. The extended system is comprised of regions from neural systems for other cognitive functions that can be recruited to act in concert with the regions in the core system to extract meaning from faces.",
"title": ""
}
] |
[
{
"docid": "e352b5d4bfe4557b27e6caaddbc4da61",
"text": "This paper presents ILGM (the Infant Learning to Grasp Model), the first computational model of infant grasp learning that is constrained by the infant motor development literature. By grasp learning we mean learning how to make motor plans in response to sensory stimuli such that open-loop execution of the plan leads to a successful grasp. The open-loop assumption is justified by the behavioral evidence that early grasping is based on open-loop control rather than on-line visual feedback. Key elements of the infancy period, namely elementary motor schemas, the exploratory nature of infant motor interaction, and inherent motor variability are captured in the model. In particular we show, through computational modeling, how an existing behavior (reaching) yields a more complex behavior (grasping) through interactive goal-directed trial and error learning. Our study focuses on how the infant learns to generate grasps that match the affordances presented by objects in the environment. ILGM was designed to learn execution parameters for controlling the hand movement as well as for modulating the reach to provide a successful grasp matching the target object affordance. Moreover, ILGM produces testable predictions regarding infant motor learning processes and poses new questions to experimentalists.",
"title": ""
},
{
"docid": "71034fd57c81f5787eb1642e24b44b82",
"text": "A novel dual-band microstrip antenna with omnidirectional circularly polarized (CP) and unidirectional CP characteristic for each band is proposed in this communication. Function of dual-band dual-mode is realized based on loading with metamaterial structure. Since the fields of the fundamental modes are most concentrated on the fringe of the radiating patch, modifying the geometry of the radiating patch has little effect on the radiation patterns of the two modes (<formula formulatype=\"inline\"><tex Notation=\"TeX\">$n = 0, + 1$</tex></formula> mode). CP property for the omnidirectional zeroth-order resonance (<formula formulatype=\"inline\"><tex Notation=\"TeX\">$n = 0$</tex> </formula> mode) is achieved by employing curved branches in the radiating patch. Then a 45<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}$</tex> </formula> inclined rectangular slot is etched in the center of the radiating patch to excite the CP property for the <formula formulatype=\"inline\"><tex Notation=\"TeX\">$n = + 1$</tex></formula> mode. A prototype is fabricated to verify the properties of the antenna. Both simulation and measurement results illustrate that this single-feed antenna is valuable in wireless communication for its low-profile, radiation pattern selectivity and CP characteristic.",
"title": ""
},
{
"docid": "9b086872cad65b92237696ec3a48550f",
"text": "Memory-augmented neural networks (MANNs) refer to a class of neural network models equipped with external memory (such as neural Turing machines and memory networks). These neural networks outperform conventional recurrent neural networks (RNNs) in terms of learning long-term dependency, allowing them to solve intriguing AI tasks that would otherwise be hard to address. This paper concerns the problem of quantizing MANNs. Quantization is known to be effective when we deploy deep models on embedded systems with limited resources. Furthermore, quantization can substantially reduce the energy consumption of the inference procedure. These benefits justify recent developments of quantized multilayer perceptrons, convolutional networks, and RNNs. However, no prior work has reported the successful quantization of MANNs. The in-depth analysis presented here reveals various challenges that do not appear in the quantization of the other networks. Without addressing them properly, quantized MANNs would normally suffer from excessive quantization error which leads to degraded performance. In this paper, we identify memory addressing (specifically, content-based addressing) as the main reason for the performance degradation and propose a robust quantization method for MANNs to address the challenge. In our experiments, we achieved a computation-energy gain of 22× with 8-bit fixed-point and binary quantization compared to the floating-point implementation. Measured on the bAbI dataset, the resulting model, named the quantized MANN (Q-MANN), improved the error rate by 46% and 30% with 8-bit fixed-point and binary quantization, respectively, compared to the MANN quantized using conventional techniques.",
"title": ""
},
{
"docid": "80224e331eacb3d3e0ec3a35a0582341",
"text": "This paper describes a frequency-tunable phase inverter based on a slot-line resonator for the first time. The control circuit is designed and located on the defected ground. None of dc block capacitors are needed in the microstrip line. A wide tuning frequency range is accomplished by the use of the slot-line resonator with two varactors and a single control voltage. A 180-degree phase inverter is achieved by means of reversing electric field with two metallic via holes connecting the microstrip and ground plane. The graphic method is used to estimate the operation frequency. For verification, a frequency-tunable phase inverter is fabricated and measured. The measured results show a wide tuning frequency range from 1.1 GHz to 1.75 GHz with better than 20-dB return loss. The measured results are in good agreement with the simulated ones.",
"title": ""
},
{
"docid": "258fbd58fdd85eebd1602bfb25389adf",
"text": "This paper proposes a fast implementation method for the general matrix-vector multiplication (GEMV) routine, which is one of the level-2 Basic Linear Algebra Subprograms (BLAS) subroutines, for a column-major and non-transposed matrix on NVIDIA Kepler architecture graphics processing units (GPUs). We began by implementing the GEMV kernel using typical blocking techniques for shared-memory and register along with 128-bit vector load/store instructions. In our initial investigation, we found that even though the kernel could approach actual peak GPU throughput at some matrix sizes, performance fluctuates periodically depending on the problem size. In our next step, we investigated the reason for the fluctuations using a performance model based on a thread-block scheduling mechanism, and then created a method of determining optimal thread-block sizes that avoids those fluctuations. As the results show, when run on two Kepler architecture GPUs, our single-precision GEMV (SGEMV) routine achieved better performance in terms of both throughput and performance stability (with respect to the problem size) when compared to existing implementations: CUBLAS 6.5, MAGMA 1.4.1 and KBLAS 1.0. Our implementation techniques can be used not only for SGEMV but also double-precision (DGEMV), single-complex (CGEMV), and double-complex (ZGEMV). While this paper discusses primarily Kepler architecture, we also explore the performance of proposal implementation on Maxwell architecture, which is the next generation of Kepler architecture.",
"title": ""
},
{
"docid": "3833e548f316f7c4e93cb49ec278379e",
"text": "Computational thinking (CT) is increasingly seen as a core literacy skill for the modern world on par with the longestablished skills of reading, writing, and arithmetic. To promote the learning of CT at a young age we capitalized on children's interest in play. We designed RabBit EscApe, a board game that challenges children, ages 610, to orient tangible, magnetized manipulatives to complete or create paths. We also ran an informal study to investigate the effectiveness of the game in fostering children's problemsolving capacity during collaborative game play. We used the results to inform our instructional interaction design that we think will better support the learning activities and help children hone the involved CT skills. Overall, we believe in the power of such games to challenge children to grow their understanding of CT in a focused and engaging activity.",
"title": ""
},
{
"docid": "fe70e1e6a00fec08f768669f152fd9e4",
"text": "Numerous efforts in balancing the trade-off between power, area and performance have been done in the medium performance, medium power region of the design spectrum. However, not much study has been done at the two extreme ends of the design spectrum, namely the ultra-low power with acceptable performance at one end (the focus of this paper), and high performance with power within limit at the other. One solution to achieve the ultra-low power requirement is to operate the digital logic gates in subthreshold region. We analyze both CMOS and Pseudo-NMOS logic families operating in subthreshold region. We compare the results with CMOS in normal strong inversion region and with other known low-power logic, namely, energy recovery logic. Our results show an energy per switching reduction of two orders of magnitude for an 8x8 carry save array multiplier when it is operated in subthreshold region.",
"title": ""
},
{
"docid": "6e92948714000d3a35175d85bb0d20b0",
"text": "One important application in computer vision is detection of objects. This paper discusses detection of fingertips by using Histogram of Gradients (HOG) as the feature descriptor and Support Vector Machines (SVM) as the classifier. The SVM is trained to produce a classifier that is able to distinguish whether an image contains a fingertip or not. A total of 4200 images were collected by using a commercialgrade webcam, consisting of 2100 fingertip images and 2100 non-fingertip images, were used in the experiment. Our work evaluates the performance of the fingertip detection and the effects of the cell’s size of the HOG and the number of the training data have been studied. It has been found that as expected, the performance of the detection is improved as the number of training data is increased. Additionally, it has also been observed that the 10 x 10 size gives the best results in terms of accuracy in the detection. The highest classification accuracy obtained was less than 90%, which is thought mainly due to the changing orientation of the fingertip and quality of the images.",
"title": ""
},
{
"docid": "a027c9dd3b4522cdf09a2238bfa4c37e",
"text": "Distributed word representations, or word vectors, have recently been applied to many tasks in natural language processing, leading to state-of-the-art performance. A key ingredient to the successful application of these representations is to train them on very large corpora, and use these pre-trained models in downstream tasks. In this paper, we describe how we trained such high quality word representations for 157 languages. We used two sources of data to train these models: the free online encyclopedia Wikipedia and data from the common crawl project. We also introduce three new word analogy datasets to evaluate these word vectors, for French, Hindi and Polish. Finally, we evaluate our pre-trained word vectors on 10 languages for which evaluation datasets exists, showing very strong performance compared to previous models.",
"title": ""
},
{
"docid": "e2ea8ec9139837feb95ac432a63afe88",
"text": "Augmented and virtual reality have the potential of being indistinguishable from the real world. Holographic displays, including head mounted units, support this vision by creating rich stereoscopic scenes, with objects that appear to float in thin air - often within arm's reach. However, one has but to reach out and grasp nothing but air to destroy the suspension of disbelief. Snake-charmer is an attempt to provide physical form to virtual objects by revisiting the concept of Robotic Graphics or Encountered-type Haptic interfaces with current commodity hardware. By means of a robotic arm, Snake-charmer brings physicality to a virtual scene and explores what it means to truly interact with an object. We go beyond texture and position simulation and explore what it means to have a physical presence inside a virtual scene. We demonstrate how to render surface characteristics beyond texture and position, including temperature; how to physically move objects; and how objects can physically interact with the user's hand. We analyze our implementation, present the performance characteristics, and provide guidance for the construction of future physical renderers.",
"title": ""
},
{
"docid": "73905bf74f0f66c7a02aeeb9ab231d7b",
"text": "This paper presents an anthropomorphic robot hand called the Gifu hand II, which has a thumb and four fingers, all the joints of which are driven by servomotors built into the fingers and the palm. The thumb has four joints with four-degrees-of-freedom (DOF); the other fingers have four joints with 3-DOF; and two axes of the joints near the palm cross orthogonally at one point, as is the case in the human hand. The Gifu hand II can be equipped with six-axes force sensor at each fingertip and a developed distributed tactile sensor with 624 detecting points on its surface. The design concepts and the specifications of the Gifu hand II, the basic characteristics of the tactile sensor, and the pressure distributions at the time of object grasping are described and discussed herein. Our results demonstrate that the Gifu hand II has a high potential to perform dexterous object manipulations like the human hand.",
"title": ""
},
{
"docid": "bdd56cd8b9ec6dcdc6ff87fa5bed80ac",
"text": "The battery is a fundamental component of electric vehicles, which represent a step forward towards sustainable mobility. Lithium chemistry is now acknowledged as the technology of choice for energy storage in electric vehicles. However, several research points are still open. They include the best choice of the cell materials and the development of electronic circuits and algorithms for a more effective battery utilization. This paper initially reviews the most interesting modeling approaches for predicting the battery performance and discusses the demanding requirements and standards that apply to ICs and systems for battery management. Then, a general and flexible architecture for battery management implementation and the main techniques for state-of-charge estimation and charge balancing are reported. Finally, we describe the design and implementation of an innovative BMS, which incorporates an almost fully-integrated active charge equalizer.",
"title": ""
},
{
"docid": "473302a395c2e67b178b5ad982dd1fe5",
"text": "OBJECTIVE: To generate normative data on the Brief Test of Attention (BTA) across 11 countries in Latin America, with country-specific adjustments for gender, age, and education, where appropriate. METHOD: The sample consisted of 3,977 healthy adults who were recruited from Mexico, Argentina, Peru, Paraguay, Honduras, Chile, Cuba, Puerto Rico, Guatemala, El Salvador, and Bolivia. Each subject was administered the BTA as part of a larger neuropsychological battery. A standardized five-step statistical procedure was used to generate the norms. RESULTS: The final multiple linear regression models explained between 11–41% of the variance in BTA scores. Although men had higher scores on the BTA in Honduras, there were no other significant gender differences, and this one effect size was small. As a result, gender-adjusted norms were not generated. CONCLUSIONS: This is the first normative multicenter study conducted in Latin America to create norms for the BTA; this study will have an impact on the future practice of neuropsychology throughout Latin America.",
"title": ""
},
{
"docid": "f1086cf6c27d39ea5e4a4c9b2522c74f",
"text": "This paper talks about the relationship between conceptual metaphor and semantic motivation of English and Chinese idioms from three aspects, namely, structural metaphor, orientation metaphor and ontological metaphor. Based on that, the author puts forward applying conceptual metaphor theory to English and Chinese idiom teaching.",
"title": ""
},
{
"docid": "2d58165d463ba1bf14d07727652a10f1",
"text": "This is a report on parents who have children who exhibit gender variant behaviors and who contacted an affirmative program in the United States for assistance. All parents completed the Child Behavior Checklist, the Gender Identity Questionnaire, and the Genderism and Transphobia Scale, as well as telephone interviews. The parents reported comparatively low levels of genderism and transphobia. When compared to children at other gender identity clinics in Canada and The Netherlands, parents rated their children's gender variance as no less extreme, but their children were overall less pathological. Indeed, none of the measures in this study could predict parents' ratings of their child's pathology. These findings support the contention that this affirmative program served children who were no less gender variant than in other programs, but they were overall less distressed.",
"title": ""
},
{
"docid": "f30d5e78d169868484eca015d946bd88",
"text": "In Hong Kong and Macao, horse racing is the most famous gambling with a long history. This study proposes a novel approach to predict the horse racing results in Hong Kong. A three-years-long race records dataset obtained from Hong Kong Jockey Club was used for training a support-vector-machine-based committee machine. Bet suggestions could be made to gamblers by studying previous data though machine learning. In experiment, there are 2691 races and 33532 horse records obtained. Experiments focus on accuracy and return rate were conducted separately through constructing a committee machine. Experimental results showed that the accuracy and return rate achieve 70.86% and 800,000% respectively.",
"title": ""
},
{
"docid": "835fd7a4410590a3d848222eb3159aeb",
"text": "Modularity in organizations can facilitate the creation and development of dynamic capabilities. Paradoxically, however, modular management can also stifle the strategic potential of such capabilities by conflicting with the horizontal integration of units. We address these issues through an examination of how modular management of information technology (IT), project teams and front-line personnel in concert with knowledge management (KM) interventions influence the creation and development of dynamic capabilities at a large Asia-based call center. Our findings suggest that a full capitalization of the efficiencies created by modularity may be closely linked to the strategic sense making abilities of senior managers to assess the long-term business value of the dominant designs available in the market. Drawing on our analysis we build a modular management-KM-dynamic capabilities model, which highlights the evolution of three different levels of dynamic capabilities and also suggests an inherent complementarity between modular and integrated approaches. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "745a9e841e74f99b397d448678eaee2d",
"text": "The integration of large-scale PV systems to the grid is a growing trend in modern power systems. The cascaded H-bridge (CHB) converter is a suitable candidate for the grid interconnection due to its modular characteristics, high-quality output waveforms and capability of connecting to medium-voltage grids. However, the CHB converter requires isolated DC sources. In order to avoid the leakage currents caused by the high potential differences across the parasitic capacitance of the PV panels to ground, an isolated DC-DC conversion stage is required when the CHB topology is used. The objective of this paper is to compare two PV system configurations based on the CHB multilevel converter using two isolated DC-DC converter topologies, namely the boost-half-bridge (BHB) and the flyback, for their performance on providing isolation and achieving individual MPPT at the DC-DC power conversion stage of large-scale PV power systems. Simulation results from a 263 kW PV system based on a seven-level CHB converter with the two aforementioned isolated DC-DC converters are provided for comparison and evaluation with different input PV voltages.",
"title": ""
},
{
"docid": "c49ae120bca82ef0d9e94115ad7107f2",
"text": "An evaluation and comparison of seven of the world’s major building codes and standards is conducted in this study, with specific discussion of their estimations of the alongwind, acrosswind, and torsional response, where applicable, for a given building. The codes and standards highlighted by this study are those of the United States, Japan, Australia, the United Kingdom, Canada, China, and Europe. In addition, the response predicted by using the measured power spectra of the alongwind, acrosswind, and torsional responses for several building shapes tested in a wind tunnel are presented, and a comparison between the response predicted by wind tunnel data and that estimated by some of the standards is conducted. This study serves not only as a comparison of the response estimates by international codes and standards, but also introduces a new set of wind tunnel data for validation of wind tunnel-based empirical expressions. 1.0 Introduction Under the influence of dynamic wind loads, typical high-rise buildings oscillate in the alongwind, acrosswind, and torsional directions. The alongwind motion primarily results from pressure fluctuations on the windward and leeward faces, which generally follows the fluctuations in the approach flow, at least in the low frequency range. Therefore, alongwind aerodynamic loads may be quantified analytically utilizing quasi-steady and strip theories, with dynamic effects customarily represented by a random-vibrationbased “Gust Factor Approach” (Davenport 1967, Vellozzi & Cohen 1968, Vickery 1970, Simiu 1976, Solari 1982, ESDU 1989, Gurley & Kareem 1993). However, the acrosswind motion is introduced by pressure fluctuations on the side faces which are influenced by fluctuations in the separated shear layers and wake dynamics (Kareem 1982). This renders the applicability of strip and quasi-steady theories rather doubtful. Similarly, the wind-induced torsional effects result from an imbalance in the instantaneous pressure distribution on the building surface. These load effects are further amplified in asymmetric buildings as a result of inertial coupling (Kareem 1985). Due to the complexity of the acrosswind and torsional responses, physical modeling of fluid-structure interactions remains the only viable means of obtaining information on wind loads, though recently, research in the area of computational fluid dynam1. Graduate Student & Corresponding Author, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556. e-mail: Tracy.L.Kijewski.1@nd.edu 2. Professor, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556",
"title": ""
},
{
"docid": "2bee8125c2a8a1c85ab7f044e28e2191",
"text": "To achieve instantaneous control of induction motor torque using field-orientation techniques, it is necessary that the phase currents be controlled to maintain precise instantaneous relationships. Failure to do so results in a noticeable degradation in torque response. Most of the currently used approaches to achieve this control employ classical control strategies which are only correct for steady-state conditions. A modern control theory approach which circumvents these limitations is developed. The approach uses a state-variable feedback control model of the field-oriented induction machine. This state-variable controller is shown to be intrinsically more robust than PI regulators. Experimental verification of the performance of this state-variable control strategy in achieving current-loop performance and torque control at high operating speeds is included.",
"title": ""
}
] |
scidocsrr
|
122a0038f1eed490b7051d980be4e042
|
pTCP: An End-to-End Transport Layer Protocol for Striped Connections
|
[
{
"docid": "59021dcb134a2b25122b3be73243bea6",
"text": "The path taken by a packet traveling across the Internet depends on a large number of factors, including routing protocols and per-network routing policies. The impact of these factors on the end-to-end performance experienced by users is poorly understood. In this paper, we conduct a measurement-based study comparing the performance seen using the \"default\" path taken in the Internet with the potential performance available using some alternate path. Our study uses five distinct datasets containing measurements of \"path quality\", such as round-trip time, loss rate, and bandwidth, taken between pairs of geographically diverse Internet hosts. We construct the set of potential alternate paths by composing these measurements to form new synthetic paths. We find that in 30-80% of the cases, there is an alternate path with significantly superior quality. We argue that the overall result is robust and we explore two hypotheses for explaining it.",
"title": ""
}
] |
[
{
"docid": "4dbcb1c6f2e855fa3e7d1a491b108689",
"text": "Guaranteed tuple processing has become critically important for many streaming applications. This paper describes how we enabled IBM Streams, an enterprise-grade stream processing system, to provide data processing guarantees. Our solution goes from language-level abstractions to a runtime protocol. As a result, with a couple of simple annotations at the source code level, IBM Streams developers can define consistent regions, allowing any subgraph of their streaming application to achieve guaranteed tuple processing. At runtime, a consistent region periodically executes a variation of the Chandy-Lamport snapshot algorithm to establish a consistent global state for that region. The coupling of consistent states with data replay enables guaranteed tuple processing.",
"title": ""
},
{
"docid": "66e100e31b2c100d2428024513fc4953",
"text": "In order to make the search engine transfer information efficiently and accurately and do this optimization to improve the web search ranking, beginning with understanding the principle of search engine, this paper exports the specific explanation of search engine optimization. And then it introduces the new website building concepts and design concepts for the purpose of the construction of search engine optimization. Through an empirical research from the fields of the internal coding method, the website content realizable form and website overall architecture, the paper expounds search engine optimization tools, strategies and methods, and analysis the new thought that the enterprise and e-commerce sites with the search engine do the effective website promotion. And when the user through the search engine to search, the website can get a good rankings position in the search results, so as to improve the site traffic and finally enhance the website sales ability or advocacy capacity.",
"title": ""
},
{
"docid": "f72150d92ff4e0422ae44c3c21e8345e",
"text": "There has been a recent paradigm shift in robotics to data-driven learning for planning and control. Due to large number of experiences required for training, most of these approaches use a self-supervised paradigm: using sensors to measure success/failure. However, in most cases, these sensors provide weak supervision at best. In this work, we propose an adversarial learning framework that pits an adversary against the robot learning the task. In an effort to defeat the adversary, the original robot learns to perform the task with more robustness leading to overall improved performance. We show that this adversarial framework forces the robot to learn a better grasping model in order to overcome the adversary. By grasping 82% of presented novel objects compared to 68% without an adversary, we demonstrate the utility of creating adversaries. We also demonstrate via experiments that having robots in adversarial setting might be a better learning strategy as compared to having collaborative multiple robots. For supplementary video see: youtu.be/QfK3Bqhc6Sk",
"title": ""
},
{
"docid": "3e9aa3bcc728f8d735f6b02e0d7f0502",
"text": "Linda Marion is a doctoral student at Drexel University. E-mail: Linda.Marion@drexel.edu. Abstract This exploratory study examined 250 online academic librarian employment ads posted during 2000 to determine current requirements for technologically oriented jobs. A content analysis software program was used to categorize the specific skills and characteristics listed in the ads. The results were analyzed using multivariate analysis (cluster analysis and multidimensional scaling). The results, displayed in a three-dimensional concept map, indicate 19 categories comprised of both computer related skills and behavioral characteristics that can be interpreted along three continua: (1) technical skills to people skills; (2) long-established technologies and behaviors to emerging trends; (3) technical service competencies to public service competencies. There was no identifiable “digital librarian” category.",
"title": ""
},
{
"docid": "141d607eb8caeb7512f777ee3dea5972",
"text": "DBSCAN is a base algorithm for density based clustering. It can detect the clusters of different shapes and sizes from the large amount of data which contains noise and outliers. However, it is fail to handle the local density variation that exists within the cluster. In this paper, we propose a density varied DBSCAN algorithm which is capable to handle local density variation within the cluster. It calculates the growing cluster density mean and then the cluster density variance for any core object, which is supposed to be expended further, by considering density of its -neighborhood with respect to cluster density mean. If cluster density variance for a core object is less than or equal to a threshold value and also satisfying the cluster similarity index, then it will allow the core object for expansion. The experimental results show that the proposed clustering algorithm gives optimized results.",
"title": ""
},
{
"docid": "2adcf4db59bb321132a10445292d7fe9",
"text": "In this paper, we present work on learning analytics that aims to support learners and teachers through dashboard applications, ranging from small mobile applications to learnscapes on large public displays. Dashboards typically capture and visualize traces of learning activities, in order to promote awareness, reflection, and sense-making, and to enable learners to define goals and track progress toward these goals. Based on an analysis of our own work and a broad range of similar learning dashboards, we identify HCI issues for this exciting research area.",
"title": ""
},
{
"docid": "78ef7cd54c9b5aa41096d7496e433f69",
"text": "To meet the requirements of some intelligent vehicle monitoring system, the software integrates Global Position System (GPS), Geographic Information System (GIS) and Global System for Mobile communications (GSM) in the whole. The structure, network topology, functions, main technical features and their implementation principles of the system are introduced. Then hardware design of the vehicle terminal is given in short. Communication process and data transmission between the server and the client (relay server) and client through TCP/IP and UDP protocol are discussed in detail in this paper. Testing result using LoadRunner software is also analyzed. Practice shows the robustness of the software and feasibility of object-oriented programming.",
"title": ""
},
{
"docid": "6c504c7a69dba18e8cbc6a3678ab4b09",
"text": "This letter presents a compact model for flexible analog/RF circuits design with amorphous indium-gallium-zinc oxide thin-film transistors (TFTs). The model is based on the MOSFET LEVEL=3 SPICE model template, where parameters are fitted to measurements for both dc and ac characteristics. The proposed TFT compact model shows good scalability of the drain current for device channel lengths ranging from 50 to 3.6 μm. The compact model is validated by comparing measurements and simulations of various TFT amplifier circuits. These include a two-stage cascode amplifier showing 10 dB of voltage gain and 2.9 MHz of bandwidth.",
"title": ""
},
{
"docid": "422c0890804654613ea37fbf1186fda1",
"text": "Because of the distance between the skull and brain and their di erent resistivities, electroencephalographic (EEG) data collected from any point on the human scalp includes activity generated within a large brain area. This spatial smearing of EEG data by volume conduction does not involve signi cant time delays, however, suggesting that the Independent Component Analysis (ICA) algorithm of Bell and Sejnowski [1] is suitable for performing blind source separation on EEG data. The ICA algorithm separates the problem of source identi cation from that of source localization. First results of applying the ICA algorithm to EEG and event-related potential (ERP) data collected during a sustained auditory detection task show: (1) ICA training is insensitive to di erent random seeds. (2) ICA may be used to segregate obvious artifactual EEG components (line and muscle noise, eye movements) from other sources. (3) ICA is capable of isolating overlapping EEG phenomena, including alpha and theta bursts and spatially-separable ERP components, to separate ICA channels. (4) Nonstationarities in EEG and behavioral state can be tracked using ICA via changes in the amount of residual correlation between ICAltered output channels.",
"title": ""
},
{
"docid": "a286f9f594ef563ba082fb454eddc8bc",
"text": "The visual inspection of Mura defects is still a challenging task in the quality control of panel displays because of the intrinsically nonuniform brightness and blurry contours of these defects. The current methods cannot detect all Mura defect types simultaneously, especially small defects. In this paper, we introduce an accurate Mura defect visual inspection (AMVI) method for the fast simultaneous inspection of various Mura defect types. The method consists of two parts: an outlier-prejudging-based image background construction (OPBC) algorithm is proposed to quickly reduce the influence of image backgrounds with uneven brightness and to coarsely estimate the candidate regions of Mura defects. Then, a novel region-gradient-based level set (RGLS) algorithm is applied only to these candidate regions to quickly and accurately segment the contours of the Mura defects. To demonstrate the performance of AMVI, several experiments are conducted to compare AMVI with other popular visual inspection methods are conducted. The experimental results show that AMVI tends to achieve better inspection performance and can quickly and accurately inspect a greater number of Mura defect types, especially for small and large Mura defects with uneven backlight. Note to Practitioners—The traditional Mura visual inspection method can address only medium-sized Mura defects, such as region Mura, cluster Mura, and vertical-band Mura, and is not suitable for small Mura defects, for example, spot Mura. The proposed accurate Mura defect visual inspection (AMVI) method can accurately and simultaneously inspect not only medium-sized Mura defects but also small and large Mura defects. The proposed outlier-prejudging-based image background construction (OPBC) algorithm of the AMVI method is employed to improve the Mura true detection rate, while the proposed region-gradient-based level set (RGLS) algorithm is used to reduce the Mura false detection rate. Moreover, this method can be applied to online vision inspection: OPBC can be implemented in parallel processing units, while RGLS is applied only to the candidate regions of the inspected image. In addition, AMVI can be extended to other low-contrast defect vision inspection tasks, such as the inspection of glass, steel strips, and ceramic tiles.",
"title": ""
},
{
"docid": "cbe3a584e8fcabbd42f732b5fe247736",
"text": "Wall‐climbing welding robots (WCWRs) can replace workers in manufacturing and maintaining large unstructured equipment, such as ships. The adhesion mechanism is the key component of WCWRs. As it is directly related to the robot’s ability in relation to adsorbing, moving flexibly and obstacle‐passing. In this paper, a novel non‐contact adjustably magnetic adhesion mechanism is proposed. The magnet suckers are mounted under the robot’s axils and the sucker and wall are in non‐contact. In order to pass obstacles, the sucker and the wheel unit can be pulled up and pushed down by a lifting mechanism. The magnetic adhesion force can be adjusted by changing the height of the gap between the sucker and the wall by the lifting mechanism. In order to increase the adhesion force, the value of the sucker’s magnetic energy density (MED) is maximized by optimizing the magnet sucker’s structure parameters with a finite element method. Experiments prove that the magnetic adhesion mechanism has enough adhesion force and that the WCWR can complete wall‐climbing work within a large unstructured environment.",
"title": ""
},
{
"docid": "ede1f31a32e59d29ee08c64c1a6ed5f7",
"text": "There are different approaches to the problem of assigning each word of a text with a parts-of-speech tag, which is known as Part-Of-Speech (POS) tagging. In this paper we compare the performance of a few POS tagging techniques for Bangla language, e.g. statistical approach (n-gram, HMM) and transformation based approach (Brill’s tagger). A supervised POS tagging approach requires a large amount of annotated training corpus to tag properly. At this initial stage of POS-tagging for Bangla, we have very limited resource of annotated corpus. We tried to see which technique maximizes the performance with this limited resource. We also checked the performance for English and tried to conclude how these techniques might perform if we can manage a substantial amount of annotated corpus.",
"title": ""
},
{
"docid": "e9251977f62ce9dddf16730dff8e47cb",
"text": "INTRODUCTION AND OBJECTIVE\nCircumcision is one of the oldest surgical procedures and one of the most frequently performed worldwide. It can be done by many different techniques. This prospective series presents the results of Plastibell® circumcision in children older than 2 years of age, evaluating surgical duration, immediate and late complications, time for plastic device separation and factors associated with it.\n\n\nMATERIALS AND METHODS\nWe prospectively analyzed 119 children submitted to Plastic Device Circumcision with Plastibell® by only one surgeon from December 2009 to June 2011. In all cases the surgery was done under general anesthesia associated with dorsal penile nerve block. Before surgery length of the penis and latero-lateral diameter of the glans were measured. Surgical duration, time of Plastibell® separation and use of analgesic medication in the post-operative period were evaluated. Patients were followed on days 15, 45, 90 and 120 after surgery.\n\n\nRESULTS\nAge at surgery varied from 2 to 12.5 (5.9 ± 2.9) years old. Mean surgical time was 3.7 ± 2.0 minutes (1.9 to 9 minutes). Time for plastic device separation ranged from 6 to 26 days (mean: 16 ± 4.2 days), being 14.8 days for children younger than 5 years of age and 17.4 days for those older than 5 years of age (p < 0.0001). The diameter of the Plastibell® does not interfered in separations time (p = 0,484). Late complications occurred in 32 (26.8%) subjects, being the great majority of low clinical significance, especially prepucial adherences, edema of the mucosa and discrete hypertrophy of the scar, all resolving with clinical treatment. One patient still using diaper had meatus stenosis and in one case the Plastibell® device stayed between the glans and the prepuce and needed to be removed manually.\n\n\nCONCLUSIONS\nCircumcision using a plastic device is a safe, quick and an easy technique with low complications, that when occur are of low clinical importance and of easy resolution. The mean time for the device to fall is shorter in children under 6 years of age and it is not influenced by the diameter of the device.",
"title": ""
},
{
"docid": "a2c93e5497ab4e0317b9e86db6d31dbb",
"text": "Digital photographs are often used in treatment monitoring for home care of less advanced pressure ulcers. We investigated assessment agreement when stage III and IV pressure ulcers in individuals with spinal cord injury were evaluated in person and with the use of digital photographs. Two wound-care nurses assessed 31 wounds among 15 participants. One nurse assessed all wounds in person, while the other used digital photographs. Twenty-four wound description categories were applied in the nurses' assessments. Kappa statistics were calculated to investigate agreement beyond chance (p < or = 0.05). For 10 randomly selected \"double-rated wounds,\" both nurses applied both assessment methods. Fewer categories were evaluated for the double-rated wounds, because some categories were chosen infrequently and agreement could not be measured. Interrater agreement with the two methods was observed for 12 of the 24 categories (50.0%). However, of the 12 categories with agreement beyond chance, agreement was only \"slight\" (kappa = 0-0.20) or \"fair\" (kappa = 0.21-0.40) for 6 categories. The highest agreement was found for the presence of undermining (kappa = 0.853, p < 0.001). Interrater agreement was similar to intramethod agreement (41.2% of the categories demonstrated agreement beyond chance) for the nurses' in-person assessment of the double-rated wounds. The moderate agreement observed may be attributed to variation in subjective perception of qualitative wound characteristics.",
"title": ""
},
{
"docid": "08c0044fa878bfcbe7f0316871fa4bf6",
"text": "The innate immune system evolved several strategies of self/nonself discrimination that are based on the recognition of molecular patterns demarcating infectious nonself, as well as normal and abnormal self. These patterns are deciphered by receptors that either induce or inhibit an immune response, depending on the meaning of these signals.",
"title": ""
},
{
"docid": "e74240aef79f42ac0345a2ae49ecde4a",
"text": "Existing automatic music generation approaches that feature deep learning can be broadly classified into two types: raw audio models and symbolic models. Symbolic models, which train and generate at the note level, are currently the more prevalent approach; these models can capture long-range dependencies of melodic structure, but fail to grasp the nuances and richness of raw audio generations. Raw audio models, such as DeepMind’s WaveNet, train directly on sampled audio waveforms, allowing them to produce realistic-sounding, albeit unstructured music. In this paper, we propose an automatic music generation methodology combining both of these approaches to create structured, realistic-sounding compositions. We consider a Long Short Term Memory network to learn the melodic structure of different styles of music, and then use the unique symbolic generations from this model as a conditioning input to a WaveNet-based raw audio generator, creating a model for automatic, novel music. We then evaluate this approach by showcasing results of this work.",
"title": ""
},
{
"docid": "9866baa32279757366e6e8d8e105e9e2",
"text": "Deep learning is a multilayer neural network learning algorithm which emerged in recent years. It has brought a new wave to machine learning, and making artificial intelligence and human-computer interaction advance with big strides. We applied deep learning to handwritten character recognition, and explored the two mainstream algorithm of deep learning: the Convolutional Neural Network (CNN) and the Deep Belief NetWork (DBN). We conduct the performance evaluation for CNN and DBN on the MNIST database and the real-world handwritten character database. The classification accuracy rate of CNN and DBN on the MNIST database is 99.28% and 98.12% respectively, and on the real-world handwritten character database is 92.91% and 91.66% respectively. The experiment results show that deep learning does have an excellent feature learning ability. It don't need to extract features manually. Deep learning can learn more nature features of the data.",
"title": ""
},
{
"docid": "d0e45de6baf9665123a43a21d25c18c2",
"text": "This paper studies the problem of computing optimal journeys in dynamic public transit networks. We introduce a novel algorithmic framework, called Connection Scan Algorithm (CSA), to compute journeys. It organizes data as a single array of connections, which it scans once per query. Despite its simplicity, our algorithm is very versatile. We use it to solve earliest arrival and multi-criteria profile queries. Moreover, we extend it to handle the minimum expected arrival time (MEAT) problem, which incorporates stochastic delays on the vehicles and asks for a set of (alternative) journeys that in its entirety minimizes the user’s expected arrival time at the destination. Our experiments on the dense metropolitan network of London show that CSA computes MEAT queries, our most complex scenario, in 272ms on average.",
"title": ""
},
{
"docid": "ef1ff363769f0b206222d6e14fda95d5",
"text": "In this paper, we propose a novel benchmark for evaluating local image descriptors. We demonstrate that the existing datasets and evaluation protocols do not specify unambiguously all aspects of evaluation, leading to ambiguities and inconsistencies in results reported in the literature. Furthermore, these datasets are nearly saturated due to the recent improvements in local descriptors obtained by learning them from large annotated datasets. Therefore, we introduce a new large dataset suitable for training and testing modern descriptors, together with strictly defined evaluation protocols in several tasks such as matching, retrieval and classification. This allows for more realistic, and thus more reliable comparisons in different application scenarios. We evaluate the performance of several state-of-the-art descriptors and analyse their properties. We show that a simple normalisation of traditional hand-crafted descriptors can boost their performance to the level of deep learning based descriptors within a realistic benchmarks evaluation.",
"title": ""
},
{
"docid": "338a998da4a1d3cd8b491c893f51bd18",
"text": "Class imbalance (i.e., scenarios in which classes are unequally represented in the training data) occurs in many real-world learning tasks. Yet despite its practical importance, there is no established theory of class imbalance, and existing methods for handling it are therefore not well motivated. In this work, we approach the problem of imbalance from a probabilistic perspective, and from this vantage identify dataset characteristics (such as dimensionality, sparsity, etc.) that exacerbate the problem. Motivated by this theory, we advocate the approach of bagging an ensemble of classifiers induced over balanced bootstrap training samples, arguing that this strategy will often succeed where others fail. Thus in addition to providing a theoretical understanding of class imbalance, corroborated by our experiments on both simulated and real datasets, we provide practical guidance for the data mining practitioner working with imbalanced data.",
"title": ""
}
] |
scidocsrr
|
8aacbe8f692ee331c2f3a716ca37e918
|
Hosting via Airbnb: Motivations and Financial Assurances in Monetized Network Hospitality
|
[
{
"docid": "9b176a25a16b05200341ac54778a8bfc",
"text": "This paper reports on a study of motivations for the use of peer-to-peer or sharing economy services. We interviewed both users and providers of these systems to obtain different perspectives and to determine if providers are matching their system designs to the most important drivers of use. We found that the motivational models implicit in providers' explanations of their systems' designs do not match well with what really seems to motivate users. Providers place great emphasis on idealistic motivations such as creating a better community and increasing sustainability. Users, on the other hand are looking for services that provide what they need whilst increasing value and convenience. We discuss the divergent models of providers and users and offer design implications for peer system providers.",
"title": ""
},
{
"docid": "3104e4ec0fe50f5499f219961c6d3c61",
"text": "Online marketplaces often contain information not only about products, but also about the people selling the products. In an effort to facilitate trust, many platforms encourage sellers to provide personal profiles and even to post pictures of themselves. However, these features may also facilitate discrimination based on sellers’ race, gender, age, or other aspects of appearance. In this paper, we test for racial discrimination against landlords in the online rental marketplace Airbnb.com. Using a new data set combining pictures of all New York City landlords on Airbnb with their rental prices and information about quality of the rentals, we show that non-black hosts charge approximately 12% more than black hosts for the equivalent rental. These effects are robust when controlling for all information visible in the Airbnb marketplace. These findings highlight the prevalence of discrimination in online marketplaces, suggesting an important unintended consequence of a seemingly-routine mechanism for building trust. 1 Harvard Business School, bedelman@hbs.edu 2 Harvard Business School, mluca@hbs.edu",
"title": ""
},
{
"docid": "84ece888e2302d13775973f552c6b810",
"text": "We present a qualitative study of hospitality exchange processes that take place via the online peer-to-peer platform Airbnb. We explore 1) what motivates individuals to monetize network hospitality and 2) how the presence of money ties in with the social interaction related to network hospitality. We approach the topic from the perspective of hosts -- that is, Airbnb users who participate by offering accommodation for other members in exchange for monetary compensation. We found that participants were motivated to monetize network hospitality for both financial and social reasons. Our analysis indicates that the presence of money can provide a helpful frame for network hospitality, supporting hosts in their efforts to accomplish desired sociability, select guests consistent with their preferences, and control the volume and type of demand. We conclude the paper with a critical discussion of the implications of our findings for network hospitality and, more broadly, for the so-called sharing economy.",
"title": ""
}
] |
[
{
"docid": "f69723ed73c7edd9856883bbb086ed0c",
"text": "An algorithm for license plate recognition (LPR) applied to the intelligent transportation system is proposed on the basis of a novel shadow removal technique and character recognition algorithms. This paper has two major contributions. One contribution is a new binary method, i.e., the shadow removal method, which is based on the improved Bernsen algorithm combined with the Gaussian filter. Our second contribution is a character recognition algorithm known as support vector machine (SVM) integration. In SVM integration, character features are extracted from the elastic mesh, and the entire address character string is taken as the object of study, as opposed to a single character. This paper also presents improved techniques for image tilt correction and image gray enhancement. Our algorithm is robust to the variance of illumination, view angle, position, size, and color of the license plates when working in a complex environment. The algorithm was tested with 9026 images, such as natural-scene vehicle images using different backgrounds and ambient illumination particularly for low-resolution images. The license plates were properly located and segmented as 97.16% and 98.34%, respectively. The optical character recognition system is the SVM integration with different character features, whose performance for numerals, Kana, and address recognition reached 99.5%, 98.6%, and 97.8%, respectively. Combining the preceding tests, the overall performance of success for the license plate achieves 93.54% when the system is used for LPR in various complex conditions.",
"title": ""
},
{
"docid": "03625364ccde0155f2c061b47e3a00b8",
"text": "The computation of selectional preferences, the admissible argument values for a relation, is a well-known NLP task with broad applicability. We present LDA-SP, which utilizes LinkLDA (Erosheva et al., 2004) to model selectional preferences. By simultaneously inferring latent topics and topic distributions over relations, LDA-SP combines the benefits of previous approaches: like traditional classbased approaches, it produces humaninterpretable classes describing each relation’s preferences, but it is competitive with non-class-based methods in predictive power. We compare LDA-SP to several state-ofthe-art methods achieving an 85% increase in recall at 0.9 precision over mutual information (Erk, 2007). We also evaluate LDA-SP’s effectiveness at filtering improper applications of inference rules, where we show substantial improvement over Pantel et al.’s system (Pantel et al., 2007).",
"title": ""
},
{
"docid": "4a37742db1c55b877733f53ea95ee3c6",
"text": "This paper presents an overview of an intelligence platform we have built to address threat hunting and incident investigation use-cases in the cyber security domain. Specifically, we focus on User and Entity Behavior Analytics (UEBA) modules that track and monitor behaviors of users, IP addresses and devices in an enterprise. Anomalous behavior is automatically detected using machine learning algorithms based on Singular Values Decomposition (SVD). Such anomalous behavior indicative of potentially malicious activity is alerted to analysts with relevant contextual information for further investigation and action. We provide a detailed description of the models, algorithms and implementation underlying the module and demonstrate the functionality with empirical examples.",
"title": ""
},
{
"docid": "ef2996a04c819777cc4b88c47f502c21",
"text": "Bioprinting is an emerging technology for constructing and fabricating artificial tissue and organ constructs. This technology surpasses the traditional scaffold fabrication approach in tissue engineering (TE). Currently, there is a plethora of research being done on bioprinting technology and its potential as a future source for implants and full organ transplantation. This review paper overviews the current state of the art in bioprinting technology, describing the broad range of bioprinters and bioink used in preclinical studies. Distinctions between laser-, extrusion-, and inkjet-based bioprinting technologies along with appropriate and recommended bioinks are discussed. In addition, the current state of the art in bioprinter technology is reviewed with a focus on the commercial point of view. Current challenges and limitations are highlighted, and future directions for next-generation bioprinting technology are also presented. [DOI: 10.1115/1.4028512]",
"title": ""
},
{
"docid": "cdcd2a627b1d7d94adc1bfa831667cf7",
"text": "Solving mazes is not just a fun pastime: They are prototype models in several areas of science and technology. However, when maze complexity increases, their solution becomes cumbersome and very time consuming. Here, we show that a network of memristors--resistors with memory--can solve such a nontrivial problem quite easily. In particular, maze solving by the network of memristors occurs in a massively parallel fashion since all memristors in the network participate simultaneously in the calculation. The result of the calculation is then recorded into the memristors' states and can be used and/or recovered at a later time. Furthermore, the network of memristors finds all possible solutions in multiple-solution mazes and sorts out the solution paths according to their length. Our results demonstrate not only the application of memristive networks to the field of massively parallel computing, but also an algorithm to solve mazes, which could find applications in different fields.",
"title": ""
},
{
"docid": "785c716d4f127a5a5fee02bc29aeb352",
"text": "In this paper we propose a novel, improved, phase generated carrier (PGC) demodulation algorithm based on the PGC-differential-cross-multiplying approach (PGC-DCM). The influence of phase modulation amplitude variation and light intensity disturbance (LID) on traditional PGC demodulation algorithms is analyzed theoretically and experimentally. An experimental system for remote no-contact microvibration measurement is set up to confirm the stability of the improved PGC algorithm with LID. In the experiment, when the LID with a frequency of 50 Hz and the depth of 0.3 is applied, the signal-to-noise and distortion ratio (SINAD) of the improved PGC algorithm is 19 dB, higher than the SINAD of the PGC-DCM algorithm, which is 8.7 dB.",
"title": ""
},
{
"docid": "a8e8bbe19ed505b3e1042783e5e363d6",
"text": "We study the topology of e-mail networks with e-mail addresses as nodes and e-mails as links using data from server log files. The resulting network exhibits a scale-free link distribution and pronounced small-world behavior, as observed in other social networks. These observations imply that the spreading of e-mail viruses is greatly facilitated in real e-mail networks compared to random architectures.",
"title": ""
},
{
"docid": "25b250495fd4989ce1a365d5ddaa526e",
"text": "Supervised automation of selected subtasks in Robot-Assisted Minimally Invasive Surgery (RMIS) has potential to reduce surgeon fatigue, operating time, and facilitate tele-surgery. Tumor resection is a multi-step multilateral surgical procedure to localize, expose, and debride (remove) a subcutaneous tumor, then seal the resulting wound with surgical adhesive. We developed a finite state machine using the novel devices to autonomously perform the tumor resection. The first device is an interchangeable instrument mount which uses the jaws and wrist of a standard RMIS gripping tool to securely hold and manipulate a variety of end-effectors. The second device is a fluid injection system that can facilitate precision delivery of material such as chemotherapy, stem cells, and surgical adhesives to specific targets using a single-use needle attached using the interchangeable instrument mount. Fluid flow through the needle is controlled via an externallymounted automated lead screw. Initial experiments suggest that an automated Intuitive Surgical dVRK system which uses these devices combined with a palpation probe and sensing model described in a previous paper can successfully complete the entire procedure in five of ten trials. We also show the most common failure phase, debridement, can be improved with visual feedback. Design details and video are available at: http://berkeleyautomation.github.io/surgical-tools.",
"title": ""
},
{
"docid": "e78a652a865494e4d05ad80d8a37224f",
"text": "This paper focuses on mechanical property of a articulated multi-unit wheel type in-pipe locomotion robot system. Through establishing the posture model of the robot system, can get the coordinates of wheel center and its corresponding contact point with pipe wall of each wheel for robot unit. Based on the posture model, the mechanical model of the robot unit is presented and the analysis is carried out in details. To confirm the effectiveness of the proposed theoretical analysis, an example about statics of the pipe robot is calculated, and the calculation results basically reflect the actual characteristics of the pipe robot. This provide theoretical basis for the selection of driving mechanism design and control mode of wheel type pipe robot.",
"title": ""
},
{
"docid": "58f1ba92eb199f4d105bf262b30dbbc5",
"text": "Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, and so on). One of such approaches is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned resorting to weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category. Thus, discovering and modeling these patterns are critical to improve the recognition performance in this representation. Since the emergence of large data sets, such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from the data. In this paper, we address many limitations of the original SM approach and related works. We propose discriminative patch representations using neural networks and further propose a hybrid architecture in which the semantic manifold is built on top of multiscale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. To combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the optimization problem, we analyze global and local approaches, where a top–down hierarchical algorithm has the best performance. Experimental results show that exploiting different types of contextual relations jointly consistently improves the recognition accuracy.",
"title": ""
},
{
"docid": "e6d6d3b41a0914036a77d5d151d745a8",
"text": "Only within the past decade has the potential of metal biosorption by biomass materials been well established. For economic reasons, of particular interest are abundant biomass types generated as a waste byproduct of large-scale industrial fermentations or certain metal-binding algae found in large quantities in the sea. These biomass types serve as a basis for newly developed metal biosorption processes foreseen particularly as a very competitive means for the detoxification of metal-bearing industrial effluents. The assessment of the metal-binding capacity of some new biosorbents is discussed. Lead and cadmium, for instance, have been effectively removed from very dilute solutions by the dried biomass of some ubiquitous species of brown marine algae such as Ascophyllum and Sargassum, which accumulate more than 30% of biomass dry weight in the metal. Mycelia of the industrial steroid-transforming fungi Rhizopus and Absidia are excellent biosorbents for lead, cadmium, copper, zinc, and uranium and also bind other heavy metals up to 25% of the biomass dry weight. Biosorption isotherm curves, derived from equilibrium batch sorption experiments, are used in the evaluation of metal uptake by different biosorbents. Further studies are focusing on the assessment of biosorbent performance in dynamic continuous-flow sorption systems. In the course of this work, new methodologies are being developed that are aimed at mathematical modeling of biosorption systems and their effective optimization. Elucidation of mechanisms active in metal biosorption is essential for successful exploitation of the phenomenon and for regeneration of biosorbent materials in multiple reuse cycles. The complex nature of biosorbent materials makes this task particularly challenging. Discussion focuses on the composition of marine algae polysaccharide structures, which seem instrumental in metal uptake and binding. The state of the art in the field of biosorption is reviewed in this article, with many references to recent reviews and key individual contributions.",
"title": ""
},
{
"docid": "5228454ef59c012b079885b2cce0c012",
"text": "As a contribution to the HICSS 50 Anniversary Conference, we proposed a new mini-track on Text Mining in Big Data Analytics. This mini-track builds on the successful HICSS Workshop on Text Mining and recognizes the growing importance of unstructured text as a data source for descriptive and predictive analytics in research on collaboration systems and technologies. In this initial iteration of the mini-track, we have accepted three papers that cover conceptual issues, methodological approaches to social media, and the development of categorization models and dictionaries useful in a corporate context. The minitrack highlights the potential of an interdisciplinary research community within the HICSS collaboration systems and technologies track.",
"title": ""
},
{
"docid": "6c17e311ff57efd4cce31416bf6ace54",
"text": "Demands faced by health care professionals include heavy caseloads, limited control over the work environment, long hours, as well as organizational structures and systems in transition. Such conditions have been directly linked to increased stress and symptoms of burnout, which in turn, have adverse consequences for clinicians and the quality of care that is provided to patients. Consequently, there exists an impetus for the development of curriculum aimed at fostering wellness and the necessary self-care skills for clinicians. This review will examine the potential benefits of mindfulness-based stress reduction (MBSR) programs aimed at enhancing well-being and coping with stress in this population. Empirical evidence indicates that participation in MBSR yields benefits for clinicians in the domains of physical and mental health. Conceptual and methodological limitations of the existing studies and suggestions for future research are discussed.",
"title": ""
},
{
"docid": "988503b179c3b49d60f2e9eeda6c45c2",
"text": "Previous work has shown that high quality phrasal paraphrases can be extracted from bilingual parallel corpora. However, it is not clear whether bitexts are an appropriate resource for extracting more sophisticated sentential paraphrases, which are more obviously learnable from monolingual parallel corpora. We extend bilingual paraphrase extraction to syntactic paraphrases and demonstrate its ability to learn a variety of general paraphrastic transformations, including passivization, dative shift, and topicalization. We discuss how our model can be adapted to many text generation tasks by augmenting its feature set, development data, and parameter estimation routine. We illustrate this adaptation by using our paraphrase model for the task of sentence compression and achieve results competitive with state-of-the-art compression systems.",
"title": ""
},
{
"docid": "09c9a0990946fd884df70d4eeab46ecc",
"text": "Studies of technological change constitute a field of growing importance and sophistication. In this paper we contribute to the discussion with a methodological reflection and application of multi-stage patent citation analysis for the mea surement of inventive progress. Investigating specific patterns of patent citation data, we conclude that single-stage citation analysis cannot reveal technological paths or linea ges. Therefore, one should also make use of indirect citations and bibliographical coupling. To measure aspects of cumulative inventive progress, we develop a “shared specialization measu r ” of patent families. We relate this measure to an expert rating of the technological va lue dded in the field of variable valve actuation for internal combustion engines. In sum, the study presents promising evidence for multi-stage patent citation analysis in order to ex plain aspects of technological change. JEL classification: O31",
"title": ""
},
{
"docid": "7adb0a3079fb3b64f7a503bd8eae623e",
"text": "Attack trees have found their way to practice because they have proved to be an intuitive aid in threat analysis. Despite, or perhaps thanks to, their apparent simplicity, they have not yet been provided with an unambiguous semantics. We argue that such a formal interpretation is indispensable to precisely understand how attack trees can be manipulated during construction and analysis. We provide a denotational semantics, based on a mapping to attack suites, which abstracts from the internal structure of an attack tree, we study transformations between attack trees, and we study the attribution and projection of an attack tree.",
"title": ""
},
{
"docid": "745a89e24f439b6f31cdadea25386b17",
"text": "Developmental imaging studies show that cortical grey matter decreases in volume during childhood and adolescence. However, considerably less research has addressed the development of subcortical regions (caudate, putamen, pallidum, accumbens, thalamus, amygdala, hippocampus and the cerebellar cortex), in particular not in longitudinal designs. We used the automatic labeling procedure in FreeSurfer to estimate the developmental trajectories of the volume of these subcortical structures in 147 participants (age 7.0-24.3years old, 94 males; 53 females) of whom 53 participants were scanned twice or more. A total of 223 magnetic resonance imaging (MRI) scans (acquired at 1.5-T) were analyzed. Substantial diversity in the developmental trajectories was observed between the different subcortical gray matter structures: the volume of caudate, putamen and nucleus accumbens decreased with age, whereas the volume of hippocampus, amygdala, pallidum and cerebellum showed an inverted U-shaped developmental trajectory. The thalamus showed an initial small increase in volume followed by a slight decrease. All structures had a larger volume in males than females over the whole age range, except for the cerebellum that had a sexually dimorphic developmental trajectory. Thus, subcortical structures appear to not yet be fully developed in childhood, similar to the cerebral cortex, and continue to show maturational changes into adolescence. In addition, there is substantial heterogeneity between the developmental trajectories of these structures.",
"title": ""
},
{
"docid": "9cd7c945291db3fc0cc0ece4cf03a186",
"text": "Coronary angiography is considered to be a safe tool for the evaluation of coronary artery disease and perform in approximately 12 million patients each year worldwide. [1] In most cases, angiograms are manually analyzed by a cardiologist. Actually, there are no clinical practice algorithms which could improve and automate this work. Neural networks show high efficiency in tasks of image analysis and they can be used for the analysis of angiograms and facilitate diagnostics. We have developed an algorithm based on Convolutional Neural Network and Neural Network U-Net [2] for vessels segmentation and defects detection such as stenosis. For our research we used anonymized angiography data obtained from one of the city’s hospitals and augmented them to improve learning efficiency. U-Net usage provided high quality segmentation and the combination of our algorithm with an ensemble of classifiers shows a good accuracy in the task of ischemia evaluation on test data. Subsequently, this approach can be served as a basis for the creation of an analytical system that could speed up the diagnosis of cardiovascular diseases and greatly facilitate the work of a specialist.",
"title": ""
},
{
"docid": "369c90f09cb52b7cff76f03ae99861f1",
"text": "The paper proposes a classification scheme for the roles of citations in empirical studies from the social sciences and related fields. The use of the classification, which has eight categories, is illustrated in sociology, education, demography, epidemiology and librarianship; its association with the citations' location within the paper is presented. The question of repeated citations of the same document is discussed. Several research questions to which this classification is relevant are proposed. The need for further critique, validation and experimentation is pointed out.",
"title": ""
}
] |
scidocsrr
|
ac2786525edcd616da5902880241dbe5
|
An overview of the HDF5 technology suite and its applications
|
[
{
"docid": "c5e37e68f7a7ce4b547b10a1888cf36f",
"text": "SciDB [4, 3] is a new open-source data management system intended primarily for use in application domains that involve very large (petabyte) scale array data; for example, scientific applications such as astronomy, remote sensing and climate modeling, bio-science information management, risk management systems in financial applications, and the analysis of web log data. In this talk we will describe our set of motivating examples and use them to explain the features of SciDB. We then briefly give an overview of the project 'in flight', explaining our novel storage manager, array data model, query language, and extensibility frameworks.",
"title": ""
}
] |
[
{
"docid": "9e8210e2030b78ea40f211f05359e5be",
"text": "Understanding the goals or intentions of other people requires a broad range of evaluative processes including the decoding of biological motion, knowing about object properties, and abilities for recognizing task space requirements and social contexts. It is becoming increasingly evident that some of this decoding is based in part on the simulation of other people's behavior within our own nervous system. This review focuses on aspects of action understanding that rely on embodied cognition, that is, the knowledge of the body and how it interacts with the world. This form of cognition provides an essential knowledge base from which action simulation can be used to decode at least some actions performed by others. Recent functional imaging studies or action understanding are interpreted with a goal of defining conditions when simulation operations occur and how this relates with other constructs, including top-down versus bottom-up processing and the functional distinctions between action observation and social networks. From this it is argued that action understanding emerges from the engagement of highly flexible computational hierarchies driven by simulation, object properties, social context, and kinematic constraints and where the hierarchy is driven by task structure rather than functional or strict anatomic rules.",
"title": ""
},
{
"docid": "ca990b1b43ca024366a2fe73e2a21dae",
"text": "Guanabenz (2,6-dichlorobenzylidene-amino-guanidine) is a centrally acting antihypertensive drug whose mechanism of action is via alpha2 adrenoceptors or, more likely, imidazoline receptors. Guanabenz is marketed as an antihypertensive agent in human medicine (Wytensin tablets, Wyeth Pharmaceuticals). Guanabenz has reportedly been administered to racing horses and is classified by the Association of Racing Commissioners International as a class 3 foreign substance. As such, its identification in a postrace sample may result in significant sanctions against the trainer of the horse. The present study examined liquid chromatographic/tandem quadrupole mass spectrometric (LC-MS/MS) detection of guanabenz in serum samples from horses treated with guanabenz by rapid i.v. injection at 0.04 and 0.2 mg/kg. Using a method adapted from previous work with clenbuterol, the parent compound was detected in serum with an apparent limit of detection of approximately 0.03 ng/ml and the limit of quantitation was 0.2 ng/ml. Serum concentrations of guanabenz peaked at approximately 100 ng/ml after the 0.2 mg/kg dose, and the parent compound was detected for up to 8 hours after the 0.04 mg/kg dose. Urine samples tested after administration of guanabenz at these dosages yielded evidence of at least one glucuronide metabolite, with the glucuronide ring apparently linked to a ring hydroxyl group or a guanidinium hydroxylamine. The LC-MS/MS results presented here form the basis of a confirmatory test for guanabenz in racing horses.",
"title": ""
},
{
"docid": "993a81f3b0ea8bbf255209d240bbaa56",
"text": "Fingerprints give a lot of information about various factors related to an individual. The main characteristic is that they are unique from person to person in many ways. The size, shape, pattern are some of the uniqueness factors seen, so they are area of research and study. Forensic science makes use of different evidences obtained out of which fingerprints are the one to be considered. Fingerprints play a vital role in getting details through the exact identification. Gender identification can also be done easily and efficiently through the fingerprints. Forensic anthropology has gender identification from fingerprints as an important part in order to identify the gender of a criminal and minimize the list of suspects search. Identification of fingerprints is studied and researched a lot in past and is continuously increasing day by day. The gender identification from fingerprints is carried in both spatial domain and frequency domain by applying different techniques. This paper studies frequency domain methods applied for gender identification from fingerprints. A survey of techniques show that DWT is widely used and also in combination with SVD and PCA for gender identification from fingerprints. An overall comparison of frequency domain techniques mainly focusing on DWT and its combinations is presented in this paper with a proposed canny edge detector and Haar DWT based fingerprint gender classification technique.",
"title": ""
},
{
"docid": "158de7fe10f35a78e4b62d2bc46d9b0d",
"text": "The Internet of Things promises ubiquitous connectivity of everything everywhere, which represents the biggest technology trend in the years to come. It is expected that by 2020 over 25 billion devices will be connected to cellular networks; far beyond the number of devices in current wireless networks. Machine-to-machine communications aims to provide the communication infrastructure for enabling IoT by facilitating the billions of multi-role devices to communicate with each other and with the underlying data transport infrastructure without, or with little, human intervention. Providing this infrastructure will require a dramatic shift from the current protocols mostly designed for human-to-human applications. This article reviews recent 3GPP solutions for enabling massive cellular IoT and investigates the random access strategies for M2M communications, which shows that cellular networks must evolve to handle the new ways in which devices will connect and communicate with the system. A massive non-orthogonal multiple access technique is then presented as a promising solution to support a massive number of IoT devices in cellular networks, where we also identify its practical challenges and future research directions.",
"title": ""
},
{
"docid": "5f0e1c63d60a4bdd8af5994b25b6654d",
"text": "The machine representation of floating point values has limited precision such that errors may be introduced during execution. These errors may get propagated and magnified by the following operations, leading to instability problems, e.g., control flow path may be undesirably altered and faulty output may be emitted. In this paper, we develop an on-the-fly efficient monitoring technique that can predict if an execution is stable. The technique does not explicitly compute errors as doing so incurs high overhead. Instead, it detects possible places where an error becomes substantially inflated regarding the corresponding value, and then tags the value with one bit to denote that it has an inflated error. It then tracks inflation bit propagation, taking care of operations that may cut off such propagation. It reports instability if any inflation bit reaches a critical execution point, such as a predicate, where the inflated error may induce substantial execution difference, such as different execution paths. Our experiment shows that with appropriate thresholds, the technique can correctly detect that over 99.999996% of the inputs of all the programs we studied are stable while a traditional technique relying solely on inflation detection mistakenly classifies majority of the inputs as unstable for some of the programs. Compared to the state of the art technique that is based on high precision computation and causes several hundred times slowdown, our technique only causes 7.91 times slowdown on average and can report all the true unstable executions with the appropriate thresholds.",
"title": ""
},
{
"docid": "48889a388562e195eff17488f57ca1e0",
"text": "To clarify the effects of changing shift schedules from a full-day to a half-day before a night shift, 12 single nurses and 18 married nurses with children that engaged in night shift work in a Japanese hospital were investigated. Subjects worked 2 different shift patterns consisting of a night shift after a half-day shift (HF-N) and a night shift after a day shift (D-N). Physical activity levels were recorded with a physical activity volume meter to measure sleep/wake time more precisely without restricting subjects' activities. The duration of sleep before a night shift of married nurses was significantly shorter than that of single nurses for both shift schedules. Changing shift from the D-N to the HF-N increased the duration of sleep before a night shift for both groups, and made wake-up time earlier for single nurses only. Repeated ANCOVA of the series of physical activities showed significant differences with shift (p < 0.01) and marriage (p < 0.01) for variances, and age (p < 0.05) for a covariance. The paired t-test to compare the effects of changing shift patterns in each subject group and ANCOVA for examining the hourly activity differences between single and married nurses showed that the effects of a change in shift schedules seemed to have less effect on married nurses than single nurses. These differences might due to the differences of their family/home responsibilities.",
"title": ""
},
{
"docid": "274a88ca3f662b6250d856148389b078",
"text": "This paper introduces NFQ, an algorithm for efficient and effective training of a Q-value function represented by a multi-layer perceptron. Based on the principle of storing and reusing transition experiences, a model-free, neural network based Reinforcement Learning algorithm is proposed. The method is evaluated on three benchmark problems. It is shown empirically, that reasonably few interactions with the plant are needed to generate control policies of high quality.",
"title": ""
},
{
"docid": "cadc31481c83e7fc413bdfb5d7bfd925",
"text": "A hierarchical model of approach and avoidance achievement motivation was proposed and tested in a college classroom. Mastery, performance-approach, and performance-avoidance goals were assessed and their antecedents and consequences examined. Results indicated that mastery goals were grounded in achievement motivation and high competence expectancies; performance-avoidance goals, in fear of failure and low competence expectancies; and performance-approach goals, in ach.ievement motivation, fear of failure, and high competence expectancies. Mastery goals facilitated intrinsic motivation, performance-approach goals enhanced graded performance, and performanceavoidance goals proved inimical to both intrinsic motivation and graded performance. The proposed model represents an integration of classic and contemporary approaches to the study of achievement motivation.",
"title": ""
},
{
"docid": "78a8eb1c05d8af52ca32ba29b3fcf89b",
"text": "Pediatric firearm-related deaths and injuries are a national public health crisis. In this Special Review Article, we characterize the epidemiology of firearm-related injuries in the United States and discuss public health programs, the role of pediatricians, and legislative efforts to address this health crisis. Firearm-related injuries are leading causes of unintentional injury deaths in children and adolescents. Children are more likely to be victims of unintentional injuries, the majority of which occur in the home, and adolescents are more likely to suffer from intentional injuries due to either assault or suicide attempts. Guns are present in 18% to 64% of US households, with significant variability by geographic region. Almost 40% of parents erroneously believe their children are unaware of the storage location of household guns, and 22% of parents wrongly believe that their children have never handled household guns. Public health interventions to increase firearm safety have demonstrated varying results, but the most effective programs have provided free gun safety devices to families. Pediatricians should continue working to reduce gun violence by asking patients and their families about firearm access, encouraging safe storage, and supporting firearm-related injury prevention research. Pediatricians should also play a role in educating trainees about gun violence. From a legislative perspective, universal background checks have been shown to decrease firearm homicides across all ages, and child safety laws have been shown to decrease unintentional firearm deaths and suicide deaths in youth. A collective, data-driven public health approach is crucial to halt the epidemic of pediatric firearm-related injury.",
"title": ""
},
{
"docid": "b1ece089b64d5cca3fcdf875ebf589d1",
"text": "This study investigated sex differences in young children's spatial skill. The authors developed a spatial transformation task, which showed a substantial male advantage by age 4 years 6 months. The size of this advantage was no more robust for rotation items than for translation items. This finding contrasts with studies of older children and adults, which report that sex differences are largest on mental rotation tasks. Comparable performance of boys and girls on a vocabulary task indicated that the male advantage on the spatial task was not attributable to an overall intellectual advantage of boys in the sample.",
"title": ""
},
{
"docid": "e3ca898c936009e149d5639a6e72359e",
"text": "Tracking bits through block ciphers and optimizing attacks at hand is one of the tedious task symmetric cryptanalysts have to deal with. It would be nice if a program will automatically handle them at least for well-known attack techniques, so that cryptanalysts will only focus on nding new attacks. However, current automatic tools cannot be used as is, either because they are tailored for speci c ciphers or because they only recover a speci c part of the attacks and cryptographers are still needed to nalize the analysis. In this paper we describe a generic algorithm exhausting the best meetin-the-middle and impossible di erential attacks on a very large class of block ciphers from byte to bit-oriented, SPN, Feistel and Lai-Massey block ciphers. Contrary to previous tools that target to nd the best di erential / linear paths in the cipher and leave the cryptanalysts to nd the attack using these paths, we automatically nd the best attacks by considering the cipher and the key schedule algorithms. The building blocks of our algorithm led to two algorithms designed to nd the best simple meet-in-the-middle attacks and the best impossible truncated differential attacks respectively. We recover and improve many attacks on AES, mCRYPTON, SIMON, IDEA, KTANTAN, PRINCE and ZORRO. We show that this tool can be used by designers to improve their analysis.",
"title": ""
},
{
"docid": "80fd7c3cfee5cbf234247848bc10c568",
"text": "A 2-GHz Si power MOSFET with 50% power-added efficiency and 1.0-W output power at a 3.6-V supply voltage has been developed for use as an RF high-power amplifier in wireless applications. This MOSFET achieves this performance by using a 0.4-/spl mu/m gate power device with an Al-shorted metal-silicide/Si gate structure and a reduced gate finger width pattern.",
"title": ""
},
{
"docid": "0a6a7d8b6b99d521e9610aa0792402cc",
"text": "Ajax is a new concept of web application development proposed in 2005. It is the acronym of Asynchronous JavaScript and XML. Once Ajax appeared, it is rapidly applied to the fields of Web development. Ajax application is different from the traditional Web development model, using asynchronous interaction. The client unnecessarily waits while the server processes the data submitted. So the use of Ajax can create Web user interface which is direct, highly available, richer, more dynamic and closer to a local desktop application. This article introduces the main technology and superiority of Ajax firstly, and then practices Web development using ASP.NET 2.0+Ajax. In this paper, Ajax is applied to the Website pass, which enables user to have better registration experience and enhances the user's enthusiasm. The registration functions are enhanced greatly as well. The experiments show that the Ajax Web application development model is superior to the traditional Web application development model significantly.",
"title": ""
},
{
"docid": "5ec4451889beb4698c6ffb6fba4a53a3",
"text": "We survey recent work on the elliptic curve discrete logarithm problem. In particular we review index calculus algorithms using summation polynomials, and claims about their complexity.",
"title": ""
},
{
"docid": "ca1b189815ce5eb56c2b44e2c0c154aa",
"text": "Synthetic data sets can be useful in a variety of situations, including repeatable regression testing and providing realistic - but not real - data to third parties for testing new software. Researchers, engineers, and software developers can test against a safe data set without affecting or even accessing the original data, insulating them from privacy and security concerns as well as letting them generate larger data sets than would be available using only real data. Practitioners use data mining technology to discover patterns in real data sets that aren't apparent at the outset. This article explores how to combine information derived from data mining applications with the descriptive ability of synthetic data generation software. Our goal is to demonstrate that at least some data mining techniques (in particular, a decision tree) can discover patterns that we can then use to inverse map into synthetic data sets. These synthetic data sets can be of any size and will faithfully exhibit the same (decision tree) patterns. Our work builds on two technologies: synthetic data definition language and predictive model markup language.",
"title": ""
},
{
"docid": "9dad87b0134d9f165b0208baf40c7f0f",
"text": "Frequent Itemset Mining (FIM) is the most important and time-consuming step of association rules mining. With the increment of data scale, many efficient single-machine algorithms of FIM, such as FP-growth and Apriori, cannot accomplish the computing tasks within reasonable time. As a result of the limitation of single-machine methods, researchers presented some distributed algorithms based on MapReduce and Spark, such as PFP and YAFIM. Nevertheless, the heavy disk I/O cost at each MapReduce operation makes PFP not efficient enough. YAFIM needs to generate candidate frequent itemsets in each iterative step. It makes YAFIM time-consuming. And if the scale of data is large enough, YAFIM algorithm will not work due to the limitation of memory since the candidate frequent itemsets need to be stored in the memory. And the size of candidate itemsets is very large especially facing the massive data. In this work, we propose a distributed FP-growth algorithm based on Spark, we call it DFPS. DFPS partitions computing tasks in such a way that each computing node builds the conditional FP-tree and adopts a pattern fragment growth method to mine the frequent itemsets independently. DFPS doesn't need to pass messages between nodes during mining frequent itemsets. Our performance study shows that DFPS algorithm is more excellent than YAFIM, especially when the length of transactions is long, the number of items is large and the data is massive. And DFPS has an excellent scalability. The experimental results show that DFPS is more than 10 times faster than YAFIM for T10I4D100K dataset and Pumsb_star dataset.",
"title": ""
},
{
"docid": "677e141690f1e40317bedfe754448b26",
"text": "Nowadays, secure data access control has become one of the major concerns in a cloud storage system. As a logical combination of attribute-based encryption and attribute-based signature, attribute-based signcryption (ABSC) can provide confidentiality and an anonymous authentication for sensitive data and is more efficient than traditional “encrypt-then-sign” or “sign-then-encrypt” strategies. Thus, ABSC is suitable for fine-grained access control in a semi-trusted cloud environment and is gaining more and more attention in recent years. However, in many previous ABSC schemes, user’s sensitive attributes can be disclosed to the authority, and only a single authority that is responsible for attribute management and key generation exists in the system. In this paper, we propose PMDAC-ABSC, a novel privacy-preserving data access control scheme based on Ciphertext-Policy ABSC, to provide a fine-grained control measure and attribute privacy protection simultaneously in a multi-authority cloud storage system. The attributes of both the signcryptor and the designcryptor can be protected to be known by the authorities and cloud server. Furthermore, the decryption overhead for user is significantly reduced by outsourcing the undesirable bilinear pairing operations to the cloud server without degrading the attribute privacy. The proposed scheme is proven to be secure in the standard model and has the ability to provide confidentiality, unforgeability, anonymous authentication, and public verifiability. The security analysis, asymptotic complexity comparison, and implementation results indicate that our construction can balance the security goals with practical efficiency in computation.",
"title": ""
},
{
"docid": "51620ef906b7fc5774e051fb3261d611",
"text": "Named Entity Recognition (NER) plays an important role in a variety of online information management tasks including text categorization, document clustering, and faceted search. While recent NER systems can achieve near-human performance on certain documents like news articles, they still remain highly domain-specific and thus cannot effectively identify entities such as original technical concepts in scientific documents. In this work, we propose novel approaches for NER on distinctive document collections (such as scientific articles) based on n-grams inspection and classification. We design and evaluate several entity recognition features---ranging from well-known part-of-speech tags to n-gram co-location statistics and decision trees---to classify candidates. In addition, we show how the use of external knowledge bases (either specific like DBLP or generic like DBPedia) can be leveraged to improve the effectiveness of NER for idiosyncratic collections. We evaluate our system on two test collections created from a set of Computer Science and Physics papers and compare it against state-of-the-art supervised methods. Experimental results show that a careful combination of the features we propose yield up to 85% NER accuracy over scientific collections and substantially outperforms state-of-the-art approaches such as those based on maximum entropy.",
"title": ""
},
{
"docid": "35c904cdbaddec5e7cd634978c0b415d",
"text": "Life-long visual localization is one of the most challenging topics in robotics over the last few years. The difficulty of this task is in the strong appearance changes that a place suffers due to dynamic elements, illumination, weather or seasons. In this paper, we propose a novel method (ABLE-M) to cope with the main problems of carrying out a robust visual topological localization along time. The novelty of our approach resides in the description of sequences of monocular images as binary codes, which are extracted from a global LDB descriptor and efficiently matched using FLANN for fast nearest neighbor search. Besides, an illumination invariant technique is applied. The usage of the proposed binary description and matching method provides a reduction of memory and computational costs, which is necessary for long-term performance. Our proposal is evaluated in different life-long navigation scenarios, where ABLE-M outperforms some of the main state-of-the-art algorithms, such as WI-SURF, BRIEF-Gist, FAB-MAP or SeqSLAM. Tests are presented for four public datasets where a same route is traversed at different times of day or night, along the months or across all four seasons.",
"title": ""
},
{
"docid": "8756441420669a6845254242030e0a79",
"text": "We propose a recurrent neural network (RNN) based model for image multi-label classification. Our model uniquely integrates and learning of visual attention and Long Short Term Memory (LSTM) layers, which jointly learns the labels of interest and their co-occurrences, while the associated image regions are visually attended. Different from existing approaches utilize either model in their network architectures, training of our model does not require pre-defined label orders. Moreover, a robust inference process is introduced so that prediction errors would not propagate and thus affect the performance. Our experiments on NUS-WISE and MS-COCO datasets confirm the design of our network and its effectiveness in solving multi-label classification problems.",
"title": ""
}
] |
scidocsrr
|
fd4a3d519df1ea1798b40ba1e8a8caab
|
Estimation of breathing rate and heart rate from photoplethysmogram
|
[
{
"docid": "e0f6edc7dcd7c80f81250d3a49129ee3",
"text": "Photoplethysmography is a non-invasive electro-optic method developed by Hertzman, which provides information on the blood volume flowing at a particular test site on the body close to the skin. PPG waveform contains two components; one, attributable to the pulsatile component in the vessels, i.e. the arterial pulse, which is caused by the heartbeat, and gives a rapidly alternating signal (AC component). The second one is due to the blood volume and its change in the skin which gives a steady signal that changes very slowly (DC component). PPG signal consists of not only the heart-beat information but also a respiratory signal. Estimation of respiration rates from Photoplethysmographic (PPG) signals would be an alternative approach for obtaining respiration related information.. There have been several efforts on PPG Derived Respiration (PDR), these methods are based on different signal processing techniques like filtering, wavelets and other statistical methods, which work by extraction of respiratory trend embedded into various physiological signals. PCA identifies patterns in data, and expresses the data in such a way as to highlight their similarities and differences. Since patterns in data can be hard to find in data of high dimension, where the luxury of graphical representation is not available, PCA is a powerful tool for analyzing such data. Due to external stimuli, biomedical signals are in general non-linear and non-stationary. Empirical Mode Decomposition is ideally suited to extract essential components which are characteristic of the underlying biological or physiological processes. The basis functions, called Intrinsic Mode Functions (IMFs) represent a complete set of locally orthogonal basis functions whose amplitude and frequency may vary over time. The contribution reviews the technique of EMD and related algorithms and discusses illustrative applications. Test results on PPG signals of the well known MIMIC database from Physiobank archive reveal that the proposed EMD method has efficiently extracted respiratory information from PPG signals. The evaluated similarity parameters in both time and frequency domains for original and estimated respiratory rates have shown the superiority of the method.",
"title": ""
}
] |
[
{
"docid": "ba67c3006c6167550bce500a144e63f1",
"text": "This paper provides an overview of different methods for evaluating automatic summarization systems. The challenges in evaluating summaries are characterized. Both intrinsic and extrinsic approaches are discussed. Methods for assessing informativeness and coherence are described. The advantages and disadvantages of specific methods are assessed, along with criteria for choosing among them. The paper concludes with some suggestions for future directions.",
"title": ""
},
{
"docid": "1ae3eb81ae75f6abfad4963ee0056be5",
"text": "Due to the shared responsibility model of clouds, tenants have to manage the security of their workloads and data. Developing security solutions using VMs or containers creates further problems as these resources also need to be secured. In this paper, we advocate for taking a serverless approach by proposing six serverless design patterns to build security services in the cloud. For each design pattern, we describe the key advantages and present applications and services utilizing the pattern. Using the proposed patterns as building blocks, we introduce a threat-intelligence platform that collects logs from various sources, alerts malicious activities, and takes actions against such behaviors. We also discuss the limitations of serverless design and how future implementations can overcome those limitations.",
"title": ""
},
{
"docid": "77749f228ebcadfbff9202ee17225752",
"text": "Temporal object detection has attracted significant attention, but most popular detection methods cannot leverage rich temporal information in videos. Very recently, many algorithms have been developed for video detection task, yet very few approaches can achieve real-time online object detection in videos. In this paper, based on the attention mechanism and convolutional long short-term memory (ConvLSTM), we propose a temporal single-shot detector (TSSD) for real-world detection. Distinct from the previous methods, we take aim at temporally integrating pyramidal feature hierarchy using ConvLSTM, and design a novel structure, including a low-level temporal unit as well as a high-level one for multiscale feature maps. Moreover, we develop a creative temporal analysis unit, namely, attentional ConvLSTM, in which a temporal attention mechanism is specially tailored for background suppression and scale suppression, while a ConvLSTM integrates attention-aware features across time. An association loss and a multistep training are designed for temporal coherence. Besides, an online tubelet analysis (OTA) is exploited for identification. Our framework is evaluated on ImageNet VID dataset and 2DMOT15 dataset. Extensive comparisons on the detection and tracking capability validate the superiority of the proposed approach. Consequently, the developed TSSD-OTA achieves a fast speed and an overall competitive performance in terms of detection and tracking. Finally, a real-world maneuver is conducted for underwater object grasping.",
"title": ""
},
{
"docid": "3180f7bd813bcd64065780bc9448dc12",
"text": "This paper reports on email classification and filtering, more specifically on spam versus ham and phishing versus spam classification, based on content features. We test the validity of several novel statistical feature extraction methods. The methods rely on dimensionality reduction in order to retain the most informative and discriminative features. We successfully test our methods under two schemas. The first one is a classic classification scenario using a 10-fold cross-validation technique for several corpora, including four ground truth standard corpora: Ling-Spam, SpamAssassin, PU1, and a subset of the TREC 2007 spam corpus, and one proprietary corpus. In the second schema, we test the anticipatory properties of our extracted features and classification models with two proprietary datasets, formed by phishing and spam emails sorted by date, and with the public TREC 2007 spam corpus. The contributions of our work are an exhaustive comparison of several feature selection and extraction methods in the frame of email classification on different benchmarking corpora, and the evidence that especially the technique of biased discriminant analysis offers better discriminative features for the classification, gives stable classification results notwithstanding the amount of features chosen, and robustly retains their discriminative value over time and data setups. These findings are especially useful in a commercial setting, where short profile rules are built based on a limited number of features for filtering emails.",
"title": ""
},
{
"docid": "ed5d0befb076f876de3a8a722e2e7d34",
"text": "Aircraft control system is designed to provide adequate stability during transition flight of a quad-tilt wing (QTW) UAV. The dynamic inversion method, which is a linearization method without an approximation algorithm, is applied to a control problem of the UAV, because of strong nonlinearity of its dynamical behavior. The validity of the proposed control system is verified through numerical simulation and experiment.",
"title": ""
},
{
"docid": "2831d24ae1b76a9a8204c9f79aec27e1",
"text": "Spittlebugs from the genus Aeneolamia are important pests of sugarcane. Although the use of the entomopathogenic fungus Metarhizum anisopliae s.l. for control of this pest is becoming more common in Mexico, fundamental information regarding M. anisopliae in sugarcane plantations is practically non-existent. Using phylogenetic analysis, we determined the specific diversity of Metarhizium spp. infecting adult spittlebugs in sugarcane plantations from four Mexican states. We obtained 29 isolates of M. anisopliae s.str. Haplotype network analysis revealed the existence of eight haplotypes. Eight selected isolates, representing the four Mexican states, were grown at different temperatures in vitro; isolates from Oaxaca achieved the greatest growth followed by isolates from Veracruz, San Luis Potosi and Tabasco. No relationship was found between in vitro growth and haplotype diversity. Our results represent a significant contribution to the better understanding of the ecology of Metarhizum spp. in the sugarcane agroecosystem.",
"title": ""
},
{
"docid": "303e7cfb73f6db763aa9dbe4418aaf91",
"text": "This paper presents a summary of the main types of snubber circuits; generally classified as dissipative and non-dissipative or active and passive snubbers. This type of circuits are commonly used because of they can suppress electrical spikes, allowing a better performance on the main electrical circuit. This article intent to describe the currently snubber circuits and its applications without getting into their design.",
"title": ""
},
{
"docid": "3585ee8052b23d2ea996dc8ad14cbb04",
"text": "The 5th generation (5G) of mobile radio access technologies is expected to become available for commercial launch around 2020. In this paper, we present our envisioned 5G system design optimized for small cell deployment taking a clean slate approach, i.e. removing most compatibility constraints with the previous generations of mobile radio access technologies. This paper mainly covers the physical layer aspects of the 5G concept design.",
"title": ""
},
{
"docid": "eea45eb670d380e722f3148479a0864d",
"text": "In this paper, we propose a hybrid Differential Evolution (DE) algorithm based on the fuzzy C-means clustering algorithm, referred to as FCDE. The fuzzy C-means clustering algorithm is incorporated with DE to utilize the information of the population efficiently, and hence it can generate good solutions and enhance the performance of the original DE. In addition, the population-based algorithmgenerator is adopted to efficiently update the population with the clustering offspring. In order to test the performance of our approach, 13 high-dimensional benchmark functions of diverse complexities are employed. The results show that our approach is effective and efficient. Compared with other state-of-the-art DE approaches, our approach performs better, or at least comparably, in terms of the quality of the final solutions and the reduction of the number of fitness function evaluations (NFFEs).",
"title": ""
},
{
"docid": "bb03f7d799b101966b4ea6e75cd17fea",
"text": "Fuzzy decision trees (FDTs) have shown to be an effective solution in the framework of fuzzy classification. The approaches proposed so far to FDT learning, however, have generally neglected time and space requirements. In this paper, we propose a distributed FDT learning scheme shaped according to the MapReduce programming model for generating both binary and multiway FDTs from big data. The scheme relies on a novel distributed fuzzy discretizer that generates a strong fuzzy partition for each continuous attribute based on fuzzy information entropy. The fuzzy partitions are, therefore, used as an input to the FDT learning algorithm, which employs fuzzy information gain for selecting the attributes at the decision nodes. We have implemented the FDT learning scheme on the Apache Spark framework. We have used ten real-world publicly available big datasets for evaluating the behavior of the scheme along three dimensions: 1) performance in terms of classification accuracy, model complexity, and execution time; 2) scalability varying the number of computing units; and 3) ability to efficiently accommodate an increasing dataset size. We have demonstrated that the proposed scheme turns out to be suitable for managing big datasets even with a modest commodity hardware support. Finally, we have used the distributed decision tree learning algorithm implemented in the MLLib library and the Chi-FRBCS-BigData algorithm, a MapReduce distributed fuzzy rule-based classification system, for comparative analysis.",
"title": ""
},
{
"docid": "88c0789e82c86b0e730480f44712012d",
"text": "In spite of their having sufficient immunogenicity, tumor vaccines remain largely ineffective. The mechanisms underlying this lack of efficacy are still unclear. Here we report a previously undescribed mechanism by which the tumor endothelium prevents T cell homing and hinders tumor immunotherapy. Transcriptional profiling of microdissected tumor endothelial cells from human ovarian cancers revealed genes associated with the absence or presence of tumor-infiltrating lymphocytes (TILs). Overexpression of the endothelin B receptor (ETBR) was associated with the absence of TILs and short patient survival time. The ETBR inhibitor BQ-788 increased T cell adhesion to human endothelium in vitro, an effect countered by intercellular adhesion molecule-1 (ICAM-1) blockade or treatment with NO donors. In mice, ETBR neutralization by BQ-788 increased T cell homing to tumors; this homing required ICAM-1 and enabled tumor response to otherwise ineffective immunotherapy in vivo without changes in systemic antitumor immune response. These findings highlight a molecular mechanism with the potential to be pharmacologically manipulated to enhance the efficacy of tumor immunotherapy in humans.",
"title": ""
},
{
"docid": "d6e76bfeeb127addcbe2eb77b1b0ad7e",
"text": "The choice of modeling units is critical to automatic speech recognition (ASR) tasks. Conventional ASR systems typically choose context-dependent states (CD-states) or contextdependent phonemes (CD-phonemes) as their modeling units. However, it has been challenged by sequence-to-sequence attention-based models, which integrate an acoustic, pronunciation and language model into a single neural network. On English ASR tasks, previous attempts have already shown that the modeling unit of graphemes can outperform that of phonemes by sequence-to-sequence attention-based model. In this paper, we are concerned with modeling units on Mandarin Chinese ASR tasks using sequence-to-sequence attention-based models with the Transformer. Five modeling units are explored including context-independent phonemes (CI-phonemes), syllables, words, sub-words and characters. Experiments on HKUST datasets demonstrate that the lexicon free modeling units can outperform lexicon related modeling units in terms of character error rate (CER). Among five modeling units, character based model performs best and establishes a new state-of-the-art CER of 26.64% on HKUST datasets without a hand-designed lexicon and an extra language model integration, which corresponds to a 4.8% relative improvement over the existing best CER of 28.0% by the joint CTC-attention based encoder-decoder network.",
"title": ""
},
{
"docid": "86aa13e31baf7923c3bdd83e7d50a16f",
"text": "Sentence pair modeling is a crucial problem in the field of natural language processing. In this paper, we propose a model to measure the similarity of a sentence pair focusing on the interaction information. We utilize the word level similarity matrix to discover fine-grained alignment of two sentences. It should be emphasized that each word in a sentence has a different importance from the perspective of semantic composition, so we exploit two novel and efficient strategies to explicitly calculate a weight for each word. Although the proposed model only use a sequential LSTM for sentence modeling without any external resource such as syntactic parser tree and additional lexicon features, experimental results show that our model achieves state-of-the-art performance on three datasets of two tasks.",
"title": ""
},
{
"docid": "01835769f2dc9391051869374e200a6a",
"text": "Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (lscr 2) error term added to a sparsity-inducing (usually lscr1) regularizater. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard lscr2-lscr1 case, our framework yields efficient solution techniques for other regularizers, such as an lscrinfin norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard lscr2-lscr1 problem, as well as being efficient on problems with other separable regularization terms.",
"title": ""
},
{
"docid": "efcd3c0ddf0b254f07bc40d4d2f71dfd",
"text": "In this paper, we first provide a comprehensive investigation of four online job recommender systems (JRSs) from four different aspects: user profiling, recommendation strategies, recommendation output, and user feedback. In particular, we summarize the pros and cons of these online JRSs and highlight their differences. We then discuss the challenges in building high-quality JRSs. One main challenge lies on the design of recommendation strategies since different job applicants may have different characteristics. To address the aforementioned challenge, we develop an online JRS, iHR, which groups users into different clusters and employs different recommendation approaches for different user clusters. As a result, iHR has the capability of choosing the appropriate recommendation approaches according to users’ characteristics. Empirical results demonstrate the effectiveness of the proposed system.",
"title": ""
},
{
"docid": "3276c3fd88917d54db29f84995b5f88f",
"text": "Traditional sound event recognition methods based on informative front end features such as MFCC, with back end sequencing methods such as HMM, tend to perform poorly in the presence of interfering acoustic noise. Since noise corruption may be unavoidable in practical situations, it is important to develop more robust features and classifiers. Recent advances in this field use powerful machine learning techniques with high dimensional input features such as spectrograms or auditory image. These improve robustness largely thanks to the discriminative capabilities of the back end classifiers. We extend this further by proposing novel features derived from spectrogram energy triggering, allied with the powerful classification capabilities of a convolutional neural network (CNN). The proposed method demonstrates excellent performance under noise-corrupted conditions when compared against state-of-the-art approaches on standard evaluation tasks. To the author's knowledge this in the first application of CNN in this field.",
"title": ""
},
{
"docid": "ba39b85859548caa2d3f1d51a7763482",
"text": "A new antenna structure of internal LTE/WWAN laptop computer antenna formed by a coupled-fed loop antenna connected with two branch radiators is presented. The two branch radiators consist of one longer strip and one shorter strip, both contributing multi-resonant modes to enhance the bandwidth of the antenna. The antenna's lower band is formed by a dual-resonant mode mainly contributed by the longer branch strip, while the upper band is formed by three resonant modes contributed respectively by one higher-order resonant mode of the longer branch strip, one resonant mode of the coupled-fed loop antenna alone, and one resonant mode of the shorter branch strip. The antenna's lower and upper bands can therefore cover the desired 698~960 and 1710~2690 MHz bands, respectively. The proposed antenna is suitable to be mounted at the top shielding metal wall of the display ground of the laptop computer and occupies a small volume of 4 × 10 × 75 mm3 above the top shielding metal wall, which makes it promising to be embedded inside the casing of the laptop computer as an internal antenna.",
"title": ""
}
] |
scidocsrr
|
635f0d09cec7ecb43334f92736b62adc
|
Differential Recurrent Neural Networks for Action Recognition
|
[
{
"docid": "86f0e783a93fc783e10256c501008b0d",
"text": "We present a biologically-motivated system for the recognition of actions from video sequences. The approach builds on recent work on object recognition based on hierarchical feedforward architectures [25, 16, 20] and extends a neurobiological model of motion processing in the visual cortex [10]. The system consists of a hierarchy of spatio-temporal feature detectors of increasing complexity: an input sequence is first analyzed by an array of motion- direction sensitive units which, through a hierarchy of processing stages, lead to position-invariant spatio-temporal feature detectors. We experiment with different types of motion-direction sensitive units as well as different system architectures. As in [16], we find that sparse features in intermediate stages outperform dense ones and that using a simple feature selection approach leads to an efficient system that performs better with far fewer features. We test the approach on different publicly available action datasets, in all cases achieving the highest results reported to date.",
"title": ""
},
{
"docid": "4b33d61fce948b8c7942ca6180765a59",
"text": "We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works.",
"title": ""
}
] |
[
{
"docid": "e34ef27660f2e084d22863060b1c6ab1",
"text": "Plants are widely used in many indigenous systems of medicine for therapeutic purposes and are increasingly becoming popular in modern society as alternatives to synthetic medicines. Bioactive principles are derived from the products of plant primary metabolites, which are associated with the process of photosynthesis. The present review highlighted the chemical diversity and medicinal potentials of bioactive principles as well inherent toxicity concerns associated with the use of these plant products, which are of relevance to the clinician, pharmacist or toxicologist. Plant materials are composed of vast array of bioactive principles of which their isolation, identification and characterization for analytical evaluation requires expertise with cutting edge analytical protocols and instrumentations. Bioactive principles are responsible for the therapeutic activities of medicinal plants and provide unlimited opportunities for new drug leads because of their unmatched availability and chemical diversity. For the most part, the beneficial or toxic outcomes of standardized plant extracts depend on the chemical peculiarities of the containing bioactive principles.",
"title": ""
},
{
"docid": "97107561103eec062d9a2d4ae28ffb9e",
"text": "Development of loyalty in customers is a strategic goal of many firms and organizations and today, the main effort of many firms is allocated to retain customers and obtaining even more ones. Characteristics of loyal customers and method for formation of loyalty in customers in internet space are different to those in traditional one in some respects and study of them may be beneficial in improving performance of firms, organizations and shops involving in this field of business. Also it may help managers of these types of businesses to make efficient and effective decisions towards success of their organizations. Thus, present study aims to investigate the effects of e-service quality in three aspects of information, system and web-service on e-trust and e-satisfaction as key factors influencing creation of e-loyalty of Iranian customers in e-business context; Also it was tried to demonstrate moderating effect of situational factors e.g. time poverty, geographic distance, physical immobility and lack of transportation on e-loyalty level. Totally, 400 questionnaires were distributed to university students, that 382 questionnaires were used for the final analysis, which the results from analysis of them based on simple linear regression and multiple hierarchical regression show that customer loyalty to e-shops is directly influenced by e-trust in and e-satisfaction with e-shops which in turn are determined by e-service quality; also the obtained results shows that situational variables can moderate relationship between e-trust and/or e-satisfaction and e-loyalty. Therefore situational variables studied in present research can influence initiation of transaction of customer with online retailer and customer attitude importance and this in turn makes it necessary for managers to pay special attention to situational effects in examination of current attitude and behavior of customers.",
"title": ""
},
{
"docid": "833c110e040311909aa38b05e457b2af",
"text": "The scyphozoan Aurelia aurita (Linnaeus) s. l., is a cosmopolitan species-complex which blooms seasonally in a variety of coastal and shelf sea environments around the world. We hypothesized that ephyrae of Aurelia sp.1 are released from the inner part of the Jiaozhou Bay, China when water temperature is below 15°C in late autumn and winter. The seasonal occurrence, growth, and variation of the scyphomedusa Aurelia sp.1 were investigated in Jiaozhou Bay from January 2011 to December 2011. Ephyrae occurred from May through June with a peak abundance of 2.38 ± 0.56 ind/m3 in May, while the temperature during this period ranged from 12 to 18°C. The distribution of ephyrae was mainly restricted to the coastal area of the bay, and the abundance was higher in the dock of the bay than at the other inner bay stations. Young medusae derived from ephyrae with a median diameter of 9.74 ± 1.7 mm were present from May 22. Growth was rapid from May 22 to July 2 with a maximum daily growth rate of 39%. Median diameter of the medusae was 161.80 ± 18.39 mm at the beginning of July. In August, a high proportion of deteriorated specimens was observed and the median diameter decreased. The highest average abundance is 0.62 ± 1.06 ind/km2 in Jiaozhou Bay in August. The abundance of Aurelia sp.1 medusae was low from September and then decreased to zero. It is concluded that water temperature is the main driver regulating the life cycle of Aurelia sp.1 in Jiaozhou Bay.",
"title": ""
},
{
"docid": "3c4f19544e9cc51d307c6cc9aea63597",
"text": "Math anxiety is a negative affective reaction to situations involving math. Previous work demonstrates that math anxiety can negatively impact math problem solving by creating performance-related worries that disrupt the working memory needed for the task at hand. By leveraging knowledge about the mechanism underlying the math anxiety-performance relationship, we tested the effectiveness of a short expressive writing intervention that has been shown to reduce intrusive thoughts and improve working memory availability. Students (N = 80) varying in math anxiety were asked to sit quietly (control group) prior to completing difficulty-matched math and word problems or to write about their thoughts and feelings regarding the exam they were about to take (expressive writing group). For the control group, high math-anxious individuals (HMAs) performed significantly worse on the math problems than low math-anxious students (LMAs). In the expressive writing group, however, this difference in math performance across HMAs and LMAs was significantly reduced. Among HMAs, the use of words related to anxiety, cause, and insight in their writing was positively related to math performance. Expressive writing boosts the performance of anxious students in math-testing situations.",
"title": ""
},
{
"docid": "7f67fa8a662b3039cfcb18961628b2d8",
"text": "Contextual multi-armed bandit problems have gained increasing popularity and attention in recent years due to their capability of leveraging contextual information to deliver online personalized recommendation services (e.g., online advertising and news article selection). To predict the reward of each arm given a particular context, existing relevant research studies for contextual multi-armed bandit problems often assume the existence of a fixed yet unknown reward mapping function. However, this assumption rarely holds in practice, since real-world problems often involve underlying processes that are dynamically evolving over time.\n In this paper, we study the time varying contextual multi-armed problem where the reward mapping function changes over time. In particular, we propose a dynamical context drift model based on particle learning. In the proposed model, the drift on the reward mapping function is explicitly modeled as a set of random walk particles, where good fitted particles are selected to learn the mapping dynamically. Taking advantage of the fully adaptive inference strategy of particle learning, our model is able to effectively capture the context change and learn the latent parameters. In addition, those learnt parameters can be naturally integrated into existing multi-arm selection strategies such as LinUCB and Thompson sampling. Empirical studies on two real-world applications, including online personalized advertising and news recommendation, demonstrate the effectiveness of our proposed approach. The experimental results also show that our algorithm can dynamically track the changing reward over time and consequently improve the click-through rate.",
"title": ""
},
{
"docid": "1595cdc0f2af969e49525dd3fab419d9",
"text": "Video object detection is challenging because objects that are easily detected in one frame may be difficult to detect in another frame within the same clip. Recently, there have been major advances for doing object detection in a single image. These methods typically contain three phases: (i) object proposal generation (ii) object classification and (iii) post-processing. We propose a modification of the post-processing phase that uses high-scoring object detections from nearby frames to boost scores of weaker detections within the same clip. We show that our method obtains superior results to state-of-the-art single image object detection techniques. Our method placed 3 in the video object detection (VID) task of the ImageNet Large Scale Visual Recognition Challenge 2015 (ILSVRC2015).",
"title": ""
},
{
"docid": "90dfa19b821aeab985a96eba0c3037d3",
"text": "Carcass mass and carcass clothing are factors of potential high forensic importance. In casework, corpses differ in mass and kind or extent of clothing; hence, a question arises whether methods for post-mortem interval estimation should take these differences into account. Unfortunately, effects of carcass mass and clothing on specific processes in decomposition and related entomological phenomena are unclear. In this article, simultaneous effects of these factors are analysed. The experiment followed a complete factorial block design with four levels of carcass mass (small carcasses 5–15 kg, medium carcasses 15.1–30 kg, medium/large carcasses 35–50 kg, large carcasses 55–70 kg) and two levels of carcass clothing (clothed and unclothed). Pig carcasses (N = 24) were grouped into three blocks, which were separated in time. Generally, carcass mass revealed significant and frequently large effects in almost all analyses, whereas carcass clothing had only minor influence on some phenomena related to the advanced decay. Carcass mass differently affected particular gross processes in decomposition. Putrefaction was more efficient in larger carcasses, which manifested itself through earlier onset and longer duration of bloating. On the other hand, active decay was less efficient in these carcasses, with relatively low average rate, resulting in slower mass loss and later onset of advanced decay. The average rate of active decay showed a significant, logarithmic increase with an increase in carcass mass, but only in these carcasses on which active decay was driven solely by larval blowflies. If a blowfly-driven active decay was followed by active decay driven by larval Necrodes littoralis (Coleoptera: Silphidae), which was regularly found in medium/large and large carcasses, the average rate showed only a slight and insignificant increase with an increase in carcass mass. These results indicate that lower efficiency of active decay in larger carcasses is a consequence of a multi-guild and competition-related pattern of this process. Pattern of mass loss in large and medium/large carcasses was not sigmoidal, but rather exponential. The overall rate of decomposition was strongly, but not linearly, related to carcass mass. In a range of low mass decomposition rate increased with an increase in mass, then at about 30 kg, there was a distinct decrease in rate, and again at about 50 kg, the rate slightly increased. Until about 100 accumulated degree-days larger carcasses gained higher total body scores than smaller carcasses. Afterwards, the pattern was reversed; moreover, differences between classes of carcasses enlarged with the progress of decomposition. In conclusion, current results demonstrate that cadaver mass is a factor of key importance for decomposition, and as such, it should be taken into account by decomposition-related methods for post-mortem interval estimation.",
"title": ""
},
{
"docid": "b52a29cd426c5861dbb97aeb91efda4b",
"text": "In recent years, inexact computing has been increasingly regarded as one of the most promising approaches for slashing energy consumption in many applications that can tolerate a certain degree of inaccuracy. Driven by the principle of trading tolerable amounts of application accuracy in return for significant resource savings-the energy consumed, the (critical path) delay, and the (silicon) area-this approach has been limited to application-specified integrated circuits (ASICs) so far. These ASIC realizations have a narrow application scope and are often rigid in their tolerance to inaccuracy, as currently designed; the latter often determining the extent of resource savings we would achieve. In this paper, we propose to improve the application scope, error resilience and the energy savings of inexact computing by combining it with hardware neural networks. These neural networks are fast emerging as popular candidate accelerators for future heterogeneous multicore platforms and have flexible error resilience limits owing to their ability to be trained. Our results in 65-nm technology demonstrate that the proposed inexact neural network accelerator could achieve 1.78-2.67× savings in energy consumption (with corresponding delay and area savings being 1.23 and 1.46×, respectively) when compared to the existing baseline neural network implementation, at the cost of a small accuracy loss (mean squared error increases from 0.14 to 0.20 on average).",
"title": ""
},
{
"docid": "76ad212ccd103c93d45c1ffa0e208b45",
"text": "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors.",
"title": ""
},
{
"docid": "8e1947a9e890ef110c75a52d706eec2a",
"text": "Despite the rapid increase in online shopping, the literature is silent in terms of the interrelationship between perceived risk factors, the marketing impacts, and their influence on product and web-vendor consumer trust. This research focuses on holidaymakers’ perspectives using Internet bookings for their holidays. The findings reveal the associations between Internet perceived risks and the relatively equal influence of product and e-channel risks in consumers’ trust, and that online purchasing intentions are equally influenced by product and e-channel consumer trust. They also illustrate the relationship between marketing strategies and perceived risks, and provide managerial suggestions for further e-purchasing tourism improvement.",
"title": ""
},
{
"docid": "286fc2c4342a9269f40aa2701271f33a",
"text": "While Blockchain network brings tremendous benefits, there are concerns whether their performance would match up with the mainstream IT systems. This paper aims to investigate whether the consensus process using Practical Byzantine Fault Tolerance (PBFT) could be a performance bottleneck for networks with a large number of peers. We model the PBFT consensus process using Stochastic Reward Nets (SRN) to compute the mean time to complete consensus for networks up to 100 peers. We create a blockchain network using IBM Bluemix service, running a production-grade IoT application and use the data to parameterize and validate our models. We also conduct sensitivity analysis over a variety of system parameters and examine the performance of larger networks",
"title": ""
},
{
"docid": "e50c07aa28cafffc43dd7eb29892f10f",
"text": "Recent approaches to the Automatic Postediting (APE) of Machine Translation (MT) have shown that best results are obtained by neural multi-source models that correct the raw MT output by also considering information from the corresponding source sentence. To this aim, we present for the first time a neural multi-source APE model based on the Transformer architecture. Moreover, we employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics used for the task. These are the main features of our submissions to the WMT 2018 APE shared task, where we participated both in the PBSMT subtask (i.e. the correction of MT outputs from a phrase-based system) and in the NMT subtask (i.e. the correction of neural outputs). In the first subtask, our system improves over the baseline up to -5.3 TER and +8.23 BLEU points ranking second out of 11 submitted runs. In the second one, characterized by the higher quality of the initial translations, we report lower but statistically significant gains (up to -0.38 TER and +0.8 BLEU), ranking first out of 10 submissions.",
"title": ""
},
{
"docid": "09931e4753462f9f03070bf95637eca6",
"text": "Information visualization systems often present usability problems mainly because many of these tools are not submitted to complete evaluation studies. This paper presents an experimental study based on tests with users to evaluate two multidimensional information visualization techniques, Parallel Coordinates and Radviz. The tasks used in the experiments were defined based on a taxonomy of users' tasks for interaction with multidimensional visualizations. The study intended to identify usability problems following the ergonomic criteria from Bastien and Scapin on implementations of both techniques, especially built for these experiments with the InfoVis toolkit.",
"title": ""
},
{
"docid": "9d07ea41492191967ceaf7d7fe946726",
"text": "Physical Unclonable Functions (PUFs) have been introduced as a new cryptographic primitive, and whilst a large number of PUF designs and applications have been proposed, few studies has been undertaken on the theoretical foundation of PUFs. At the same time, many PUF designs have been found to be insecure, raising questions about their design methodology. Moreover, PUFs with efficient implementation are needed to enable many applications in practice. In this paper, we present novel results on the theoretical foundation and practical construction for PUFs. First, we prove that, for an l-bit-input and m-bit-output PUF containing n silicon components, if n < m2 l c where c is a constant, then 1) the PUF cannot be a random function, and 2) confusion and diffusion are necessary for the PUF to be a pseudorandom function. Then, we propose a helper data algorithm (HDA) that is secure against active attacks and significantly reduces PUF implementation overhead compared to previous HDAs. Finally, we integrate PUF construction into block cipher design to implement an efficient physical unclonable pseudorandom permutation (PUPRP); to the best of our knowledge, this is the first practical PUPRP using an integrated approach.",
"title": ""
},
{
"docid": "6719f6fb19a3fba64adc376c340a1954",
"text": "Server consolidation using virtualization technology has become increasingly important for improving data center efficiency. It enables one physical server to host multiple independent virtual machines (VMs), and the transparent movement of workloads from one server to another. Fine-grained virtual machine resource allocation and reallocation are possible in order to meet the performance targets of applications running on virtual machines. On the other hand, these capabilities create demands on system management, especially for large-scale data centers. In this paper, a two-level control system is proposed to manage the mappings of workloads to VMs and VMs to physical resources. The focus is on the VM placement problem which is posed as a multi-objective optimization problem of simultaneously minimizing total resource wastage, power consumption and thermal dissipation costs. An improved genetic algorithm with fuzzy multi-objective evaluation is proposed for efficiently searching the large solution space and conveniently combining possibly conflicting objectives. The simulation-based evaluation using power-consumption and thermal-dissipation models based on profiling of a Blade Center, demonstrates the good performance, scalability and robustness of our proposed approach. Compared with four well-known bin-packing algorithms and two single-objective approaches, the solutions obtained from our approach seek good balance among the conflicting objectives while others cannot.",
"title": ""
},
{
"docid": "85e867bd998e9c68540d4a22305d8bab",
"text": "Warped Gaussian processes (WGP) [1] model output observations in regression tasks as a parametric nonlinear transformation of a Gaussian process (GP). The use of this nonlinear transformation, which is included as part of the probabilistic model, was shown to enhance performance by providing a better prior model on several data sets. In order to learn its parameters, maximum likelihood was used. In this work we show that it is possible to use a non-parametric nonlinear transformation in WGP and variationally integrate it out. The resulting Bayesian WGP is then able to work in scenarios in which the maximum likelihood WGP failed: Low data regime, data with censored values, classification, etc. We demonstrate the superior performance of Bayesian warped GPs on several real data sets.",
"title": ""
},
{
"docid": "92f73017511570bd81ad62933ae2ba90",
"text": "Augmented Reality (AR) is a rapidly developing field with numerous potential applications. For example, building developers, public authorities and other construction industry stakeholders need to visually assess potential new developments with regard to aesthetics, health & safety, and other criteria. Current state-of-the-art visualization technologies are mainly fully virtual, while AR has the potential to enhance those visualizations by observing proposed designs directly within the real",
"title": ""
},
{
"docid": "c796a0c9fd09f795a32f2ef09b1c0405",
"text": "Vectors of data are at the heart of machine learning and data mining. Recently, vector quantization methods have shown great promise in reducing both the time and space costs of operating on vectors. We introduce a vector quantization algorithm that can compress vectors over 12x faster than existing techniques while also accelerating approximate vector operations such as distance and dot product computations by up to 10x. Because it can encode over 2GB of vectors per second, it makes vector quantization cheap enough to employ in many more circumstances. For example, using our technique to compute approximate dot products in a nested loop can multiply matrices faster than a state-of-the-art BLAS implementation, even when our algorithm must first compress the matrices. In addition to showing the above speedups, we demonstrate that our approach can accelerate nearest neighbor search and maximum inner product search by over 100x compared to floating point operations and 10x compared to other vector quantization methods. Our approximate Euclidean distance and dot product computations are not only faster than those of related algorithms with slower encodings, but also faster than Hamming distance computations, which have direct hardware support on the tested platforms. We also assess the errors of our algorithm's approximate distances and dot products, and find that it is competitive with existing, slower vector quantization algorithms.",
"title": ""
},
{
"docid": "d0a41ebc758439b91f96b44c40dd711b",
"text": "Chirp signals are very common in radar, communication, sonar, and etc. Little is known about chirp images, i.e., 2-D chirp signals. In fact, such images frequently appear in optics and medical science. Newton's rings fringe pattern is a classical example of the images, which is widely used in optical metrology. It is known that the fractional Fourier transform(FRFT) is a convenient method for processing chirp signals. Furthermore, it can be extended to 2-D fractional Fourier transform for processing 2-D chirp signals. It is interesting to observe the chirp images in the 2-D fractional Fourier transform domain and extract some physical parameters hidden in the images. Besides that, in the FRFT domain, it is easy to separate the 2-D chirp signal from other signals to obtain the desired image.",
"title": ""
},
{
"docid": "a740207cc7d4a0db263dae2b7c9402d9",
"text": "In this paper we propose a Deep Autoencoder Mixture Clustering (DAMIC) algorithm based on a mixture of deep autoencoders where each cluster is represented by an autoencoder. A clustering network transforms the data into another space and then selects one of the clusters. Next, the autoencoder associated with this cluster is used to reconstruct the data-point. The clustering algorithm jointly learns the nonlinear data representation and the set of autoencoders. The optimal clustering is found by minimizing the reconstruction loss of the mixture of autoencoder network. Unlike other deep clustering algorithms, no regularization term is needed to avoid data collapsing to a single point. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|