| query_id (string, length 32) | query (string, length 6-5.38k) | positive_passages (list, length 1-22) | negative_passages (list, length 9-100) | subset (string, 7 classes) |
---|---|---|---|---|
b962900165fd75d987baa83a2dc44fe4
|
ChAirGest: a challenge for multimodal mid-air gesture recognition for close HCI
|
[
{
"docid": "a44fb81a08d6444093160c86a9e5b98d",
"text": "While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. We present data obtained using a hardware system that is able to capture synchronized video and ground-truth 3D motion. The resulting HumanEva datasets contain multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 40,000 frames of synchronized motion capture and multi-view video (resulting in over one quarter million image frames in total) were collected at 60 Hz with an additional 37,000 time instants of pure motion capture data. A standard set of error measures is defined for evaluating both 2D and 3D pose estimation and tracking algorithms. We also describe a baseline algorithm for 3D articulated tracking that uses a relatively standard Bayesian framework with optimization in the form of Sequential Importance Resampling and Annealed Particle Filtering. In the context of this baseline algorithm we explore a variety of likelihood functions, prior models of human motion and the effects of algorithm parameters. Our experiments suggest that image observation models and motion priors play important roles in performance, and that in a multi-view laboratory environment, where initialization is available, Bayesian filtering tends to perform well. The datasets and the software are made available to the research community. This infrastructure will support the development of new articulated motion and pose estimation algorithms, will provide a baseline for the evaluation and comparison of new methods, and will help establish the current state of the art in human pose estimation and tracking.",
"title": ""
}
] |
[
{
"docid": "fd641e35cd9372731416b5830fdaed51",
"text": "This paper describes a macroergonomic framework for computer and information security (CIS). Moving away from the current emphasis of technologycentered approaches to CIS, our framework shifts to a multi-dimensional examination of CIS. This examination includes four subsystems of a computer and information security system: the technical, social, organizational, and external environment subsystems. Our framework emphasizes that the interactions within and among these subsystems create technical computer and information security vulnerabilities. The contribution of this framework is a complete view of the complex and multivarious nature of CIS systems. This view is important for understanding the etiology of CIS system vulnerabilties. With this understanding, we can build more secure computer and information systems to remediate CIS breaches and attacks.",
"title": ""
},
{
"docid": "bbd93aa92e52fd40cff395170ede851f",
"text": "Recently, many authentication protocols have been presented using smartcard for the telecare medicine information system (TMIS). In 2014, Xu et al. put forward a two-factor mutual authentication with key agreement protocol using elliptic curve cryptography (ECC). However, the authors have proved that the protocol is not appropriate for practical use as it has many problems (1) it fails to achieve strong authentication in login and authentication phases; (2) it fails to update the password correctly in the password change phase; (3) it fails to provide the revocation of lost/stolen smartcard; and (4) it fails to protect the strong replay attack. We then devised an anonymous and provably secure two-factor authentication protocol based on ECC. Our protocol is analyzed with the random oracle model and demonstrated to be formally secured against the hardness assumption of computational Diffie-Hellman problem. The performance evaluation demonstrated that our protocol outperforms from the perspective of security, functionality and computation costs over other existing designs.",
"title": ""
},
{
"docid": "45b5072faafa8a26cfe320bd5faedbcd",
"text": "METIS-II was an EU-FET MT project running from October 2004 to September 2007, which aimed at translating free text input without resorting to parallel corpora. The idea was to use “basic” linguistic tools and representations and to link them with patterns and statistics from the monolingual target-language corpus. The METIS-II project has four partners, translating from their “home” languages Greek, Dutch, German, and Spanish into English. The paper outlines the basic ideas of the project, their implementation, the resources used, and the results obtained. It also gives examples of how METIS-II has continued beyond its lifetime and the original scope of the project. On the basis of the results and experiences obtained, we believe that the approach is promising and offers the potential for development in various directions.",
"title": ""
},
{
"docid": "08bd4d2c48ebde047a8b36ce72fe61b6",
"text": "S imultaneous localization and mapping (SLAM) is the process by which a mobile robot can build a map of the environment and, at the same time, use this map to compute its location. The past decade has seen rapid and exciting progress in solving the SLAM problem together with many compelling implementations of SLAM methods. The great majority of work has focused on improving computational efficiency while ensuring consistent and accurate estimates for the map and vehicle pose. However, there has also been much research on issues such as nonlinearity, data association , and landmark characterization, all of which are vital in achieving a practical and robust SLAM implementation. This tutorial focuses on the recursive Bayesian formulation of the SLAM problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. Part I of this tutorial (IEEE Robotics & Auomation Magazine, vol. 13, no. 2) surveyed the development of the essential SLAM algorithm in state-space and particle filter form, described a number of key implementations, and cited locations of source code and real-world data for evaluation of SLAM algorithms. Part II of this tutorial (this article), surveys the current state of the art in SLAM research with a focus on three key areas: computational complexity, data association, and environment representation. Much of the mathematical notation and essential concepts used in this article are defined in Part I of this tutorial and, therefore, are not repeated here. SLAM, in its naive form, scales quadratically with the number of landmarks in a map. For real-time implementation, this scaling is potentially a substantial limitation in the use of SLAM methods. The complexity section surveys the many approaches that have been developed to reduce this complexity. These include linear-time state augmentation, sparsifica-tion in information form, partitioned updates, and submapping methods. A second major hurdle to overcome in the implementation of SLAM methods is to correctly associate observations of landmarks with landmarks held in the map. Incorrect association can lead to catastrophic failure of the SLAM algorithm. Data association is particularly important when a vehicle returns to a previously mapped region after a long excursion, the so-called loop-closure problem. The data association section surveys current data association methods used in SLAM. These include batch-validation methods that exploit constraints inherent in the SLAM formulation, appearance based methods, and multihypothesis techniques. The third development discussed in this tutorial is …",
"title": ""
},
{
"docid": "d1fa477646e636a3062312d6f6444081",
"text": "This paper proposes a novel attention model for semantic segmentation, which aggregates multi-scale and context features to refine prediction. Specifically, the skeleton convolutional neural network framework takes in multiple different scales inputs, by which means the CNN can get representations in different scales. The proposed attention model will handle the features from different scale streams respectively and integrate them. Then location attention branch of the model learns to softly weight the multi-scale features at each pixel location. Moreover, we add an recalibrating branch, parallel to where location attention comes out, to recalibrate the score map per class. We achieve quite competitive results on PASCAL VOC 2012 and ADE20K datasets, which surpass baseline and related works.",
"title": ""
},
{
"docid": "6cede97ce872c5b95ea483851a09c707",
"text": "The UK construction industry has been branded as an inefficient, fragmented and non-value delivering industry by prominent critics such as Michael Latham and Sir. John Egan; both have insisted on the need to change the way the industry delivers and manages assets through integrated project processes. Sir. John Egan specifically highlighted the need for significant reduction in project time and cost. As a result, for the last few years the main emphasis within the construction industry is on integrating various project processes by using the integrated approach and enabling technologies that bring all the stakeholders in a close relationship for achieving greater success. One of the most critical project processes is the cost management that involves cost estimating, control of expenditure and cost advice on cash flow and payments. Furthermore, Building Information Modeling (BIM) has emerged as a very powerful approach and set of information technologies that allows the project stakeholders to work collaboratively on highly technical and comprehensive models using parametric design components and visualize design in 3D. The UK government has made the delivery of public procured projects through BIM mandatory from 2016. Thus, it has become critical to investigate the prospects of cost management practice in this context and determination of how BIM can help in its improvement. The aim of this paper is to examine the importance of BIM in the UK construction sector with a specific focus on cost management through a state-of-the-art review of literature in order to highlight the significance of BIM for potential improvements in area of cost management.",
"title": ""
},
{
"docid": "314e1b8bbcc0a5735d86bb751d524a93",
"text": "Ubiquinone (coenzyme Q), in addition to its function as an electron and proton carrier in mitochondrial and bacterial electron transport linked to ATP synthesis, acts in its reduced form (ubiquinol) as an antioxidant, preventing the initiation and/or propagation of lipid peroxidation in biological membranes and in serum low-density lipoprotein. The antioxidant activity of ubiquinol is independent of the effect of vitamin E, which acts as a chain-breaking antioxidant inhibiting the propagation of lipid peroxidation. In addition, ubiquinol can efficiently sustain the effect of vitamin E by regenerating the vitamin from the tocopheroxyl radical, which otherwise must rely on water-soluble agents such as ascorbate (vitamin C). Ubiquinol is the only known lipid-soluble antioxidant that animal cells can synthesize de novo, and for which there exist enzymic mechanisms that can regenerate the antioxidant from its oxidized form resulting from its inhibitory effect of lipid peroxidation. These features, together with its high degree of hydrophobicity and its widespread occurrence in biological membranes and in low-density lipoprotein, suggest an important role of ubiquinol in cellular defense against oxidative damage. Degenerative diseases and aging may bc 1 manifestations of a decreased capacity to maintain adequate ubiquinol levels.",
"title": ""
},
{
"docid": "25bb62673d1bfadfc751bd10413c94dd",
"text": "Phase-change materials are some of the most promising materials for data-storage applications. They are already used in rewriteable optical data storage and offer great potential as an emerging non-volatile electronic memory. This review looks at the unique property combination that characterizes phase-change materials. The crystalline state often shows an octahedral-like atomic arrangement, frequently accompanied by pronounced lattice distortions and huge vacancy concentrations. This can be attributed to the chemical bonding in phase-change alloys, which is promoted by p-orbitals. From this insight, phase-change alloys with desired properties can be designed. This is demonstrated for the optical properties of phase-change alloys, in particular the contrast between the amorphous and crystalline states. The origin of the fast crystallization kinetics is also discussed.",
"title": ""
},
{
"docid": "5357d90787090ec822d0b540d09b6c6b",
"text": "Providing accurate attendance marking system in real-time is challenging. It is tough to mark the attendance of a student in the large classroom when there are many students attending the class. Many attendance management systems have been implemented in the recent research. However, the attendance management system based on facial recognition still has issues. Thus many research have been conducted to improve system. This paper reviewed the previous works on attendance management system based on facial recognition. This article does not only provide the literature review on the earlier work or related work, but it also provides the deep analysis of Principal Component Analysis, discussion, suggestions for future work.",
"title": ""
},
{
"docid": "9c4845279d61619594461d140cfd9311",
"text": "This paper presents a fusion approach for improving human action recognition based on two differing modality sensors consisting of a depth camera and an inertial body sensor. Computationally efficient action features are extracted from depth images provided by the depth camera and from accelerometer signals provided by the inertial body sensor. These features consist of depth motion maps and statistical signal attributes. For action recognition, both feature-level fusion and decision-level fusion are examined by using a collaborative representation classifier. In the feature-level fusion, features generated from the two differing modality sensors are merged before classification, while in the decision-level fusion, the Dempster-Shafer theory is used to combine the classification outcomes from two classifiers, each corresponding to one sensor. The introduced fusion framework is evaluated using the Berkeley multimodal human action database. The results indicate that because of the complementary aspect of the data from these sensors, the introduced fusion approaches lead to 2% to 23% recognition rate improvements depending on the action over the situations when each sensor is used individually.",
"title": ""
},
{
"docid": "8f65f1971405e0c225e3625bb682a2d4",
"text": "We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet (Chang et al. Shapenet: an information-rich 3d model repository, 2015. arXiv:1512.03012) and ModelNet (Wu et al., in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2015) as well as on real robotics data from KITTI (Geiger et al., in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2012) and Kinect (Yang et al., 3d object dense reconstruction from a single depth view, 2018. arXiv:1802.00411), we demonstrate that the proposed amortized maximum likelihood approach is able to compete with the fully supervised baseline of Dai et al. (in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2017) and outperforms the data-driven approach of Engelmann et al. (in: Proceedings of the German conference on pattern recognition (GCPR), 2016), while requiring less supervision and being significantly faster.",
"title": ""
},
{
"docid": "c82f4117c7c96d0650eff810f539c424",
"text": "The Stock Market is known for its volatile and unstable nature. A particular stock could be thriving in one period and declining in the next. Stock traders make money from buying equity when they are at their lowest and selling when they are at their highest. The logical question would be: \"What Causes Stock Prices To Change?\". At the most fundamental level, the answer to this would be the demand and supply. In reality, there are many theories as to why stock prices fluctuate, but there is no generic theory that explains all, simply because not all stocks are identical, and one theory that may apply for today, may not necessarily apply for tomorrow. This paper covers various approaches taken to attempt to predict the stock market without extensive prior knowledge or experience in the subject area, highlighting the advantages and limitations of the different techniques such as regression and classification. We formulate both short term and long term predictions. Through experimentation we achieve 81% accuracy for future trend direction using classification, 0.0117 RMSE for next day price and 0.0613 RMSE for next day change in price using regression techniques. The results obtained in this paper are achieved using only historic prices and technical indicators. Various methods, tools and evaluation techniques will be assessed throughout the course of this paper, the result of this contributes as to which techniques will be selected and enhanced in the final artefact of a stock prediction model. Further work will be conducted utilising deep learning techniques to approach the problem. This paper will serve as a preliminary guide to researchers wishing to expose themselves to this area.",
"title": ""
},
{
"docid": "b64a91ca7cdeb3dfbe5678eee8962aa7",
"text": "Computational thinking is gaining recognition as an important skill set for students, both in computer science and other disciplines. Although there has been much focus on this field in recent years, it is rarely taught as a formal course within the curriculum, and there is little consensus on what exactly computational thinking entails and how to teach and evaluate it. To address these concerns, we have developed a computational thinking framework to be used as a planning and evaluative tool. Within this framework, we aim to unify the differing opinions about what computational thinking should involve. As a case study, we have applied the framework to Light-Bot, an educational game with a strong focus on programming, and found that the framework provides us with insight into the usefulness of the game to reinforce computer science concepts.",
"title": ""
},
{
"docid": "ba7b51dc253da1a17aaf12becb1abfed",
"text": "This papers aims to design a new approach in order to increase the performance of the decision making in model-based fault diagnosis when signature vectors of various faults are identical or closed. The proposed approach consists on taking into account the knowledge issued from the reliability analysis and the model-based fault diagnosis. The decision making, formalised as a bayesian network, is established with a priori knowledge on the dynamic component degradation through Markov chains. The effectiveness and performances of the technique are illustrated on a heating water process corrupted by faults. Copyright © 2006 IFAC",
"title": ""
},
{
"docid": "bf87ee431012af3a0648fe0ed9aeb61f",
"text": "Despite the importance attached to homework in cognitive-behavioral therapy for depression, quantitative studies of its impact on outcome have been limited. One aim of the present study was to replicate a previous finding suggesting that improvement can be predicted from the quality of the client's compliance early in treatment. If homework is indeed an effective ingredient in this form of treatment, it is important to know how compliance can be influenced. The second aim of the present study was to examine the effectiveness of several methods of enhancing compliance that have frequently been recommended to therapists. The data were drawn from 235 sessions received by 25 clients. Therapists' ratings of compliance following the first two sessions of treatment contributed significantly to the prediction of improvement at termination (though not at followup). However, compliance itself could not be predicted from any of the clients' ratings of therapist behavior in recommending the assignments.",
"title": ""
},
{
"docid": "ec8847a65f015a52ce90bdd304103658",
"text": "This study has a purpose to investigate the adoption of online games technologies among adolescents and their behavior in playing online games. The findings showed that half of them had experience ten months or less in playing online games with ten hours or less for each time playing per week. Nearly fifty-four percent played up to five times each week where sixty-six percent played two hours or less. Behavioral Intention has significant correlation to model variables naming Perceived Enjoyment, Flow Experience, Performance Expectancy, Effort Expectancy, Social Influence, and Facilitating Conditions; Experience; and the number and duration of game sessions. The last, Performance Expectancy and Facilitating Condition had a positive, medium, and statistically direct effect on Behavioral Intention. Four other variables Perceived Enjoyment, Flow Experience, Effort Expectancy, and Social Influence had positive or negative, medium or small, and not statistically direct effect on Behavioral Intention. Additionally, Flow Experience and Social Influence have no significant different between the mean value for male and female. Other variables have significant different regard to gender, where mean value of male was significantly greater than female except for Age. Practical implications of this study are relevant to groups who have interest to enhance or to decrease the adoption of online games technologies. Those to enhance the adoption of online games technologies must: preserve Performance Expectancy and Facilitating Conditions; enhance Flow Experience, Perceived Enjoyment, Effort Expectancy, and Social Influence; and engage the adolescent's online games behavior, specifically supporting them in longer playing games and in enhancing their experience. The opposite actions to these proposed can be considered to decrease the adoption.",
"title": ""
},
{
"docid": "1288abeaddded1564b607c9f31924697",
"text": "Dynamic time warping (DTW) is used for the comparison and processing of nonlinear signals and constitutes a widely researched field of study. The method has been initially designed for, and applied to, signals representing audio data. Afterwords it has been successfully modified and applied to many other fields of study. In this paper, we present the results of researches on the generalized DTW method designed for use with rotational sets of data parameterized by quaternions. The need to compare and process quaternion time series has been gaining in importance recently. Three-dimensional motion data processing is one of the most important applications here. Specifically, it is applied in the context of motion capture, and in many cases all rotational signals are described in this way. We propose a construction of generalized method called quaternion dynamic time warping (QDTW), which makes use of specific properties of quaternion space. It allows for the creation of a family of algorithms that deal with the higher order features of the rotational trajectory. This paper focuses on the analysis of the properties of this new approach. Numerical results show that the proposed method allows for efficient element assignment. Moreover, when used as the measure of similarity for a clustering task, the method helps to obtain good clustering performance both for synthetic and real datasets.",
"title": ""
},
{
"docid": "5542f4693a4251edcf995e7608fbda56",
"text": "This paper investigates the antecedents and consequences of customer loyalty in an online business-to-consumer (B2C) context. We identify eight factors (the 8Cs—customization, contact interactivity, care, community, convenience, cultivation, choice, and character) that potentially impact e-loyalty and develop scales to measure these factors. Data collected from 1,211 online customers demonstrate that all these factors, except convenience, impact e-loyalty. The data also reveal that e-loyalty has an impact on two customer-related outcomes: word-ofmouth promotion and willingness to pay more. © 2002 by New York University. All rights reserved.",
"title": ""
},
{
"docid": "3fd7611b349d80f08c0bc2b16f2e0c58",
"text": "A rapid pattern-recognition approach to characterize driver's curve-negotiating behavior is proposed. To shorten the recognition time and improve the recognition of driving styles, a k-means clustering-based support vector machine (kMC-SVM) method is developed and used for classifying drivers into two types: aggressive and moderate. First, vehicle speed and throttle opening are treated as the feature parameters to reflect the driving styles. Second, to discriminate driver curve-negotiating behaviors and reduce the number of support vectors, the k-means clustering method is used to extract and gather the two types of driving data and shorten the recognition time. Then, based on the clustering results, a support vector machine approach is utilized to generate the hyperplane for judging and predicting to which types the human driver are subject. Lastly, to verify the validity of the kMC-SVM method, a cross-validation experiment is designed and conducted. The research results show that the kMC-SVM is an effective method to classify driving styles with a short time, compared with SVM method.",
"title": ""
},
{
"docid": "68f380f3a0dbced0ac3fdb052009aacd",
"text": "We introduce a certain class of so-called perfectoid rings and spaces, which give a natural framework for Faltings’ almost purity theorem, and for which there is a natural tilting operation which exchanges characteristic 0 and characteristic p. We deduce the weight-monodromy conjecture in certain cases by reduction to equal characteristic.",
"title": ""
}
] |
scidocsrr
|
593de3db50578fd348bc5de06dd68ba5
|
Automotive power generation and control
|
[
{
"docid": "d2b545b4f9c0e7323760632c65206480",
"text": "This brief presents a quantitative analysis of the operating characteristics of three-phase diode bridge rectifiers with ac-side reactance and constant-voltage loads. We focus on the case where the ac-side currents vary continuously (continuous ac-side conduction mode). This operating mode is of particular importance in alternators and generators, for example. Simple approximate expressions are derived for the line and output current characteristics as well as the input power factor. Expressions describing the necessary operating conditions for continuous ac-side conduction are also developed. The derived analytical expressions are applied to practical examples and both simulations and experimental results are utilized to validate the analytical results. It is shown that the derived expressions are far more accurate than calculations based on traditional constant-current models.",
"title": ""
}
] |
[
{
"docid": "507cddc2df8ab2775395efb8387dad93",
"text": "A novel band-reject element for the design of inline waveguide pseudoelliptic band-reject filters is introduced. The element consists of an offset partial-height post in a rectangular waveguide in which the dominant TE10 mode is propagating. The location of the attenuation pole is primarily determined by the height of the post that generates it. The element allows the implementation of weak, as well as strong coupling coefficients that are encountered in asymmetric band-reject responses with broad stopbands. The coupling strength is controlled by the offset of the post with respect to the center of the main waveguide. The posts are separated by uniform sections of the main waveguide. An equivalent low-pass circuit based on the extracted pole technique is first used in a preliminary design. An improved equivalent low-pass circuit that includes a more accurate equivalent circuit of the band-reject element is then introduced. A synthesis method of the enhanced network is also presented. Filters based on the introduced element are designed, fabricated, and tested. Good agreement between measured and simulated results is achieved",
"title": ""
},
{
"docid": "7cc5c8250ad7ffaa8983d00b398c6ea9",
"text": "Decisions are powerfully affected by anticipated regret, and people anticipate feeling more regret when they lose by a narrow margin than when they lose by a wide margin. But research suggests that people are remarkably good at avoiding self-blame, and hence they may be better at avoiding regret than they realize. Four studies measured people's anticipations and experiences of regret and self-blame. In Study 1, students overestimated how much more regret they would feel when they \"nearly won\" than when they \"clearly lost\" a contest. In Studies 2, 3a, and 3b, subway riders overestimated how much more regret and self-blame they would feel if they \"nearly caught\" their trains than if they \"clearly missed\" their trains. These results suggest that people are less susceptible to regret than they imagine, and that decision makers who pay to avoid future regrets may be buying emotional insurance that they do not actually need.",
"title": ""
},
{
"docid": "5169d59af7f5cae888a998f891d99b18",
"text": "Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes), while on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising compromises of methodological solutions have been proposed-in particular, the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analyses of larger amounts of gaze-data and further, to increase the explanatory power of the derived results.",
"title": ""
},
{
"docid": "bf180a4ed173ef81c91594a2ee651c8c",
"text": "Recent emergence of low-cost and easy-operating depth cameras has reinvigorated the research in skeleton-based human action recognition. However, most existing approaches overlook the intrinsic interdependencies between skeleton joints and action classes, thus suffering from unsatisfactory recognition performance. In this paper, a novel latent max-margin multitask learning model is proposed for 3-D action recognition. Specifically, we exploit skelets as the mid-level granularity of joints to describe actions. We then apply the learning model to capture the correlations between the latent skelets and action classes each of which accounts for a task. By leveraging structured sparsity inducing regularization, the common information belonging to the same class can be discovered from the latent skelets, while the private information across different classes can also be preserved. The proposed model is evaluated on three challenging action data sets captured by depth cameras. Experimental results show that our model consistently achieves superior performance over recent state-of-the-art approaches.",
"title": ""
},
{
"docid": "2a8f464e709dcae4e34f73654aefe31f",
"text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of radio network, radio network programability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.",
"title": ""
},
{
"docid": "2ae80b030c82bf97bcf3662386cb2ec8",
"text": "A system model and its corresponding inversion for synthetic aperture radar (SAR) imaging are presented. The system model incorporates the spherical nature of a radar's radiation pattern at far field. The inverse method based on this model performs a spatial Fourier transform (Doppler processing) on the recorded signals with respect to the available coordinates of a translational radar (SAR) or target (inverse SAR). It is shown that the transformed data provide samples of the spatial Fourier transform of the target's reflectivity function. The inverse method can be modified to incorporate deviations of the radar's motion from its prescribed straight line path. The effects of finite aperture on resolution, reconstruction, and sampling constraints for the imaging problem are discussed.",
"title": ""
},
{
"docid": "d03a86459dd461dcfac842ae55ae4ebb",
"text": "Convolutional networks are the de-facto standard for analyzing spatio-temporal data such as images, videos, and 3D shapes. Whilst some of this data is naturally dense (e.g., photos), many other data sources are inherently sparse. Examples include 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. Standard \"dense\" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and use them to develop spatially-sparse convolutional networks. We demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SS-CNs), on two tasks involving semantic segmentation of 3D point clouds. In particular, our models outperform all prior state-of-the-art on the test set of a recent semantic segmentation competition.",
"title": ""
},
{
"docid": "e610893c12836cf6019fa37c888e1666",
"text": "A new type of uncertainty relation is presented, concerning the information-bearing properti a discrete quantum system. A natural link is then revealed between basic quantum theory a linear error correcting codes of classical information theory. A subset of the known codes is desc having properties which are important for error correction in quantum communication. It is shown a pair of states which are, in a certain sense, “macroscopically different,” can form a superposit which the interference phase between the two parts is measurable. This provides a highly sta “Schrödinger cat” state. [S0031-9007(96)00779-X]",
"title": ""
},
{
"docid": "467637b1f55d4673d0ddd5322a130979",
"text": "In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise non-linearity) when the normal operator (<inline-formula> <tex-math notation=\"LaTeX\">$H^{*}H$ </tex-math></inline-formula>, where <inline-formula> <tex-math notation=\"LaTeX\">$H^{*}$ </tex-math></inline-formula> is the adjoint of the forward imaging operator, <inline-formula> <tex-math notation=\"LaTeX\">$H$ </tex-math></inline-formula>) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a <inline-formula> <tex-math notation=\"LaTeX\">$512\\times 512$ </tex-math></inline-formula> image on the GPU.",
"title": ""
},
{
"docid": "64a77ec55d5b0a729206d9af6d5c7094",
"text": "In this paper, we propose an Internet of Things (IoT) virtualization framework to support connected objects sensor event processing and reasoning by providing a semantic overlay of underlying IoT cloud. The framework uses the sensor-as-aservice notion to expose IoT cloud's connected objects functional aspects in the form of web services. The framework uses an adapter oriented approach to address the issue of connectivity with various types of sensor nodes. We employ semantic enhanced access polices to ensure that only authorized parties can access the IoT framework services, which result in enhancing overall security of the proposed framework. Furthermore, the use of event-driven service oriented architecture (e-SOA) paradigm assists the framework to leverage the monitoring process by dynamically sensing and responding to different connected objects sensor events. We present our design principles, implementations, and demonstrate the development of IoT application with reasoning capability by using a green school motorcycle (GSMC) case study. Our exploration shows that amalgamation of e-SOA, semantic web technologies and virtualization paves the way to address the connectivity, security and monitoring issues of IoT domain.",
"title": ""
},
{
"docid": "717e11d1a112557abdc4160afe75ce16",
"text": "Various types of lipids and their metabolic products associated with the biological membrane play a crucial role in signal transduction, modulation, and activation of receptors and as precursors of bioactive lipid mediators. Dysfunction in the lipid homeostasis in the brain could be a risk factor for the many types of neurodegenerative disorders, including Alzheimer’s disease, Huntington’s disease, Parkinson’s disease, and amyotrophic lateral sclerosis. These neurodegenerative disorders are marked by extensive neuronal apoptosis, gliosis, and alteration in the differentiation, proliferation, and development of neurons. Sphingomyelin, a constituent of plasma membrane, as well as its primary metabolite ceramide acts as a potential lipid second messenger molecule linked with the modulation of various cellular signaling pathways. Excessive production of reactive oxygen species associated with enhanced oxidative stress has been implicated with these molecules and involved in the regulation of a variety of different neurodegenerative and neuroinflammatory disorders. Studies have shown that alterations in the levels of plasma lipid/cholesterol concentration may result to neurodegenerative diseases. Alteration in the levels of inflammatory cytokines and mediators in the brain has also been found to be implicated in the pathophysiology of neurodegenerative diseases. Although several mechanisms involved in neuronal apoptosis have been described, the molecular mechanisms underlying the correlation between lipid metabolism and the neurological deficits are not clearly understood. In the present review, an attempt has been made to provide detailed information about the association of lipids in neurodegeneration especially in Alzheimer’s disease.",
"title": ""
},
{
"docid": "4107fe17e6834f96a954e13cbb920f78",
"text": "Non-orthogonal multiple access (NOMA) can support more users than OMA techniques using the same wireless resources, which is expected to support massive connectivity for Internet of Things in 5G. Furthermore, in order to reduce the transmission latency and signaling overhead, grant-free transmission is highly expected in the uplink NOMA systems, where user activity has to be detected. In this letter, by exploiting the temporal correlation of active user sets, we propose a dynamic compressive sensing (DCS)-based multi-user detection (MUD) to realize both user activity and data detection in several continuous time slots. In particular, as the temporal correlation of the active user sets between adjacent time slots exists, we can use the estimated active user set in the current time slot as the prior information to estimate the active user set in the next time slot. Simulation results show that the proposed DCS-based MUD can achieve much better performance than that of the conventional CS-based MUD in NOMA systems.",
"title": ""
},
{
"docid": "20173b723d2ed8cf17970ef119c11571",
"text": "In recent years, there have been amazing advances in deep learning methods for machine reading. In machine reading, the machine reader has to extract the answer from the given ground truth paragraph. Recently, the stateof-the-art machine reading models achieve human level performance in SQuAD which is a reading comprehension-style question answering (QA) task. The success of machine reading has inspired researchers to combine information retrieval with machine reading to tackle open-domain QA. However, these systems perform poorly compared to reading comprehension-style QA because it is difficult to retrieve the pieces of paragraphs that contain the answer to the question. In this study, we propose two neural network rankers that assign scores to different passages based on their likelihood of containing the answer to a given question. Additionally, we analyze the relative importance of semantic similarity and word level relevance matching in open-domain QA.",
"title": ""
},
{
"docid": "0c8d6441b5756d94cd4c3a0376f94fdc",
"text": "Electronic word of mouth (eWOM) has been an important factor influencing consumer purchase decisions. Using the ABC model of attitude, this study proposes a model to explain how eWOM affects online discussion forums. Specifically, we propose that platform (Web site reputation and source credibility) and customer (obtaining buying-related information and social orientation through information) factors influence purchase intentions via perceived positive eWOM review credibility, as well as product and Web site attitudes in an online community context. A total of 353 online discussion forum users in an online community (Fashion Guide) in Taiwan were recruited, and structural equation modeling (SEM) was used to test the research hypotheses. The results indicate that Web site reputation, source credibility, obtaining buying-related information, and social orientation through information positively influence perceived positive eWOM review credibility. In turn, perceived positive eWOM review credibility directly influences purchase intentions and also indirectly influences purchase intentions via product and Web site attitudes. Finally, we discuss the theoretical and managerial implications of the findings.",
"title": ""
},
{
"docid": "aa3178c1b4d7ae8f9e3e97fabea3d6a1",
"text": "This study continues landmark research, by Katz in 1984 and Hartland and Londoner in 1997, on characteristics of effective teaching by nurse anesthesia clinical instructors. Based on the literature review, there is a highlighted gap in research evaluating current teaching characteristics of clinical nurse anesthesia instructors that are valuable and effective from an instructor's and student's point of view. This study used a descriptive, quantitative research approach to assess (1) the importance of 24 characteristics (22 effective clinical teaching characteristics identified by Katz, and 2 items added for this study) of student registered nurse anesthetists (SRNAs) and clinical preceptors, who are Certified Registered Nurse Anesthetists, and (2) the congruence between the student and preceptor perceptions. A Likert-scale survey was used to assess the importance of each characteristic. The study was conducted at a large Midwestern hospital. The findings of this study did not support the results found by Hartland and Londoner based on the Friedman 2-way analysis. The rankings of the 24 characteristics by the students and the clinical preceptors in the current research were not significantly congruent based on the Kendall coefficient analysis. The results can help clinical preceptors increase their teaching effectiveness and generate effective learning environments for SRNAs.",
"title": ""
},
{
"docid": "d3fbf7429dff6f68ec06014467b0217a",
"text": "This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression (GPR) which is fully non-parametric and robust to the noisy training data, and supports sparse features. While most research on anomaly detection has focused more on detecting local anomalies, we are more interested in global anomalies that involve multiple normal events interacting in an unusual manner, such as car accidents. To simultaneously detect local and global anomalies, we cast the extraction of normal interactions from the training videos as a problem of finding the frequent geometric relations of the nearby sparse spatio-temporal interest points (STIPs). A codebook of interaction templates is then constructed and modeled using the GPR, based on which a novel inference method for computing the likelihood of an observed interaction is also developed. Thereafter, these local likelihood scores are integrated into globally consistent anomaly masks, from which anomalies can be succinctly identified. To the best of our knowledge, it is the first time GPR is employed to model the relationship of the nearby STIPs for anomaly detection. Simulations based on four widespread datasets show that the new method outperforms the main state-of-the-art methods with lower computational burden.",
"title": ""
},
{
"docid": "5bd713c468f48313e42b399f441bb709",
"text": "Nowadays, malware is affecting not only PCs but also mobile devices, which became pervasive in everyday life. Mobile devices can access and store personal information (e.g., location, photos, and messages) and thus are appealing to malware authors. One of the most promising approach to analyze malware is by monitoring its execution in a sandbox (i.e., via dynamic analysis). In particular, most malware sandboxing solutions for Android rely on an emulator, rather than a real device. This motivates malware authors to include runtime checks in order to detect whether the malware is running in a virtualized environment. In that case, the malicious app does not trigger the malicious payload. The presence of differences between real devices and Android emulators started an arms race between security researchers and malware authors, where the former want to hide these differences and the latter try to seek them out. In this paper we present Mirage, a malware sandbox architecture for Android focused on dynamic analysis evasion attacks. We designed the components of Mirage to be extensible via software modules, in order to build specific countermeasures against such attacks. To the best of our knowledge, Mirage is the first modular sandbox architecture that is robust against sandbox detection techniques. As a representative case study, we present a proof of concept implementation of Mirage with a module that tackles evasion attacks based on sensors API return values.",
"title": ""
},
{
"docid": "d5be665f8ce9fb442c87da6dd4baa6a6",
"text": "In this paper we propose a novel kernel sparse representation classification (SRC) framework and utilize the local binary pattern (LBP) descriptor in this framework for robust face recognition. First we develop a kernel coordinate descent (KCD) algorithm for 11 minimization in the kernel space, which is based on the covariance update technique. Then we extract LBP descriptors from each image and apply two types of kernels (χ2 distance based and Hamming distance based) with the proposed KCD algorithm under the SRC framework for face recognition. Experiments on both the Extended Yale B and the PIE face databases show that the proposed method is more robust against noise, occlusion, and illumination variations, even with small number of training samples.",
"title": ""
},
{
"docid": "83a4a89d3819009d61123a146b38d0e9",
"text": "OBJECTIVE\nBehçet's disease (BD) is a chronic, relapsing, inflammatory vascular disease with no pathognomonic test. Low sensitivity of the currently applied International Study Group (ISG) clinical diagnostic criteria led to their reassessment.\n\n\nMETHODS\nAn International Team for the Revision of the International Criteria for BD (from 27 countries) submitted data from 2556 clinically diagnosed BD patients and 1163 controls with BD-mimicking diseases or presenting at least one major BD sign. These were randomly divided into training and validation sets. Logistic regression, 'leave-one-country-out' cross-validation and clinical judgement were employed to develop new International Criteria for BD (ICBD) with the training data. Existing and new criteria were tested for their performance in the validation set.\n\n\nRESULTS\nFor the ICBD, ocular lesions, oral aphthosis and genital aphthosis are each assigned 2 points, while skin lesions, central nervous system involvement and vascular manifestations 1 point each. The pathergy test, when used, was assigned 1 point. A patient scoring ≥4 points is classified as having BD. In the training set, 93.9% sensitivity and 92.1% specificity were assessed compared with 81.2% sensitivity and 95.9% specificity for the ISG criteria. In the validation set, ICBD demonstrated an unbiased estimate of sensitivity of 94.8% (95% CI: 93.4-95.9%), considerably higher than that of the ISG criteria (85.0%). Specificity (90.5%, 95% CI: 87.9-92.8%) was lower than that of the ISG-criteria (96.0%), yet still reasonably high. For countries with at least 90%-of-cases and controls having a pathergy test, adding 1 point for pathergy test increased the estimate of sensitivity from 95.5% to 98.5%, while barely reducing specificity from 92.1% to 91.6%.\n\n\nCONCLUSION\nThe new proposed criteria derived from multinational data exhibits much improved sensitivity over the ISG criteria while maintaining reasonable specificity. It is proposed that the ICBD criteria to be adopted both as a guide for diagnosis and classification of BD.",
"title": ""
},
{
"docid": "2bb36d78294b15000b78acd7a0831762",
"text": "This study aimed to verify whether achieving a dist inctive academic performance is unlikely for students at high risk of smartphone addiction. Additionally, it verified whether this phenomenon was equally applicable to male and femal e students. After implementing systematic random sampling, 293 university students participated by completing an online survey questionnaire posted on the university’s stu dent information system. The survey questionnaire collected demographic information and responses to the Smartphone Addiction Scale-Short Version (SAS-SV) items. The results sho wed that male and female university students were equally susceptible to smartphone add iction. Additionally, male and female university students were equal in achieving cumulat ive GPAs with distinction or higher within the same levels of smartphone addiction. Fur thermore, undergraduate students who were at a high risk of smartphone addiction were le ss likely to achieve cumulative GPAs of distinction or higher.",
"title": ""
}
] |
scidocsrr
|
1483bb0c391bd654416b1079bb86a79b
|
Smoke detection using spatial and temporal analyses
|
[
{
"docid": "70e88fe5fc43e0815a1efa05e17f7277",
"text": "Smoke detection is a crucial task in many video surveillance applications and could have a great impact to raise the level of safety of urban areas. Many commercial smoke detection sensors exist but most of them cannot be applied in open space or outdoor scenarios. With this aim, the paper presents a smoke detection system that uses a common CCD camera sensor to detect smoke in images and trigger alarms. First, a proper background model is proposed to reliably extract smoke regions and avoid over-segmentation and false positives in outdoor scenarios where many distractors are present, such as moving trees or light reflexes. A novel Bayesian approach is adopted to detect smoke regions in the scene analyzing image energy by means of the Wavelet Transform coefficients and Color Information. A statistical model of image energy is built, using a temporal Gaussian Mixture, to analyze the energy decay that typically occurs when smoke covers the scene then the detection is strengthen evaluating the color blending between a reference smoke color and the input frame. The proposed system is capable of detecting rapidly smoke events both in night and in day conditions with a reduced number of false alarms hence is particularly suitable for monitoring large outdoor scenarios where common sensors would fail. An extensive experimental campaign both on recorded videos and live cameras evaluates the efficacy and efficiency of the system in many real world scenarios, such as outdoor storages and forests.",
"title": ""
}
] |
[
{
"docid": "e597f9fbd0d42066b991c6e917a1e767",
"text": "While Open Data initiatives are diverse, they aim to create and contribute to public value. Yet several potential contradictions exist between public values, such as trust, transparency, privacy, and security, and Open Data policies. To bridge these contradictions, we present the notion of precommitment as a restriction of one’s choices. Conceptualized as a policy instrument, precommitment can be applied by an organization to restrict the extent to which an Open Data policy might conflict with public values. To illustrate the use of precommitment, we present two case studies at two public sector organizations, where precommitment is applied during a data request procedure to reconcile conflicting values. In this procedure, precommitment is operationalized in three phases. In the first phase, restrictions are defined on the type and the content of the data that might be requested. The second phase involves the preparation of the data to be delivered according to legal requirements and the decisions taken in phase 1. Data preparation includes amongst others the deletion of privacy sensitive or other problematic attributes. Finally, phase 3 pertains to the establishment of the conditions of reuse of the data, limiting the use to restricted user groups or opening the data for everyone.",
"title": ""
},
{
"docid": "e0fae6d662cdeb4815ed29a828747491",
"text": "In this paper, a novel framework is developed to achieve effective summarization of large-scale image collection by treating the problem of automatic image summarization as the problem of dictionary learning for sparse representation, e.g., the summarization task can be treated as a dictionary learning task (i.e., the given image set can be reconstructed sparsely with this dictionary). For image set of a specific category or a mixture of multiple categories, we have built a sparsity model to reconstruct all its images by using a subset of most representative images (i.e., image summary); and we adopted the simulated annealing algorithm to learn such sparse dictionary by minimizing an explicit optimization function. By investigating their reconstruction ability under sparsity constrain and diversity constrain, we have quantitatively measure the performance of various summarization algorithms. Our experimental results have shown that our dictionary learning for sparse representation algorithm can obtain more accurate summary as compared with other baseline algorithms.",
"title": ""
},
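As a rough illustration of the summarization-as-dictionary-learning idea above, the following sketch selects k representative rows of a feature matrix by simulated annealing over subsets, scoring each subset by how well it reconstructs the whole set in a least-squares sense. This is a simplified stand-in for the paper's sparsity-constrained objective; the cooling schedule, subset size, and all names are assumptions.

```python
import numpy as np

def reconstruction_error(X, idx):
    """Error of reconstructing every row of X from the selected rows X[idx]
    by least squares -- a simplified stand-in for the sparsity-constrained objective."""
    D = X[list(idx)]                                    # candidate "dictionary"
    coef, *_ = np.linalg.lstsq(D.T, X.T, rcond=None)
    return np.linalg.norm(X - coef.T @ D) ** 2

def summarize(X, k=5, iters=2000, t0=1.0, seed=0):
    """Pick k representative rows of X by simulated annealing over subsets."""
    rng = np.random.default_rng(seed)
    n = len(X)
    current = set(rng.choice(n, size=k, replace=False))
    best, best_err = set(current), reconstruction_error(X, current)
    err = best_err
    for i in range(iters):
        temp = t0 * (1 - i / iters) + 1e-9              # linear cooling schedule (assumption)
        # Propose swapping one selected item for one unselected item.
        out = rng.choice(sorted(current))
        inn = rng.choice([j for j in range(n) if j not in current])
        proposal = (current - {out}) | {inn}
        new_err = reconstruction_error(X, proposal)
        # Accept better proposals always, worse ones with a temperature-dependent probability.
        if new_err < err or rng.random() < np.exp((err - new_err) / temp):
            current, err = proposal, new_err
            if err < best_err:
                best, best_err = set(current), err
    return sorted(best)

# Example: 100 random "image descriptors" of dimension 32, summarized by 5 of them.
X = np.random.default_rng(1).normal(size=(100, 32))
print(summarize(X, k=5))
```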
{
"docid": "87e56672751a8eb4d5a08f0459e525ca",
"text": "— The Internet of Things (IoT) has transformed many aspects of modern manufacturing, from design to production to quality control. In particular, IoT and digital manufacturing technologies have substantially accelerated product development cycles and manufacturers can now create products of a complexity and precision not heretofore possible. New threats to supply chain security have arisen from connecting machines to the Internet and introducing complex IoT-based systems controlling manufacturing processes. By attacking these IoT-based manufacturing systems and tampering with digital files, attackers can manipulate physical characteristics of parts and change the dimensions, shapes, or mechanical properties of the parts, which can result in parts that fail in the field. These defects increase manufacturing costs and allow silent problems to occur only under certain loads that can threaten safety and/or lives. To understand potential dangers and protect manufacturing system safety, this paper presents two taxonomies: one for classifying cyber-physical attacks against manufacturing processes and another for quality control measures for counteracting these attacks. We systematically identify and classify possible cyber-physical attacks and connect the attacks with variations in manufacturing processes and quality control measures. Our tax-onomies also provide a scheme for linking emerging IoT-based manufacturing system vulnerabilities to possible attacks and quality control measures.",
"title": ""
},
{
"docid": "d1444f26cee6036f1c2df67a23d753be",
"text": "Text mining has becoming an emerging research area in now-a-days that helps to extract useful information from large amount of natural language text documents. The need of grouping similar documents together for different applications has gaining the attention of researchers in this area. Document clustering organizes the documents into different groups called as clusters. The documents in one cluster have higher degree of similarity than the documents in other cluster. The paper provides an overview of the document clustering reviewed from different papers and the challenges in document clustering. KeywordsText Mining, Document Clustering, Similarity Measures, Challenges in Document Clustering",
"title": ""
},
{
"docid": "027fca90352f826948d2d42bbeb6c863",
"text": "Inspired by the theory of Leitner’s learning box from the field of psychology, we propose DropSample, a new method for training deep convolutional neural networks (DCNNs), and apply it to large-scale online handwritten Chinese character recognition (HCCR). According to the principle of DropSample, each training sample is associated with a quota function that is dynamically adjusted on the basis of the classification confidence given by the DCNN softmax output. After a learning iteration, samples with low confidence will have a higher probability of being selected as training data in the next iteration; in contrast, well-trained and well-recognized samples with very high confidence will have a lower probability of being involved in the next training iteration and can be gradually eliminated. As a result, the learning process becomes more efficient as it progresses. Furthermore, we investigate the use of domain-specific knowledge to enhance the performance of DCNN by adding a domain knowledge layer before the traditional CNN. By adopting DropSample together with different types of domain-specific knowledge, the accuracy of HCCR can be improved efficiently. Experiments on the CASIA-OLHDWB 1.0, CASIA-OLHWDB 1.1, and ICDAR 2013 online HCCR competition datasets yield outstanding recognition rates of 97.33%, 97.06%, and 97.51% respectively, all of which are significantly better than the previous best results reported in the literature.",
"title": ""
},
{
"docid": "eed788297c1b49895f8f19012b6231f2",
"text": "Can the choice of words and tone used by the authors of financial news articles correlate to measurable stock price movements? If so, can the magnitude of price movement be predicted using these same variables? We investigate these questions using the Arizona Financial Text (AZFinText) system, a financial news article prediction system, and pair it with a sentiment analysis tool. Through our analysis, we found that subjective news articles were easier to predict in price direction (59.0% versus 50.0% of chance alone) and using a simple trading engine, subjective articles garnered a 3.30% return. Looking further into the role of author tone in financial news articles, we found that articles with a negative sentiment were easiest to predict in price direction (50.9% versus 50.0% of chance alone) and a 3.04% trading return. Investigating negative sentiment further, we found that our system was able to predict price decreases in articles of a positive sentiment 53.5% of the time, and price increases in articles of a negative",
"title": ""
},
{
"docid": "0dd334ac819bfb77094e06dc0c00efee",
"text": "How to propagate label information from labeled examples to unlabeled examples over a graph has been intensively studied for a long time. Existing graph-based propagation algorithms usually treat unlabeled examples equally, and transmit seed labels to the unlabeled examples that are connected to the labeled examples in a neighborhood graph. However, such a popular propagation scheme is very likely to yield inaccurate propagation, because it falls short of tackling ambiguous but critical data points (e.g., outliers). To this end, this paper treats the unlabeled examples in different levels of difficulties by assessing their reliability and discriminability, and explicitly optimizes the propagation quality by manipulating the propagation sequence to move from simple to difficult examples. In particular, we propose a novel iterative label propagation algorithm in which each propagation alternates between two paradigms, teaching-to-learn and learning-to-teach (TLLT). In the teaching-to-learn step, the learner conducts the propagation on the simplest unlabeled examples designated by the teacher. In the learning-to-teach step, the teacher incorporates the learner’s feedback to adjust the choice of the subsequent simplest examples. The proposed TLLT strategy critically improves the accuracy of label propagation, making our algorithm substantially robust to the values of tuning parameters, such as the Gaussian kernel width used in graph construction. The merits of our algorithm are theoretically justified and empirically demonstrated through experiments performed on both synthetic and real-world data sets.",
"title": ""
},
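A schematic reading of the teaching-to-learn idea described above can be sketched as curriculum-style propagation: at every round, the unlabeled nodes with the strongest affinity to the already-labeled set are treated as the "simplest" and are labeled first by a weighted vote. This is not the authors' full TLLT algorithm (which also includes a learning-to-teach feedback step); the difficulty measure, batch size, and all names are assumptions.

```python
import numpy as np

def curriculum_propagate(W, labels, batch=5):
    """Propagate labels over an affinity graph from simple to difficult nodes.

    W      : (n, n) symmetric non-negative affinity matrix
    labels : length-n integer array, -1 for unlabeled, otherwise a class id
    batch  : how many "simplest" nodes are released per round (assumption)
    """
    y = labels.copy()
    classes = np.unique(labels[labels >= 0])
    while (y < 0).any():
        unlabeled = np.where(y < 0)[0]
        labeled = np.where(y >= 0)[0]
        # Teaching-to-learn step: rank unlabeled nodes by affinity mass to labeled nodes.
        reliability = W[unlabeled][:, labeled].sum(axis=1)
        easiest = unlabeled[np.argsort(-reliability)][:batch]
        # Learner step: label the easiest nodes by an affinity-weighted vote.
        for i in easiest:
            votes = [W[i, labeled][y[labeled] == c].sum() for c in classes]
            y[i] = classes[int(np.argmax(votes))]
    return y

# Tiny usage example: two Gaussian blobs, one seed label each (hypothetical data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])
W = np.exp(-np.square(X[:, None] - X[None]).sum(-1))   # RBF affinities
seeds = -np.ones(20, dtype=int)
seeds[0], seeds[10] = 0, 1
print(curriculum_propagate(W, seeds))
```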
{
"docid": "27c6fa2e390fe1cbe1a47b9ef6667d35",
"text": "In this paper, we present a comprehensive study on supervised domain adaptation of PLDA based i-vector speaker recognition systems. After describing the system parameters subject to adaptation, we study the impact of their adaptation on recognition performance. Using the recently designed domain adaptation challenge, we observe that the adaptation of the PLDA parameters (i.e. across-class and within-class co variances) produces the largest gains. Nonetheless, length-normalization is also important; whereas using an indomani UBM and T matrix is not crucial. For the PLDA adaptation, we compare four approaches. Three of them are proposed in this work, and a fourth one was previously published. Overall, the four techniques are successful at leveraging varying amounts of labeled in-domain data and their performance is quite similar. However, our approaches are less involved, and two of them are applicable to a larger class of models (low-rank across-class).",
"title": ""
},
{
"docid": "67070d149bcee51cc93a81f21f15ad71",
"text": "As an important and fundamental tool for analyzing the schedulability of a real-time task set on the multiprocessor platform, response time analysis (RTA) has been researched for several years on both Global Fixed Priority (G-FP) and Global Earliest Deadline First (G-EDF) scheduling. This paper proposes a new analysis that improves over current state-of-the-art RTA methods for both G-FP and G-EDF scheduling, by reducing their pessimism. The key observation is that when estimating the carry-in workload, all the existing RTA techniques depend on the worst case scenario in which the carry-in job should execute as late as possible and just finishes execution before its worst case response time (WCRT). But the carry-in workload calculated under this assumption may be over-estimated, and thus the accuracy of the response time analysis may be impacted. To address this problem, we first propose a new method to estimate the carry-in workload more precisely. The proposed method does not depend on any specific scheduling algorithm and can be used for both G-FP and G-EDF scheduling. We then propose a general RTA algorithm that can improve most existing RTA tests by incorporating our carry-in estimation method. To further improve the execution efficiency, we also introduce an optimization technique for our RTA tests. Experiments with randomly generated task sets are conducted and the results show that, compared with the state-of-the-art technologies, the proposed tests exhibit considerable performance improvements, up to 9 and 7.8 percent under G-FP and G-EDF scheduling respectively, in terms of schedulability test precision.",
"title": ""
},
{
"docid": "2f9de2e94c6af95e9c2e9eb294a7696c",
"text": "The rapid growth of Electronic Health Records (EHRs), as well as the accompanied opportunities in Data-Driven Healthcare (DDH), has been attracting widespread interests and attentions. Recent progress in the design and applications of deep learning methods has shown promising results and is forcing massive changes in healthcare academia and industry, but most of these methods rely on massive labeled data. In this work, we propose a general deep learning framework which is able to boost risk prediction performance with limited EHR data. Our model takes a modified generative adversarial network namely ehrGAN, which can provide plausible labeled EHR data by mimicking real patient records, to augment the training dataset in a semi-supervised learning manner. We use this generative model together with a convolutional neural network (CNN) based prediction model to improve the onset prediction performance. Experiments on two real healthcare datasets demonstrate that our proposed framework produces realistic data samples and achieves significant improvements on classification tasks with the generated data over several stat-of-the-art baselines.",
"title": ""
},
{
"docid": "5398b76e55bce3c8e2c1cd89403b8bad",
"text": "To cite: He A, Kwatra SG, Kazi N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016215335 DESCRIPTION A woman aged 45 years presented for evaluation of skin lesions. She reported an 8–9-year history of occasionally tender, waxing-and-waning skin nodules refractory to dapsone, prednisone and methotrexate. Examination revealed multiple indurated subcutaneous nodules distributed on the upper extremities, with scattered patches of lipoatrophy in areas of nodule regression (figure 1). Her medical history was unremarkable; CBC and CMP were within normal limits, with no history of radiotherapy or evidence of internal organ involvement. She had a positive ANA titre (1:160, speckled), but negative anti-dsDNA, anti-Smith, anti-Ro and anti-La antibodies. Differential diagnosis included erythema nodosum (EN), erythema induratum of Bazin (EIB), lupus profundus (LP) and cutaneous lymphoma. Initial wedge biopsy in 2008 disclosed a predominantly lobular panniculitic process with some septal involvement (figure 2A). Broad zones of necrosis were present (figure 2B). The infiltrate consisted of a pleomorphic population of lymphocytes with occasional larger atypical lymphocytes (figure 2C). There were foci of adipocyte rimming by the atypical lymphocytes (figure 2C). Immunophenotyping revealed predominance of CD3+ T cells with some CD20+ B-cell aggregates. The atypical cells stained CD4 and CD8 in approximately equal ratios. TIA-1 was positive in many of the atypical cells but not prominently enough to render a diagnosis of cytotoxic T-cell lymphoma. T-cell receptor PCR studies showed polyclonality. Subsequent biopsies performed annually after treatment with prednisone in 2008 and 2010, dapsone in 2009 and methotrexate in 2012 showed very similar pathological and molecular features. Adipocyte rimming and TCR polyclonality persisted. EN is characterised by subcutaneous nodules on the lower extremities in association with elevated erythrocyte sedimentation rate (ESR) and C reactive protein (CRP), influenza-like prodrome preceding nodule formation and self-limiting course. Histologically, EN shows a mostly septal panniculitis with radial granulomas. EN was ruled out on the basis of normal ESR (6) and CRP (<0.1), chronic relapsing course and predominantly lobular panniculitis process histologically. EIB typically presents with violaceous nodules located on the posterior lower extremities, with arms rarely affected, of patients with a history of tuberculosis (TB). Histologically, EIB shows granulomatous inflammation with focal necrosis, vasculitis and septal fibrosis. Our patient had no evidence or history of TB infection and presented with nodules of a different clinical morphology. Ultimately, this constellation of histological and immunophenotypic findings showed an atypical panniculitic T-lymphocytic infiltrate. Although the lesion showed a lobular panniculitis with features that could be seen in subcutaneous panniculitis-like T-cell lymphoma (SPTCL), the presence of plasma cells, absence of CD8 and TIA restriction and T-cell polyclonality did not definitively support that",
"title": ""
},
{
"docid": "9e9be149fc44552b6ac9eb2d90d4a4ba",
"text": "In this work, a level set energy for segmenting the lungs from digital Posterior-Anterior (PA) chest x-ray images is presented. The primary challenge in using active contours for lung segmentation is local minima due to shading effects and presence of strong edges due to the rib cage and clavicle. We have used the availability of good contrast at the lung boundaries to extract a multi-scale set of edge/corner feature points and drive our active contour model using these features. We found these features when supplemented with a simple region based data term and a shape term based on the average lung shape, able to handle the above local minima issues. The algorithm was tested on 1130 clinical images, giving promising results.",
"title": ""
},
{
"docid": "a00cc13a716439c75a5b785407b02812",
"text": "A novel current feedback programming principle and circuit architecture are presented, compatible with LED displays utilizing the 2T1C pixel structure. The new pixel programming approach is compatible with all TFT backplane technologies and can compensate for non-uniformities in both threshold voltage and carrier mobility of the OLED pixel drive TFT, due to a feedback loop that modulates the gate of the driving transistor according to the OLED current. The circuit can be internal or external to the integrated display data driver. Based on simulations and data gathered through a fabricated prototype driver, a pixel drive current of 20 nA can be programmed within an addressing time ranging from 10 μs to 50 μs.",
"title": ""
},
{
"docid": "f584b2d89bacacf31158496460d6f546",
"text": "Significant advances in clinical practice as well as basic and translational science were presented at the American Transplant Congress this year. Topics included innovative clinical trials to recent advances in our basic understanding of the scientific underpinnings of transplant immunology. Key areas of interest included the following: clinical trials utilizing hepatitis C virus-positive (HCV+ ) donors for HCV- recipients, the impact of the new allocation policies, normothermic perfusion, novel treatments for desensitization, attempts at precision medicine, advances in xenotransplantation, the role of mitochondria and exosomes in rejection, nanomedicine, and the impact of the microbiota on transplant outcomes. This review highlights some of the most interesting and noteworthy presentations from the meeting.",
"title": ""
},
{
"docid": "8ea6c2f9ee972ef321e12b26dd1f9022",
"text": "This paper describes a simultaneous localization and mapping (SLAM) algorithm for use in unstructured environments that is effective regardless of the geometric complexity of the environment. Features are described using B-splines as modeling tool, and the set of control points defining their shape is used to form a complete and compact description of the environment, thus making it feasible to use an extended Kalman-filter (EKF) based SLAM algorithm. This method is the first known EKF-SLAM implementation capable of describing general free-form features in a parametric manner. Efficient strategies for computing the relevant Jacobians, perform data association, initialization, and map enlargement are presented. The algorithms are evaluated for accuracy and consistency using computer simulations, and for effectiveness using experimental data gathered from different real environments.",
"title": ""
},
{
"docid": "1026bd2ccbea3a7cbb0f337de6ce2981",
"text": "Helicobacter pylori (H. pylori) is an extremely common, yet underappreciated, pathogen that is able to alter host physiology and subvert the host immune response, allowing it to persist for the life of the host. H. pylori is the primary cause of peptic ulcers and gastric cancer. In the United States, the annual cost associated with peptic ulcer disease is estimated to be $6 billion and gastric cancer kills over 700000 people per year globally. The prevalence of H. pylori infection remains high (> 50%) in much of the world, although the infection rates are dropping in some developed nations. The drop in H. pylori prevalence could be a double-edged sword, reducing the incidence of gastric diseases while increasing the risk of allergies and esophageal diseases. The list of diseases potentially caused by H. pylori continues to grow; however, mechanistic explanations of how H. pylori could contribute to extragastric diseases lag far behind clinical studies. A number of host factors and H. pylori virulence factors act in concert to determine which individuals are at the highest risk of disease. These include bacterial cytotoxins and polymorphisms in host genes responsible for directing the immune response. This review discusses the latest advances in H. pylori pathogenesis, diagnosis, and treatment. Up-to-date information on correlations between H. pylori and extragastric diseases is also provided.",
"title": ""
},
{
"docid": "e5bbf88eedf547551d28a731bd4ebed7",
"text": "We conduct an empirical study to test the ability of convolutional neural networks (CNNs) to reduce the effects of nuisance transformations of the input data, such as location, scale and aspect ratio. We isolate factors by adopting a common convolutional architecture either deployed globally on the image to compute class posterior distributions, or restricted locally to compute class conditional distributions given location, scale and aspect ratios of bounding boxes determined by proposal heuristics. In theory, averaging the latter should yield inferior performance compared to proper marginalization. Yet empirical evidence suggests the converse, leading us to conclude that - at the current level of complexity of convolutional architectures and scale of the data sets used to train them - CNNs are not very effective at marginalizing nuisance variability. We also quantify the effects of context on the overall classification task and its impact on the performance of CNNs, and propose improved sampling techniques for heuristic proposal schemes that improve end-to-end performance to state-of-the-art levels. We test our hypothesis on a classification task using the ImageNet Challenge benchmark and on a wide-baseline matching task using the Oxford and Fischer's datasets.",
"title": ""
},
{
"docid": "e7fb4643c062e092a52ac84928ab46e9",
"text": "Object detection and tracking are main tasks in video surveillance systems. Extracting the background is an intensive task with high computational cost. This work proposes a hardware computing engine to perform background subtraction on low-cost field programmable gate arrays (FPGAs), focused on resource-limited environments. Our approach is based on the codebook algorithm and offers very low accuracy degradation. We have analyzed resource consumption and performance trade-offs in Spartan-3 FPGAs by Xilinx. In addition, an accuracy evaluation with standard benchmark sequences has been performed, obtaining better results than previous hardware approaches. The implementation is able to segment objects in sequences with resolution $$768\\times 576$$ at 50 fps using a robust and accurate approach, and an estimated power consumption of 5.13 W.",
"title": ""
},
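The hardware engine above is built on the codebook background-subtraction algorithm. Below is a purely illustrative, much-reduced software sketch of the codebook idea (grayscale only, no color-distortion test, no codeword staleness handling, one codebook per pixel assumed); the class name and tolerance are assumptions, not the paper's design.

```python
class PixelCodebook:
    """A much-simplified, per-pixel grayscale codebook (the full algorithm also
    models color distortion and prunes stale codewords)."""
    def __init__(self, eps=10):
        self.eps = eps
        self.words = []            # list of [low, high] brightness intervals

    def train(self, value):
        # Extend a matching codeword, or create a new one if none matches.
        for w in self.words:
            if w[0] - self.eps <= value <= w[1] + self.eps:
                w[0], w[1] = min(w[0], value), max(w[1], value)
                return
        self.words.append([value, value])

    def is_foreground(self, value):
        # Foreground if the value matches none of the learned codewords.
        return not any(w[0] - self.eps <= value <= w[1] + self.eps for w in self.words)

# Usage on a single pixel position across frames (synthetic values).
cb = PixelCodebook()
for v in [100, 102, 99, 101, 150, 100]:   # 150 could be a passing highlight
    cb.train(v)
print(cb.is_foreground(103), cb.is_foreground(200))   # False, True
```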
{
"docid": "473968c14db4b189af126936fd5486ca",
"text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.",
"title": ""
},
{
"docid": "69bf97d8a40757a19ca9431c6bad0f07",
"text": "To detect scene text in the video is valuable to many content-based video applications. In this paper, we present a novel scene text detection and tracking method for videos, which effectively exploits the cues of the background regions of the text. Specifically, we first extract text candidates and potential background regions of text from the video frame. Then, we exploit the spatial, shape and motional correlations between the text and its background region with a bipartite graph model and the random walk algorithm to refine the text candidates for improved accuracy. We also present an effective tracking framework for text in the video, making use of the temporal correlation of text cues across successive frames, which contributes to enhancing both the precision and the recall of the final text detection result. Experiments on public scene text video datasets demonstrate the state-of-the-art performance of the proposed method.",
"title": ""
}
] |
scidocsrr
|
f1030390f40dc904d8ab89d57572128c
|
Which Are the Best Features for Automatic Verb Classification
|
[
{
"docid": "ef1f5eaa9c6f38bbe791e512a7d89dab",
"text": "Lexical-semantic verb classifications have proved useful in supporting various natural language processing (NLP) tasks. The largest and the most widely deployed classification in English is Levin’s (1993) taxonomy of verbs and their classes. While this resource is attractive in being extensive enough for some NLP use, it is not comprehensive. In this paper, we present a substantial extension to Levin’s taxonomy which incorporates 57 novel classes for verbs not covered (comprehensively) by Levin. We also introduce 106 novel diathesis alternations, created as a side product of constructing the new classes. We demonstrate the utility of our novel classes by using them to support automatic subcategorization acquisition and show that the resulting extended classification has extensive coverage over the English verb lexicon.",
"title": ""
},
{
"docid": "69fabbf2e0cc50dbcf28de6cc174159d",
"text": "This paper presents an automatic word sense disambiguation (WSD) system that uses Part-of-Speech (POS) tags along with word classes as the discrete features. Word Classes are derived from the Word Class Assigner using the Word Exchange Algorithm from statistical language processing. Naïve-Bayes classifier is employed from Weka in both the training and testing phases to perform the supervised learning on the standard Senseval-3 data set. Experiments were performing using 10-fold cross-validation on the training set and the training and testing data for training the model and evaluating it. In both experiments, the features will either used separately or combined together to produce the accuracies. Results indicate that word class features did not provide any discrimination for word sense disambiguation. POS tag features produced a small improvement over the baseline. The combination of both word class and POS tag features did not increase the accuracy results. Overall, further study is likely needed to possibly improve the implementation of the word class features in the system.",
"title": ""
}
] |
[
{
"docid": "38bc206d9caac1d2dbe767d7e39b7aa0",
"text": "We discuss the idea that addictions can be treated by changing the mechanisms involved in self-control with or without regard to intention. The core clinical symptoms of addiction include an enhanced incentive for drug taking (craving), impaired self-control (impulsivity and compulsivity), negative mood, and increased stress re-activity. Symptoms related to impaired self-control involve reduced activity in control networks including anterior cingulate (ACC), adjacent prefrontal cortex (mPFC), and striatum. Behavioral training such as mindfulness meditation can increase the function of control networks and may be a promising approach for the treatment of addiction, even among those without intention to quit.",
"title": ""
},
{
"docid": "cc3d0d9676ad19f71b4a630148c4211f",
"text": "OBJECTIVES\nPrevious studies have revealed that memory performance is diminished in chronic pain patients. Few studies, however, have assessed multiple components of memory in a single sample. It is currently also unknown whether attentional problems, which are commonly observed in chronic pain, mediate the decline in memory. Finally, previous studies have focused on middle-aged adults, and a possible detrimental effect of aging on memory performance in chronic pain patients has been commonly disregarded. This study, therefore, aimed at describing the pattern of semantic, working, and visual and verbal episodic memory performance in participants with chronic pain, while testing for possible contributions of attention and age to task performance.\n\n\nMETHODS\nThirty-four participants with chronic pain and 32 pain-free participants completed tests of episodic, semantic, and working memory to assess memory performance and a test of attention.\n\n\nRESULTS\nParticipants with chronic pain performed worse on tests of working memory and verbal episodic memory. A decline in attention explained some, but not all, group differences in memory performance. Finally, no additional effect of age on the diminished task performance in participants with chronic pain was observed.\n\n\nDISCUSSION\nTaken together, the results indicate that chronic pain significantly affects memory performance. Part of this effect may be caused by underlying attentional dysfunction, although this could not fully explain the observed memory decline. An increase in age in combination with the presence of chronic pain did not additionally affect memory performance.",
"title": ""
},
{
"docid": "6fb006066fa1a25ae348037aa1ee7be3",
"text": "Reducing redundancy in data representation leads to decreased data storage requirements and lower costs for data communication.",
"title": ""
},
{
"docid": "01638567bf915e26bf9398132ca27264",
"text": "Uncontrolled bleeding from the cystic artery and its branches is a serious problem that may increase the risk of intraoperative lesions to vital vascular and biliary structures. On laparoscopic visualization anatomic relations are seen differently than during conventional surgery, so proper knowledge of the hepatobiliary triangle anatomic structures under the conditions of laparoscopic visualization is required. We present an original classification of the anatomic variations of the cystic artery into two main groups based on our experience with 200 laparoscopic cholecystectomies, with due consideration of the known anatomicotopographic relations. Group I designates a cystic artery situated within the hepatobiliary triangle on laparoscopic visualization. This group included three types: (1) normally lying cystic artery, found in 147 (73.5%) patients; (2) most common cystic artery variation, manifesting as its doubling, present in 31 (15.5%) patients; and (3) the cystic artery originating from the aberrant right hepatic artery, observed in 11 (5.5%) patients. Group II designates a cystic artery that could not be found within the hepatobiliary triangle on laparoscopic dissection. This group included two types of variation: (1) cystic artery originating from the gastroduodenal artery, found in nine (4.5%) patients; and (2) cystic artery originating from the left hepatic artery, recorded in two (1%) patients.",
"title": ""
},
{
"docid": "b5b91947716e3594e3ddbb300ea80d36",
"text": "In this paper, a novel drive method, which is different from the traditional motor drive techniques, for high-speed brushless DC (BLDC) motor is proposed and verified by a series of experiments. It is well known that the BLDC motor can be driven by either pulse-width modulation (PWM) techniques with a constant dc-link voltage or pulse-amplitude modulation (PAM) techniques with an adjustable dc-link voltage. However, to our best knowledge, there is rare study providing a proper drive method for a high-speed BLDC motor with a large power over a wide speed range. Therefore, the detailed theoretical analysis comparison of the PWM control and the PAM control for high-speed BLDC motor is first given. Then, a conclusion that the PAM control is superior to the PWM control at high speed is obtained because of decreasing the commutation delay and high-frequency harmonic wave. Meanwhile, a new high-speed BLDC motor drive method based on the hybrid approach combining PWM and PAM is proposed. Finally, the feasibility and effectiveness of the performance analysis comparison and the new drive method are verified by several experiments.",
"title": ""
},
{
"docid": "350cda71dae32245b45d96b5fdd37731",
"text": "In this work, we focus on cyclic codes over the ring F2+uF2+vF2+uvF2, which is not a finite chain ring. We use ideas from group rings and works of AbuAlrub et al. in (Des Codes Crypt 42:273–287, 2007) to characterize the ring (F2 + uF2 + vF2 + uvF2)/(x − 1) and cyclic codes of odd length. Some good binary codes are obtained as the images of cyclic codes over F2+uF2+vF2+uvF2 under two Gray maps that are defined. We also characterize the binary images of cyclic codes over F2 + uF2 + vF2 + uvF2 in general.",
"title": ""
},
{
"docid": "2805fdd4cd97931497b6c42263a20534",
"text": "The well-established Modulation Transfer Function (MTF) is an imaging performance parameter that is well suited to describing certain sources of detail loss, such as optical focus and motion blur. As performance standards have developed for digital imaging systems, the MTF concept has been adapted and applied as the spatial frequency response (SFR). The international standard for measuring digital camera resolution, ISO 12233, was adopted over a decade ago. Since then the slanted edge-gradient analysis method on which it was based has been improved and applied beyond digital camera evaluation. Practitioners have modified minor elements of the standard method to suit specific system characteristics, unique measurement needs, or computational shortcomings in the original method. Some of these adaptations have been documented and benchmarked, but a number have not. In this paper we describe several of these modifications, and how they have improved the reliability of the resulting system evaluations. We also review several ways the method has been adapted and applied beyond camera resolution.",
"title": ""
},
{
"docid": "c716e7dc1c0e770001bcb57eab871968",
"text": "We present a new method to visualize from an ensemble of flow fields the statistical properties of streamlines passing through a selected location. We use principal component analysis to transform the set of streamlines into a low-dimensional Euclidean space. In this space the streamlines are clustered into major trends, and each cluster is in turn approximated by a multivariate Gaussian distribution. This yields a probabilistic mixture model for the streamline distribution, from which confidence regions can be derived in which the streamlines are most likely to reside. This is achieved by transforming the Gaussian random distributions from the low-dimensional Euclidean space into a streamline distribution that follows the statistical model, and by visualizing confidence regions in this distribution via iso-contours. We further make use of the principal component representation to introduce a new concept of streamline-median, based on existing median concepts in multidimensional Euclidean spaces. We demonstrate the potential of our method in a number of real-world examples, and we compare our results to alternative clustering approaches for particle trajectories as well as curve boxplots.",
"title": ""
},
{
"docid": "3cda92028692a25411d74e5a002740ac",
"text": "Protecting sensitive information from unauthorized disclosure is a major concern of every organization. As an organization’s employees need to access such information in order to carry out their daily work, data leakage detection is both an essential and challenging task. Whether caused by malicious intent or an inadvertent mistake, data loss can result in significant damage to the organization. Fingerprinting is a content-based method used for detecting data leakage. In fingerprinting, signatures of known confidential content are extracted and matched with outgoing content in order to detect leakage of sensitive content. Existing fingerprinting methods, however, suffer from two major limitations. First, fingerprinting can be bypassed by rephrasing (or minor modification) of the confidential content, and second, usually the whole content of document is fingerprinted (including non-confidential parts), resulting in false alarms. In this paper we propose an extension to the fingerprinting approach that is based on sorted k-skip-n-grams. The proposed method is able to produce a fingerprint of the core confidential content which ignores non-relevant (non-confidential) sections. In addition, the proposed fingerprint method is more robust to rephrasing and can also be used to detect a previously unseen confidential document and therefore provide better detection of intentional leakage incidents.",
"title": ""
},
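The extension above fingerprints confidential text with sorted k-skip-n-grams so that minor rephrasing still matches. A minimal sketch of how such grams and hashed fingerprints might be produced is given below; the tokenizer, hash truncation, and parameter defaults are assumptions rather than the paper's exact design.

```python
import hashlib
import re
from itertools import combinations

def sorted_skip_ngrams(text, n=3, k=2):
    """Sorted k-skip-n-grams of a token stream: every n-term combination drawn
    from a window of n + k consecutive terms, with the terms sorted so that
    local rephrasing or reordering maps to the same gram."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    grams = set()
    for i in range(len(tokens) - n + 1):
        window = tokens[i:i + n + k]
        for combo in combinations(window, n):
            grams.add(" ".join(sorted(combo)))
    return grams

def fingerprint(text, n=3, k=2):
    """Hash each gram so only compact signatures of the confidential text are stored."""
    return {hashlib.sha1(g.encode()).hexdigest()[:12] for g in sorted_skip_ngrams(text, n, k)}

# Hypothetical confidential snippet and a lightly rephrased leak of it.
confidential = "the quarterly revenue forecast must remain strictly internal"
rephrased = "quarterly forecast of revenue must strictly remain internal"
overlap = len(fingerprint(confidential) & fingerprint(rephrased))
print(overlap, "shared fingerprint entries")   # non-zero despite the rephrasing
```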
{
"docid": "d1e062c5c91e93a29b9cd1a015d5e135",
"text": "Experimental acoustic cell separation methods have been widely used to perform separation for different types of blood cells. However, numerical simulation of acoustic cell separation has not gained enough attention and needs further investigation since by using numerical methods, it is possible to optimize different parameters involved in the design of an acoustic device and calculate particle trajectories in a simple and low cost manner before spending time and effort for fabricating these devices. In this study, we present a comprehensive finite element-based simulation of acoustic separation of platelets, red blood cells and white blood cells, using standing surface acoustic waves (SSAWs). A microfluidic channel with three inlets, including the middle inlet for sheath flow and two symmetrical tilted angle inlets for the cells were used to drive the cells through the channel. Two interdigital transducers were also considered in this device and by implementing an alternating voltage to the transducers, an acoustic field was created which can exert the acoustic radiation force to the cells. Since this force is dependent to the size of the cells, the cells are pushed towards the midline of the channel with different path lines. Particle trajectories for different cells were obtained and compared with a theoretical equation. Two types of separations were observed as a result of varying the amplitude of the acoustic field. In the first mode of separation, white blood cells were sorted out through the middle outlet and in the second mode of separation, platelets were sorted out through the side outlets. Depending on the clinical needs and by using the studied microfluidic device, each of these modes can be applied to separate the desired cells.",
"title": ""
},
{
"docid": "f64c7c6d068b0e2f9500d3b1e2d79178",
"text": "The proposed protocol is for a systematic review and meta-analysis on the effects of whole-grains (WG) on non-communicable diseases such as type 2 diabetes, cardiovascular disease, hypertension and obesity. The primary objectives is to explore the mechanisms of WG intake on multiple biomarkers of NCDs such as fasting glucose, fasting insulin and many others. The secondary objective will look at the dose-response relationship between these various mechanisms. The protocol outlines the motive and scope for the review, and methodology including the risk of bias, statistical analysis, screening and study criteria.",
"title": ""
},
{
"docid": "60af8669ea0acb73e8edcd90abf0ce3e",
"text": "The physical mechanism of seed germination and its inhibition by abscisic acid (ABA) in Brassica napus L. was investigated, using volumetric growth (= water uptake) rate (dV/dt), water conductance (L), cell wall extensibility coefficient (m), osmotic pressure ( product operator(i)), water potential (Psi(i)), turgor pressure (P), and minimum turgor for cell expansion (Y) of the intact embryo as experimental parameters. dV/dt, product operator(i), and Psi(i) were measured directly, while m, P, and Y were derived by calculation. Based on the general equation of hydraulic cell growth [dV/dt = Lm/(L + m) (Delta product operator - Y), where Delta product operator = product operator(i) - product operator of the external medium], the terms (Lm/(L + m) and product operator(i) - Y were defined as growth coefficient (k(G)) and growth potential (GP), respectively. Both k(G) and GP were estimated from curves relating dV/dt (steady state) to product operator of osmotic test solutions (polyethylene glycol 6000).During the imbibition phase (0-12 hours after sowing), k(G) remains very small while GP approaches a stable level of about 10 bar. During the subsequent growth phase of the embryo, k(G) increases about 10-fold. ABA, added before the onset of the growth phase, prevents the rise of k(G) and lowers GP. These effects are rapidly abolished when germination is induced by removal of ABA. Neither L (as judged from the kinetics of osmotic water efflux) nor the amount of extractable solutes are affected by these changes. product operator(i) and Psi(i) remain at a high level in the ABA-treated seed but drop upon induction of germination, and this adds up to a large decrease of P, indicating that water uptake of the germinating embryo is controlled by cell wall loosening rather than by changes of product operator(i) or L. ABA inhibits water uptake by preventing cell wall loosening. By calculating Y and m from the growth equation, it is further shown that cell wall loosening during germination comprises both a decrease of Y from about 10 to 0 bar and an at least 10-fold increase of m. ABA-mediated embryo dormancy is caused by a reversible inhibition of both of these changes in cell wall stability.",
"title": ""
},
{
"docid": "ec69b95261fc19183a43c0e102f39016",
"text": "The selection of a surgical approach for the treatment of tibia plateau fractures is an important decision. Approximately 7% of all tibia plateau fractures affect the posterolateral corner. Displaced posterolateral tibia plateau fractures require anatomic articular reduction and buttress plate fixation on the posterior aspect. These aims are difficult to reach through a lateral or anterolateral approach. The standard posterolateral approach with fibula osteotomy and release of the posterolateral corner is a traumatic procedure, which includes the risk of fragment denudation. Isolated posterior approaches do not allow sufficient visual control of fracture reduction, especially if the fracture is complex. Therefore, the aim of this work was to present a surgical approach for posterolateral tibial plateau fractures that both protects the soft tissue and allows for good visual control of fracture reduction. The approach involves a lateral arthrotomy for visualizing the joint surface and a posterolateral approach for the fracture reduction and plate fixation, which are both achieved through one posterolateral skin incision. Using this approach, we achieved reduction of the articular surface and stable fixation in six of seven patients at the final follow-up visit. No complications and no loss of reduction were observed. Additionally, the new posterolateral approach permits direct visual exposure and facilitates the application of a buttress plate. Our approach does not require fibular osteotomy, and fragments of the posterolateral corner do not have to be detached from the soft tissue network.",
"title": ""
},
{
"docid": "58042f8c83e5cc4aa41e136bb4e0dc1f",
"text": "In this paper, we propose wire-free integrated sensors that monitor pulse wave velocity (PWV) and respiration, both non-electrical vital signs, by using an all-electrical method. The key techniques that we employ to obtain all-electrical and wire-free measurement are bio-impedance (BI) and analog-modulated body-channel communication (BCC), respectively. For PWV, time difference between ECG signal from the heart and BI signal from the wrist is measured. To remove wires and avoid sampling rate mismatch between ECG and BI sensors, ECG signal is sent to the BI sensor via analog BCC without any sampling. For respiration measurement, BI sensor is located at the abdomen to detect volume change during inhalation and exhalation. A prototype chip fabricated in 0.11 μm CMOS process consists of ECG, BI sensor and BCC transceiver. Measurement results show that heart rate and PWV are both within their normal physiological range. The chip consumes 1.28 mW at 1.2 V supply while occupying 5 mm×2.5 mm of area.",
"title": ""
},
{
"docid": "b3ebbff355dfc23b4dfbab3bc3012980",
"text": "Research with young children has shown that, like adults, they focus selectively on the aspects of an actor's behavior that are relevant to his or her underlying intentions. The current studies used the visual habituation paradigm to ask whether infants would similarly attend to those aspects of an action that are related to the actor's goals. Infants saw an actor reach for and grasp one of two toys sitting side by side on a curtained stage. After habituation, the positions of the toys were switched and babies saw test events in which there was a change in either the path of motion taken by the actor's arm or the object that was grasped by the actor. In the first study, 9-month-old infants looked longer when the actor grasped a new toy than when she moved through a new path. Nine-month-olds who saw an inanimate object of approximately the same dimensions as the actor's arm touch the toy did not show this pattern in test. In the second study, 5-month-old infants showed similar, though weaker, patterns. A third study provided evidence that the findings for the events involving a person were not due to perceptual changes in the objects caused by occlusion by the hand. A fourth study replicated the 9 month results for a human grasp at 6 months, and revealed that these effects did not emerge when infants saw an inanimate object with digits that moved to grasp the toy. Taken together, these findings indicate that young infants distinguish in their reasoning about human action and object motion, and that by 6 months infants encode the actions of other people in ways that are consistent with more mature understandings of goal-directed action.",
"title": ""
},
{
"docid": "197f5af02ea53b1dd32167780c4126ed",
"text": "A new technique for summarization is presented here for summarizing articles known as text summarization using neural network and rhetorical structure theory. A neural network is trained to learn the relevant characteristics of sentences by using back propagation technique to train the neural network which will be used in the summary of the article. After training neural network is then modified to feature fusion and pruning the relevant characteristics apparent in summary sentences. Finally, the modified neural network is used to summarize articles and combining it with the rhetorical structure theory to form final summary of an article.",
"title": ""
},
{
"docid": "7bce92a72a19aef0079651c805883eb5",
"text": "Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shape and requiring labor-intensive process, challenge the problem of automatic modeling. This paper studies the problem and solutions to automatic modeling of animatable virtual humans. Methods for capturing the shape of real people, parameterization techniques for modeling static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.",
"title": ""
},
{
"docid": "88cb8c2f7f4fd5cdc95cc8e48faa3cb7",
"text": "Prediction or prognostication is at the core of modern evidence-based medicine. Prediction of overall mortality and cardiovascular disease can be improved by a systematic evaluation of measurements from large-scale epidemiological studies or by using nested sampling designs to discover new markers from omics technologies. In study I, we investigated if prediction measures such as calibration, discrimination and reclassification could be calculated within traditional sampling designs and which of these designs were the most efficient. We found that is possible to calculate prediction measures by using a proper weighting system and that a stratified casecohort design is a reasonable choice both in terms of efficiency and simplicity. In study II, we investigated the clinical utility of several genetic scores for incident coronary heart disease. We found that genetic information could be of clinical value in improving the allocation of patients to correct risk strata and that the assessment of a genetic risk score among intermediate risk subjects could help to prevent about one coronary heart disease event every 318 people screened. In study III, we explored the association between circulating metabolites and incident coronary heart disease. We found four new metabolites associated with coronary heart disease independently of established cardiovascular risk factors and with evidence of clinical utility. By using genetic information we determined a potential causal effect on coronary heart disease of one of these novel metabolites. In study IV, we compared a large number of demographics, health and lifestyle measurements for association with all-cause and cause-specific mortality. By ranking measurements in terms of their predictive abilities we could provide new insights about their relative importance, as well as reveal some unexpected associations. Moreover we developed and validated a prediction score for five-year mortality with good discrimination ability and calibrated it for the entire UK population. In conclusion, we applied a translational approach spanning from the discovery of novel biomarkers to their evaluation in terms of clinical utility. We combined this effort with methodological improvements aimed to expand prediction measures in settings that were not previously explored. We identified promising novel metabolomics markers for cardiovascular disease and supported the potential clinical utility of a genetic score in primary prevention. Our results might fuel future studies aimed to implement these findings in clinical practice.",
"title": ""
},
{
"docid": "934680e03cfaccd2426ee8e8e311ef06",
"text": "Photocatalytic water splitting using particulate semiconductors is a potentially scalable and economically feasible technology for converting solar energy into hydrogen. Z-scheme systems based on two-step photoexcitation of a hydrogen evolution photocatalyst (HEP) and an oxygen evolution photocatalyst (OEP) are suited to harvesting of sunlight because semiconductors with either water reduction or oxidation activity can be applied to the water splitting reaction. However, it is challenging to achieve efficient transfer of electrons between HEP and OEP particles. Here, we present photocatalyst sheets based on La- and Rh-codoped SrTiO3 (SrTiO3:La, Rh; ref. ) and Mo-doped BiVO4 (BiVO4:Mo) powders embedded into a gold (Au) layer. Enhancement of the electron relay by annealing and suppression of undesirable reactions through surface modification allow pure water (pH 6.8) splitting with a solar-to-hydrogen energy conversion efficiency of 1.1% and an apparent quantum yield of over 30% at 419 nm. The photocatalyst sheet design enables efficient and scalable water splitting using particulate semiconductors.",
"title": ""
},
{
"docid": "ac044ce167d7296675ddfa1f9387c75d",
"text": "Over the years, many millimeter-wave circulator techniques have been presented, such as nonradiative dielectric and fin-line circulators. Although excellent results have been demonstrated in the literature, their proliferation in commercial devices has been hindered by complex assembly cost. This paper presents a study of substrate-integrated millimeter-wave degree-2 circulators. Although the substrate integrated-circuits technique may be applied to virtually any planar transmission medium, the one adopted in this paper is the substrate integrated waveguide (SIW). Two design configurations are possible: a planar one that is suitable for thin substrate materials and a turnstile one for thicker substrate materials. The turnstile circulator is ideal for systems where the conductor losses associated with the thin SIW cannot be tolerated. The design methodology adopted in this paper is to characterize the complex gyrator circuit as a preamble to design. This is done via a commercial finite-element package",
"title": ""
}
] |
scidocsrr
|
31d12f6f3af91826a57eb83ddb829ae9
|
Linking Cybersecurity Knowledge : Cybersecurity Information Discovery Mechanism
|
[
{
"docid": "e913a4d2206be999f0278d48caa4708a",
"text": "Widespread deployment of the Internet enabled building of an emerging IT delivery model, i.e., cloud computing. Albeit cloud computing-based services have rapidly developed, their security aspects are still at the initial stage of development. In order to preserve cybersecurity in cloud computing, cybersecurity information that will be exchanged within it needs to be identified and discussed. For this purpose, we propose an ontological approach to cybersecurity in cloud computing. We build an ontology for cybersecurity operational information based on actual cybersecurity operations mainly focused on non-cloud computing. In order to discuss necessary cybersecurity information in cloud computing, we apply the ontology to cloud computing. Through the discussion, we identify essential changes in cloud computing such as data-asset decoupling and clarify the cybersecurity information required by the changes such as data provenance and resource dependency information.",
"title": ""
}
] |
[
{
"docid": "42c2e599dbbb00784e2a6837ebd17ade",
"text": "Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples. However, standard classification methods do not take these costs into account, and assume a constant cost of misclassification errors. State-of-the-art example-dependent cost-sensitive techniques only introduce the cost to the algorithm, either before or after training, therefore, leaving opportunities to investigate the potential impact of algorithms that take into account the real financial example-dependent costs during an algorithm training. In this paper, we propose an example-dependent cost-sensitive decision tree algorithm, by incorporating the different example-dependent costs into a new cost-based impurity measure and a new cost-based pruning criteria. Then, using three different databases, from three real-world applications: credit card fraud detection, credit scoring and direct marketing, we evaluate the proposed method. The results show that the proposed algorithm is the best performing method for all databases. Furthermore, when compared against a standard decision tree, our method builds significantly smaller trees in only a fifth of the time, while having a superior performance measured by cost savings, leading to a method that not only has more business-oriented results, but also a method that creates simpler models that are easier to analyze. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
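The algorithm above replaces the usual impurity measure with one based on example-dependent misclassification costs. The sketch below illustrates that idea for a single binary split: a node's "impurity" is the cheaper of labelling every example positive or negative given per-example false-positive and false-negative costs, and the best threshold is the one with the largest cost saving. The cost structure, data, and names are assumptions and do not reproduce the paper's full method (which also covers cost-based pruning).

```python
import numpy as np

def node_cost(y, c_fp, c_fn):
    """Cost-based 'impurity': cost of assigning the whole node the cheaper class,
    using example-dependent costs (simplified to FP/FN costs only)."""
    cost_if_predict_neg = c_fn[y == 1].sum()      # miss the positives
    cost_if_predict_pos = c_fp[y == 0].sum()      # flag the negatives
    return min(cost_if_predict_neg, cost_if_predict_pos)

def best_split(x, y, c_fp, c_fn):
    """Exhaustive threshold search on a single feature x, maximising cost savings."""
    parent = node_cost(y, c_fp, c_fn)
    best = (None, 0.0)
    for t in np.unique(x):
        left = x <= t
        if left.all() or (~left).all():
            continue
        children = (node_cost(y[left], c_fp[left], c_fn[left])
                    + node_cost(y[~left], c_fp[~left], c_fn[~left]))
        saving = parent - children
        if saving > best[1]:
            best = (t, saving)
    return best   # (threshold, cost saving)

# Toy fraud-like data: amounts, labels, and per-example misclassification costs.
rng = np.random.default_rng(0)
amount = rng.uniform(1, 1000, 200)
fraud = (amount > 700) & (rng.random(200) < 0.8)
c_fn = amount.copy()                 # missing a fraud costs the transaction amount
c_fp = np.full(200, 5.0)             # investigating a legitimate transaction costs a flat fee
print(best_split(amount, fraud.astype(int), c_fp, c_fn))
```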
{
"docid": "28cfe864acc8c40eb8759261273cf3bb",
"text": "Mobile-edge computing (MEC) has recently emerged as a promising paradigm to liberate mobile devices from increasingly intensive computation workloads, as well as to improve the quality of computation experience. In this paper, we investigate the tradeoff between two critical but conflicting objectives in multi-user MEC systems, namely, the power consumption of mobile devices and the execution delay of computation tasks. A power consumption minimization problem with task buffer stability constraints is formulated to investigate the tradeoff, and an online algorithm that decides the local execution and computation offloading policy is developed based on Lyapunov optimization. Specifically, at each time slot, the optimal frequencies of the local CPUs are obtained in closed forms, while the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method. Performance analysis is conducted for the proposed algorithm, which indicates that the power consumption and execution delay obeys an $\\left[O\\left(1\\slash V\\right),O\\left(V\\right)\\right]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters to the system performance.",
"title": ""
},
{
"docid": "b85330c2d0816abe6f28fd300e5f9b75",
"text": "This paper presents a novel dual polarized planar aperture antenna using the low-temperature cofired ceramics technology to realize a novel antenna-in-package for a 60-GHz CMOS differential transceiver chip. Planar aperture antenna technology ensures high gain and wide bandwidth. Differential feeding is adopted to be compatible with the chip. Dual polarization makes the antenna function as a pair of single polarized antennas but occupies much less area. The antenna is ±45° dual polarized, and each polarization acts as either a transmitting (TX) or receiving (RX) antenna. This improves the signal-to-noise ratio of the wireless channel in a point-to-point communication, because the TX/RX polarization of one antenna is naturally copolarized with the RX/TX polarization of the other antenna. A prototype of the proposed antenna is designed, fabricated, and measured, whose size is 12 mm × 12 mm × 1.128 mm (2.4λ0 × 2.4λ0 × 0.226λ0). The measurement shows that the -10 dB impedance bandwidth covers the entire 60 GHz unlicensed band (57-64 GHz) for both polarizations. Within the bandwidth, the isolation between the ports of the two polarizations is better than 26 dB, and the gain is higher than 10 dBi with a peak of around 12 dBi for both polarizations.",
"title": ""
},
{
"docid": "c5d0d79bc6a0b58cf09c5d8eb0dc2ecf",
"text": "FRAME SEMANTICS is a research program in empirical semantics which emphasizes the continuities between language and experience, and provides a framework for presenting the results of that research. A FRAME is any system of concepts related in such a way that to understand any one concept it is necessary to understand the entire system; introducing any one concept results in all of them becoming available. In Frame Semantics, a word represents a category of experience; part of the research endeavor is the uncovering of reasons a speech community has for creating the category represented by the word and including that reason in the description of the meaning of the word.",
"title": ""
},
{
"docid": "cee3c61474bf14158d4abf0c794a9c2a",
"text": "This course will focus on describing techniques for handling datasets larger than main memory in scientific visualization and computer graphics. Recently, several external memory techniques have been developed for a wide variety of graphics and visualization problems, including surface simplification, volume rendering, isosurface generation, ray tracing, surface reconstruction, and so on. This work has had significant impact given that in recent years there has been a rapid increase in the raw size of datasets. Several technological trends are contributing to this, such as the development of high-resolution 3D scanners, and the need to visualize ASCI-size (Accelerated Strategic Computing Initiative) datasets. Another important push for this kind of technology is the growing speed gap between main memory and caches, such a gap penalizes algorithms which do not optimize for coherence of access. Because of these reasons, much research in computer graphics focuses on developing out-of-core (and often cache-friendly) techniques. This course reviews fundamental issues, current problems, and unresolved solutions, and presents an in-depth study of external memory algorithms developed in recent years. Its goal is to provide students and graphics researchers and professionals with an effective knowledge of current techniques, as well as the foundation to develop novel techniques on their own. Schedule (tentative) 5 min Introduction to the course Silva 45 min Overview of external memory algorithms Chiang 40 min Out-of-core scientific visualization Silva",
"title": ""
},
{
"docid": "024c5cd20c5764f29f62a1f35288eef2",
"text": "This paper presents a low-loss and high Tx-to-Rx isolation single-pole double-throw (SPDT) millimeter-wave switch for true time delay applications. The switch is designed based on matching-network and double-shunt transistors with quarter-wavelength transmission lines. The insertion loss and isolation characteristics of the switches are analyzed revealing that optimization of the transistor size with a matching-network switch on the receiver side and a double-shunt switch on the transmitter side can enhance the isolation performance with low loss. Implemented in 90-nm CMOS, the switch achieves a measured insertion loss and Tx-to-Rx isolation of 1.9 and 39 dB at 60 GHz, respectively. The input 1-dB gain compression point is 10 dBm at 60 GHz, and the return loss of the SPDT switch ports is greater than 10 dB at 48-67 GHz.",
"title": ""
},
{
"docid": "2cd7bbaf04f773c2248ad5e76cb5bf5d",
"text": "This paper presents a unified framework for Behavior Trees (BTs), a plan representation and execution tool. The available literature lacks the consistency and mathematical rigor required for robotic and control applications. Therefore, we approach this problem in two steps: first, reviewing the most popular BT literature exposing the aforementioned issues; second, describing our unified BT framework along with equivalence notions between BTs and Controlled Hybrid Dynamical Systems (CHDSs). This paper improves on the existing state of the art as it describes BTs in a more accurate and compact way, while providing insight about their actual representation capabilities. Lastly, we demonstrate the applicability of our framework to real systems scheduling open-loop actions in a grasping mission that involves a NAO robot and our BT library.",
"title": ""
},
{
"docid": "2259232b86607e964393c884340efe79",
"text": "Dynamic task allocation is an essential requirement for multi-robot systems functioning in unknown dynamic environments. It allows robots to change their behavior in response to environmental changes or actions of other robots in order to improve overall system performance. Emergent coordination algorithms for task allocation that use only local sensing and no direct communication between robots are attractive because they are robust and scalable. However, a lack of formal analysis tools makes emergent coordination algorithms difficult to design. In this paper we present a mathematical model of a general dynamic task allocation mechanism. Robots using this mechanism have to choose between two types of task, and the goal is to achieve a desired task division in the absence of explicit communication and global knowledge. Robots estimate the state of the environment from repeated local observations and decide which task to choose based on these observations. We model the robots and observations as stochastic processes and study the dynamics of individual robots and the collective behavior. We analyze the effect that the number of observations and the choice of decision functions have on the performance of the system. We validate the mathematical models on a multi-foraging scenario in a multi-robot system. We find that the model’s predictions agree very closely with experimental results from sensor-based simulations.",
"title": ""
},
{
"docid": "52c0c6d1deacdca44df5000b2b437c78",
"text": "This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the utilization of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation- maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.",
"title": ""
},
{
"docid": "e5016e84bdbd016e880f12bfdfd99cb5",
"text": "The subject of this paper is a method which suppresses systematic errors of resolvers and optical encoders with sinusoidal line signals. The proposed method does not require any additional hardware and the computational efforts are minimal. Since this method does not cause any time delay, the dynamic of the speed control is not affected. By means of this new scheme, dynamic and smooth running characteristics of drive systems are improved considerably.",
"title": ""
},
{
"docid": "3ad19b3710faeda90db45e2f7cebebe8",
"text": "Motion planning is a fundamental problem in robotics. It comes in a variety of forms, but the simplest version is as follows. We are given a robot system B, which may consist of several rigid objects attached to each other through various joints, hinges, and links, or moving independently, and a 2D or 3D environment V cluttered with obstacles. We assume that the shape and location of the obstacles and the shape of B are known to the planning system. Given an initial placement Z1 and a final placement Z2 of B, we wish to determine whether there exists a collisionavoiding motion of B from Z1 to Z2, and, if so, to plan such a motion. In this simplified and purely geometric setup, we ignore issues such as incomplete information, nonholonomic constraints, control issues related to inaccuracies in sensing and motion, nonstationary obstacles, optimality of the planned motion, and so on. Since the early 1980s, motion planning has been an intensive area of study in robotics and computational geometry. In this chapter we will focus on algorithmic motion planning, emphasizing theoretical algorithmic analysis of the problem and seeking worst-case asymptotic bounds, and only mention briefly practical heuristic approaches to the problem. The majority of this chapter is devoted to the simplified version of motion planning, as stated above. Section 51.1 presents general techniques and lower bounds. Section 51.2 considers efficient solutions to a variety of specific moving systems with a small number of degrees of freedom. These efficient solutions exploit various sophisticated methods in computational and combinatorial geometry related to arrangements of curves and surfaces (Chapter 30). Section 51.3 then briefly discusses various extensions of the motion planning problem such as computing optimal paths with respect to various quality measures, computing the path of a tethered robot, incorporating uncertainty, moving obstacles, and more.",
"title": ""
},
{
"docid": "1f3f352c7584fb6ec1924ca3621fb1fb",
"text": "The National Firearms Forensic Intelligence Database (NFFID (c) Crown Copyright 2003-2008) was developed by The Forensic Science Service (FSS) as an investigative tool for collating and comparing information from items submitted to the FSS to provide intelligence reports for the police and relevant government agencies. The purpose of these intelligence reports was to highlight current firearm and ammunition trends and their distribution within the country. This study reviews all the trends that have been highlighted by NFFID between September 2003 and September 2008. A total of 8887 guns of all types have been submitted to the FSS over the last 5 years, where an average of 21% of annual submissions are converted weapons. The makes, models, and modes of conversion of these weapons are described in detail. The number of trends identified by NFFID shows that this has been a valuable tool in the analysis of firearms-related crime.",
"title": ""
},
{
"docid": "ac0a6e663caa3cb8cdcb1a144561e624",
"text": "A two-stage process is performed by human operator for cleaning windows. The first being the application of cleaning fluid, which is usually achieved by using a wetted applicator. The aim of this task being to cover the whole window area in the shortest possible time. This depends on two parameters: the size of the applicator and the path which the applicator travels without significantly overlapping previously wetted area. The second is the removal of cleaning fluid by a squeegee blade without spillage on to other areas of the facade or previously cleaned areas of glass. This is particularly difficult for example if the window is located on the roof of a building and cleaning is performed from inside by the human window cleaner.",
"title": ""
},
{
"docid": "3b32ade20fbdd7474ee10fc10d80d90a",
"text": "We report the modulation performance of micro-light-emitting diode arrays with peak emission ranging from 370 to 520 nm, and emitter diameters ranging from 14 to 84 μm. Bandwidths in excess of 400 MHz and error-free data transmission up to 1.1Gbit/s is shown. These devices are shown integrated with electronic drivers, allowing convenient control of individual array emitters. Transmission using such a device is shown at 512 Mbit/s.",
"title": ""
},
{
"docid": "6d61da17db5c16611409356bd79006c4",
"text": "We examine empirical evidence for religious prosociality, the hypothesis that religions facilitate costly behaviors that benefit other people. Although sociological surveys reveal an association between self-reports of religiosity and prosociality, experiments measuring religiosity and actual prosocial behavior suggest that this association emerges primarily in contexts where reputational concerns are heightened. Experimentally induced religious thoughts reduce rates of cheating and increase altruistic behavior among anonymous strangers. Experiments demonstrate an association between apparent profession of religious devotion and greater trust. Cross-cultural evidence suggests an association between the cultural presence of morally concerned deities and large group size in humans. We synthesize converging evidence from various fields for religious prosociality, address its specific boundary conditions, and point to unresolved questions and novel predictions.",
"title": ""
},
{
"docid": "55895dab9cc43c20aac200876da5722e",
"text": "We show the equivalence of two stateof-the-art models for link prediction/ knowledge graph completion: Nickel et al’s holographic embeddings and Trouillon et al.’s complex embeddings. We first consider a spectral version of the holographic embeddings, exploiting the frequency domain in the Fourier transform for efficient computation. The analysis of the resulting model reveals that it can be viewed as an instance of the complex embeddings with a certain constraint imposed on the initial vectors upon training. Conversely, any set of complex embeddings can be converted to a set of equivalent holographic embeddings.",
"title": ""
},
{
"docid": "a579a45a917999f48846a29cd09a92f4",
"text": "Over the last fifty years, the “Big Five” model of personality traits has become a standard in psychology, and research has systematically documented correlations between a wide range of linguistic variables and the Big Five traits. A distinct line of research has explored methods for automatically generating language that varies along personality dimensions. We present PERSONAGE (PERSONAlity GEnerator), the first highly parametrizable language generator for extraversion, an important aspect of personality. We evaluate two personality generation methods: (1) direct generation with particular parameter settings suggested by the psychology literature; and (2) overgeneration and selection using statistical models trained from judge’s ratings. Results show that both methods reliably generate utterances that vary along the extraversion dimension, according to human judges.",
"title": ""
},
{
"docid": "fe116849575dd91759a6c1ef7ed239f3",
"text": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
"title": ""
},
{
"docid": "0765510720f450736135efd797097450",
"text": "In this paper we discuss the re-orientation of human-computer interaction as an aesthetic field. We argue that mainstream approaches lack of general openness and ability to assess experience aspects of interaction, but that this can indeed be remedied. We introduce the concept of interface criticism as a way to turn the conceptual re-orientation into handles for practical design, and we present and discuss an interface criticism guide.",
"title": ""
},
{
"docid": "f33e96f81e63510f0a5e34609a390c2d",
"text": "Authentication based on passwords is used largely in applications for computer security and privacy. However, human actions such as choosing bad passwords and inputting passwords in an insecure way are regarded as “the weakest link” in the authentication chain. Rather than arbitrary alphanumeric strings, users tend to choose passwords either short or meaningful for easy memorization. With web applications and mobile apps piling up, people can access these applications anytime and anywhere with various devices. This evolution brings great convenience but also increases the probability of exposing passwords to shoulder surfing attacks. Attackers can observe directly or use external recording devices to collect users’ credentials. To overcome this problem, we proposed a novel authentication system PassMatrix, based on graphical passwords to resist shoulder surfing attacks. With a one-time valid login indicator and circulative horizontal and vertical bars covering the entire scope of pass-images, PassMatrix offers no hint for attackers to figure out or narrow down the password even they conduct multiple camera-based attacks. We also implemented a PassMatrix prototype on Android and carried out real user experiments to evaluate its memorability and usability. From the experimental result, the proposed system achieves better resistance to shoulder surfing attacks while maintaining usability.",
"title": ""
}
] |
scidocsrr
|
056abfb0b87cae946c1658a942bffce3
|
Storytelling of Photo Stream with Bidirectional Multi-thread Recurrent Neural Network
|
[
{
"docid": "4f58d355a60eb61b1c2ee71a457cf5fe",
"text": "Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).",
"title": ""
}
] |
[
{
"docid": "7ca908e7896afc49a0641218e1c4febf",
"text": "Timely and accurate classification and interpretation of high-resolution images are very important for urban planning and disaster rescue. However, as spatial resolution gets finer, it is increasingly difficult to recognize complex patterns in high-resolution remote sensing images. Deep learning offers an efficient strategy to fill the gap between complex image patterns and their semantic labels. However, due to the hierarchical abstract nature of deep learning methods, it is difficult to capture the precise outline of different objects at the pixel level. To further reduce this problem, we propose an object-based deep learning method to accurately classify the high-resolution imagery without intensive human involvement. In this study, high-resolution images were used to accurately classify three different urban scenes: Beijing (China), Pavia (Italy), and Vaihingen (Germany). The proposed method is built on a combination of a deep feature learning strategy and an object-based classification for the interpretation of high-resolution images. Specifically, high-level feature representations extracted through the convolutional neural networks framework have been systematically investigated over five different layer configurations. Furthermore, to improve the classification accuracy, an object-based classification method also has been integrated with the deep learning strategy for more efficient image classification. Experimental results indicate that with the combination of deep learning and object-based classification, it is possible to discriminate different building types in Beijing Scene, such as commercial buildings and residential buildings with classification accuracies above 90%.",
"title": ""
},
{
"docid": "d04a6ca9c09b8c10daf64c9f7830c992",
"text": "Slave servo clocks have an essential role in hardware and software synchronization techniques based on Precision Time Protocol (PTP). The objective of servo clocks is to remove the drift between slave and master nodes, while keeping the output timing jitter within given uncertainty boundaries. Up to now, no univocal criteria exist for servo clock design. In fact, the relationship between controller design, performances and uncertainty sources is quite evanescent. In this paper, we propose a quite simple, but exhaustive linear model, which is expected to be used in the design of enhanced servo clock architectures.",
"title": ""
},
{
"docid": "44cda3da01ebd82fe39d886f8520ce13",
"text": "This paper describes some of the work on stereo that has been going on at INRIA in the last four years. The work has concentrated on obtaining dense, accurate, and reliable range maps of the environment at rates compatible with the real-time constraints of such applications as the navigation of mobile vehicles in man-made or natural environments. The class of algorithms which has been selected among several is the class of correlationbased stereo algorithms because they are the only ones that can produce su ciently dense range maps with an algorithmic structure which lends itself nicely to fast implementations because of the simplicity of the underlying computation. We describe the various improvements that we have brought to the original idea, including validation and characterization of the quality of the matches, a recursive implementation of the score computation which makes the method independent of the size of the correlation window, and a calibration method which does not require the use of a calibration pattern. We then describe two implementations of this algorithm on two very di erent pieces of hardware. The rst implementation is on a board with four Digital Signal Processors designed jointly with Matra MSII. This implementation can produce 64 64 range maps at rates varying between 200 and 400 ms, depending upon the range of disparities. The second implementation is on a board developed by DEC-PRL and can perform the cross-correlation of two 256 256 images in 140 ms. The rst implementation has been integrated in the navigation system of the INRIA cart and used to correct for inertial and odometric errors in navigation experiments both indoors and outdoors on road. This is the rst application of our correlation-based algorithm which is described in the paper. The second application has been done jointly with people from the french national space agency (CNES) to study the possibility of using stereo on a future planetary rover for the construction of Digital Elevation Maps. We have shown that real time stereo is possible today at low-cost and can be applied in real applications. The algorithm that has been described is not the most sophisticated available but we have made it robust and reliable thanks to a number of improvements. Even though each of these improvements is not earth-shattering from the pure research point of view, altogether they have allowed us to go beyond a very important threshold. This threshold measures the di erence between a program that runs in the laboratory on a few images and one that works continuously for hours on a sequence of stereo pairs and produces results at such rates and of such quality that they can be used to guide a real vehicle or to produce Discrete Elevation Maps. We believe that this threshold has only been reached in a very small number of cases.",
"title": ""
},
{
"docid": "45233b0580decd90135922ee8991791c",
"text": "In this paper, we present an object recognition and pose estimation framework consisting of a novel global object descriptor, so called Viewpoint oriented Color-Shape Histogram (VCSH), which combines object's color and shape information. During the phase of object modeling and feature extraction, the whole object's color point cloud model is built by registration from multi-view color point clouds. VCSH is trained using partial-view object color point clouds generated from different synthetic viewpoints. During the recognition phase, the object is identified and the closest viewpoint is extracted using the built feature database and object's features from real scene. The estimated closest viewpoint provides a good initialization for object pose estimation optimization using the iterative closest point strategy. Finally, objects in real scene are recognized and their accurate poses are retrieved. A set of experiments is realized where our proposed approach is proven to outperform other existing methods by guaranteeing highly accurate object recognition, fast and accurate pose estimation as well as exhibiting the capability of dealing with environmental illumination changes.",
"title": ""
},
{
"docid": "3db1f5eea78fc6a763e58c261502d156",
"text": "Deceptive opinion spam detection has attracted significant attention from both business and research communities. Existing approaches are based on manual discrete features, which can capture linguistic and psychological cues. However, such features fail to encode the semantic meaning of a document from the discourse perspective, which limits the performance. In this paper, we empirically explore a neural network model to learn document-level representation for detecting deceptive opinion spam. In particular, given a document, the model learns sentence representations with a convolutional neural network, which are combined using a gated recurrent neural network with attention mechanism to model discourse information and yield a document vector. Finally, the document representation is used directly as features to identify deceptive opinion spam. Experimental results on three domains (Hotel, Restaurant, and Doctor) show that our proposed method outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "d0b29aaf696df14670d223e22d801fe5",
"text": "The moment capacity of a laterally braced cold-formed steel flexural member with edge stiffened flanges (e.g., a channel or zee section) may be affected adversely by local or distortional buckling. New procedures for hand prediction of the buckling stress in the local and distortional mode are presented and verified. Numerical investigations are employed to highlight postbuckling behavior unique to the distortional mode. Compared with the local mode, the distortional mode is shown to have (1) heightened imperfection sensitivity, (2) lower postbuckling capacity, and (3) the ability to control the failure mechanism even in cases when the elastic buckling stress in the local mode is lower than in the distortional mode. Traditional design methods do not explicitly recognize distortional buckling, nor do they account for the observed phenomena in this mode. A new design method that integrates distortional buckling into the unified effective width approach, currently used in most cold-formed steel design specifications, is presented. For each element a local buckling stress and a reduced distortional buckling stress are compared to determine the effective width. Comparison with experimental tests shows that the new approach is more consistent and reliable than existing design methods.",
"title": ""
},
{
"docid": "4a4a0dde01536789bd53ec180a136877",
"text": "CONTEXT\nCurrent assessment formats for physicians and trainees reliably test core knowledge and basic skills. However, they may underemphasize some important domains of professional medical practice, including interpersonal skills, lifelong learning, professionalism, and integration of core knowledge into clinical practice.\n\n\nOBJECTIVES\nTo propose a definition of professional competence, to review current means for assessing it, and to suggest new approaches to assessment.\n\n\nDATA SOURCES\nWe searched the MEDLINE database from 1966 to 2001 and reference lists of relevant articles for English-language studies of reliability or validity of measures of competence of physicians, medical students, and residents.\n\n\nSTUDY SELECTION\nWe excluded articles of a purely descriptive nature, duplicate reports, reviews, and opinions and position statements, which yielded 195 relevant citations.\n\n\nDATA EXTRACTION\nData were abstracted by 1 of us (R.M.E.). Quality criteria for inclusion were broad, given the heterogeneity of interventions, complexity of outcome measures, and paucity of randomized or longitudinal study designs.\n\n\nDATA SYNTHESIS\nWe generated an inclusive definition of competence: the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served. Aside from protecting the public and limiting access to advanced training, assessments should foster habits of learning and self-reflection and drive institutional change. Subjective, multiple-choice, and standardized patient assessments, although reliable, underemphasize important domains of professional competence: integration of knowledge and skills, context of care, information management, teamwork, health systems, and patient-physician relationships. Few assessments observe trainees in real-life situations, incorporate the perspectives of peers and patients, or use measures that predict clinical outcomes.\n\n\nCONCLUSIONS\nIn addition to assessments of basic skills, new formats that assess clinical reasoning, expert judgment, management of ambiguity, professionalism, time management, learning strategies, and teamwork promise a multidimensional assessment while maintaining adequate reliability and validity. Institutional support, reflection, and mentoring must accompany the development of assessment programs.",
"title": ""
},
{
"docid": "4b03e14540f4f38398dfea2dcd9950be",
"text": "This paper presents a simple method for segmenting colour regions into categories like red, green, blue, and yellow. We are interested in studying how colour categories influence colour selection during scientific visualization. The ability to name individual colours is also important in other problem domains like real-time displays, user-interface design, and medical imaging systems. Our algorithm uses the Munsell and CIE LUV colour models to automatically segment a colour space like RGB or CIE XYZ into ten colour categories. Users are then asked to name a small number of representative colours from each category. This provides three important results: a measure of the perceptual overlap between neighbouring categories, a measure of a category’s strength, and a user-chosen name for each strong category. We evaluated our technique by segmenting known colour regions from the RGB, HSV, and CIE LUV colour models. The names we obtained were accurate, and the boundaries between different colour categories were well defined. We concluded our investigation by conducting an experiment to obtain user-chosen names and perceptual overlap for ten colour categories along the circumference of a colour wheel in CIE LUV.",
"title": ""
},
{
"docid": "63b78edf4fe9578d576ba89da14c850a",
"text": "Growth of internet era and corporate sector dealings communication online has introduced crucial security challenges in cyber space. Statistics of recent large scale attacks defined new class of threat to online world, advanced persistent threat (APT) able to impact national security and economic stability of any country. From all APTs, botnet is one of the well-articulated and stealthy attacks to perform cybercrime. Botnet owners and their criminal organizations are continuously developing innovative ways to infect new targets into their networks and exploit them. The concept of botnet refers collection of compromised computers (bots) infected by automated software robots, that interact to accomplish some distributed task which run without human intervention for illegal purposes. They are mostly malicious in nature and allow cyber criminals to control the infected machines remotely without the victim's knowledge. They use various techniques, communication protocols and topologies in different stages of their lifecycle; also specifically they can upgrade their methods at any time. Botnet is global in nature and their target is to steal or destroy valuable information from organizations as well as individuals. In this paper we present real world botnet (APTs) survey.",
"title": ""
},
{
"docid": "76089ed248c88e78c92c45fd1cbd914d",
"text": "Social relationships play a key role in depression. This is apparent in its etiology, symptomatology, and effective treatment. However, there has been little consensus about the best way to conceptualize the link between depression and social relationships. Furthermore, the extensive social-psychological literature on the nature of social relationships, and in particular, research on social identity, has not been integrated with depression research. This review presents evidence that social connectedness is key to understanding the development and resolution of clinical depression. The social identity approach is then used as a basis for conceptualizing the role of social relationships in depression, operationalized in terms of six central hypotheses. Research relevant to these hypotheses is then reviewed. Finally, we present an agenda for future research to advance theoretical and empirical understanding of the link between social identity and depression, and to translate the insights of this approach into clinical practice.",
"title": ""
},
{
"docid": "67265d70b2d704c0ab2898c933776dc2",
"text": "The intima-media thickness (IMT) of the common carotid artery (CCA) is widely used as an early indicator of cardiovascular disease (CVD). Typically, the IMT grows with age and this is used as a sign of increased risk of CVD. Beyond thickness, there is also clinical interest in identifying how the composition and texture of the intima-media complex (IMC) changed and how these textural changes grow into atherosclerotic plaques that can cause stroke. Clearly though texture analysis of ultrasound images can be greatly affected by speckle noise, our goal here is to develop effective despeckle noise methods that can recover image texture associated with increased rates of atherosclerosis disease. In this study, we perform a comparative evaluation of several despeckle filtering methods, on 100 ultrasound images of the CCA, based on the extracted multiscale Amplitude-Modulation Frequency-Modulation (AM-FM) texture features and visual image quality assessment by two clinical experts. Texture features were extracted from the automatically segmented IMC for three different age groups. The despeckle filters hybrid median and the homogeneous mask area filter showed the best performance by improving the class separation between the three age groups and also yielded significantly improved image quality.",
"title": ""
},
{
"docid": "37ad695a33cd19b664788964653d81b0",
"text": "Commonsense reasoning and probabilistic planning are two of the most important research areas in artificial intelligence. This paper focuses on Integrated commonsense Reasoning and probabilistic Planning (IRP) problems. On one hand, commonsense reasoning algorithms aim at drawing conclusions using structured knowledge that is typically provided in a declarative way. On the other hand, probabilistic planning algorithms aim at generating an action policy that can be used for action selection under uncertainty. Intuitively, reasoning and planning techniques are good at “understanding the world” and “accomplishing the task” respectively. This paper discusses the complementary features of the two computing paradigms, presents the (potential) advantages of their integration, and summarizes existing research on this topic.",
"title": ""
},
{
"docid": "0fb45311d5e6a7348917eaa12ffeab46",
"text": "Question Answering is a task which requires building models capable of providing answers to questions expressed in human language. Full question answering involves some form of reasoning ability. We introduce a neural network architecture for this task, which is a form of Memory Network, that recognizes entities and their relations to answers through a focus attention mechanism. Our model is named Question Dependent Recurrent Entity Network and extends Recurrent Entity Network by exploiting aspects of the question during the memorization process. We validate the model on both synthetic and real datasets: the bAbI question answering dataset and the CNN & Daily News reading comprehension dataset. In our experiments, the models achieved a State-ofThe-Art in the former and competitive results in the latter.",
"title": ""
},
{
"docid": "2c7b61aaca38051230122bef872002cc",
"text": "Signal-based Surveillance systems such as Closed Circuits Televisions (CCTV) have been widely installed in public places. Those systems are normally used to find the events with security interest, and play a significant role in public safety. Though such systems are still heavily reliant on human labour to monitor the captured information, there have been a number of automatic techniques proposed to analysing the data. This article provides an overview of automatic surveillance event detection techniques . Despite it’s popularity in research, it is still too challenging a problem to be realised in a real world deployment. The challenges come from not only the detection techniques such as signal processing and machine learning, but also the experimental design with factors such as data collection, evaluation protocols, and ground-truth annotation. Finally, this article propose that multi-disciplinary research is the path towards a solution to this problem.",
"title": ""
},
{
"docid": "59c3a118e537752f9647ff6d1f585bf3",
"text": "OBJECTIVE\nThis study sought to investigate the prevalence of laparoscopic surgeon injury/illness symptoms and evaluate associations between symptoms and operating room ergonomics.\n\n\nBACKGROUND\nAlthough laparoscopic procedures significantly benefit patients in terms of decreased recovery times and improved outcomes, they contribute to mental fatigue and musculoskeletal problems among surgeons. A variety of ergonomic interventions and applications are implemented by surgeons to reduce health problems. Currently, there is a gap in knowledge regarding a surgeon's individual assessment of the operating room, an assessment that, in turn, would prompt the implementation of these interventions.\n\n\nMETHOD\nA new survey instrument solicited information from surgeons (N = 61) regarding surgeon demographics, perception, frequency of operating room equipment adjustment, and self-reported symptoms. Surgeons responded to questions addressing safety, ergonomics, and fatigue in the operating room, using a 5-point Likert-type scale that included the option undecided.\n\n\nRESULTS\nSurgeons who responded undecided were more likely to experience symptoms of injury/illness than respondents who were able to assess the features of their operating rooms. Symptoms were experienced by 100% of participants. The most prevalent symptoms were neck stiffness, back stiffness, and back pain.\n\n\nCONCLUSION\nThis study supports hypotheses that surgeons are experiencing body part discomfort and indicators of fatigue that may be associated with performing laparoscopy. Results suggest that awareness, knowledge, and utilization of ergonomic principles could protect surgeons against symptoms that lead to occupational injury.\n\n\nAPPLICATION\nThe purpose of this brief report is to convey the importance of ergonomic principles in the operating room, specific to laparoscopic surgery and surgeon injury/illness symptoms.",
"title": ""
},
{
"docid": "08ebd914f39a284fb3ba6810bd1b0802",
"text": "The recent influx in generation, storage and availability of textual data presents researchers with the challenge of developing suitable methods for their analysis. Latent Semantic Analysis (LSA), a member of a family of methodological approaches that offers an opportunity to address this gap by describing the semantic content in textual data as a set of vectors, was pioneered by researchers in psychology, information retrieval, and bibliometrics. LSA involves a matrix operation called singular value decomposition, an extension of principal component analysis. LSA generates latent semantic dimensions that are either interpreted, if the researcher’s primary interest lies with the understanding of the thematic structure in the textual data, or used for purposes of clustering, categorisation and predictive modelling, if the interest lies with the conversion of raw text into numerical data, as a precursor to subsequent analysis. This paper reviews five methodological issues that need to be addressed by the researcher who will embark on LSA. We examine the dilemmas, present the choices, and discuss the considerations under which good methodological decisions are made. We illustrate these issues with the help of four small studies, involving the analysis of abstracts for papers published in the European Journal of Information Systems.",
"title": ""
},
{
"docid": "7882a3a5796052253db44cbb76f2e1eb",
"text": "The discovery of regulated cell death presents tantalizing possibilities for gaining control over the life–death decisions made by cells in disease. Although apoptosis has been the focus of drug discovery for many years, recent research has identified regulatory mechanisms and signalling pathways for previously unrecognized, regulated necrotic cell death routines. Distinct critical nodes have been characterized for some of these alternative cell death routines, whereas other cell death routines are just beginning to be unravelled. In this Review, we describe forms of regulated necrotic cell death, including necroptosis, the emerging cell death modality of ferroptosis (and the related oxytosis) and the less well comprehended parthanatos and cyclophilin D-mediated necrosis. We focus on small molecules, proteins and pathways that can induce and inhibit these non-apoptotic forms of cell death, and discuss strategies for translating this understanding into new therapeutics for certain disease contexts.",
"title": ""
},
{
"docid": "f649a975dcec02ea82bebb95dafd5eab",
"text": "Online games have emerged as popular computer applications and gamer loyalty is vital to game providers, since online gamers frequently switch between games. Online gamers often participate in teams also. This study investigates whether and how team participation improves loyalty. We utilized a cross-sectional design and an online survey, with 546 valid responses from online game subjects. Confirmatory factor analysis was applied to assess measurement reliability and validity directly, and structural equation modeling was utilized to test our hypotheses. The results indicate that participation in teams motivates online gamers to adhere to team norms and satisfies their social needs, also enhancing their loyalty. The contribution of this research is the introduction of social norms to explain online gamer loyalty. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4e63f4a95d501641b80fcdf9bc0f89f6",
"text": "Streptococcus milleri was isolated from the active lesions of three patients with perineal hidradenitis suppurativa. In each patient, elimination of this organism by appropriate antibiotic therapy was accompanied by marked clinical improvement.",
"title": ""
},
{
"docid": "fd786ae1792e559352c75940d84600af",
"text": "In this paper, we obtain an (1 − e−1)-approximation algorithm for maximizing a nondecreasing submodular set function subject to a knapsack constraint. This algorithm requires O(n) function value computations. c © 2003 Published by Elsevier B.V.",
"title": ""
}
] |
scidocsrr
|
716fea4cbfe4446d6ae7a354264986be
|
Extracting Opinions, Opinion Holders, And Topics Expressed In Online News Media Text
|
[
{
"docid": "03b3d8220753570a6b2f21916fe4f423",
"text": "Recent systems have been developed for sentiment classification, opinion recogni tion, and opinion analysis (e.g., detect ing polarity and strength). We pursue an other aspect of opinion analysis: identi fying the sources of opinions, emotions, and sentiments. We view this problem as an information extraction task and adopt a hybrid approach that combines Con ditional Random Fields (Lafferty et al., 2001) and a variation of AutoSlog (Riloff, 1996a). While CRFs model source iden tification as a sequence tagging task, Au toSlog learns extraction patterns. Our re sults show that the combination of these two methods performs better than either one alone. The resulting system identifies opinion sources with precision and recall using a head noun matching measure, and precision and recall using an overlap measure.",
"title": ""
}
] |
[
{
"docid": "2ec0db3840965993e857b75bd87a43b7",
"text": "Light field cameras capture full spatio-angular information of the light field, and enable many novel photographic and scientific applications. It is often stated that there is a fundamental trade-off between spatial and angular resolution, but there has been limited understanding of this trade-off theoretically or numerically. Moreover, it is very difficult to evaluate the design of a light field camera because a new design is usually reported with its prototype and rendering algorithm, both of which affect resolution.\n In this article, we develop a light transport framework for understanding the fundamental limits of light field camera resolution. We first derive the prefiltering model of lenslet-based light field cameras. The main novelty of our model is in considering the full space-angle sensitivity profile of the photosensor—in particular, real pixels have nonuniform angular sensitivity, responding more to light along the optical axis rather than at grazing angles. We show that the full sensor profile plays an important role in defining the performance of a light field camera. The proposed method can model all existing lenslet-based light field cameras and allows to compare them in a unified way in simulation, independent of the practical differences between particular prototypes. We further extend our framework to analyze the performance of two rendering methods: the simple projection-based method and the inverse light transport process. We validate our framework with both flatland simulation and real data from the Lytro light field camera.",
"title": ""
},
{
"docid": "97a1d44956f339a678da4c7a32b63bf6",
"text": "As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.",
"title": ""
},
{
"docid": "f5a4d05c8b8c42cdca540794000afad5",
"text": "Design thinking (DT) is regarded as a system of three overlapping spaces—viability, desirability, and feasibility—where innovation increases when all three perspectives are addressed. Understanding how innovation within teams can be supported by DT methods and tools captivates the interest of business communities. This paper aims to examine how DT methods and tools foster innovation in teams. A case study approach, based on two workshops, examined three DT methods with a software tool. The findings support the use of DT methods and tools as a way of incubating ideas and creating innovative solutions within teams when team collaboration and software limitations are balanced. The paper proposes guidelines for utilizing DT methods and tools in innovation",
"title": ""
},
{
"docid": "bed3e58bc8e69242e6e00c7d13dabb93",
"text": "The convergence of online learning algorithms is analyzed using the tools of the stochastic approximation theory, and proved under very weak conditions. A general framework for online learning algorithms is first presented. This framework encompasses the most common online learning algorithms in use today, as illustrated by several examples. The stochastic approximation theory then provides general results describing the convergence of all these learning algorithms at once. Revised version, May 2018.",
"title": ""
},
{
"docid": "c3650b4a82790147a7ab911ce8b0c424",
"text": "OBJECTIVES\nTo demonstrate through a clinical case the systemic effetcss and complications that can arise after an acute gastric dilatation caused by an eating binge.\n\n\nCLINICAL CASE\nA young woman diagnosed of bulimia nervosa presents to the emergency room after a massive food intake. She shows important abdominal distention and refers inability to self-induce vomit. A few hours later she commences to show signs of hemodynamic instability and oliguria. A CT scan is performed; it shows bilateral renal infarctions due to compression of the abdominal aorta and some of its visceral branches.\n\n\nINTERVENTIONS\nThe evaluation procedures included quantification of the gastric volume by CT. A decompression gastrostomy was performed; it allowed the evacuation of a large amount of gastric content and restored blood supply to the abdomen, which improved renal perfusion.\n\n\nCONCLUSIONS\nCT is a basic diagnostic tool that not only allows us to quantify the degree of acute gastric dilatation but can also evaluate the integrity of the adjacent organs which may be suffering compression hypoperfusion.",
"title": ""
},
{
"docid": "d27d17176181b09a74c9c8115bc6a66e",
"text": "In this chapter, we provide definitions of Business Intelligence (BI) and outline the development of BI over time, particularly carving out current questions of BI. Different scenarios of BI applications are considered and business perspectives and views of BI on the business process are identified. Further, the goals and tasks of BI are discussed from a management and analysis point of view and a method format for BI applications is proposed. This format also gives an outline of the book’s contents. Finally, examples from different domain areas are introduced which are used for demonstration in later chapters of the book. 1.1 Definition of Business Intelligence If one looks for a definition of the term Business Intelligence (BI) one will find the first reference already in 1958 in a paper of H.P. Luhn (cf. [14]). Starting from the definition of the terms “Intelligence” as “the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal” and “Business” as “a collection of activities carried on for whatever purpose, be it science, technology, commerce, industry, law, government, defense, et cetera”, he specifies a business intelligence system as “[an] automatic system [that] is being developed to disseminate information to the various sections of any industrial, scientific or government organization.” The main task of Luhn’s system was automatic abstracting of documents and delivering this information to appropriate so-called action points. This definition did not come into effect for 30 years, and in 1989Howard Dresner coined the term Business Intelligence (BI) again. He introduced it as an umbrella term for a set of concepts and methods to improve business decision making, using systems based on facts. Many similar definitions have been given since. In Negash [18], important aspects of BI are emphasized by stating that “. . . business intelligence systems provide actionable information delivered at the right time, at the right location, and in the right form to assist decision makers.” Today one can find many different definitions which show that at the top level the intention of BI has not changed so much. For example, in [20] BI is defined as “an integrated, company-specific, IT-based total approach for managerial decision © Springer-Verlag Berlin Heidelberg 2015 W. Grossmann, S. Rinderle-Ma, Fundamentals of Business Intelligence, Data-Centric Systems and Applications, DOI 10.1007/978-3-662-46531-8_1 1",
"title": ""
},
{
"docid": "da695403ee969f71ea01a4b16477556f",
"text": "Data augmentation is a widely used technique in many machine learning tasks, such as image classification, to virtually enlarge the training dataset size and avoid overfitting. Traditional data augmentation techniques for image classification tasks create new samples from the original training data by, for example, flipping, distorting, adding a small amount of noise to, or cropping a patch from an original image. In this paper, we introduce a simple but surprisingly effective data augmentation technique for image classification tasks. With our technique, named SamplePairing, we synthesize a new sample from one image by overlaying another image randomly chosen from the training data (i.e., taking an average of two images for each pixel). By using two images randomly selected from the training set, we can generate N new samples from N training samples. This simple data augmentation technique significantly improved classification accuracy for all the tested datasets; for example, the top-1 error rate was reduced from 33.5% to 29.0% for the ILSVRC 2012 dataset with GoogLeNet and from 8.22% to 6.93% in the CIFAR-10 dataset. We also show that our SamplePairing technique largely improved accuracy when the number of samples in the training set was very small. Therefore, our technique is more valuable for tasks with a limited amount of training data, such as medical imaging tasks.",
"title": ""
},
{
"docid": "0d0d11c1e340e67939cfba0cde4783ed",
"text": "Recent research effort in poem composition has focused on the use of automatic language generation to produce a polished poem. A less explored question is how effectively a computer can serve as an interactive assistant to a poet. For this purpose, we built a web application that combines rich linguistic knowledge from classical Chinese philology with statistical natural language processing techniques. The application assists users in composing a ‘couplet’—a pair of lines in a traditional Chinese poem—by making suggestions for the next and corresponding characters. A couplet must meet a complicated set of requirements on phonology, syntax, and parallelism, which are challenging for an amateur poet to master. The application checks conformance to these requirements and makes suggestions for characters based on lexical, syntactic, and semantic properties. A distinguishing feature of the application is its extensive use of linguistic knowledge, enabling it to inform users of specific phonological principles in detail, and to explicitly model semantic parallelism, an essential characteristic of Chinese poetry. We evaluate the quality of poems composed solely with characters suggested by the application, and the coverage of its character suggestions. .................................................................................................................................................................................",
"title": ""
},
{
"docid": "4d3ba5824551b06c861fc51a6cae41a5",
"text": "This paper shows a gate driver design for 1.7 kV SiC MOSFET module as well a Rogowski-coil based current sensor for effective short circuit protection. The design begins with the power architecture selection for better common-mode noise immunity as the driver is subjected to high dv/dt due to the very high switching speed of the SiC MOSFET modules. The selection of the most appropriate gate driver IC is made to ensure the best performance and full functionalities of the driver, followed by the circuitry designs of paralleled external current booster, Soft Turn-Off, and Miller Clamp. In addition to desaturation, a high bandwidth PCB-based Rogowski current sensor is proposed to serve as a more effective method for the short circuit protection for the high-cost SiC MOSFET modules.",
"title": ""
},
{
"docid": "41f7d66c6e2c593eb7bda22c72a7c048",
"text": "Artificial neural networks are algorithms that can be used to perform nonlinear statistical modeling and provide a new alternative to logistic regression, the most commonly used method for developing predictive models for dichotomous outcomes in medicine. Neural networks offer a number of advantages, including requiring less formal statistical training, ability to implicitly detect complex nonlinear relationships between dependent and independent variables, ability to detect all possible interactions between predictor variables, and the availability of multiple training algorithms. Disadvantages include its \"black box\" nature, greater computational burden, proneness to overfitting, and the empirical nature of model development. An overview of the features of neural networks and logistic regression is presented, and the advantages and disadvantages of using this modeling technique are discussed.",
"title": ""
},
{
"docid": "98269ed4d72abecb6112c35e831fc727",
"text": "The goal of this article is to place the role that social media plays in collective action within a more general theoretical structure, using the events of the Arab Spring as a case study. The article presents two broad theoretical principles. The first is that one cannot understand the role of social media in collective action without first taking into account the political environment in which they operate. The second principle states that a significant increase in the use of the new media is much more likely to follow a significant amount of protest activity than to precede it. The study examines these two principles using political, media, and protest data from twenty Arab countries and the Palestinian Authority. The findings provide strong support for the validity of the claims.",
"title": ""
},
{
"docid": "333fd7802029f38bda35cd2077e7de59",
"text": "Human shape estimation is an important task for video editing, animation and fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric bodypart segmentation.",
"title": ""
},
{
"docid": "2cb0c74e57dea6fead692d35f8a8fac6",
"text": "Matching local image descriptors is a key step in many computer vision applications. For more than a decade, hand-crafted descriptors such as SIFT have been used for this task. Recently, multiple new descriptors learned from data have been proposed and shown to improve on SIFT in terms of discriminative power. This paper is dedicated to an extensive experimental evaluation of learned local features to establish a single evaluation protocol that ensures comparable results. In terms of matching performance, we evaluate the different descriptors regarding standard criteria. However, considering matching performance in isolation only provides an incomplete measure of a descriptors quality. For example, finding additional correct matches between similar images does not necessarily lead to a better performance when trying to match images under extreme viewpoint or illumination changes. Besides pure descriptor matching, we thus also evaluate the different descriptors in the context of image-based reconstruction. This enables us to study the descriptor performance on a set of more practical criteria including image retrieval, the ability to register images under strong viewpoint and illumination changes, and the accuracy and completeness of the reconstructed cameras and scenes. To facilitate future research, the full evaluation pipeline is made publicly available.",
"title": ""
},
{
"docid": "d208033e210816d7a9454749080587d9",
"text": "Graph classification is a problem with practical applications in many different domains. Most of the existing methods take the entire graph into account when calculating graph features. In a graphlet-based approach, for instance, the entire graph is processed to get the total count of different graphlets or subgraphs. In the real-world, however, graphs can be both large and noisy with discriminative patterns confined to certain regions in the graph only. In this work, we study the problem of attentional processing for graph classification. The use of attention allows us to focus on small but informative parts of the graph, avoiding noise in the rest of the graph. We present a novel RNN model, called the Graph Attention Model (GAM), that processes only a portion of the graph by adaptively selecting a sequence of “interesting” nodes. The model is equipped with an external memory component which allows it to integrate information gathered from different parts of the graph. We demonstrate the effectiveness of the model through various experiments.",
"title": ""
},
{
"docid": "d18faf207a0dbccc030e5dcc202949ab",
"text": "This manuscript conducts a comparison on modern object detection systems in their ability to detect multiple maritime vessel classes. Three highly scoring algorithms from the Pascal VOC Challenge, Histogram of Oriented Gradients by Dalal and Triggs, Exemplar-SVM by Malisiewicz, and Latent-SVM with Deformable Part Models by Felzenszwalb, were compared to determine performance of recognition within a specific category rather than the general classes from the original challenge. In all cases, the histogram of oriented edges was used as the feature set and support vector machines were used for classification. A summary and comparison of the learning algorithms is presented and a new image corpus of maritime vessels was collected. Precision-recall results show improved recognition performance is achieved when accounting for vessel pose. In particular, the deformable part model has the best performance when considering the various components of a maritime vessel.",
"title": ""
},
{
"docid": "4b69831f2736ae08049be81e05dd4046",
"text": "One of the most important aspects in playing the piano is using the appropriate fingers to facilitate movement and transitions. The fingering arrangement depends to a ce rtain extent on the size of the musician’s hand. We hav e developed an automatic fingering system that, given a sequence of pitches, suggests which fingers should be used. The output can be personalized to agree with t he limitations of the user’s hand. We also consider this system to be the base of a more complex future system: a score reduction system that will reduce orchestra scor e to piano scores. This paper describes: • “Vertical cost” model: the stretch induced by a given hand position. • “Horizontal cost” model: transition between two hand positions. • A system that computes low-cost fingering for a given piece of music. • A machine learning technique used to learn the appropriate parameters in the models.",
"title": ""
},
{
"docid": "9dfcba284d0bf3320d893d4379042225",
"text": "Botnet is a hybrid of previous threats integrated with a command and control system and hundreds of millions of computers are infected. Although botnets are widespread development, the research and solutions for botnets are not mature. In this paper, we present an overview of research on botnets. We discuss in detail the botnet and related research including infection mechanism, botnet malicious behavior, command and control models, communication protocols, botnet detection, and botnet defense. We also present a simple case study of IRC-based SpyBot.",
"title": ""
},
{
"docid": "7d0fb12fce0ef052684a8664a3f5c543",
"text": "In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose datadriven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the “risky region” as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage.",
"title": ""
},
{
"docid": "328aad76b94b34bf49719b98ae391cfe",
"text": "We discuss methods for statistically analyzing the output from stochastic discrete-event or Monte Carlo simulations. Terminating and steady-state simulations are considered.",
"title": ""
},
{
"docid": "fb0e9f6f58051b9209388f81e1d018ff",
"text": "Because many databases contain or can be embellished with structural information, a method for identifying interesting and repetitive substructures is an essential component to discovering knowledge in such databases. This paper describes the SUBDUE system, which uses the minimum description length (MDL) principle to discover substructures that compress the database and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. Inclusion of background knowledgeguides SUBDUE toward appropriate substructures for a particular domain or discovery goal, and the use of an inexact graph match allows a controlled amount of deviations in the instance of a substructure concept. We describe the application of SUBDUE to a variety of domains. We also discuss approaches to combining SUBDUE with non-structural discovery systems.",
"title": ""
}
] |
scidocsrr
|
f774a5e356a6460e24685ecf50fc1d06
|
The role of orienting in vibrissal touch sensing
|
[
{
"docid": "0d723c344ab5f99447f7ad2ff72c0455",
"text": "The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. The videos were linked and analysed frame by frame. Foveal direction was always close to the object being manipulated, and very few fixations were irrelevant to the task. The first object-related fixation typically led the first indication of manipulation by 0.56 s, and vision moved to the next object about 0.61 s before manipulation of the previous object was complete. Each object-related act that did not involve a waiting period lasted an average of 3.3 s and involved about 7 fixations. Roughly a third of all fixations on objects could be definitely identified with one of four monitoring functions: locating objects used later in the process, directing the hand or object in the hand to a new location, guiding the approach of one object to another (e.g. kettle and lid), and checking the state of some variable (e.g. water level). We conclude that although the actions of tea-making are 'automated' and proceed with little conscious involvement, the eyes closely monitor every step of the process. This type of unconscious attention must be a common phenomenon in everyday life.",
"title": ""
}
] |
[
{
"docid": "625f54bb3157e429af1af8f0d04f0713",
"text": "Proof theory is a powerful tool for understanding computational phenomena, as most famously exemplified by the Curry–Howard isomorphism between intuitionistic logic and the simply-typed λ-calculus. In this paper, we identify a fragment of intuitionistic linear logic with least fixed points and establish a Curry–Howard isomorphism between a class of proofs in this fragment and deterministic finite automata. Proof-theoretically, closure of regular languages under complementation, union, and intersection can then be understood in terms of cut elimination. We also establish an isomorphism between a different class of proofs and subsequential string transducers. Because prior work has shown that linear proofs can be seen as session-typed processes, a concurrent semantics of transducer composition is obtained for free. 1998 ACM Subject Classification F.4.1 Mathematical Logic; F.1.1 Models of Computation",
"title": ""
},
{
"docid": "8b764c3b6576e8334979503d9d76a8d3",
"text": "Twitter is a well-known micro-blogging website which allows millions of users to interact over different types of communities, topics, and tweeting trends. The big data being generated on Twitter daily, and its significant impact on social networking, has motivated the application of data mining (analysis) to extract useful information from tweets. In this paper, we analyze the impact of tweets in predicting the winner of the recent 2013 election held in Pakistan. We identify relevant Twitter users, pre-process their tweets, and construct predictive models for three representative political parties which were significantly tweeted, i.e., Pakistan Tehreek-e-Insaaf (PTI), Pakistan Muslim League Nawaz (PMLN), and Muttahida Qaumi Movement (MQM). The predictions for last four days before the elections showed that PTI will emerge as the election winner, which was actually won by PMLN. However, considering that PTI obtained landslide victory in one province and bagged several important seats across the country, we conclude that Twitter can have some type of a positive influence on the election result, although it cannot be considered representative of the overall voting population.",
"title": ""
},
{
"docid": "ec5e3b472973e3f77812976b1dd300a5",
"text": "In this thesis we investigate different methods of automating behavioral analysis in animal videos using shapeand motion-based models, with a focus on classifying large datasets of rodent footage. In order to leverage the recent advances in deep learning techniques a massive number of training samples is required, which has lead to the development of a data transfer pipeline to gather footage from multiple video sources and a custom-built web-based video annotation tool to create annotation datasets. Finally we develop and compare new deep convolutional and recurrent-convolutional neural network architectures that outperform existing systems.",
"title": ""
},
{
"docid": "2749fc2afe66efab60abc7ca33cc666a",
"text": "The pure methods in a program are those that exhibit functional or side effect free behaviour, a useful property in many contexts. However, existing purity investigations present primarily staticresults. We perform a detailed examination of dynamic method purityin Java programs using a JVM-based analysis. We evaluate multiple purity definitions that range from strong to weak, consider purity forms specific to dynamic execution, and accomodate constraintsimposed by an example consumer application, memoization. We show that while dynamic method purity is actually fairly consistent between programs, examining pure invocation counts and the percentage of the byte code instruction stream contained within some pure method reveals great variation. We also show that while weakening purity definitions exposes considerable dynamic purity, consumer requirements can limitthe actual utility of this information.",
"title": ""
},
{
"docid": "c50cf41ef8cc85be0558f9132c60b1f5",
"text": "A System Architecture for Context-Aware Mobile Computing William Noah Schilit Computer applications traditionally expect a static execution environment. However, this precondition is generally not possible for mobile systems, where the world around an application is constantly changing. This thesis explores how to support and also exploit the dynamic configurations and social settings characteristic of mobile systems. More specifically, it advances the following goals: (1) enabling seamless interaction across devices; (2) creating physical spaces that are responsive to users; and (3) and building applications that are aware of the context of their use. Examples of these goals are: continuing in your office a program started at home; using a PDA to control someone else’s windowing UI; automatically canceling phone forwarding upon return to your office; having an airport overheaddisplay highlight the flight information viewers are likely to be interested in; easily locating and using the nearest printer or fax machine; and automatically turning off a PDA’s audible e-mail notification when in a meeting. The contribution of this thesis is an architecture to support context-aware computing; that is, application adaptation triggered by such things as the location of use, the collection of nearby people, the presence of accessible devices and other kinds of objects, as well as changes to all these things over time. Three key issues are addressed: (1) the information needs of applications, (2) where applications get various pieces of information and (3) how information can be efficiently distributed. A dynamic environment communication model is introduced as a general mechanism for quickly and efficiently learning about changes occurring in the environment in a fault tolerant manner. For purposes of scalability, multiple dynamic environment servers store user, device, and, for each geographic region, context information. In order to efficiently disseminate information from these components to applications, a dynamic collection of multicast groups is employed. The thesis also describes a demonstration system based on the Xerox PARCTAB, a wireless palmtop computer.",
"title": ""
},
{
"docid": "0e4cf084d126a0c87e88e3e95ec2cf42",
"text": "Owing to the increasing importance of genomic information, obtaining genomic DNA easily from biological specimens has become more and more important. This article proposes an efficient method for obtaining genomic DNA from nail clippings. Nail clippings can be easily obtained, are thermostable and easy to transport, and have low infectivity. The drawback of their use, however, has been the difficulty of extracting genomic material from them. We have overcome this obstacle using the protease solution obtained from Cucumis melo. The keratinolytic activity of the protease solution was 1.78-fold higher than that of proteinase K, which is commonly used to degrade keratin. With the protease solution, three times more DNA was extracted than when proteinase K was used. In order to verify the integrity of the extracted DNA, genotype analysis on 170 subjects was performed by both PCR-RFLP and Real Time PCR. The results of the genotyping showed that the extracted DNA was suitable for genotyping analysis. In conclusion, we have developed an efficient extraction method for using nail clippings as a genome source and a research tool in molecular epidemiology, medical diagnostics, and forensic science.",
"title": ""
},
{
"docid": "ae5976a021bd0c4ff5ce14525c1716e7",
"text": "We present PARAM 1.0, a model checker for parametric discrete-time Markov chains (PMCs). PARAM can evaluate temporal properties of PMCs and certain extensions of this class. Due to parametricity, evaluation results are polynomials or rational functions. By instantiating the parameters in the result function, one can cheaply obtain results for multiple individual instantiations, based on only a single more expensive analysis. In addition, it is possible to post-process the result function symbolically using for instance computer algebra packages, to derive optimum parameters or to identify worst cases.",
"title": ""
},
{
"docid": "30c7bc7bd823935969e6086a9e728515",
"text": "A systematic methodology for layout optimization of active devices for millimeter-wave (mm-wave) application is proposed. A hybrid mm-wave modeling technique was developed to extend the validity of the device compact models up to 100 GHz. These methods resulted in the design of a customized 90 nm device layout which yields an extrapolated of 300 GHz from an intrinsic device . The device is incorporated into a low-power 60 GHz amplifier consuming 10.5 mW, providing 12.2 dB of gain, and an output of 4 dBm. An experimental three-stage 104 GHz tuned amplifier has a measured peak gain of 9.3 dB. Finally, a Colpitts oscillator operating at 104 GHz delivers up to 5 dBm of output power while consuming 6.5 mW.",
"title": ""
},
{
"docid": "df11a24f72f6964e4ca123bc8f6e1e5e",
"text": "The matching performance of automated face recognition has significantly improved over the past decade. At the same time several challenges remain that significantly affect the deployment of such systems in security applications. In this work, we study the impact of a commonly used face altering technique that has received limited attention in the biometric literature, viz., non-permanent facial makeup. Towards understanding its impact, we first assemble two databases containing face images of subjects, before and after applying makeup. We present experimental results on both databases that reveal the effect of makeup on automated face recognition and suggest that this simple alteration can indeed compromise the accuracy of a biometric system. While these are early results, our findings clearly indicate the need for a better understanding of this face altering scheme and the importance of designing algorithms that can successfully overcome the obstacle imposed by the application of facial makeup.",
"title": ""
},
{
"docid": "c64d46b03514b427766410a0dcefe3c2",
"text": "We introduce a rate-based congestion control mechanism for Content-Centric Networking (CCN). It builds on the fact that one Interest retrieves at most one Data packet. Congestion can occur when aggregate conversations arrive in excess and fill up the transmission queue of a CCN router. We compute the available capacity of each CCN router in a distributed way in order to shape their conversations Interest rate and therefore, adjust dynamically their Data rate and transmission buffer occupancy. We demonstrate the convergence properties of this Hop-by-hop Interest Shaping mechanism (HoBHIS) and provide a performance analysis based on various scenarios using our ns2 simulation environment.",
"title": ""
},
{
"docid": "40735be327c91882fdfc2cb57ad12f37",
"text": "BACKGROUND\nPolymorphism in the gene for angiotensin-converting enzyme (ACE), especially the DD genotype, is associated with risk for cardiovascular disease. Glomerulosclerosis has similarities to atherosclerosis, and we looked at ACE gene polymorphism in patients with kidney disease who were in a trial of long-term therapy with an ACE inhibitor or a beta-blocker.\n\n\nMETHODS\n81 patients with non-diabetic renal disease had been entered into a randomised comparison of oral atenolol or enalapril to prevent progressive decline in renal function. The dose was titrated to a goal diastolic blood pressure of 10 mm Hg below baseline and/or below 95 mm Hg. The mean (SE) age was 50 (1) years, and the group included 49 men. Their renal function had been monitored over 3-4 years. We have looked at their ACE genotype, which we assessed with PCR.\n\n\nFINDINGS\n27 patients had the II genotype, 37 were ID, and 17 were DD. 11 patients were lost to follow-up over 1-3 years. The decline of glomerular filtration rate over the years was significantly steeper in the DD group than in the ID and the II groups (p = 0.02; means -3.79, -1.37, and -1.12 mL/min per year, respectively). The DD patients treated with enalapril fared as equally a bad course as the DD patients treated with atenolol. Neither drug lowered the degree of proteinuria in the DD group.\n\n\nINTERPRETATION\nOur data show that patients with the DD genotype are resistant to commonly advocated renoprotective therapy.",
"title": ""
},
{
"docid": "8eb84b8d29c8f9b71c92696508c9c580",
"text": "We introduce a novel in-ear sensor which satisfies key design requirements for wearable electroencephalography (EEG)-it is discreet, unobtrusive, and capable of capturing high-quality brain activity from the ear canal. Unlike our initial designs, which utilize custom earpieces and require a costly and time-consuming manufacturing process, we here introduce the generic earpieces to make ear-EEG suitable for immediate and widespread use. Our approach represents a departure from silicone earmoulds to provide a sensor based on a viscoelastic substrate and conductive cloth electrodes, both of which are shown to possess a number of desirable mechanical and electrical properties. Owing to its viscoelastic nature, such an earpiece exhibits good conformance to the shape of the ear canal, thus providing stable electrode-skin interface, while cloth electrodes require only saline solution to establish low impedance contact. The analysis highlights the distinguishing advantages compared with the current state-of-the-art in ear-EEG. We demonstrate that such a device can be readily used for the measurement of various EEG responses.",
"title": ""
},
{
"docid": "70e88fe5fc43e0815a1efa05e17f7277",
"text": "Smoke detection is a crucial task in many video surveillance applications and could have a great impact to raise the level of safety of urban areas. Many commercial smoke detection sensors exist but most of them cannot be applied in open space or outdoor scenarios. With this aim, the paper presents a smoke detection system that uses a common CCD camera sensor to detect smoke in images and trigger alarms. First, a proper background model is proposed to reliably extract smoke regions and avoid over-segmentation and false positives in outdoor scenarios where many distractors are present, such as moving trees or light reflexes. A novel Bayesian approach is adopted to detect smoke regions in the scene analyzing image energy by means of the Wavelet Transform coefficients and Color Information. A statistical model of image energy is built, using a temporal Gaussian Mixture, to analyze the energy decay that typically occurs when smoke covers the scene then the detection is strengthen evaluating the color blending between a reference smoke color and the input frame. The proposed system is capable of detecting rapidly smoke events both in night and in day conditions with a reduced number of false alarms hence is particularly suitable for monitoring large outdoor scenarios where common sensors would fail. An extensive experimental campaign both on recorded videos and live cameras evaluates the efficacy and efficiency of the system in many real world scenarios, such as outdoor storages and forests.",
"title": ""
},
{
"docid": "e5380801d69c3acf7bfe36e868b1dadb",
"text": "Skin-mountable chemical sensors using flexible chemically sensitive nanomaterials are of great interest for electronic skin (e-skin) application. To build these sensors, the emerging atomically thin two-dimensional (2D) layered semiconductors could be a good material candidate. Herein, we show that a large-area WS2 film synthesized by sulfurization of a tungsten film exhibits high humidity sensing performance both in natural flat and high mechanical flexible states (bending curvature down to 5 mm). The conductivity of as-synthesized WS2 increases sensitively over a wide relative humidity range (up to 90%) with fast response and recovery times in a few seconds. By using graphene as electrodes and thin polydimethylsiloxane (PDMS) as substrate, a transparent, flexible, and stretchable humidity sensor was fabricated. This senor can be well laminated onto skin and shows stable water moisture sensing behaviors in the undeformed relaxed state as well as under compressive and tensile loadings. Furthermore, its high sensing performance enables real-time monitoring of human breath, indicating a potential mask-free breath monitoring for healthcare application. We believe that such a skin-activity compatible WS2 humidity sensor may shed light on developing low power consumption wearable chemical sensors based on 2D semiconductors.",
"title": ""
},
{
"docid": "de119196672efda310f457b15f0b6e63",
"text": "Agile processes focus on facilitating early and fast production of working code, and are based on software development process models that support iterative, incremental development of software. Although agile methods have existed for a number of years now, answers to questions concerning the suitability of agile processes to particular software development environments are still often based on anecdotal accounts of experiences. An appreciation of the (often unstated) assumptions underlying agile processes can lead to a better understanding of the applicability of agile processes to particular situations. Agile processes are less likely to be applicable in situations in which core assumptions do not hold. This paper examines the principles and advocated practices of agile processes to identify underlying assumptions. The paper also identifies limitations that may arise from these assumptions and outlines how the limitations can be addresses by incorporating other software development techniques and practices into agile development environments.",
"title": ""
},
{
"docid": "1e607279360f3318f3f020e19e1bd86f",
"text": "Only one late period is allowed for this homework (11:59pm 2/23). Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Warning: This problem requires substantial computing time (it can be a few hours on some systems). Don't start it at the last minute. 7 7 7 The goal of this problem is to implement the Stochastic Gradient Descent algorithm to build a Latent Factor Recommendation system. We can use it to recommend movies to users.",
"title": ""
},
{
"docid": "488c7437a32daec6fbad12e07bb31f4c",
"text": "Studying characters plays a vital role in computationally representing and interpreting narratives. Unlike previous work, which has focused on inferring character roles, we focus on the problem of modeling their relationships. Rather than assuming a fixed relationship for a character pair, we hypothesize that relationships temporally evolve with the progress of the narrative, and formulate the problem of relationship modeling as a structured prediction problem. We propose a semisupervised framework to learn relationship sequences from fully as well as partially labeled data. We present a Markovian model capable of accumulating historical beliefs about the relationship and status changes. We use a set of rich linguistic and semantically motivated features that incorporate world knowledge to investigate the textual content of narrative. We empirically demonstrate that such a framework outperforms competitive baselines.",
"title": ""
},
{
"docid": "22d9a5bbb35890bfbe4fb64e289d102b",
"text": "A secure slip knot is very important in the field of arthroscopy. The new giant knot, developed by the first author, has the properties of being a one-way self-locking slip knot, which is secured without additional half hitches and can tolerate higher forces to be untied.",
"title": ""
},
{
"docid": "af740d54f1b6d168500934a089a1adc8",
"text": "Abstract In this paper, unsteady laminar flow around a circular cylinder has been studied. Navier-stokes equations solved by Simple C algorithm exerted to specified structured and unstructured grids. Equations solved by staggered method and discretization of those done by upwind method. The mean drag coefficient, lift coefficient and strouhal number are compared from current work at three different Reynolds numbers with experimental and numerical values.",
"title": ""
},
{
"docid": "8be1a6ae2328bbcc2d0265df167ecbb3",
"text": "It is increasingly necessary for researchers in all fields to write computer code, and in order to reproduce research results, it is important that this code is published. We present Jupyter notebooks, a document format for publishing code, results and explanations in a form that is both readable and executable. We discuss various tools and use cases for notebook documents.",
"title": ""
}
] |
scidocsrr
|
2fb09a232707bc66402adb6041acc012
|
Towards Scalable and Private Industrial Blockchains
|
[
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
},
{
"docid": "668953b5f6fbfc440bb6f3a91ee7d06b",
"text": "Proof of Work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and/or network parameters.\n In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions.",
"title": ""
}
] |
[
{
"docid": "cd73d3acb274d179b52ec6930f6f26bd",
"text": "We present the design and implementation of new inexact Newton type Bundle Adjustment algorithms that exploit hardware parallelism for efficiently solving large scale 3D scene reconstruction problems. We explore the use of multicore CPU as well as multicore GPUs for this purpose. We show that overcoming the severe memory and bandwidth limitations of current generation GPUs not only leads to more space efficient algorithms, but also to surprising savings in runtime. Our CPU based system is up to ten times and our GPU based system is up to thirty times faster than the current state of the art methods [1], while maintaining comparable convergence behavior. The code and additional results are available at http://grail.cs. washington.edu/projects/mcba.",
"title": ""
},
{
"docid": "52ebff6e9509b27185f9f12bc65d86f8",
"text": "We address the problem of simplifying Portuguese texts at the sentence level by treating it as a \"translation task\". We use the Statistical Machine Translation (SMT) framework to learn how to translate from complex to simplified sentences. Given a parallel corpus of original and simplified texts, aligned at the sentence level, we train a standard SMT system and evaluate the \"translations\" produced using both standard SMT metrics like BLEU and manual inspection. Results are promising according to both evaluations, showing that while the model is usually overcautious in producing simplifications, the overall quality of the sentences is not degraded and certain types of simplification operations, mainly lexical, are appropriately captured.",
"title": ""
},
{
"docid": "bbfcce9ec7294cb542195cca1dfbcc6c",
"text": "We propose a new algorithm, DASSO, for fitting the entire coef fici nt path of the Dantzig selector with a similar computational cost to the LA RS algorithm that is used to compute the Lasso. DASSO efficiently constructs a piecewi s linear path through a sequential simplex-like algorithm, which is remarkably si milar to LARS. Comparison of the two algorithms sheds new light on the question of how th e Lasso and Dantzig selector are related. In addition, we provide theoretical c onditions on the design matrix, X, under which the Lasso and Dantzig selector coefficient esti mates will be identical for certain tuning parameters. As a consequence, in many instances, we are able to extend the powerful non-asymptotic bounds that have been de veloped for the Dantzig selector to the Lasso. Finally, through empirical studies o f imulated and real world data sets we show that in practice, when the bounds hold for th e Dantzig selector, they almost always also hold for the Lasso. Some key words : Dantzig selector; LARS; Lasso; DASSO",
"title": ""
},
{
"docid": "0b0e389556e7c132690d7f2a706664d1",
"text": "E-government challenges are well researched in literature and well known by governments. However, being aware of the challenges of e-government implementation is not sufficient, as challenges may interrelate and impact each other. Therefore, a systematic analysis of the challenges and their interrelationships contributes to providing a better understanding of how to tackle the challenges and how to develop sustainable solutions. This paper aims to investigate existing challenges of e-government and their interdependencies in Tanzania. The collection of e-government challenges in Tanzania is implemented through interviews, desk research and observations of actors in their job. In total, 32 challenges are identified. The subsequent PESTEL analysis studied interrelationships of challenges and identified 34 interrelationships. The analysis of the interrelationships informs policy decision makers of issues to focus on along the planning of successfully implementing the existing e-government strategy in Tanzania. The study also identified future research needs in evaluating the findings through quantitative analysis.",
"title": ""
},
{
"docid": "af973255ab5f85a5dfb8dd73c19891a0",
"text": "I use the example of the 2000 US Presidential election to show that political controversies with technical underpinnings are not resolved by technical means. Then, drawing from examples such as climate change, genetically modified foods, and nuclear waste disposal, I explore the idea that scientific inquiry is inherently and unavoidably subject to becoming politicized in environmental controversies. I discuss three reasons for this. First, science supplies contesting parties with their own bodies of relevant, legitimated facts about nature, chosen in part because they help make sense of, and are made sensible by, particular interests and normative frameworks. Second, competing disciplinary approaches to understanding the scientific bases of an environmental controversy may be causally tied to competing value-based political or ethical positions. The necessity of looking at nature through a variety of disciplinary lenses brings with it a variety of normative lenses, as well. Third, it follows from the foregoing that scientific uncertainty, which so often occupies a central place in environmental controversies, can be understood not as a lack of scientific understanding but as the lack of coherence among competing scientific understandings, amplified by the various political, cultural, and institutional contexts within which science is carried out. In light of these observations, I briefly explore the problem of why some types of political controversies become “scientized” and others do not, and conclude that the value bases of disputes underlying environmental controversies must be fully articulated and adjudicated through political means before science can play an effective role in resolving environmental problems. © 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ba10bfce4c5deabb663b5ca490c320c9",
"text": "OBJECTIVE\nAlthough the relationship between religious practice and health is well established, the relationship between spirituality and health is not as well studied. The objective of this study was to ascertain whether participation in the mindfulness-based stress reduction (MBSR) program was associated with increases in mindfulness and spirituality, and to examine the associations between mindfulness, spirituality, and medical and psychological symptoms.\n\n\nMETHODS\nForty-four participants in the University of Massachusetts Medical School's MBSR program were assessed preprogram and postprogram on trait (Mindful Attention and Awareness Scale) and state (Toronto Mindfulness Scale) mindfulness, spirituality (Functional Assessment of Chronic Illness Therapy--Spiritual Well-Being Scale), psychological distress, and reported medical symptoms. Participants also kept a log of daily home mindfulness practice. Mean changes in scores were computed, and relationships between changes in variables were examined using mixed-model linear regression.\n\n\nRESULTS\nThere were significant improvements in spirituality, state and trait mindfulness, psychological distress, and reported medical symptoms. Increases in both state and trait mindfulness were associated with increases in spirituality. Increases in trait mindfulness and spirituality were associated with decreases in psychological distress and reported medical symptoms. Changes in both trait and state mindfulness were independently associated with changes in spirituality, but only changes in trait mindfulness and spirituality were associated with reductions in psychological distress and reported medical symptoms. No association was found between outcomes and home mindfulness practice.\n\n\nCONCLUSIONS\nParticipation in the MBSR program appears to be associated with improvements in trait and state mindfulness, psychological distress, and medical symptoms. Improvements in trait mindfulness and spirituality appear, in turn, to be associated with improvements in psychological and medical symptoms.",
"title": ""
},
{
"docid": "012a194f9296a510f209e0cd33f2f3da",
"text": "Virtual reality is the use of interactive simulations to present users with opportunities to perform in virtual environments that appear, sound, and less frequently, feel similar to real-world objects and events. Interactive computer play refers to the use of a game where a child interacts and plays with virtual objects in a computer-generated environment. Because of their distinctive attributes that provide ecologically realistic and motivating opportunities for active learning, these technologies have been used in pediatric rehabilitation over the past 15 years. The ability of virtual reality to create opportunities for active repetitive motor/sensory practice adds to their potential for neuroplasticity and learning in individuals with neurologic disorders. The objectives of this article is to provide an overview of how virtual reality and gaming are used clinically, to present the results of several example studies that demonstrate their use in research, and to briefly remark on future developments.",
"title": ""
},
{
"docid": "b9838e512912f4bcaf3c224df3548d95",
"text": "In this paper, we develop a system for training human calligraphy skills. For such a development, the so-called dynamic font and augmented reality (AR) are employed. The dynamic font is used to generate a model character, in which the character are formed as the result of 3-dimensional motion of a virtual writing device on a virtual writing plane. Using the AR technology, we then produce a visual information consisting of not only static writing path but also dynamic writing process of model character. Such a visual information of model character is given some trainee through a head mounted display. The performance is demonstrated by some experimental studies.",
"title": ""
},
{
"docid": "e5304e89e53b05b26f144ae5b2859512",
"text": "This paper describes an agent based simulation used to model human actions in belief space, a high-dimensional subset of information space associated with opinions. Using insights from animal collective behavior, we are able to simulate and identify behavior patterns that are similar to nomadic, flocking and stampeding patterns of animal groups. These behaviors have analogous manifestations in human interaction, emerging as solitary explorers, the fashion-conscious, and echo chambers, whose members are only aware of each other. We demonstrate that a small portion of nomadic agents that widely traverse belief space can disrupt a larger population of stampeding agents. We then model the concept of Adversarial Herding, where trolls, adversaries or other bad actors can exploit properties of technologically mediated communication to artificially create self sustaining runaway polarization. We call this condition the Pishkin Effect as it recalls the large scale buffalo stampedes that could be created by native Americans hunters. We then discuss opportunities for system design that could leverage the ability to recognize these negative patterns, and discuss affordances that may disrupt the formation of natural and deliberate echo chambers.",
"title": ""
},
{
"docid": "2d7d20d578573dab8af8aff960010fea",
"text": "Two flavors of the recommendation problem are the explicit and the implicit feedback settings. In the explicit feedback case, users rate items and the user-item preference relationship can be modelled on the basis of the ratings. In the harder but more common implicit feedback case, the system has to infer user preferences from indirect information: presence or absence of events, such as a user viewed an item. One approach for handling implicit feedback is to minimize a ranking objective function instead of the conventional prediction mean squared error. The naive minimization of a ranking objective function is typically expensive. This difficulty is usually overcome by a trade-off: sacrificing the accuracy to some extent for computational efficiency by sampling the objective function. In this paper, we present a computationally effective approach for the direct minimization of a ranking objective function, without sampling. We demonstrate by experiments on the Y!Music and Netflix data sets that the proposed method outperforms other implicit feedback recommenders in many cases in terms of the ErrorRate, ARP and Recall evaluation metrics.",
"title": ""
},
{
"docid": "f379233f96e68bbbb01038dc16d54a4f",
"text": "CMOS under-voltage lockout (UVLO) circuit configured Schmitt trigger was fabricated and tested. The UVLO circuit consist of current source inverter, Schmitt trigger, delay circuit, and waveform shaper of the output. The tested result shows that the UVLO has size of die was 136 μm × 85 μm, turn on and off voltage was at 1.85V and 1.70V, respectively. The fabrication process was Magna/Hynix CMOS 0.35 μm process and supply voltage was 3.3V. Power dissipation of the UVLO was 0.18mW.",
"title": ""
},
{
"docid": "66b9ad378e1444a6d5a1284a2a036296",
"text": "The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science.",
"title": ""
},
{
"docid": "ef84f7f53b60cf38972ff1eb04d0f6a5",
"text": "OBJECTIVE\nThe purpose of this prospective study was to evaluate the efficacy and safety of screw fixation without bone fusion for unstable thoracolumbar and lumbar burst fracture.\n\n\nMETHODS\nNine patients younger than 40 years underwent screw fixation without bone fusion, following postural reduction using a soft roll at the involved vertebra, in cases of burst fracture. Their motor power was intact in spite of severe canal compromise. The surgical procedure included postural reduction for 3 days and screw fixations at one level above, one level below and at the fractured level itself. The patients underwent removal of implants 12 months after the initial operation, due to possibility of implant failure. Imaging and clinical findings, including canal encroachment, vertebral height, clinical outcome, and complications were analyzed.\n\n\nRESULTS\nPrior to surgery, the mean pain score (visual analogue scale) was 8.2, which decreased to 2.2 at 12 months after screw fixation. None of the patients complained of worsening of pain during 6 months after implant removal. All patients were graded as having excellent or good outcomes at 6 months after implant removal. The proportion of canal compromise at the fractured level improved from 55% to 35% at 12 months after surgery. The mean preoperative vertebral height loss was 45.3%, which improved to 20.6% at 6 months after implant removal. There were no neurological deficits related to neural injury. The improved vertebral height and canal compromise were maintained at 6 months after implant removal.\n\n\nCONCLUSION\nShort segment pedicle screw fixation, including fractured level itself, without bone fusion following postural reduction can be an effective and safe operative technique in the management of selected young patients suffering from unstable burst fracture.",
"title": ""
},
{
"docid": "7f16ed65f6fd2bcff084d22f76740ff3",
"text": "The past few years have witnessed a growth in size and computational requirements for training and inference with neural networks. Currently, a common approach to address these requirements is to use a heterogeneous distributed environment with a mixture of hardware devices such as CPUs and GPUs. Importantly, the decision of placing parts of the neural models on devices is often made by human experts based on simple heuristics and intuitions. In this paper, we propose a method which learns to optimize device placement for TensorFlow computational graphs. Key to our method is the use of a sequence-tosequence model to predict which subsets of operations in a TensorFlow graph should run on which of the available devices. The execution time of the predicted placements is then used as the reward signal to optimize the parameters of the sequence-to-sequence model. Our main result is that on Inception-V3 for ImageNet classification, and on RNN LSTM, for language modeling and neural machine translation, our model finds non-trivial device placements that outperform hand-crafted heuristics and traditional algorithmic methods.",
"title": ""
},
{
"docid": "ba5f6d151fea9e8715991ac37448c43e",
"text": "In this paper we present an analysis of the effect of large scale video data augmentation for semantic segmentation in driving scenarios. Our work is motivated by a strong correlation between the high performance of most recent deep learning based methods and the availability of large volumes of ground truth labels. To generate additional labelled data, we make use of an occlusion-aware and uncertainty-enabled label propagation algorithm [8]. As a result we increase the availability of high-resolution labelled frames by a factor of 20, yielding in a 6.8% to 10.8% rise in average classification accuracy and/or IoU scores for several semantic segmentation networks. Our key contributions include: (a) augmented CityScapes and CamVid datasets providing 56.2K and 6.5K additional labelled frames of object classes respectively, (b) detailed empirical analysis of the effect of the use of augmented data as well as (c) extension of proposed framework to instance segmentation.",
"title": ""
},
{
"docid": "fef448324e17aeaa7bb0149369631103",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Python Photogrammetry Toolbox: A free solution for Three-Dimensional Documentation Pierre Moulon, Alessandro Bezzi",
"title": ""
},
{
"docid": "6aab23ee181e8db06cc4ca3f7f7367be",
"text": "In their original article, Ericsson, Krampe, and Tesch-Römer (1993) reviewed the evidence concerning the conditions of optimal learning and found that individualized practice with training tasks (selected by a supervising teacher) with a clear performance goal and immediate informative feedback was associated with marked improvement. We found that this type of deliberate practice was prevalent when advanced musicians practice alone and found its accumulated duration related to attained music performance. In contrast, Macnamara, Moreau, and Hambrick's (2016, this issue) main meta-analysis examines the use of the term deliberate practice to refer to a much broader and less defined concept including virtually any type of sport-specific activity, such as group activities, watching games on television, and even play and competitions. Summing up every hour of any type of practice during an individual's career implies that the impact of all types of practice activity on performance is equal-an assumption that I show is inconsistent with the evidence. Future research should collect objective measures of representative performance with a longitudinal description of all the changes in different aspects of the performance so that any proximal conditions of deliberate practice related to effective improvements can be identified and analyzed experimentally.",
"title": ""
},
{
"docid": "85ba8c2cb24fcd991f9f5193f92e736a",
"text": "Energy-efficient operation is a challenge for wireless sensor networks (WSNs). A common method employed for this purpose is duty-cycled operation, which extends battery lifetime yet incurs several types of energy wastes and challenges. A promising alternative to duty-cycled operation is the use of wake-up radio (WuR), where the main microcontroller unit (MCU) and transceiver, that is, the two most energy-consuming elements, are kept in energy-saving mode until a special signal from another node is received by an attached, secondary, ultra-low power receiver. Next, this so-called wake-up receiver generates an interrupt to activate the receiver node's MCU and, consequently, the main radio. This article presents a complete wake-up radio design that targets simplicity in design for the monetary cost and flexibility concerns, along with a good operation range and very low power consumption. Both the transmitter (WuTx) and the receiver (WuRx) designs are presented with the accompanying physical experiments for several design alternatives. Detailed analysis of the end system is provided in terms of both operational distance (more than 10 m) and current consumption (less than 1 μA). As a reference, a commercial WuR system is analyzed and compared to the presented system by expressing the trade-offs and advantages of both systems.",
"title": ""
},
{
"docid": "db04a402e0c7d93afdaf34c0d55ded9a",
"text": " Drowsiness and increased tendency to fall asleep during daytime is still a generally underestimated problem. An increased tendency to fall asleep limits the efficiency at work and substantially increases the risk of accidents. Reduced alertness is difficult to assess, particularly under real life settings. Most of the available measuring procedures are laboratory-oriented and their applicability under field conditions is limited; their validity and sensitivity are often a matter of controversy. The spontaneous eye blink is considered to be a suitable ocular indicator for fatigue diagnostics. To evaluate eye blink parameters as a drowsiness indicator, a contact-free method for the measurement of spontaneous eye blinks was developed. An infrared sensor clipped to an eyeglass frame records eyelid movements continuously. In a series of sessions with 60 healthy adult participants, the validity of spontaneous blink parameters was investigated. The subjective state was determined by means of questionnaires immediately before the recording of eye blinks. The results show that several parameters of the spontaneous eye blink can be used as indicators in fatigue diagnostics. The parameters blink duration and reopening time in particular change reliably with increasing drowsiness. Furthermore, the proportion of long closure duration blinks proves to be an informative parameter. The results demonstrate that the measurement of eye blink parameters provides reliable information about drowsiness/sleepiness, which may also be applied to the continuous monitoring of the tendency to fall asleep.",
"title": ""
},
{
"docid": "ba7f3478b72d5dc47e9894225d9decd1",
"text": "Identifying records that refer to the same entity is a fundamental step for data integration. Since it is prohibitively expensive to compare every pair of records, blocking techniques are typically employed to reduce the complexity of this task. These techniques partition records into blocks and limit the comparison to records co-occurring in a block. Generally, to deal with highly heterogeneous and noisy data (e.g. semi-structured data of the Web), these techniques rely on redundancy to reduce the chance of missing matches. Meta-blocking is the task of restructuring blocks generated by redundancy-based blocking techniques, removing superfluous comparisons. Existing meta-blocking approaches rely exclusively on schema-agnostic features. In this paper, we demonstrate how “loose” schema information (i.e., statistics collected directly from the data) can be exploited to enhance the quality of the blocks in a holistic loosely schema-aware (meta-)blocking approach that can be used to speed up your favorite Entity Resolution algorithm. We call it Blast (Blocking with Loosely-Aware Schema Techniques). We show how Blast can automatically extract this loose information by adopting a LSH-based step for efficiently scaling to large datasets. We experimentally demonstrate, on real-world datasets, how Blast outperforms the state-of-the-art unsupervised meta-blocking approaches, and, in many cases, also the supervised one.",
"title": ""
}
] |
scidocsrr
|
b6b2065a22d872b07a3f09b1e57fa3ee
|
How do People Evaluate Electronic Word-Of-Mouth? Informational and Normative Based Determinants of Perceived Credibility of Online Consumer Recommendations in China
|
[
{
"docid": "26a60d17d524425cfcfa92838ef8ea06",
"text": "This paper develops and tests a model of consumer trust in an electronic commerce vendor. Building consumer trust is a strategic imperative for web-based vendors because trust strongly influences consumer intentions to transact with unfamiliar vendors via the web. Trust allows consumers to overcome perceptions of risk and uncertainty, and to engage in the following three behaviors that are critical to the realization of a web-based vendor’s strategic objectives: following advice offered by the web vendor, sharing personal information with the vendor, and purchasing from the vendor’s web site. Trust in the vendor is defined as a multi-dimensional construct with two inter-related components—trusting beliefs (perceptions of the competence, benevolence, and integrity of the vendor), and trusting intentions—willingness to depend (that is, a decision to make oneself vulnerable to the vendor). Three factors are proposed for building consumer trust in the vendor: structural assurance (that is, consumer perceptions of the safety of the web environment), perceived web vendor reputation, and perceived web site quality. The model is tested in the context of a hypothetical web site offering legal advice. All three factors significantly influenced consumer trust in the web vendor. That is, these factors, especially web site quality and reputation, are powerful levers that vendors can use to build consumer trust, in order to overcome the negative perceptions people often have about the safety of the web environment. The study also demonstrates that perceived Internet risk negatively affects consumer intentions to transact with a web-based vendor. q 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "2a487ff4b9218900e9a0e480c23e4c25",
"text": "5.1 CONVENTIONAL ACTUATORS, SHAPE MEMORY ALLOYS, AND ELECTRORHEOLOGICAL FLUIDS ............................................................................................................................................................. 1 5.1.",
"title": ""
},
{
"docid": "c1918430cadc2bf8355f3fb8beef80f6",
"text": "This paper presents the research results of an ongoing technology transfer project carried out in cooperation between the University of Salerno and a small software company. The project is aimed at developing and transferring migration technology to the industrial partner. The partner should be enabled to migrate monolithic multi-user COBOL legacy systems to a multi-tier Web-based architecture. The assessment of the legacy systems of the partner company revealed that these systems had a very low level of decomposability with spaghetti-like code and embedded control flow and database accesses within the user interface descriptions. For this reason, it was decided to adopt an incremental migration strategy based on the reengineering of the user interface using Web technology, on the transformation of interactive legacy programs into batch programs, and the wrapping of the legacy programs. A middleware framework links the new Web-based user interface with the Wrapped Legacy System. An Eclipse plug-in, named MELIS (migration environment for legacy information systems), was also developed to support the migration process. Both the migration strategy and the tool have been applied to two essential subsystems of the most business critical legacy system of the partner company. Copyright © 2008 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "80a5eaec904b8412cebfe17e392e448a",
"text": "Distributional semantic models learn vector representations of words through the contexts they occur in. Although the choice of context (which often takes the form of a sliding window) has a direct influence on the resulting embeddings, the exact role of this model component is still not fully understood. This paper presents a systematic analysis of context windows based on a set of four distinct hyperparameters. We train continuous SkipGram models on two English-language corpora for various combinations of these hyper-parameters, and evaluate them on both lexical similarity and analogy tasks. Notable experimental results are the positive impact of cross-sentential contexts and the surprisingly good performance of right-context windows.",
"title": ""
},
{
"docid": "d91e433a23545cac171006c40c2c2006",
"text": "In this paper, we revisit the impact of skilled emigration on human capital accumulation using new panel data covering 147 countries on the period 1975-2000. We derive testable predictions from a stylized theoretical model and test them in dynamic regression models. Our empirical analysis predicts conditional convergence of human capital indicators. Our
ndings also reveal that skilled migration prospects foster human capital accumulation in low-income countries. In these countries, a net brain gain can be obtained if the skilled emigration rate is not too large (i.e. does not exceed 20 to 30 percent depending on other country characteristics). On the contrary, we
nd no evidence of a signi
cant incentive mechanism in middle-income and, unsuprisingly, in high-income countries. JEL Classi
cations: O15-O40-F22-F43 Keywords: human capital, convergence, brain drain We thank anonymous referees for their helpful comments. Suggestions from Barry Chiswick, Hubert Jayet, Joel Hellier and Fatemeh Shadman-Mehta were also appreciated. This article bene
ted from comments received at the SIUTE seminar (Lille, January 2006), the CReAM conference on Immigration: Impacts, Integration and Intergenerational Issues (London, March 2006), the Spring Meeting of Young Economists (Sevilla, May 2006), the XIV Villa Mondragone International Economic Seminar (Rome, July 2006) and the ESPE meeting (Chicago, 2007). The third author is grateful for the
nancial support from the Belgian French-speaking Communitys programme Action de recherches concertées (ARC 03/08 -302) and from the Belgian Federal Government (PAI grant P6/07 Economic Policy and Finance in the Global Equilibrium Analysis and Social Evaluation). The usual disclaimers apply. Corresponding author: Michel Beine (michel.beine@uni.lu), University of Luxembourg, 162a av. de la Faiencerie, L-1511 Luxembourg.",
"title": ""
},
{
"docid": "f58d69de4b5bcc4100a3bfe3426fa81f",
"text": "This study evaluated the factor structure of the Rosenberg Self-Esteem Scale (RSES) with a diverse sample of 1,248 European American, Latino, Armenian, and Iranian adolescents. Adolescents completed the 10-item RSES during school as part of a larger study on parental influences and academic outcomes. Findings suggested that method effects in the RSES are more strongly associated with negatively worded items across three diverse groups but also more pronounced among ethnic minority adolescents. Findings also suggested that accounting for method effects is necessary to avoid biased conclusions regarding cultural differences in selfesteem and how predictors are related to the RSES. Moreover, the two RSES factors (positive self-esteem and negative self-esteem) were differentially predicted by parenting behaviors and academic motivation. Substantive and methodological implications of these findings for crosscultural research on adolescent self-esteem are discussed.",
"title": ""
},
{
"docid": "56a35139eefd215fe83811281e4e2279",
"text": "Querying graph data is a fundamental problem that witnesses an increasing interest especially for massive graph databases which come as a promising alternative to relational databases for big data modeling. In this paper, we study the problem of subgraph isomorphism search which consists to enumerate the embedding of a query graph in a data graph. The most known solutions of this NPcomplete problem are backtracking-based and result in a high computational cost when we deal with massive graph databases. We address this problem and its challenges via graph compression with modular decomposition. In our approach, subgraph isomorphism search is performed on compressed graphs without decompressing them yielding substantial reduction of the search space and consequently a significant saving in processing time as well as in storage space for the graphs. We evaluated our algorithms on nine real-word datasets. The experimental results show that our approach is efficient and scalable. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ddae24f2ec721524968b4c527e34ff22",
"text": "Labeled text classification datasets are typically only available in a few select languages. In order to train a model for e.g news categorization in a language Lt without a suitable text classification dataset there are two options. The first option is to create a new labeled dataset by hand, and the second option is to transfer label information from an existing labeled dataset in a source language Ls to the target language Lt. In this paper we propose a method for sharing label information across languages by means of a language independent text encoder. The encoder will give almost identical representations to multilingual versions of the same text. This means that labeled data in one language can be used to train a classifier that works for the rest of the languages. The encoder is trained independently of any concrete classification task and can therefore subsequently be used for any classification task. We show that it is possible to obtain good performance even in the case where only a comparable corpus of texts is available.",
"title": ""
},
{
"docid": "a23fd89da025d456f9fe3e8a47595c6a",
"text": "Mobile devices are especially vulnerable nowadays to malware attacks, thanks to the current trend of increased app downloads. Despite the significant security and privacy concerns it received, effective malware detection (MD) remains a significant challenge. This paper tackles this challenge by introducing a streaminglized machine learning-based MD framework, StormDroid: (i) The core of StormDroid is based on machine learning, enhanced with a novel combination of contributed features that we observed over a fairly large collection of data set; and (ii) we streaminglize the whole MD process to support large-scale analysis, yielding an efficient and scalable MD technique that observes app behaviors statically and dynamically. Evaluated on roughly 8,000 applications, our combination of contributed features improves MD accuracy by almost 10% compared with state-of-the-art antivirus systems; in parallel our streaminglized process, StormDroid, further improves efficiency rate by approximately three times than a single thread.",
"title": ""
},
{
"docid": "6af7479d44717a58216dfd986f7f56e5",
"text": "Mobile payment systems can be divided into five categories including mobile payment at the POS, mobile payment as the POS, mobile payment platform, independent mobile payment system, and direct carrier billing. Although mobile payment has gained its popularity in many regions due to its convenience, it also faces many threats and security challenges. In this paper, we present a mobile payment processing model and introduce each type of mobile payment systems. We summarize the security services desired in mobile payment systems and also the security mechanisms which are currently in place. We further identify and discuss three security threats, i.e., malware, SSL/TLS vulnerabilities, and data breaches, and four security challenges, i.e., malware detection, multi-factor authentication, data breach prevention, and fraud detection and prevention, in mobile payment systems.",
"title": ""
},
{
"docid": "868b55cc5b83ea6997000aa6aab84128",
"text": "Job boards and professional social networks heavily use recommender systems in order to better support users in exploring job advertisements. Detecting the similarity between job advertisements is important for job recommendation systems as it allows, for example, the application of item-to-item based recommendations. In this work, we research the usage of dense vector representations to enhance a large-scale job recommendation system and to rank German job advertisements regarding their similarity. We follow a two-folded evaluation scheme: (1) we exploit historic user interactions to automatically create a dataset of similar jobs that enables an offline evaluation. (2) In addition, we conduct an online A/B test and evaluate the best performing method on our platform reaching more than 1 million users. We achieve the best results by combining job titles with full-text job descriptions. In particular, this method builds dense document representation using words of the titles to weigh the importance of words of the full-text description. In the online evaluation, this approach allows us to increase the click-through rate on job recommendations for active users by 8.0%.",
"title": ""
},
{
"docid": "f0c5f3cce1a0538e3c177ef00eab0b75",
"text": "Clickstream data are defined as the electronic record of Internet usage collected by Web servers or third-party services. The authors discuss the nature of clickstream data, noting key strengths and limitations of these data for research in marketing. The paper reviews major developments from the analysis of these data, covering advances in understanding (1) browsing and site usage behavior on the Internet, (2) the Internet’s role and efficacy as a new medium for advertising and persuasion, and (3) shopping behavior on the Internet (i.e., electronic commerce). The authors outline opportunities for new research and highlight several emerging areas likely to grow in future importance. Inherent limitations of clickstream data for understanding and predicting the behavior of Internet users or researching marketing phenomena are also discussed.",
"title": ""
},
{
"docid": "acde8d57fbfb14a21db4f3cdf1e1e10c",
"text": "We introduce a novel proximity-coupled stacked patch antenna to cover all three GPS bands (L1, L2 and L5 bands). The proposed antenna has an aperture size of 1.2\" times 1.2\" (lambda/8 at the L5 band) and has an RHCP gain greater than 2 dBi over all three GPS bands. Design challenges, corresponding numerical simulations and measurements are presented",
"title": ""
},
{
"docid": "d9a0bafe145879f67c57b1cfdab52a50",
"text": "ion to yield the following: (if (zero?#(struct:sig2 . . . )) (add1seconds)",
"title": ""
},
{
"docid": "b148cfa9a0c03c6ca0af7aa8e007d39b",
"text": "Feedforward deep neural networks (DNNs), artificial neural networks with multiple hidden layers, have recently demonstrated a record-breaking performance in multiple areas of applications in computer vision and speech processing. Following the success, DNNs have been applied to neuroimaging modalities including functional/structural magnetic resonance imaging (MRI) and positron-emission tomography data. However, no study has explicitly applied DNNs to 3D whole-brain fMRI volumes and thereby extracted hidden volumetric representations of fMRI that are discriminative for a task performed as the fMRI volume was acquired. Our study applied fully connected feedforward DNN to fMRI volumes collected in four sensorimotor tasks (i.e., left-hand clenching, right-hand clenching, auditory attention, and visual stimulus) undertaken by 12 healthy participants. Using a leave-one-subject-out cross-validation scheme, a restricted Boltzmann machine-based deep belief network was pretrained and used to initialize weights of the DNN. The pretrained DNN was fine-tuned while systematically controlling weight-sparsity levels across hidden layers. Optimal weight-sparsity levels were determined from a minimum validation error rate of fMRI volume classification. Minimum error rates (mean±standard deviation; %) of 6.9 (±3.8) were obtained from the three-layer DNN with the sparsest condition of weights across the three hidden layers. These error rates were even lower than the error rates from the single-layer network (9.4±4.6) and the two-layer network (7.4±4.1). The estimated DNN weights showed spatial patterns that are remarkably task-specific, particularly in the higher layers. The output values of the third hidden layer represented distinct patterns/codes of the 3D whole-brain fMRI volume and encoded the information of the tasks as evaluated from representational similarity analysis. Our reported findings show the ability of the DNN to classify a single fMRI volume based on the extraction of hidden representations of fMRI volumes associated with tasks across multiple hidden layers. Our study may be beneficial to the automatic classification/diagnosis of neuropsychiatric and neurological diseases and prediction of disease severity and recovery in (pre-) clinical settings using fMRI volumes without requiring an estimation of activation patterns or ad hoc statistical evaluation.",
"title": ""
},
{
"docid": "a8d7f6dcaf55ebd5ec580b2b4d104dd9",
"text": "In this paper we investigate social tags as a novel highvolume source of semantic metadata for music, using techniques from the fields of information retrieval and multivariate data analysis. We show that, despite the ad hoc and informal language of tagging, tags define a low-dimensional semantic space that is extremely well-behaved at the track level, in particular being highly organised by artist and musical genre. We introduce the use of Correspondence Analysis to visualise this semantic space, and show how it can be applied to create a browse-by-mood interface for a psychologically-motivated two-dimensional subspace rep resenting musical emotion.",
"title": ""
},
{
"docid": "7ef2f4a771aa0d1724127c97aa21e1ea",
"text": "This paper demonstrates the efficient use of Internet of Things for the traditional agriculture. It shows the use of Arduino and ESP8266 based monitored and controlled smart irrigation systems, which is also cost-effective and simple. It is beneficial for farmers to irrigate there land conveniently by the application of automatic irrigation system. This smart irrigation system has pH sensor, water flow sensor, temperature sensor and soil moisture sensor that measure respectively and based on these sensors arduino microcontroller drives the servo motor and pump. Arduino received the information and transmitted with ESP8266 Wi-Fi module wirelessly to the website through internet. This transmitted information is monitor and control by using IOT. This enables the remote control mechanism through a secure internet web connection to the user. A website has been prepared which present the actual time values and reference values of various factors needed by crops. Users can control water pumps and sprinklers through the website and keep an eye on the reference values which will help the farmer increase production with quality crops.",
"title": ""
},
{
"docid": "6e9432d2669ae81a350814df94f9edc3",
"text": "In parallel with the meteoric rise of mobile software, we are witnessing an alarming escalation in the number and sophistication of the security threats targeted at mobile platforms, particularly Android, as the dominant platform. While existing research has made significant progress towards detection and mitigation of Android security, gaps and challenges remain. This paper contributes a comprehensive taxonomy to classify and characterize the state-of-the-art research in this area. We have carefully followed the systematic literature review process, and analyzed the results of more than 100 research papers, resulting in the most comprehensive and elaborate investigation of the literature in this area of research. The systematic analysis of the research literature has revealed patterns, trends, and gaps in the existing literature, and underlined key challenges and opportunities that will shape the focus of future research efforts.",
"title": ""
},
{
"docid": "a31287791b12f55adebacbb93a03c8bc",
"text": "Emotional adaptation increases pro-social behavior of humans towards robotic interaction partners. Social cues are an important factor in this context. This work investigates, if emotional adaptation still works under absence of human-like facial Action Units. A human-robot dialog scenario is chosen using NAO pretending to work for a supermarket and involving humans providing object names to the robot for training purposes. In a user study, two conditions are implemented with or without explicit emotional adaptation of NAO to the human user in a between-subjects design. Evaluations of user experience and acceptance are conducted based on evaluated measures of human-robot interaction (HRI). The results of the user study reveal a significant increase of helpfulness (number of named objects), anthropomorphism, and empathy in the explicit emotional adaptation condition even without social cues of facial Action Units, but only in case of prior robot contact of the test persons. Otherwise, an opposite effect is found. These findings suggest, that reduction of these social cues can be overcome by robot experience prior to the interaction task, e.g. realizable by an additional bonding phase, confirming the importance of such from previous work. Additionally, an interaction with academic background of the participants is found.",
"title": ""
},
{
"docid": "67f13c2b686593398320d8273d53852f",
"text": "Drug-drug interactions (DDIs) may cause serious side-effects that draw great attention from both academia and industry. Since some DDIs are mediated by unexpected drug-human protein interactions, it is reasonable to analyze the chemical-protein interactome (CPI) profiles of the drugs to predict their DDIs. Here we introduce the DDI-CPI server, which can make real-time DDI predictions based only on molecular structure. When the user submits a molecule, the server will dock user's molecule across 611 human proteins, generating a CPI profile that can be used as a feature vector for the pre-constructed prediction model. It can suggest potential DDIs between the user's molecule and our library of 2515 drug molecules. In cross-validation and independent validation, the server achieved an AUC greater than 0.85. Additionally, by investigating the CPI profiles of predicted DDI, users can explore the PK/PD proteins that might be involved in a particular DDI. A 3D visualization of the drug-protein interaction will be provided as well. The DDI-CPI is freely accessible at http://cpi.bio-x.cn/ddi/.",
"title": ""
},
{
"docid": "76034cd981a64059f749338a2107e446",
"text": "We examine how financial assurance structures and the clearly defined financial transaction at the core of monetized network hospitality reduce uncertainty for Airbnb hosts and guests. We apply the principles of social exchange and intrinsic and extrinsic motivation to a qualitative study of Airbnb hosts to 1) describe activities that are facilitated by the peer-to-peer exchange platform and 2) how the assurance of the initial financial exchange facilitates additional social exchanges between hosts and guests. The study illustrates that the financial benefits of hosting do not necessarily crowd out intrinsic motivations for hosting but instead strengthen them and even act as a gateway to further social exchange and interpersonal interaction. We describe the assurance structures in networked peer-to-peer exchange, and explain how such assurances can reconcile contention between extrinsic and intrinsic motivations. We conclude with implications for design and future research.",
"title": ""
}
] |
scidocsrr
|
df395f0c0bfbd028f8e0f02f2777aac0
|
Hatman: Intra-cloud Trust Management for Hadoop
|
[
{
"docid": "0a97c254e5218637235a7e23597f572b",
"text": "We investigate the design of a reputation system for decentralized unstructured P2P networks like Gnutella. Having reliable reputation information about peers can form the basis of an incentive system and can guide peers in their decision making (e.g., who to download a file from). The reputation system uses objective criteria to track each peer's contribution in the system and allows peers to store their reputations locally. Reputation are computed using either of the two schemes, debit-credit reputation computation (DCRC) and credit-only reputation computation (CORC). Using a reputation computation agent (RCA), we design a public key based mechanism that periodically updates the peer reputations in a secure, light-weight, and partially distributed manner. We evaluate using simulations the performance tradeoffs inherent in the design of our system.",
"title": ""
},
{
"docid": "56dabbcf36d734211acc0b4a53f23255",
"text": "Cloud computing is a way to increase the capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. It extends Information Technology’s (IT) existing capabilities. In the last few years, cloud computing has grown from being a promising business concept to one of the fast growing segments of the IT industry. But as more and more information on individuals and companies are placed in the cloud, concerns are beginning to grow about just how safe an environment it is. Despite of all the hype surrounding the cloud, enterprise customers are still reluctant to deploy their business in the cloud. Security is one of the major issues which reduces the growth of cloud computing and complications with data privacy and data protection continue to plague the market. The advent of an advanced model should not negotiate with the required functionalities and capabilities present in the current model. A new model targeting at improving features of an existing model must not risk or threaten other important features of the current model. The architecture of cloud poses such a threat to the security of the existing technologies when deployed in a cloud environment. Cloud service users need to be vigilant in understanding the risks of data breaches in this new environment. In this paper, a survey of the different security risks that pose a threat to the cloud is presented. This paper is a survey more specific to the different security issues that has emanated due to the nature of the service delivery models of a cloud computing system. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b42e92aba32ff037362ecc40b816d063",
"text": "In this paper we discuss security issues for cloud computing including storage security, data security, and network security and secure virtualization. Then we select some topics and describe them in more detail. In particular, we discuss a scheme for secure third party publications of documents in a cloud. Next we discuss secure federated query processing with map Reduce and Hadoop. Next we discuss the use of secure coprocessors for cloud computing. Third we discuss XACML implementation for Hadoop. We believe that building trusted applications from untrusted components will be a major aspect of secure cloud computing.",
"title": ""
}
] |
[
{
"docid": "fc26f9bcbd28125607c90e15c3069cab",
"text": "Topological data analysis (TDA) is an emerging mathematical concept for characterizing shapes in complex data. In TDA, persistence diagrams are widely recognized as a useful descriptor of data, and can distinguish robust and noisy topological properties. This paper proposes a kernel method on persistence diagrams to develop a statistical framework in TDA. The proposed kernel satisfies the stability property and provides explicit control on the effect of persistence. Furthermore, the method allows a fast approximation technique. The method is applied into practical data on proteins and oxide glasses, and the results show the advantage of our method compared to other relevant methods on persistence diagrams.",
"title": ""
},
{
"docid": "e7aff52a045e6ec0f3f40e7c2f023f72",
"text": "Autism is a developmental condition, characterized by difficulties of social interaction and communication, as well as restricted interests and repetitive behaviors. Although several important conceptions have shed light on specific facets, there is still no consensus about a universal yet specific theory in terms of its underlying mechanisms. While some theories have exclusively focused on sensory aspects, others have emphasized social difficulties. However, sensory and social processes in autism might be interconnected to a higher degree than what has been traditionally thought. We propose that a mismatch in sensory abilities across individuals can lead to difficulties on a social, i.e. interpersonal level and vice versa. In this article, we, therefore, selectively review evidence indicating an interrelationship between perceptual and social difficulties in autism. Additionally, we link this body of research with studies, which investigate the mechanisms of action control in social contexts. By doing so, we highlight that autistic traits are also crucially related to differences in integration, anticipation and automatic responding to social cues, rather than a mere inability to register and learn from social cues. Importantly, such differences may only manifest themselves in sufficiently complex situations, such as real-life social interactions, where such processes are inextricably linked.",
"title": ""
},
{
"docid": "8d45954f6c038910586d55e9ca3ba924",
"text": "IAA produced by bacteria of the genus Azospirillum spp. can promote plant growth by stimulating root formation. Native Azospirillum spp., isolated from Irannian soils had been evaluated this ability in both qualitative and quantitative methods and registered the effects of superior ones on morphological, physiological and root growth of wheat. The roots of wheat seedling responded positively to the several bacteria inoculations by an increase in root length, dry weight and by the lateral root hairs.",
"title": ""
},
{
"docid": "a1d300bd5ac779e1b21a7ed20b3b01ad",
"text": "a r t i c l e i n f o Keywords: Luxury brands Perceived social media marketing (SMM) activities Value equity Relationship equity Brand equity Customer equity Purchase intention In light of a growing interest in the use of social media marketing (SMM) among luxury fashion brands, this study set out to identify attributes of SMM activities and examine the relationships among those perceived activities, value equity, relationship equity, brand equity, customer equity, and purchase intention through a structural equation model. Five constructs of perceived SSM activities of luxury fashion brands are entertainment , interaction, trendiness, customization, and word of mouth. Their effects on value equity, relationship equity, and brand equity are significantly positive. For the relationship between customer equity drivers and customer equity, brand equity has significant negative effect on customer equity while value equity and relationship equity show no significant effect. As for purchase intention, value equity and relationship equity had significant positive effects, while relationship equity had no significant influence. Finally, the relationship between purchase intention and customer equity has significance. The findings of this study can enable luxury brands to forecast the future purchasing behavior of their customers more accurately and provide a guide to managing their assets and marketing activities as well. The luxury market has attained maturity, along with the gradual expansion of the scope of its market and a rapid growth in the number of customers. Luxury market is a high value-added industry basing on high brand assets. Due to the increased demand for luxury in emerging markets such as China, India, and the Middle East, opportunities abound to expand the business more than ever. In the past, luxury fashion brands could rely on strong brand assets and secure regular customers. However, the recent entrance of numerous fashion brands into the luxury market, followed by heated competition, signals unforeseen changes in the market. A decrease in sales related to a global economic downturn drives luxury businesses to change. Now they can no longer depend solely on their brand symbol but must focus on brand legacy, quality, esthetic value, and trustworthy customer relationships in order to succeed. A key element to luxury industry becomes providing values to customers in every way possible. As a means to constitute customer assets through effective communication with consumers, luxury brands have tilted their eyes toward social media. Marketing communication using social media such as Twitter, Facebook, and …",
"title": ""
},
{
"docid": "ad1a5bf472c819de460b610fe5a910f6",
"text": "Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies between the standards developing organizations working with SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is the congestion minimization, where techniques such as traffic splitting among multiple paths or advanced reservation systems are used. In such a scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the open networking foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the interface type in which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture has been selected to measure the impact of SDN on TE because it is the most novel TE architecture until the date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways.",
"title": ""
},
{
"docid": "1969bf5a07349cc5a9b498e0437e41fe",
"text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.",
"title": ""
},
{
"docid": "dc9abfd745d4267a5fcd66ce1d977acb",
"text": "Advances in information technology and its widespread growth in several areas of business, engineering, medical, and scientific studies are resulting in information/data explosion. Knowledge discovery and decision-making from such rapidly growing voluminous data are a challenging task in terms of data organization and processing, which is an emerging trend known as big data computing, a new paradigm that combines large-scale compute, new data-intensive techniques, and mathematical models to build data analytics. Big data computing demands a huge storage and computing for data curation and processing that could be delivered from on-premise or clouds infrastructures. This paper discusses the evolution of big data computing, differences between traditional data warehousing and big data, taxonomy of big data computing and underpinning technologies, integrated platform of big data and clouds known as big data clouds, layered architecture and components of big data cloud, and finally open-technical challenges and future directions. Copyright © 2015 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "7c7adec92afb1fc3137de500d00c8c89",
"text": "Automatic service discovery is essential to realizing the full potential of the Internet of Things (IoT). While discovery protocols like Multicast DNS, Apple AirDrop, and Bluetooth Low Energy have gained widespread adoption across both IoT and mobile devices, most of these protocols do not offer any form of privacy control for the service, and often leak sensitive information such as service type, device hostname, device owner’s identity, and more in the clear. To address the need for better privacy in both the IoT and the mobile landscape, we develop two protocols for private service discovery and private mutual authentication. Our protocols provide private and authentic service advertisements, zero round-trip (0-RTT) mutual authentication, and are provably secure in the Canetti-Krawczyk key-exchange model. In contrast to alternatives, our protocols are lightweight and require minimal modification to existing key-exchange protocols. We integrate our protocols into an existing open-source distributed applications framework, and provide benchmarks on multiple hardware platforms: Intel Edisons, Raspberry Pis, smartphones, laptops, and desktops. Finally, we discuss some privacy limitations of the Apple AirDrop protocol (a peer-to-peer file sharing mechanism) and show how to improve the privacy of Apple AirDrop using our private mutual authentication protocol.",
"title": ""
},
{
"docid": "3db4ed6fb68bd1c6249e747fdb8067db",
"text": "National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because of almost overwhelming difficulties in identifying the true author of each publication. We will address this problem by presenting a heuristic approach to author name disambiguation in bibliometric datasets for large-scale research assessments. The application proposed concerns the Italian university system, consisting of 80 universities and a research staff of over 60,000 scientists. The key advantage of the proposed approach is the ease of implementation. The algorithms are of practical application and have considerably better scalability and expandability properties than state-of-the-art unsupervised approaches. Moreover, the performance in terms of precision and recall, which can be further improved, seems thoroughly adequate for the typical needs of large-scale bibliometric research assessments.",
"title": ""
},
{
"docid": "cb859c62d1845828f05755338434eed4",
"text": "Customers and stakeholders have substantial investments in, and are comfortable with the performance, security and stability of, industry-standard platforms like the JVM and CLR. While Java and C# developers on those platforms may envy the succinctness, flexibility and productivity of dynamic languages, they have concerns about running on customer-approved infrastructure, access to their existing code base and libraries, and performance. In addition, they face ongoing problems dealing with concurrency using native threads and locking. Clojure is an effort in pragmatic dynamic language design in this context. It endeavors to be a general-purpose language suitable in those areas where Java is suitable. It reflects the reality that, for the concurrent programming future, pervasive, unmoderated mutation simply has to go. Clojure meets its goals by: embracing an industry-standard, open platform - the JVM; modernizing a venerable language - Lisp; fostering functional programming with immutable persistent data structures; and providing built-in concurrency support via software transactional memory and asynchronous agents. The result is robust, practical, and fast. This talk will focus on the motivations, mechanisms and experiences of the implementation of Clojure.",
"title": ""
},
{
"docid": "cf90703045e958c48282d758f84f2568",
"text": "One expectation about the future Internet is the participation of billions of sensor nodes, integrating the physical with the digital world. This Internet of Things can offer new and enhanced services and applications based on knowledge about the environment and the entities within. Millions of micro-providers could come into existence, forming a highly fragmented market place with new business opportunities to offer commercial services. In the related field of Internet and Telecommunication services, the design of markets and pricing schemes has been a vital research area in itself. We discuss how these findings can be transferred to the Internet of Things. Both the appropriate market structure and corresponding pricing schemes need to be well understood to enable a commercial success of sensor-based services. We show some steps that an evolutionary establishment of this market might have to take.",
"title": ""
},
{
"docid": "c1e12a4feec78d480c8f0c02cdb9cb7d",
"text": "Although the Parthenon has stood on the Athenian Acropolis for nearly 2,500 years, its sculptural decorations have been scattered to museums around the world. Many of its sculptures have been damaged or lost. Fortunately, most of the decoration survives through drawings, descriptions, and casts. A component of our Parthenon Project has been to assemble digital models of the sculptures and virtually reunite them with the Parthenon. This sketch details our effort to digitally record the Parthenon sculpture collection in the Basel Skulpturhalle museum, which exhibits plaster casts of almost all of the existing pediments, metopes, and frieze. Our techniques have been designed to work as quickly as possible and at low cost.",
"title": ""
},
{
"docid": "f70447a47fb31fc94d6b57ca3ef57ad3",
"text": "BACKGROUND\nOn Aug 14, 2014, the US Food and Drug Administration approved the antiangiogenesis drug bevacizumab for women with advanced cervical cancer on the basis of improved overall survival (OS) after the second interim analysis (in 2012) of 271 deaths in the Gynecologic Oncology Group (GOG) 240 trial. In this study, we report the prespecified final analysis of the primary objectives, OS and adverse events.\n\n\nMETHODS\nIn this randomised, controlled, open-label, phase 3 trial, we recruited patients with metastatic, persistent, or recurrent cervical carcinoma from 81 centres in the USA, Canada, and Spain. Inclusion criteria included a GOG performance status score of 0 or 1; adequate renal, hepatic, and bone marrow function; adequately anticoagulated thromboembolism; a urine protein to creatinine ratio of less than 1; and measurable disease. Patients who had received chemotherapy for recurrence and those with non-healing wounds or active bleeding conditions were ineligible. We randomly allocated patients 1:1:1:1 (blocking used; block size of four) to intravenous chemotherapy of either cisplatin (50 mg/m2 on day 1 or 2) plus paclitaxel (135 mg/m2 or 175 mg/m2 on day 1) or topotecan (0·75 mg/m2 on days 1-3) plus paclitaxel (175 mg/m2 on day 1) with or without intravenous bevacizumab (15 mg/kg on day 1) in 21 day cycles until disease progression, unacceptable toxic effects, voluntary withdrawal by the patient, or complete response. We stratified randomisation by GOG performance status (0 vs 1), previous radiosensitising platinum-based chemotherapy, and disease status (recurrent or persistent vs metastatic). We gave treatment open label. Primary outcomes were OS (analysed in the intention-to-treat population) and adverse events (analysed in all patients who received treatment and submitted adverse event information), assessed at the second interim and final analysis by the masked Data and Safety Monitoring Board. The cutoff for final analysis was 450 patients with 346 deaths. This trial is registered with ClinicalTrials.gov, number NCT00803062.\n\n\nFINDINGS\nBetween April 6, 2009, and Jan 3, 2012, we enrolled 452 patients (225 [50%] in the two chemotherapy-alone groups and 227 [50%] in the two chemotherapy plus bevacizumab groups). By March 7, 2014, 348 deaths had occurred, meeting the prespecified cutoff for final analysis. The chemotherapy plus bevacizumab groups continued to show significant improvement in OS compared with the chemotherapy-alone groups: 16·8 months in the chemotherapy plus bevacizumab groups versus 13·3 months in the chemotherapy-alone groups (hazard ratio 0·77 [95% CI 0·62-0·95]; p=0·007). Final OS among patients not receiving previous pelvic radiotherapy was 24·5 months versus 16·8 months (0·64 [0·37-1·10]; p=0·11). Postprogression OS was not significantly different between the chemotherapy plus bevacizumab groups (8·4 months) and chemotherapy-alone groups (7·1 months; 0·83 [0·66-1·05]; p=0·06). Fistula (any grade) occurred in 32 (15%) of 220 patients in the chemotherapy plus bevacizumab groups (all previously irradiated) versus three (1%) of 220 in the chemotherapy-alone groups (all previously irradiated). Grade 3 fistula developed in 13 (6%) versus one (<1%). No fistulas resulted in surgical emergencies, sepsis, or death.\n\n\nINTERPRETATION\nThe benefit conferred by incorporation of bevacizumab is sustained with extended follow-up as evidenced by the overall survival curves remaining separated. 
After progression while receiving bevacizumab, we did not observe a negative rebound effect (ie, shorter survival after bevacizumab is stopped than after chemotherapy alone is stopped). These findings represent proof-of-concept of the efficacy and tolerability of antiangiogenesis therapy in advanced cervical cancer.\n\n\nFUNDING\nNational Cancer Institute.",
"title": ""
},
{
"docid": "92e955705aa333923bb7b14af946fc2f",
"text": "This study examines the role of online daters’ physical attractiveness in their profile selfpresentation and, in particular, their use of deception. Sixty-nine online daters identified the deceptions in their online dating profiles and had their photograph taken in the lab. Independent judges rated the online daters’ physical attractiveness. Results show that the lower online daters’ attractiveness, the more likely they were to enhance their profile photographs and lie about their physical descriptors (height, weight, age). The association between attractiveness and deception did not extend to profile elements unrelated to their physical appearance (e.g., income, occupation), suggesting that their deceptions were limited and strategic. Results are discussed in terms of (a) evolutionary theories about the importance of physical attractiveness in the dating realm and (b) the technological affordances that allow online daters to engage in selective self-presentation.",
"title": ""
},
{
"docid": "cf281d60ea830892a441bc91fe05ab72",
"text": "The signal-to-noise ratio (SNR) is the gold standard metric for capturing wireless link quality, but offers limited predictability. Recent work shows that frequency diversity causes limited predictability in SNR, and proposes effective SNR. Owing to its significant improvement over SNR, effective SNR has become a widely adopted metric for measuring wireless channel quality and served as the basis for many recent rate adaptation schemes. In this paper, we first conduct trace driven evaluation, and find that the accuracy of effective SNR is still inadequate due to frequency diversity and bursty errors. While common wisdom says that interleaving should remove the bursty errors, bursty errors still persist under the WiFi interleaver. Therefore, we develop two complementary methods for computing frame delivery rate to capture the bursty errors under the WiFi interleaver. We then design a new interleaver to reduce the burstiness of errors, and improve the frame delivery rate. We further design a rate adaptation scheme based on our delivery rate estimation. It can support both WiFi and our interleaver. Using extensive evaluation, we show our delivery rate estimation is accurate and significantly out-performs effective SNR; our interleaver improves the delivery rate over the WiFi interleaver; and our rate adaptation improves both throughput and energy.",
"title": ""
},
{
"docid": "6af7f066a59a8e13f4c9f5924932e774",
"text": "The state-of-the-art (SOTA) for mixed precision training is dominated by variants of low precision floating point operations, and in particular FP16 accumulating into FP32 Micikevicius et al. (2017). On the other hand, while a lot of research has also happened in the domain of low and mixed-precision Integer training, these works either present results for non-SOTA networks (for instance only AlexNet for ImageNet-1K), or relatively small datasets (like CIFAR-10). In this work, we train state-of-the-art visual understanding neural networks on ImageNet-1K dataset, with Integer operations on General Purpose (GP) hardware. In particular, we focus on Integer Fused-Multiply-and-Accumulate (FMA) operations which take two pairs of INT16 operands and accumulate results into an INT32 output.We propose a shared exponent representation of tensors, and develop a Dynamic Fixed Point (DFP) scheme suitable for common neural network operations. The nuances of developing an efficient integer convolution kernel is examined, including methods to handle overflow of the INT32 accumulator. We implement CNN training for ResNet-50, GoogLeNet-v1, VGG-16 and AlexNet; and these networks achieve or exceed SOTA accuracy within the same number of iterations as their FP32 counterparts without any change in hyper-parameters and with a 1.8X improvement in end-to-end training throughput. To the best of our knowledge these results represent the first INT16 training results on GP hardware for ImageNet-1K dataset using SOTA CNNs and achieve highest reported accuracy using half precision representation.",
"title": ""
},
{
"docid": "e059d7e04c3dba8ed570ad1d72a647b5",
"text": "An electronic throttle is a low-power dc servo drive which positions the throttle plate. Its application in modern automotive engines leads to improvements in vehicle drivability, fuel economy, and emissions. Transmission friction and the return spring limp-home nonlinearity significantly affect the electronic throttle performance. The influence of these effects is analyzed by means of computer simulations, experiments, and analytical calculations. A dynamic friction model is developed in order to adequately capture the experimentally observed characteristics of the presliding-displacement and breakaway effects. The linear part of electronic throttle process model is also analyzed and experimentally identified. A nonlinear control strategy is proposed, consisting of a proportional-integral-derivative (PID) controller and a feedback compensator for friction and limp-home effects. The PID controller parameters are analytically optimized according to the damping optimum criterion. The proposed control strategy is verified by computer simulations and experiments.",
"title": ""
},
{
"docid": "e84dfdba40e25e3705a8aeee2f2e65f2",
"text": "Alopecia areata (AA) is a common form of autoimmune nonscarring hair loss of scalp and/or body. Atypical hair regrowth in AA is considered a rare phenomenon. It includes atypical pattern of hair growth (sudden graying, perinevoid alopecia, Renbok phenomenon, castling phenomenon, and concentric or targetoid regrowth) and atypical dark color hair regrowth. We report a case of AA that resulted in a concentric targetoid hair regrowth and discuss the possible related theories regarding the significance of this phenomenon.",
"title": ""
},
{
"docid": "caf01ca9e0bb31bbaf3e32741637477c",
"text": "Deep convolutional neural networks (DCNNs) have been used to achieve state-of-the-art performance on many computer vision tasks (e.g., object recognition, object detection, semantic segmentation) thanks to a large repository of annotated image data. Large labeled datasets for other sensor modalities, e.g., multispectral imagery (MSI), are not available due to the large cost and manpower required. In this paper, we adapt state-of-the-art DCNN frameworks in computer vision for semantic segmentation for MSI imagery. To overcome label scarcity for MSI data, we substitute real MSI for generated synthetic MSI in order to initialize a DCNN framework. We evaluate our network initialization scheme on the new RIT-18 dataset that we present in this paper. This dataset contains very-high resolution MSI collected by an unmanned aircraft system. The models initialized with synthetic imagery were less prone to over-fitting and provide a state-of-the-art baseline for future work.",
"title": ""
},
{
"docid": "87737f028cf03a360a3e7affe84c9bc9",
"text": "This article provides an empirical statistical analysis and discussion of the predictive abilities of selected customer lifetime value (CLV) models that could be used in online shopping within e-commerce business settings. The comparison of CLV predictive abilities, using selected evaluation metrics, is made on selected CLV models: Extended Pareto/NBD model (EP/NBD), Markov chain model and Status Quo model. The article uses six online store datasets with annual revenues in the order of tens of millions of euros for the comparison. The EP/NBD model has outperformed other selected models in a majority of evaluation metrics and can be considered good and stable for non-contractual relations in online shopping. The implications for the deployment of selected CLV models in practice, as well as suggestions for future research, are also discussed.",
"title": ""
}
] |
scidocsrr
|
d32d0e12de36097d5358cdc570a4fb06
|
Hierarchical Game-Theoretic Planning for Autonomous Vehicles
|
[
{
"docid": "205c1939369c6cc80838f562a57156a5",
"text": "This paper examines the role of the human driver as the primary control element within the traditional driver-vehicle system. Lateral and longitudinal control tasks such as path-following, obstacle avoidance, and headway control are examples of steering and braking activities performed by the human driver. Physical limitations as well as various attributes that make the human driver unique and help to characterize human control behavior are described. Example driver models containing such traits and that are commonly used to predict the performance of the combined driver-vehicle system in lateral and longitudinal control tasks are identified.",
"title": ""
},
{
"docid": "1df73f7558216e726e6165f09dec2222",
"text": "This paper presents a method for constructing human-robot interaction policies in settings where multimodality, i.e., the possibility of multiple highly distinct futures, plays a critical role in decision making. We are motivated in this work by the example of traffic weaving, e.g., at highway on-ramps/off-ramps, where entering and exiting cars must swap lanes in a short distance-a challenging negotiation even for experienced drivers due to the inherent multimodal uncertainty of who will pass whom. Our approach is to learn multimodal probability distributions over future human actions from a dataset of human-human exemplars and perform real-time robot policy construction in the resulting environment model through massively parallel sampling of human responses to candidate robot action sequences. Direct learning of these distributions is made possible by recent advances in the theory of conditional variational autoencoders (CVAEs), whereby we learn action distributions simultaneously conditioned on the present interaction history, as well as candidate future robot actions in order to take into account response dynamics. We demonstrate the efficacy of this approach with a human-in-the-loop simulation of a traffic weaving scenario.",
"title": ""
}
] |
[
{
"docid": "4dc05debbbe6c8103d772d634f91c86c",
"text": "In this paper we shows the experimental results using a microcontroller and hardware integration with the EMC2 software, using the Fuzzy Gain Scheduling PI Controller in a mechatronic prototype. The structure of the fuzzy 157 Research in Computing Science 116 (2016) pp. 157–169; rec. 2016-03-23; acc. 2016-05-11 controller is composed by two-inputs and two-outputs, is a TITO system. The error control feedback and their derivative are the inputs, while the proportional and integral gains are the fuzzy controller outputs. Was defined five Gaussian membership functions for the fuzzy sets by each input, the product fuzzy logic operator (AND connective) and the centroid defuzzifier was used to infer the gains outputs. The structure of fuzzy rule base are type Sugeno, zero-order. The experimental result in closed-loop shows the viability end effectiveness of the position fuzzy controller strategy. To verify the robustness of this controller structure, two different experiments was making: undisturbed and disturbance both in closed-loop. This work presents comparative experimental results, using the Classical tune rule of Ziegler-Nichols and the Fuzzy Gain Scheduling PI Controller, for a mechatronic system widely used in various industries applications.",
"title": ""
},
{
"docid": "23d26c14a9aa480b98bcaa633fc378e5",
"text": "In this paper we present novel sensory feedbacks named ”King-Kong Effects” to enhance the sensation of walking in virtual environments. King Kong Effects are inspired by special effects in movies in which the incoming of a gigantic creature is suggested by adding visual vibrations/pulses to the camera at each of its steps. In this paper, we propose to add artificial visual or tactile vibrations (King-Kong Effects or KKE) at each footstep detected (or simulated) during the virtual walk of the user. The user can be seated, and our system proposes to use vibrotactile tiles located under his/her feet for tactile rendering, in addition to the visual display. We have designed different kinds of KKE based on vertical or lateral oscillations, physical or metaphorical patterns, and one or two peaks for heal-toe contacts simulation. We have conducted different experiments to evaluate the preferences of users navigating with or without the various KKE. Taken together, our results identify the best choices for future uses of visual and tactile KKE, and they suggest a preference for multisensory combinations. Our King-Kong effects could be used in a variety of VR applications targeting the immersion of a user walking in a 3D virtual scene.",
"title": ""
},
{
"docid": "d2da78ef79900138fe8d27105c38a082",
"text": "Intrauterine contraceptive device (IUCD) is a common method of contraception among women because of its low cost and high efficacy. Perforations are possible resulting in multiple complications including urinary complications. Obstructive hydronephrosis and hydroureter is one of the main clinical concerns in genitourinary practice leading to radiological investigations for determination of the cause. Determination of the cause leads to early treatment, hence saving the renal function. In this case report, we describe hydronephrosis and hydroureter secondary to a migrated/displaced IUCD.",
"title": ""
},
{
"docid": "3a00a29587af4f7c5ce974a8e6970413",
"text": "After reviewing six senses of abstraction, this article focuses on abstractions that take the form of summary representations. Three central properties of these abstractions are established: ( i ) type-token interpretation; (ii) structured representation; and (iii) dynamic realization. Traditional theories of representation handle interpretation and structure well but are not sufficiently dynamical. Conversely, connectionist theories are exquisitely dynamic but have problems with structure. Perceptual symbol systems offer an approach that implements all three properties naturally. Within this framework, a loose collection of property and relation simulators develops to represent abstractions. Type-token interpretation results from binding a property simulator to a region of a perceived or simulated category member. Structured representation results from binding a configuration of property and relation simulators to multiple regions in an integrated manner. Dynamic realization results from applying different subsets of property and relation simulators to category members on different occasions. From this standpoint, there are no permanent or complete abstractions of a category in memory. Instead, abstraction is the skill to construct temporary online interpretations of a category's members. Although an infinite number of abstractions are possible, attractors develop for habitual approaches to interpretation. This approach provides new ways of thinking about abstraction phenomena in categorization, inference, background knowledge and learning.",
"title": ""
},
{
"docid": "6b01a80b6502cb818024e0ac3b00114b",
"text": "BACKGROUND\nArithmetical skills are essential to the effective exercise of citizenship in a numerate society. How these skills are acquired, or fail to be acquired, is of great importance not only to individual children but to the organisation of formal education and its role in society.\n\n\nMETHOD\nThe evidence on the normal and abnormal developmental progression of arithmetical abilities is reviewed; in particular, evidence for arithmetical ability arising from innate specific cognitive skills (innate numerosity) vs. general cognitive abilities (the Piagetian view) is compared.\n\n\nRESULTS\nThese include evidence from infancy research, neuropsychological studies of developmental dyscalculia, neuroimaging and genetics. The development of arithmetical abilities can be described in terms of the idea of numerosity -- the number of objects in a set. Early arithmetic is usually thought of as the effects on numerosity of operations on sets such as set union. The child's concept of numerosity appears to be innate, as infants, even in the first week of life, seem to discriminate visual arrays on the basis of numerosity. Development can be seen in terms of an increasingly sophisticated understanding of numerosity and its implications, and in increasing skill in manipulating numerosities. The impairment in the capacity to learn arithmetic -- dyscalculia -- can be interpreted in many cases as a deficit in the concept in the child's concept of numerosity. The neuroanatomical bases of arithmetical development and other outstanding issues are discussed.\n\n\nCONCLUSIONS\nThe evidence broadly supports the idea of an innate specific capacity for acquiring arithmetical skills, but the effects of the content of learning, and the timing of learning in the course of development, requires further investigation.",
"title": ""
},
{
"docid": "28cc2608d7a82a8e598be80989f859fe",
"text": "Medical Image processing is one of the most challenging topics in research field. The main objective of image segmentation is to extract various features of the image that are used for analysing, interpretation and understanding of images. Medical Resonance Image plays a major role in Medical diagnostics. Image processing in MRI of brain is highly essential due to accurate detection of the type of brain abnormality which can reduce the chance of fatal result. This paper outlines an efficient image segmentation technique that can distinguish the pathological tissues such as edema and tumour from the normal tissues such as White Matter (WM), Grey Matter (GM), and Cerebrospinal Fluid (CSF). Thresholding is simpler and most commonly used techniques in image segmentation. This technique can be used to detect the contour of the tumour in brain.",
"title": ""
},
{
"docid": "51f5ba274068c0c03e5126bda056ba98",
"text": "Electricity is conceivably the most multipurpose energy carrier in modern global economy, and therefore primarily linked to human and economic development. Energy sector reform is critical to sustainable energy development and includes reviewing and reforming subsidies, establishing credible regulatory frameworks, developing policy environments through regulatory interventions, and creating marketbased approaches. Energy security has recently become an important policy driver and privatization of the electricity sector has secured energy supply and provided cheaper energy services in some countries in the short term, but has led to contrary effects elsewhere due to increasing competition, resulting in deferred investments in plant and infrastructure due to longer-term uncertainties. On the other hand global dependence on fossil fuels has led to the release of over 1100 GtCO2 into the atmosphere since the mid-19th century. Currently, energy-related GHG emissions, mainly from fossil fuel combustion for heat supply, electricity generation and transport, account for around 70% of total emissions including carbon dioxide, methane and some traces of nitrous oxide. This multitude of aspects play a role in societal debate in comparing electricity generating and supply options, such as cost, GHG emissions, radiological and toxicological exposure, occupational health and safety, employment, domestic energy security, and social impressions. Energy systems engineering provides a methodological scientific framework to arrive at realistic integrated solutions to complex energy problems, by adopting a holistic, systems-based approach, especially at decision making and planning stage. Modeling and optimization found widespread applications in the study of physical and chemical systems, production planning and scheduling systems, location and transportation problems, resource allocation in financial systems, and engineering design. This article reviews the literature on power and supply sector developments and analyzes the role of modeling and optimization in this sector as well as the future prospective of optimization modeling as a tool for sustainable energy systems. © 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4cce019f5f4c4cfa934e599ddf9137cb",
"text": "Many distributed graph processing frameworks have emerged for helping doing large scale data analysis for many applications including social network and data mining. The existing frameworks usually focus on the system scalability without consideration of local computing performance. We have observed two locality issues which greatly influence the local computing performance in existing systems. One is the locality of the data associated with each vertex/edge. The data are often considered as a logical undividable unit and put into continuous memory. However, it is quite common that for some computing steps, only some portions of data (called as some properties) are needed. The current data layout incurs large amount of interleaved memory access. The other issue is their execution engine applies computation at a granularity of vertex. Making optimization for the locality of source vertex of each edge will often hurt the locality of target vertex or vice versa. We have built a distributed graph processing framework called Photon to address the above issues. Photon employs Property View to store the same type of property for all vertices and edges together. This will improve the locality while doing computation with a portion of properties. Photon also employs an edge-centric execution engine with Hilbert-Order that improve the locality during computation. We have evaluated Photon with five graph applications using five real-world graphs and compared it with four existing systems. The results show that Property View and edge-centric execution design improve graph processing by 2.4X.",
"title": ""
},
{
"docid": "2b57b32fcb378fe6a9a78699142d36c6",
"text": "Navigating through unstructured environments is a basic capability of intelligent creatures, and thus is of fundamental interest in the study and development of artificial intelligence. Long-range navigation is a complex cognitive task that relies on developing an internal representation of space, grounded by recognisable landmarks and robust visual processing, that can simultaneously support continuous self-localisation (“I am here”) and a representation of the goal (“I am going there”). Building upon recent research that applies deep reinforcement learning to maze navigation problems, we present an end-to-end deep reinforcement learning approach that can be applied on a city scale. Recognising that successful navigation relies on integration of general policies with locale-specific knowledge, we propose a dual pathway architecture that allows locale-specific features to be encapsulated, while still enabling transfer to multiple cities. A key contribution of this paper is an interactive navigation environment that uses Google Street View for its photographic content and worldwide coverage. Our baselines demonstrate that deep reinforcement learning agents can learn to navigate in multiple cities and to traverse to target destinations that may be kilometres away. The project webpage http://streetlearn.cc contains a video summarizing our research and showing the trained agent in diverse city environments and on the transfer task, the form to request the StreetLearn dataset and links to further resources. The StreetLearn environment code is available at https://github.com/deepmind/streetlearn.",
"title": ""
},
{
"docid": "895f0424cb71c79b86ecbd11a4f2eb8e",
"text": "A chronic alcoholic who had also been submitted to partial gastrectomy developed a syndrome of continuous motor unit activity responsive to phenytoin therapy. There were signs of minimal distal sensorimotor polyneuropathy. Symptoms of the syndrome of continuous motor unit activity were fasciculation, muscle stiffness, myokymia, impaired muscular relaxation and percussion myotonia. Electromyography at rest showed fasciculation, doublets, triplets, multiplets, trains of repetitive discharges and myotonic discharges. Trousseau's and Chvostek's signs were absent. No abnormality of serum potassium, calcium, magnesium, creatine kinase, alkaline phosphatase, arterial blood gases and pH were demonstrated, but the serum Vitamin B12 level was reduced. The electrophysiological findings and muscle biopsy were compatible with a mixed sensorimotor polyneuropathy. Tests of neuromuscular transmission showed a significant decrement in the amplitude of the evoked muscle action potential in the abductor digiti minimi on repetitive nerve stimulation. These findings suggest that hyperexcitability and hyperactivity of the peripheral motor axons underlie the syndrome of continuous motor unit activity in the present case. Ein chronischer Alkoholiker, mit subtotaler Gastrectomie, litt an einem Syndrom dauernder Muskelfaseraktivität, das mit Diphenylhydantoin behandelt wurde. Der Patient wies minimale Störungen im Sinne einer distalen sensori-motorischen Polyneuropathie auf. Die Symptome dieses Syndroms bestehen in: Fazikulationen, Muskelsteife, Myokymien, eine gestörte Erschlaffung nach der Willküraktivität und eine Myotonie nach Beklopfen des Muskels. Das Elektromyogramm in Ruhe zeigt: Faszikulationen, Doublets, Triplets, Multiplets, Trains repetitiver Potentiale und myotonische Entladungen. Trousseau- und Chvostek-Zeichen waren nicht nachweisbar. Gleichzeitig lagen die Kalium-, Calcium-, Magnesium-, Kreatinkinase- und Alkalinphosphatase-Werte im Serumspiegel sowie O2, CO2 und pH des arteriellen Blutes im Normbereich. Aber das Niveau des Vitamin B12 im Serumspiegel war deutlich herabgesetzt. Die muskelbioptische und elektrophysiologische Veränderungen weisen auf eine gemischte sensori-motorische Polyneuropathie hin. Die Abnahme der Amplitude der evozierten Potentiale, vom M. abductor digiti minimi abgeleitet, bei repetitiver Reizung des N. ulnaris, stellten eine Störung der neuromuskulären Überleitung dar. Aufgrund unserer klinischen und elektrophysiologischen Befunde könnten wir die Hypererregbarkeit und Hyperaktivität der peripheren motorischen Axonen als Hauptmechanismus des Syndroms dauernder motorischer Einheitsaktivität betrachten.",
"title": ""
},
{
"docid": "2beb54c58abcc3a0abaeda878d5351f2",
"text": "The railway station is the basic unit of transportation networks. It contains cluster of data which should be organized appropriately to effectively extract useful information. Building Information Modeling (BIM) process revolves around a virtual information-rich 3D model. Once a model has been developed in a BIM software such as Revit, third-party add-ins application can be used to further leverage the data. In this paper, authors develop a Revit-based extension tool which enables railway engineer to design and represent the railway station simultaneously. It has three major components: (1) Access database that includes the standard design rules, (2) 3D models of station elements represented separately as Revit families, and (3) C# program code is used for performing the design process. The model has been verified and applied to a real case study of XiaMen station, China. As a result, this model highlights any possible conflicts before the real construction starts. By the enhancement of databases linked to the developed tool, the BIM techniques could be considered as a potential direction for modeling conceptualization of intermediate railway stations.",
"title": ""
},
{
"docid": "f72150d92ff4e0422ae44c3c21e8345e",
"text": "There has been a recent paradigm shift in robotics to data-driven learning for planning and control. Due to large number of experiences required for training, most of these approaches use a self-supervised paradigm: using sensors to measure success/failure. However, in most cases, these sensors provide weak supervision at best. In this work, we propose an adversarial learning framework that pits an adversary against the robot learning the task. In an effort to defeat the adversary, the original robot learns to perform the task with more robustness leading to overall improved performance. We show that this adversarial framework forces the robot to learn a better grasping model in order to overcome the adversary. By grasping 82% of presented novel objects compared to 68% without an adversary, we demonstrate the utility of creating adversaries. We also demonstrate via experiments that having robots in adversarial setting might be a better learning strategy as compared to having collaborative multiple robots. For supplementary video see: youtu.be/QfK3Bqhc6Sk",
"title": ""
},
{
"docid": "034943e26879bedd5c25079b986851e6",
"text": "3D Time-of-Flight sensing technology provides distant measurements from the camera to the scene in the field of view, for complete depth map of a scene. It works by illuminating the scene with a modulated light sources and measuring the phase change between illuminated and reflected light. This is translated to distance, for each pixel simultaneously. The sensor receives the radiance which is combination of light received along multiple paths due to global illumination. This global radiance causes multi-path interference. Separating these components to recover scene depths is challenging for corner shaped and coronel shaped scene as number of multiple path increases. It is observed that for different scenes, global radiance disappears with increase in frequencies beyond some threshold level. This observation is used to develop a novel technique to recover unambiguous depth map of a scene. It requires minimum two frequencies and 3 to 4 measurements which gives minimum computations.",
"title": ""
},
{
"docid": "c4e11f7bbb252b18910a64c0145edec2",
"text": "Cluster analysis represents one of the most versatile methods in statistical science. It is employed in empirical sciences for the summarization of datasets into groups of similar objects, with the purpose of facilitating the interpretation and further analysis of the data. Cluster analysis is of particular importance in the exploratory investigation of data of high complexity, such as that derived from molecular biology or image databases. Consequently, recent work in the field of cluster analysis, including the work presented in this thesis, has focused on designing algorithms that can provide meaningful solutions for data with high cardinality and/or dimensionality, under the natural restriction of limited resources. In the first part of the thesis, a novel algorithm for the clustering of large, highdimensional datasets is presented. The developed method is based on the principles of projection pursuit and grid partitioning, and focuses on reducing computational requirements for large datasets without loss of performance. To achieve that, the algorithm relies on procedures such as sampling of objects, feature selection, and quick density estimation using histograms. The algorithm searches for low-density points in potentially favorable one-dimensional projections, and partitions the data by a hyperplane passing through the best split point found. Tests on synthetic and reference data indicated that the proposed method can quickly and efficiently recover clusters that are distinguishable from the remaining objects on at least one direction; linearly non-separable clusters were usually subdivided. In addition, the clustering solution was proved to be robust in the presence of noise in moderate levels, and when the clusters are partially overlapping. In the second part of the thesis, a novel method for generating synthetic datasets with variable structure and clustering difficulty is presented. The developed algorithm can construct clusters with different sizes, shapes, and orientations, consisting of objects sampled from different probability distributions. In addition, some of the clusters can have multimodal distributions, curvilinear shapes, or they can be defined only in restricted subsets of dimensions. The clusters are distributed within the data space using a greedy geometrical procedure, with the overall degree of cluster overlap adjusted by scaling the clusters. Evaluation tests indicated that the proposed approach is highly effective in prescribing the cluster overlap. Furthermore, it can be extended to allow for the production of datasets containing non-overlapping clusters with defined degrees of separation. In the third part of the thesis, a novel system for the semi-supervised annotation of images is described and evaluated. The system is based on a visual vocabulary of prototype visual features, which is constructed through the clustering of visual features extracted from training images with accurate textual annotations. Consequently, each training image is associated with the visual words representing its detected features. In addition, each such image is associated with the concepts extracted from the linked textual data. These two sets of associations are combined into a direct linkage scheme between textual concepts and visual words, thus constructing an automatic image classifier that can annotate new images with text-based concepts using only their visual features. 
As an initial application, the developed method was successfully employed in a person classification task.",
"title": ""
},
{
"docid": "f66dfbbd6d2043744d32b44dba145ef2",
"text": "Newly emerging location-based and event-based social network services provide us with a new platform to understand users' preferences based on their activity history. A user can only visit a limited number of venues/events and most of them are within a limited distance range, so the user-item matrix is very sparse, which creates a big challenge for traditional collaborative filtering-based recommender systems. The problem becomes more challenging when people travel to a new city where they have no activity history.\n In this paper, we propose LCARS, a location-content-aware recommender system that offers a particular user a set of venues (e.g., restaurants) or events (e.g., concerts and exhibitions) by giving consideration to both personal interest and local preference. This recommender system can facilitate people's travel not only near the area in which they live, but also in a city that is new to them. Specifically, LCARS consists of two components: offline modeling and online recommendation. The offline modeling part, called LCA-LDA, is designed to learn the interest of each individual user and the local preference of each individual city by capturing item co-occurrence patterns and exploiting item contents. The online recommendation part automatically combines the learnt interest of the querying user and the local preference of the querying city to produce the top-k recommendations. To speed up this online process, a scalable query processing technique is developed by extending the classic Threshold Algorithm (TA). We evaluate the performance of our recommender system on two large-scale real data sets, DoubanEvent and Foursquare. The results show the superiority of LCARS in recommending spatial items for users, especially when traveling to new cities, in terms of both effectiveness and efficiency.",
"title": ""
},
{
"docid": "fa62c54cf22c7d0822c7a4171a3d8bcd",
"text": "Interaction with robot systems for specification of manufacturing tasks and motions needs to be simple, to enable wide-spread use of robots in SMEs. In the best case, existing practices from manual work could be used, to smoothly let current employees start using robot technology as a natural part of their work. Our aim is to simplify the robot programming task by allowing the user to simply make technical drawings on a sheet of paper. Craftsman use paper and raw sketches for several situations; to share ideas, to get a better imagination or to remember the customer situation. Currently these sketches have either to be interpreted by the worker when producing the final product by hand, or transferred into CAD file using an according tool. The former means that no automation is included, the latter means extra work and much experience in using the CAD tool. Our approach is to use the digital pen and paper from Anoto as input devices for SME robotic tasks, thereby creating simpler and more user friendly alternatives for programming, parameterization and commanding actions. To this end, the basic technology has been investigated and fully working prototypes have been developed to explore the possibilities and limitation in the context of typical SME applications. Based on the encouraging experimental results, we believe that drawings on digital paper will, among other means of human-robot interaction, play an important role in manufacturing SMEs in the future. Index Terms — CAD, Human machine interfaces, Industrial Robots, Robot programming.",
"title": ""
},
{
"docid": "73bf620a97b2eadeb2398dd718b85fe8",
"text": "The Semeval task 5 was an opportunity for experimenting with the key term extraction module of GROBID, a system for extracting and generating bibliographical information from technical and scientific documents. The tool first uses GROBID’s facilities for analyzing the structure of scientific articles, resulting in a first set of structural features. A second set of features captures content properties based on phraseness, informativeness and keywordness measures. Two knowledge bases, GRISP and Wikipedia, are then exploited for producing a last set of lexical/semantic features. Bagged decision trees appeared to be the most efficient machine learning algorithm for generating a list of ranked key term candidates. Finally a post ranking was realized based on statistics of cousage of keywords in HAL, a large Open Access publication repository.",
"title": ""
}
] |
scidocsrr
|
63039210409b90f497b59b58a384f4cb
|
Boosted Cascaded Convnets for Multilabel Classification of Thoracic Diseases in Chest Radiographs
|
[
{
"docid": "b8f66ef5e046f0c9e7772b2233571594",
"text": "Cascaded classifiers have been widely used in pedestrian detection and achieved great success. These classifiers are trained sequentially without joint optimization. In this paper, we propose a new deep model that can jointly train multi-stage classifiers through several stages of back propagation. It keeps the score map output by a classifier within a local region and uses it as contextual information to support the decision at the next stage. Through a specific design of the training strategy, this deep architecture is able to simulate the cascaded classifiers by mining hard samples to train the network stage-by-stage. Each classifier handles samples at a different difficulty level. Unsupervised pre-training and specifically designed stage-wise supervised training are used to regularize the optimization problem. Both theoretical analysis and experimental results show that the training strategy helps to avoid over fitting. Experimental results on three datasets (Caltech, ETH and TUD-Brussels) show that our approach outperforms the state-of-the-art approaches.",
"title": ""
},
{
"docid": "5c9ba6384b6983a26212e8161e502484",
"text": "The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples – ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.",
"title": ""
},
{
"docid": "fcfc16b94f06bf6120431a348e97b9ac",
"text": "Multi-label classification is a practical yet challenging task in machine learning related fields, since it requires the prediction of more than one label category for each input instance. We propose a novel deep neural networks (DNN) based model, Canonical Correlated AutoEncoder (C2AE), for solving this task. Aiming at better relating feature and label domain data for improved classification, we uniquely perform joint feature and label embedding by deriving a deep latent space, followed by the introduction of label-correlation sensitive loss function for recovering the predicted label outputs. Our C2AE is achieved by integrating the DNN architectures of canonical correlation analysis and autoencoder, which allows end-to-end learning and prediction with the ability to exploit label dependency. Moreover, our C2AE can be easily extended to address the learning problem with missing labels. Our experiments on multiple datasets with different scales confirm the effectiveness and robustness of our proposed method, which is shown to perform favorably against state-of-the-art methods for multi-label classification.",
"title": ""
}
] |
[
{
"docid": "f46136360aef128b54860caf50e8cc77",
"text": "We propose an FPGA chip architecture based on a conventional FPGA logic array core, in which I/O pins are clocked at a much higher rate than that of the logic array that they serve. Wide data paths within the chip are time multiplexed at the edge of the chip into much faster and narrower data paths that run offchip. This kind of arrangement makes it possible to interface a relatively slow FPGA core with high speed memories and data streams, and is useful for many pin-limited FPGA applications. For efficient use of the highest bandwidth DRAM’s, our proposed chip includes a RAMBUS DRAM interface, a burst-transfer controller, and burst buffers. This proposal is motivated by our work with virtual processor cellular automata (CA) machines—a kind of SIMD computer. Our next generation of CA machines requires reconfigurable FPGA-like processors coupled to the highest speed DRAM’s and SRAM’s available. Unfortunately, no current FPGA chips have appropriate DRAM I/O support or the speed needed to easily interface with pipelined SRAM’s. The chips proposed here would make a wide range of large-scale CA simulations of 3D physical systems practical and economical—simulations that are currently well beyond the reach of any existing computer. These chips would also be well suited to a broad range of other simulation, graphics, and DSP-like applications.",
"title": ""
},
{
"docid": "80ad0fe6b3c216573e9d2805af90fd10",
"text": "Recently Vapnik et al. [11, 12, 13] introduced a new learning model, called Learning Using Privileged Information (LUPI). In this model, along with standard training data, the teacher supplies the student with additional (privileged) information. In the optimistic case, the LUPI model can improve the bound for the probability of test error from O(1/ √ n) to O(1/n), where n is the number of training examples. Since semi-supervised learning model with n labeled and N unlabeled examples can only achieve the bound O(1/ √ n + N) in the optimistic case, the LUPI model can thus significantly outperform it. To implement LUPI model, Vapnik et al. [11, 12, 13] suggested to use an SVM-type algorithm called SVM+, which requires, however, to solve a more difficult optimization problem than the one that is traditionally used to solve SVM. In this paper we develop two new algorithms for solving the optimization problem of SVM+. Our algorithms have the structure similar to the empirically successful SMO algorithm for solving SVM. Our experiments show that in terms of the generalization error/running time tradeoff, one of our algorithms is superior over the widely used interior point optimizer.",
"title": ""
},
{
"docid": "570db268b70b632c266ee5782d339598",
"text": "Active imaging polarimetry for remote sensing applications has received significant attention recently. Such systems use a variably-polarized active light source to illuminate target objects. Multiple images with different polarizations are then captured and used to build Stokes vectors which are, in turn, used to estimate the refraction indices of materials as well as the relative geometry of the target object. The applications facilitated by active polarimetry include target detection, object recognition, shape extraction and material classification. Unfortunately, this estimation problem requires us to find the solution to a system of nonlinear equations using an iterative optimization technique. In this paper, we introduce a methodology for finding and validating good solutions to this optimization.",
"title": ""
},
{
"docid": "8492ba0660b06ca35ab3f4e96f3a33c3",
"text": "Young men who have sex with men (YMSM) are increasingly using mobile smartphone applications (“apps”), such as Grindr, to meet sex partners. A probability sample of 195 Grindr-using YMSM in Southern California were administered an anonymous online survey to assess patterns of and motivations for Grindr use in order to inform development and tailoring of smartphone-based HIV prevention for YMSM. The number one reason for using Grindr (29 %) was to meet “hook ups.” Among those participants who used both Grindr and online dating sites, a statistically significantly greater percentage used online dating sites for “hook ups” (42 %) compared to Grindr (30 %). Seventy percent of YMSM expressed a willingness to participate in a smartphone app-based HIV prevention program. Development and testing of smartphone apps for HIV prevention delivery has the potential to engage YMSM in HIV prevention programming, which can be tailored based on use patterns and motivations for use. Los hombres que mantienen relaciones sexuales con hombres (YMSM por las siglas en inglés de Young Men Who Have Sex with Men) están utilizando más y más aplicaciones para teléfonos inteligentes (smartphones), como Grindr, para encontrar parejas sexuales. En el Sur de California, se administró de forma anónima un sondeo en internet a una muestra de probabilidad de 195 YMSM usuarios de Grindr, para evaluar los patrones y motivaciones del uso de Grindr, con el fin de utilizar esta información para el desarrollo y personalización de prevención del VIH entre YMSM con base en teléfonos inteligentes. La principal razón para utilizar Grindr (29 %) es para buscar encuentros sexuales casuales (hook-ups). Entre los participantes que utilizan tanto Grindr como otro sitios de citas online, un mayor porcentaje estadísticamente significativo utilizó los sitios de citas online para encuentros casuales sexuales (42 %) comparado con Grindr (30 %). Un setenta porciento de los YMSM expresó su disposición para participar en programas de prevención del VIH con base en teléfonos inteligentes. El desarrollo y evaluación de aplicaciones para teléfonos inteligentes para el suministro de prevención del VIH tiene el potencial de involucrar a los YMSM en la programación de la prevención del VIH, que puede ser adaptada según los patrones y motivaciones de uso.",
"title": ""
},
{
"docid": "6b467ec8262144150b17cedb3d96edcb",
"text": "We describe a new method of measuring surface currents using an interferometric synthetic aperture radar. An airborne implementation has been tested over the San Francisco Bay near the time of maximum tidal flow, resulting in a map of the east-west component of the current. Only the line-of-sight component of velocity is measured by this technique. Where the signal-to-noise ratio was strongest, statistical fluctuations of less than 4 cm s−1 were observed for ocean patches of 60×60 m.",
"title": ""
},
{
"docid": "795bdbc3dea0ade425c5af251e09a607",
"text": "Entity disambiguation with Wikipedia relies on structured information from redirect pages, article text, inter-article links, and categories. We explore whether web links can replace a curated encyclopaedia, obtaining entity prior, name, context, and coherence models from a corpus of web pages with links to Wikipedia. Experiments compare web link models to Wikipedia models on well-known conll and tac data sets. Results show that using 34 million web links approaches Wikipedia performance. Combining web link and Wikipedia models produces the best-known disambiguation accuracy of 88.7 on standard newswire test data.",
"title": ""
},
{
"docid": "1cb47f75cde728f7ba7c75b54516bc46",
"text": "This paper considers the electrical actuation of aircraft wing surfaces, with particular emphasis on flap systems. It discusses existing hydraulic and electrohydraulic systems and proposes an electrical alternative, examining the potential system benefits in terms of increased functionality, maintenance, and life-cycle costs. This paper then progresses to describe a full-scale actuation demonstrator of the flap system, including the high-speed electrical drive, step-down gearbox, and flaps. Detailed descriptions of the fault-tolerant motor, power electronics, control architecture, and position sensor systems are given, along with a range of test results, demonstrating the system in operation.",
"title": ""
},
{
"docid": "03d0fad1fa59e181a176bdf09b57ba58",
"text": "Steganography refers to techniques that hide information inside innocuous looking objects known as “Cover Objects” such that its very existence remains concealed to any unintended recipient. Images are pervasive in day to day applications and have high redundancy in representation. Thus, they are appealing contenders to be used as cover objects. There are a large number of image steganography techniques proposed till date but negligible research has been done on the development of a standard quality evaluation model for judging their performance. Existence of such a model is important for fueling the development of superior techniques and also paves the way for the improvement of the existing ones. However, the common quality parameters often considered for performance evaluation of an image steganography technique are insufficient for overall quantitative evaluation. This paper proposes a rating scale based quality evaluation model for image steganography algorithms that utilizes both quantitative parameters and observation heuristics. Different image steganography techniques have been evaluated using proposed model and quantitative performance scores for each of the techniques have been derived. The scores have been observed to be in accordance with actual literature and the system is simple, efficient and flexible.",
"title": ""
},
{
"docid": "d1fd4d535052a1c2418259c9b2abed66",
"text": "BACKGROUND\nSit-to-stand tests (STST) have recently been developed as easy-to-use field tests to evaluate exercise tolerance in COPD patients. As several modalities of the test exist, this review presents a synthesis of the advantages and limitations of these tools with the objective of helping health professionals to identify the STST modality most appropriate for their patients.\n\n\nMETHOD\nSeventeen original articles dealing with STST in COPD patients have been identified and analysed including eleven on 1min-STST and four other versions of the test (ranging from 5 to 10 repetitions and from 30 s to 3 min). In these studies the results obtained in sit-to-stand tests and the recorded physiological variables have been correlated with the results reported in other functional tests.\n\n\nRESULTS\nA good set of correlations was achieved between STST performances and the results reported in other functional tests, as well as quality of life scores and prognostic index. According to the different STST versions the processes involved in performance are different and consistent with more or less pronounced associations with various physical qualities. These tests are easy to use in a home environment, with excellent metrological properties and responsiveness to pulmonary rehabilitation, even though repetition of the same movement remains a fragmented and restrictive approach to overall physical evaluation.\n\n\nCONCLUSIONS\nThe STST appears to be a relevant and valid tool to assess functional status in COPD patients. While all versions of STST have been tested in COPD patients, they should not be considered as equivalent or interchangeable.",
"title": ""
},
{
"docid": "1ec395dbe807ff883dab413419ceef56",
"text": "\"The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure\" provides a new guideline for hypertension prevention and management. The following are the key messages(1) In persons older than 50 years, systolic blood pressure (BP) of more than 140 mm Hg is a much more important cardiovascular disease (CVD) risk factor than diastolic BP; (2) The risk of CVD, beginning at 115/75 mm Hg, doubles with each increment of 20/10 mm Hg; individuals who are normotensive at 55 years of age have a 90% lifetime risk for developing hypertension; (3) Individuals with a systolic BP of 120 to 139 mm Hg or a diastolic BP of 80 to 89 mm Hg should be considered as prehypertensive and require health-promoting lifestyle modifications to prevent CVD; (4) Thiazide-type diuretics should be used in drug treatment for most patients with uncomplicated hypertension, either alone or combined with drugs from other classes. Certain high-risk conditions are compelling indications for the initial use of other antihypertensive drug classes (angiotensin-converting enzyme inhibitors, angiotensin-receptor blockers, beta-blockers, calcium channel blockers); (5) Most patients with hypertension will require 2 or more antihypertensive medications to achieve goal BP (<140/90 mm Hg, or <130/80 mm Hg for patients with diabetes or chronic kidney disease); (6) If BP is more than 20/10 mm Hg above goal BP, consideration should be given to initiating therapy with 2 agents, 1 of which usually should be a thiazide-type diuretic; and (7) The most effective therapy prescribed by the most careful clinician will control hypertension only if patients are motivated. Motivation improves when patients have positive experiences with and trust in the clinician. Empathy builds trust and is a potent motivator. Finally, in presenting these guidelines, the committee recognizes that the responsible physician's judgment remains paramount.",
"title": ""
},
{
"docid": "545cd566c3563c7c8f8ab39d044b46d6",
"text": "We present a sequential model for temporal relation classification between intrasentence events. The key observation is that the overall syntactic structure and compositional meanings of the multi-word context between events are important for distinguishing among fine-grained temporal relations. Specifically, our approach first extracts a sequence of context words that indicates the temporal relation between two events, which well align with the dependency path between two event mentions. The context word sequence, together with a parts-of-speech tag sequence and a dependency relation sequence that are generated corresponding to the word sequence, are then provided as input to bidirectional recurrent neural network (LSTM) models. The neural nets learn compositional syntactic and semantic representations of contexts surrounding the two events and predict the temporal relation between them. Evaluation of the proposed approach on TimeBank corpus shows that sequential modeling is capable of accurately recognizing temporal relations between events, which outperforms a neural net model using various discrete features as input that imitates previous feature based models.",
"title": ""
},
{
"docid": "019367236d31f53f339fa30e9b38b5e0",
"text": "Compressive sensing is a new type of sampling theory, which predicts that sparse signals and images can be reconstructed from what was previously believed to be incomplete information. As a main feature, efficient algorithms such as l1-minimization can be used for recovery. The theory has many potential applications in signal processing and imaging. This chapter gives an introduction and overview on both theoretical and numerical aspects of compressive sensing.",
"title": ""
},
{
"docid": "f5f1300baf7ed92626c912b98b6308c9",
"text": "The constant increase in global energy demand, together with the awareness of the finite supply of fossil fuels, has brought about an imperious need to take advantage of renewable energy sources. At the same time, concern over CO(2) emissions and future rises in the cost of gasoline has boosted technological efforts to make hybrid and electric vehicles available to the general public. Energy storage is a vital issue to be addressed within this scenario, and batteries are certainly a key player. In this tutorial review, the most recent and significant scientific advances in the field of rechargeable batteries, whose performance is dependent on their underlying chemistry, are covered. In view of its utmost current significance and future prospects, special emphasis is given to progress in lithium-based technologies.",
"title": ""
},
{
"docid": "210cbd0fe42cac593cb8c1c175448be7",
"text": "Lactose is the preeminent soluble glycan in milk and a significant source of energy for most newborn mammals. Elongation of lactose with additional monosaccharides gives rise to a varied repertoire of free soluble glycans such as 2'-fucosyllactose (2'-FL), which is the most abundant oligosaccharide in human milk. In infants, 2'-FL is resistant to digestion and reaches the colon where it is partially fermented, behaving as soluble prebiotic fiber. Evidence also suggests that portions of small soluble milk glycans, including 2'-FL, are absorbed, thus raising the possibility of systemic biological effects. 2'-FL bears an epitope of the Secretor histo-blood group system; approximately 70-80% of all milk samples contain 2'-FL, since its synthesis depends on a fucosyltransferase that is not uniformly expressed. The fact that some infants are not exposed to 2'-FL has helped researchers to retrospectively probe for biological activities of this glycan. This review summarizes the attributes of 2'-FL in terms of its occurrence in mammalian phylogeny, its postulated biological activities, and its variability in human milk.",
"title": ""
},
{
"docid": "134e5a0da9a6aa9b3c5e10a69803c3a3",
"text": "The objectives of this study were to determine the prevalence of overweight and obesity in Turkey, and to investigate their association with age, gender, and blood pressure. A crosssectional population-based study was performed. A total of 20,119 inhabitants (4975 women and 15,144 men, age > 20 years) from 11 Anatolian cities in four geographic regions were screened for body weight, height, and systolic and diastolic blood pressure between the years 1999 and 2000. The overall prevalence rate of overweight was 25.0% and of obesity was 19.4%. The prevalence of overweight among women was 24.3% and obesity 24.6%; 25.9% of men were overweight, and 14.4% were obese. Mean body mass index (BMI) of the studied population was 27.59 +/- 4.61 kg/m(2). Mean systolic and diastolic blood pressure for women were 131.0 +/- 41.0 and 80.2 +/- 16.3 mm Hg, and for men 135.0 +/- 27.3 and 83.2 +/- 16.0 mm Hg. There was a positive linear correlation between BMI and blood pressure, and between age and blood pressure in men and women. Obesity and overweight are highly prevalant in Turkey, and they constitute independent risk factors for hypertension.",
"title": ""
},
{
"docid": "4028f1cd20127f3c6599e6073bb1974b",
"text": "This paper presents a power delivery monitor (PDM) peripheral integrated in a flip-chip packaged 28 nm system-on-chip (SoC) for mobile computing. The PDM is composed entirely of digital standard cells and consists of: 1) a fully integrated VCO-based digital sampling oscilloscope; 2) a synthetic current load; and 3) an event engine for triggering, analysis, and debug. Incorporated inside an SoC, it enables rapid, automated analysis of supply impedance, as well as monitoring supply voltage droop of multi-core CPUs running full software workloads and during scan-test operations. To demonstrate these capabilities, we describe a power integrity case study of a dual-core ARM Cortex-A57 cluster in a commercial 28 nm mobile SoC. Measurements are presented of power delivery network (PDN) electrical parameters, along with waveforms of the CPU cluster running test cases and benchmarks on bare metal and Linux OS. The effect of aggressive power management techniques, such as power gating on the dominant resonant frequency and peak impedance, is highlighted. Finally, we present measurements of supply voltage noise during various scan-test operations, an often-neglected aspect of SoC power integrity.",
"title": ""
},
{
"docid": "2d6d5c8b1ac843687db99ccf50a0baff",
"text": "This paper presents algorithms for fast segmentation of 3D point clouds and subsequent classification of the obtained 3D segments. The method jointly determines the ground surface and segments individual objects in 3D, including overhanging structures. When compared to six other terrain modelling techniques, this approach has minimal error between the sensed data and the representation; and is fast (processing a Velodyne scan in approximately 2 seconds). Applications include improved alignment of successive scans by enabling operations in sections (Velodyne scans are aligned 7% sharper compared to an approach using raw points) and more informed decision-making (paths move around overhangs). The use of segmentation to aid classification through 3D features, such as the Spin Image or the Spherical Harmonic Descriptor, is discussed and experimentally compared. Moreover, the segmentation facilitates a novel approach to 3D classification that bypasses feature extraction and directly compares 3D shapes via the ICP algorithm. This technique is shown to achieve accuracy on par with the best feature based classifier (92.1%) while being significantly faster and allowing a clearer understanding of the classifier’s behaviour.",
"title": ""
},
{
"docid": "d59d1ac7b3833ee1e60f7179a4a9af99",
"text": "s Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges. GJCST Classification : C.1.4, C.2.1 Research Issues in Cloud Computing Strictly as per the compliance and regulations of: Research Issues in Cloud Computing V. Krishna Reddy , B. Thirumala Rao , Dr. L.S.S. Reddy , P. Sai Kiran ABSTRACT : Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges. Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges.",
"title": ""
},
{
"docid": "4cff5279110ff2e45060f3ccec7d51ba",
"text": "Web site usability is a critical metric for assessing the quality of a firm’s Web presence. A measure of usability must not only provide a global rating for a specific Web site, ideally it should also illuminate specific strengths and weaknesses associated with site design. In this paper, we describe a heuristic evaluation procedure for examining the usability of Web sites. The procedure utilizes a comprehensive set of usability guidelines developed by Microsoft. We present the categories and subcategories comprising these guidelines, and discuss the development of an instrument that operationalizes the measurement of usability. The proposed instrument was tested in a heuristic evaluation study where 1,475 users rated multiple Web sites from four different industry sectors: airlines, online bookstores, automobile manufacturers, and car rental agencies. To enhance the external validity of the study, users were asked to assume the role of a consumer or an investor when assessing usability. Empirical results suggest that the evaluation procedure, the instrument, as well as the usability metric exhibit good properties. Implications of the findings for researchers, for Web site designers, and for heuristic evaluation methods in usability testing are offered. (Usability; Heuristic Evaluation; Microsoft Usability Guidelines; Human-Computer Interaction; Web Interface)",
"title": ""
},
{
"docid": "27f1f3791b7a381f92833d4983620b7e",
"text": "Current action recognition methods heavily rely on trimmed videos for model training. However, it is expensive and time-consuming to acquire a large-scale trimmed video dataset. This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without the requirement of temporal annotations of action instances. Our UntrimmedNet couples two important components, the classification module and the selection module, to learn the action models and reason about the temporal duration of action instances, respectively. These two components are implemented with feed-forward networks, and UntrimmedNet is therefore an end-to-end trainable architecture. We exploit the learned models for action recognition (WSR) and detection (WSD) on the untrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet only employs weak supervision, our method achieves performance superior or comparable to that of those strongly supervised approaches on these two datasets.",
"title": ""
}
] |
scidocsrr
|
3bb799333768919e975d00aad27448d9
|
The treasure beneath convolutional layers: Cross-convolutional-layer pooling for image classification
|
[
{
"docid": "5116079b69aeb1858177429fabd10f80",
"text": "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations at present lack geometric invariance, which limits their robustness for tasks such as classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (or MOP-CNN for short). This approach works by extracting CNN activations for local patches at multiple scales, followed by orderless VLAD pooling of these activations at each scale level and concatenating the result. This feature representation decisively outperforms global CNN activations and achieves state-of-the-art performance for scene classification on such challenging benchmarks as SUN397, MIT Indoor Scenes, and ILSVRC2012, as well as for instance-level retrieval on the Holidays dataset.",
"title": ""
},
{
"docid": "a241ca85048e30c48acd532bce1bf2ca",
"text": "This paper addresses the challenge of establlishing a bridge between deep convolutional neural networks and conventional object detection frameworks for accurate and efficient generic object detection. We introduce Dense Neural Patterns, short for DNPs, which are dense local features derived from discriminatively trained deep convolutional neural networks. DNPs can be easily plugged into conventional detection frameworks in the same way as other dense local features(like HOG or LBP). The effectiveness of the proposed approach is demonstrated with Regionlets object detection framework. It achieved 46.1% mean average precision on the PASCAL VOC 2007 dataset, and 44.1% on the PASCAL VOC 2010 dataset, which dramatically improves the originalRegionlets approach without DNPs.",
"title": ""
}
] |
[
{
"docid": "56a4b5052e4d745e7939e2799a40bfd8",
"text": "The evolution of software defined networking (SDN) has played a significant role in the development of next-generation networks (NGN). SDN as a programmable network having “service provisioning on the fly” has induced a keen interest both in academic world and industry. In this article, a comprehensive survey is presented on SDN advancement over conventional network. The paper covers historical evolution in relation to SDN, functional architecture of the SDN and its related technologies, and OpenFlow standards/protocols, including the basic concept of interfacing of OpenFlow with network elements (NEs) such as optical switches. In addition a selective architecture survey has been conducted. Our proposed architecture on software defined heterogeneous network, points towards new technology enabling the opening of new vistas in the domain of network technology, which will facilitate in handling of huge internet traffic and helps infrastructure and service providers to customize their resources dynamically. Besides, current research projects and various activities as being carried out to standardize SDN as NGN by different standard development organizations (SODs) have been duly elaborated to judge how this technology moves towards standardization.",
"title": ""
},
{
"docid": "fea3c6f49169e0af01e31b46d8c72a9b",
"text": "Psoriatic arthritis (PsA) is an archetypal type of spondyloarthritis, but may have some features of rheumatoid arthritis, namely a small joint polyarthritis pattern. Most of these features are well demonstrated on imaging, and as a result, imaging has helped us to better understand the pathophysiology of PsA. Although the unique changes of PsA such as the \"pencil-in-cup\" deformities and periostitis are commonly shown on conventional radiography, PsA affects all areas of joints, with enthesitis being the predominant pathology. Imaging, especially magnetic resonance imaging (MRI) and ultrasonography, has allowed us to explain the relationships between enthesitis, synovitis (or the synovio-entheseal complex) and osteitis or bone oedema in PsA. Histological studies have complemented the imaging findings, and have corroborated the MRI changes seen in the skin and nails in PsA. The advancement in imaging technology such as high-resolution \"microscopy\" MRI and whole-body MRI, and improved protocols such as ultrashort echo time, will further enhance our understanding of the disease mechanisms. The ability to demonstrate very early pre-clinical changes as shown by ultrasonography and bone scintigraphy may eventually provide a basis for screening for disease and will further improve the understanding of the link between skin and joint disease.",
"title": ""
},
{
"docid": "0772a2f393b1820e6fa8970cc14339a2",
"text": "The internet is empowering the rise of crowd work, gig work, and other forms of on--demand labor. A large and growing body of scholarship has attempted to predict the socio--technical outcomes of this shift, especially addressing three questions: begin{inlinelist} item What are the complexity limits of on-demand work?, item How far can work be decomposed into smaller microtasks?, and item What will work and the place of work look like for workers' end {inlinelist} In this paper, we look to the historical scholarship on piecework --- a similar trend of work decomposition, distribution, and payment that was popular at the turn of the nth{20} century --- to understand how these questions might play out with modern on--demand work. We identify the mechanisms that enabled and limited piecework historically, and identify whether on--demand work faces the same pitfalls or might differentiate itself. This approach introduces theoretical grounding that can help address some of the most persistent questions in crowd work, and suggests design interventions that learn from history rather than repeat it.",
"title": ""
},
{
"docid": "d4878e0d2aaf33bb5d9fc9c64605c4d2",
"text": "Labeled Faces in the Wild (LFW) database has been widely utilized as the benchmark of unconstrained face verification and due to big data driven machine learning methods, the performance on the database approaches nearly 100%. However, we argue that this accuracy may be too optimistic because of some limiting factors. Besides different poses, illuminations, occlusions and expressions, crossage face is another challenge in face recognition. Different ages of the same person result in large intra-class variations and aging process is unavoidable in real world face verification. However, LFW does not pay much attention on it. Thereby we construct a Cross-Age LFW (CALFW) which deliberately searches and selects 3,000 positive face pairs with age gaps to add aging process intra-class variance. Negative pairs with same gender and race are also selected to reduce the influence of attribute difference between positive/negative pairs and achieve face verification instead of attributes classification. We evaluate several metric learning and deep learning methods on the new database. Compared to the accuracy on LFW, the accuracy drops about 10%-17% on CALFW.",
"title": ""
},
{
"docid": "fe13ddb78243e3bbb03917be0752872e",
"text": "One of the powerful applications of Booiean expression is to allow users to extract relevant information from a database. Unfortunately, previous research has shown that users have difficulty specifying Boolean queries. In an attempt to overcome this limitation, a graphical Filter/Flow representation of Boolean queries was designed to provide users with an interface that visually conveys the meaning of the Booiean operators (AND, OR, and NOT). This was accomplished by impiementing a graphical interface prototype that uses the metaphor of water flowing through filters. Twenty subjects having no experience with Boolean logic participated in an experiment comparing the Booiean operations represented in the Filter/Flow interface with a text-oniy SQL interface. The subjects independently performed five comprehension tasks and five composition tasks in each of the interfaces. A significant difference (p < 0.05) in the total number of correct queries in each of the comprehension and composition tasks was found favoring Filter/Flow.",
"title": ""
},
{
"docid": "28077980daa51a0c423e1e6298c6b417",
"text": "We introduce a method which enables a recurrent dynamics model to be temporally abstract. Our approach, which we call Adaptive Skip Intervals (ASI), is based on the observation that in many sequential prediction tasks, the exact time at which events occur is irrelevant to the underlying objective. Moreover, in many situations, there exist prediction intervals which result in particularly easy-to-predict transitions. We show that there are prediction tasks for which we gain both computational efficiency and prediction accuracy by allowing the model to make predictions at a sampling rate which it can choose itself.",
"title": ""
},
{
"docid": "819f6b62eb3f8f9d60437af28c657935",
"text": "The global electrical energy consumption is rising and there is a steady increase of the demand on the power capacity, efficient production, distribution and utilization of energy. The traditional power systems are changing globally, a large number of dispersed generation (DG) units, including both renewable and nonrenewable energy sources such as wind turbines, photovoltaic (PV) generators, fuel cells, small hydro, wave generators, and gas/steam powered combined heat and power stations, are being integrated into power systems at the distribution level. Power electronics, the technology of efficiently processing electric power, play an essential part in the integration of the dispersed generation units for good efficiency and high performance of the power systems. This paper reviews the applications of power electronics in the integration of DG units, in particular, wind power, fuel cells and PV generators.",
"title": ""
},
{
"docid": "04b4a505086dbe65cea57bc9f1576e2d",
"text": "Social media sites such as Twitter and Facebook have emerged as popular tools for people to express their opinions on various topics. The large amount of data provided by these media is extremely valuable for mining trending topics and events. In this paper, we build an efficient, scalable system to detect events from tweets (ET). Our approach detects events by exploring their textual and temporal components. ET does not require any target entity or domain knowledge to be specified; it automatically detects events from a set of tweets. The key components of ET are (1) an extraction scheme for event representative keywords, (2) an efficient storage mechanism to store their appearance patterns, and (3) a hierarchical clustering technique based on the common co-occurring features of keywords. The events are determined through the hierarchical clustering process. We evaluate our system on two data-sets; one is provided by VAST challenge 2011, and the other published by US based users in January 2013. Our results show that we are able to detect events of relevance efficiently.",
"title": ""
},
{
"docid": "85d31f3940ee258589615661e596211d",
"text": "Bulk Synchronous Parallelism (BSP) provides a good model for parallel processing of many large-scale graph applications, however it is unsuitable/inefficient for graph applications that require coordination, such as graph-coloring, subcoloring, and clustering. To address this problem, we present an efficient modification to the BSP model to implement serializability (sequential consistency) without reducing the highlyparallel nature of BSP. Our modification bypasses the message queues in BSP and reads directly from the worker’s memory for the internal vertex executions. To ensure serializability, coordination is performed— implemented via dining philosophers or token ring— only for border vertices partitioned across workers. We implement our modifications to BSP on Giraph, an open-source clone of Google’s Pregel. We show through a graph-coloring application that our modified framework, Giraphx, provides much better performance than implementing the application using dining-philosophers over Giraph. In fact, Giraphx outperforms Giraph even for embarrassingly parallel applications that do not require coordination, e.g., PageRank.",
"title": ""
},
{
"docid": "dd62fd669d40571cc11d64789314dba1",
"text": "It took the author 30 years to develop the Viable System Model, which sets out to explain how systems are viable – that is, capable of independent existence. He wanted to elucidate the laws of viability in order to facilitate the management task, and did so in a stream of papers and three (of his ten) books. Much misunderstanding about the VSM and its use seems to exist; especially its methodological foundations have been largely forgotten, while its major results have hardly been noted. This paper reflects on the history, nature and present status of the VSM, without seeking once again to expound the model in detail or to demonstrate its validity. It does, however, provide a synopsis, present the methodology and confront some highly contentious issues about both the managerial and scientific paradigms.",
"title": ""
},
{
"docid": "b4d4e9c36346b58c7cc28c3502a9dde8",
"text": "The aim of this paper is to propose a real-time classification algorithm for the low-amplitude electroencephalography (EEG) signals, such as those produced by remembering an unpleasant odor, to drive a brain-computer interface. The peculiarity of these EEG signals is that they require ad hoc signals preprocessing by wavelet decomposition, and the definition of a set of features able to characterize the signals and to discriminate among different conditions. The proposed method is completely parameterized, aiming at a multiclass classification and it might be considered in the framework of machine learning. It is a two stages algorithm. The first stage is offline and it is devoted to the determination of a suitable set of features and to the training of a classifier. The second stage, the real-time one, is to test the proposed method on new data. In order to avoid redundancy in the set of features, the principal components analysis is adapted to the specific EEG signal characteristics and it is applied; the classification is performed through the support vector machine. Experimental tests on ten subjects, demonstrating the good performance of the algorithm in terms of both accuracy and efficiency, are also reported and discussed.",
"title": ""
},
{
"docid": "ecda1d7fb7e05f6d7c63e38fb8f424b8",
"text": "Auto dynamic difficulty (ADD) is the technique of automatically changing the level of difficulty of a video game in real time to match player expertise. Recreating an ADD system on a game-by-game basis is both expensive and time consuming, ultimately limiting its usefulness. Thus, we leverage the benefits of software design patterns to construct an ADD framework. In this paper, we discuss a number of desirable software quality attributes that can be achieved through the usage of these design patterns, based on a case study of two video games.",
"title": ""
},
{
"docid": "b8e705c7dd974ee43b315d3146a0b149",
"text": "The use of repeated measures, where the same subjects are tested under a number of conditions, has numerous practical and statistical benefits. For one thing it reduces the error variance caused by between-group individual differences, however, this reduction of error comes at a price because repeated measures designs potentially introduce covariation between experimental conditions (this is because the same people are used in each condition and so there is likely to be some consistency in their behaviour across conditions). In between-group ANOVA we have to assume that the groups we test are independent for the test to be accurate (Scariano & Davenport, 1987, have documented some of the consequences of violating this assumption). As such, the relationship between treatments in a repeated measures design creates problems with the accuracy of the test statistic. The purpose of this article is to explain, as simply as possible, the issues that arise in analysing repeated measures data with ANOVA: specifically, what is sphericity and why is it important? What is Sphericity?",
"title": ""
},
{
"docid": "2119a6fcc721124690d6cc2fe6552724",
"text": "A development of humanoid robot HRP-2 is presented in this paper. HRP-2 is a humanoid robotics platform, which we developed in phase two of HRP. HRP was a humanoid robotics project, which had run by the Ministry of Economy, Trade and Industry (METI) of Japan from 1998FY to 2002FY for five years. The ability of the biped locomotion of HRP-2 is improved so that HRP-2 can cope with uneven surface, can walk at two third level of human speed, and can walk on a narrow path. The ability of whole body motion of HRP-2 is also improved so that HRP-2 can get up by a humanoid robot's own self if HRP-2 tips over safely. In this paper, the appearance design, the mechanisms, the electrical systems, specifications, and features upgraded from its prototype are also introduced.",
"title": ""
},
{
"docid": "236d3cb8566d4ae72add4a4b8b1f1fcc",
"text": "SAP HANA is a pioneering, and one of the best performing, data platform designed from the grounds up to heavily exploit modern hardware capabilities, including SIMD, and large memory and CPU footprints. As a comprehensive data management solution, SAP HANA supports the complete data life cycle encompassing modeling, provisioning, and consumption. This extended abstract outlines the vision and planned next step of the SAP HANA evolution growing from a core data platform into an innovative enterprise application platform as the foundation for current as well as novel business applications in both on-premise and on-demand scenarios. We argue that only a holistic system design rigorously applying co-design at di↵erent levels may yield a highly optimized and sustainable platform for modern enterprise applications. 1. THE BEGINNING: SAP HANA DATA PLATFORM A comprehensive data management solution has become one of the most critical assets in large enterprises. Modern data management solutions must cover a wide spectrum of additional data structures ranging from simple keyvalues models to complex graph structured data sets and document-centric data stores. Complex query and manipulation patterns are issued against the database reflecting the algorithmic side of complex enterprise applications. Additionally, data consumption activities with analytical query patterns are no longer reserved for decision makers or specialized data scientists but are increasingly becoming an integral part of complex operational business processes requiring support for analytical as well as transactional workloads managed within the same system [4]. Dealing with these challenges [5] demanded a complete re-thinking of traditional database architectures and data management approaches now made possible by advances in hardware architectures. The development of SAP HANA accepted this challenge head on and started a new generation Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Articles from this volume were invited to present their results at The 39th International Conference on Very Large Data Bases, August 26th 30th 2013, Riva del Garda, Trento, Italy. Proceedings of the VLDB Endowment, Vol. 6, No. 11 Copyright 2013 VLDB Endowment 2150-8097/13/09... $ 10.00. Figure 1: The SAP HANA platform of database system design. The SAP HANA database server now comprises a centrally, and tightly, orchestrated collection of di↵erent processing capabilities, e.g., an in-memory columnar relational store, a graph engine, native support for text processing, comprehensive spatial support, etc., all running within a single system environment and, therefore, within a single transactional sphere of control without the need for data replication and synchronization [2]. Secondly, and most importantly, SAP HANA has triggered a major shift in the database industry from the classical disk-centric database system design to a ground breaking main-memory centric system design [3]. 
The mainstream availability of very large main memory and CPU core footprints within single compute nodes, combined with SIMD architectures and sophisticated cluster systems based on high speed interconnects, was and remains, the central design guideline of the SAP HANA database server. SAP HANA was the first commercial system to systematically reflect, and exploit, the shift in memory hierarchies and CPU architectures in order to optimize data structures and access paths. As a result, SAP HANA has yielded orders of magnitude performance gains thereby opening up completely novel application opportunities. Most of the core design advances behind SAP HANA are now finding their way into mainstream database system research and development, thereby reflecting its pioneering role. As a foundational tenet, we see rigorous application of Hardware/Database co-design principles as the main success factor to systematically exploit the underlying hardware platform: Literally every core SAP HANA data structure and routine has been systematically inspected, redesigned",
"title": ""
},
{
"docid": "89a73876c24508d92050f2055292d641",
"text": "We study the fundamental problem of computing distances between nodes in large graphs such as the web graph and social networks. Our objective is to be able to answer distance queries between pairs of nodes in real time. Since the standard shortest path algorithms are expensive, our approach moves the time-consuming shortest-path computation offline, and at query time only looks up precomputed values and performs simple and fast computations on these precomputed values. More specifically, during the offline phase we compute and store a small \"sketch\" for each node in the graph, and at query-time we look up the sketches of the source and destination nodes and perform a simple computation using these two sketches to estimate the distance.",
"title": ""
},
{
"docid": "44816a4274b275be9cd7ab6a4e14a966",
"text": "t-distributed Stochastic Neighborhood Embedding (t-SNE), a clustering and visualization method proposed by van der Maaten&Hinton in 2008, has rapidly become a standard tool in a number of natural sciences. Despite its overwhelming success, there is a distinct lack of mathematical foundations and the inner workings of the algorithm are not well understood. The purpose of this paper is to prove that t-SNE is able to recover well-separated clusters; more precisely, we prove that t-SNE in the `early exaggeration' phase, an optimization technique proposed by van der Maaten&Hinton (2008) and van der Maaten (2014), can be rigorously analyzed. As a byproduct, the proof suggests novel ways for setting the exaggeration parameter $\\alpha$ and step size $h$. Numerical examples illustrate the effectiveness of these rules: in particular, the quality of embedding of topological structures (e.g. the swiss roll) improves. We also discuss a connection to spectral clustering methods.",
"title": ""
},
{
"docid": "986279f6f47189a6d069c0336fa4ba94",
"text": "Compared to the traditional single-phase-shift control, dual-phase-shift (DPS) control can greatly improve the performance of the isolated bidirectional dual-active-bridge dc-dc converter (IBDC). This letter points out some wrong knowledge about transmission power of IBDC under DPS control in the earlier studies. On this basis, this letter gives the detailed theoretical and experimental analyses of the transmission power of IBDC under DPS control. And the experimental results showed agreement with theoretical analysis.",
"title": ""
},
{
"docid": "147f7f8f80fbf898fb7f0ead044fa5ca",
"text": "Mirjalili in 2015, proposed a new nature-inspired meta-heuristic Moth Flame Optimization (MFO). It is inspired by the characteristics of a moth in the dark night to either fly straight towards the moon or fly in a spiral path to arrive at a nearby artificial light source. It aims to reach a brighter destination which is treated as a global solution for an optimization problem. In this paper, the original MFO is suitably modified to handle multi-objective optimization problems termed as MOMFO. Typically concepts like the introduction of archive grid, coordinate based distance for sorting, non-dominance of solutions make the proposed approach different from the original single objective MFO. The performance of proposed MOMFO is demonstrated on six benchmark mathematical function optimization problems regarding superior accuracy and lower computational time achieved compared to Non-dominated sorting genetic algorithm-II (NSGA-II) and Multi-objective particle swarm optimization (MOPSO).",
"title": ""
},
{
"docid": "4e41e762756c32edfb73ce144bf7ba49",
"text": "In this paper, we outline a model of semantics that integrates aspects of discourse-sensitive logics with the compositional mechanisms available from lexically-driven semantic interpretation. Specifically, we concentrate on developing a composition logic required to properly model complex types within the Generative Lexicon (henceforth GL), for which we employ SDRT principles. As we are presently interested in the composition of information to construct logical forms, we will build on one standard way of arriving at such representations, the lambda calculus, in which functional types are exploited. We outline a new type calculus that captures one of the fundamental ideas of GL: providing a set of techniques governing type shifting possibilities for various lexical items so as to allow for the combination of lexical items in cases where there is an apparent type mismatch. These techniques themselves should follow from the structure of the lexicon and its underlying logic.",
"title": ""
}
] |
scidocsrr
|
3e9c99fe6228fbde2262d824d16e5a26
|
COMPACT PLANAR MICROSTRIP CROSSOVER FOR BEAMFORMING NETWORKS
|
[
{
"docid": "75961ecd0eadf854ad9f7d0d76f7e9c8",
"text": "This paper presents the design of a microstrip-CPW transition where the CPW line propagates close to slotline mode. This design allows the solution to be determined entirely though analytical techniques. In addition, a planar via-less microwave crossover using this technique is proposed. The experimental results at 5 GHz show that the crossover has a minimum isolation of 32 dB. It also has low in-band insertion loss and return loss of 1.2 dB and 18 dB respectively over more than 44 % of bandwidth.",
"title": ""
},
{
"docid": "cc124a93db48348e37aacac87081e3d4",
"text": "The design of an ultra-wideband crossover for use in printed microwave circuits is presented. It employs a pair of broadside-coupled microstrip-to-coplanar waveguide (CPW) transitions, and a pair of uniplanar microstrip-to-CPW transitions. A lumped-element equivalent circuit is used to explain the operation of the proposed crossover. Its performance is evaluated via full-wave electromagnetic simulations and measurements. The designed device is constructed on a single substrate, and thus, it is fully compatible with microstrip-based microwave circuits. The crossover is shown to operate across the frequency band from 3.1 to 11 GHz with more than 15 dB of isolation, less than 1 dB of insertion loss, and less than 0.1 ns of deviation in the group delay.",
"title": ""
},
{
"docid": "8cd26fcf72f0c20f50bd076560e72be3",
"text": "The design of a wideband crossover that includes a pair of two-port and another pair of four-port microstrip-slotline transitions is presented. The utilized transitions are designed such that the resultant planar crossover has high isolation and return loss, and low insertion loss and deviation in the group delay across a wideband. The simulated and measured results of a developed 9 mm × 13 mm crossover on a substrate of 10.2 dielectric constant show less than 0.5 dB insertion loss, more than 15 dB return loss, more than 15 dB isolation and less than 0.1 ns deviation in the group delay across the band from 4.8 to 7.2 GHz (40% fractional bandwidth).",
"title": ""
}
] |
[
{
"docid": "d326624ef696bb5b2595c2b8d1d8c8a2",
"text": "This paper examines the architecture and efficacy of Quash, an automated medical bill processing system capable of bill routing and abuse detection. Quash is designed to be used in conjunction with human auditors and a standard bill review software platform to provide a complete cost containment solution for medical claims. The primary contribution of Quash is to provide a real world speed up for medical fraud detection experts in their work. There will be a discussion of implementation details and preliminary experimental results. In this paper we are entirely focused on medical data and billing patterns that occur within the United States, though these results should be applicable to any financial transaction environment in which structured coding data can be mined.",
"title": ""
},
{
"docid": "6ae739344034410a570b12a57db426e3",
"text": "In recent times we tend to use a number of surveillance systems for monitoring the targeted area. This requires an enormous amount of storage space along with a lot of human power in order to implement and monitor the area under surveillance. This is supposed to be costly and not a reliable process. In this paper we propose an intelligent surveillance system that continuously monitors the targeted area and detects motion in each and every frame. If the system detects motion in the targeted area then a notification is automatically sent to the user by sms and the video starts getting recorded till the motion is stopped. Using this method the required memory space for storing the video is reduced since it doesn't store the entire video but stores the video only when a motion is detected. This is achieved by using real time video processing using open CV (computer vision / machine vision) technology and raspberry pi system.",
"title": ""
},
{
"docid": "33fc411381fb7864bf11c3ae3ebc592a",
"text": "This paper provides a functional analysis perspective of information-theoretic learning (ITL) by defining bottom-up a reproducing kernel Hilbert space (RKHS) uniquely determined by the symmetric nonnegative definite kernel function known as the cross-information potential (CIP). The CIP as an integral of the product of two probability density functions characterizes similarity between two stochastic functions. We prove the existence of a one-to-one congruence mapping between the ITL RKHS and the Hilbert space spanned by square integrable probability density functions. Therefore, all the statistical descriptors in the original information-theoretic learning formulation can be rewritten as algebraic computations on deterministic functional vectors in the ITL RKHS, instead of limiting the functional view to the estimators as is commonly done in kernel methods. A connection between the ITL RKHS and kernel approaches interested in quantifying the statistics of the projected data is also established.",
"title": ""
},
{
"docid": "1186bb5c96eebc26ce781d45fae7768d",
"text": "Essential genes are required for the viability of an organism. Accurate and rapid identification of new essential genes is of substantial theoretical interest to synthetic biology and has practical applications in biomedicine. Fractals provide facilitated access to genetic structure analysis on a different scale. In this study, machine learning-based methods using solely fractal features are presented and the problem of predicting essential genes in bacterial genomes is evaluated. Six fractal features were investigated to learn the parameters of five supervised classification methods for the binary classification task. The optimal parameters of these classifiers are determined via grid-based searching technique. All the currently available identified genes from the database of essential genes were utilized to build the classifiers. The fractal features were proven to be more robust and powerful in the prediction performance. In a statistical sense, the ELM method shows superiority in predicting the essential genes. Non-parameter tests of the average AUC and ACC showed that the fractal feature is much better than other five compared features sets. Our approach is promising and convenient to identify new bacterial essential genes.",
"title": ""
},
{
"docid": "9f635d570b827d68e057afcaadca791c",
"text": "Researches have verified that clothing provides information about the identity of the individual. To extract features from the clothing, the clothing region first must be localized or segmented in the image. At the same time, given multiple images of the same person wearing the same clothing, we expect to improve the effectiveness of clothing segmentation. Therefore, the identity recognition and clothing segmentation problems are inter-twined; a good solution for one aides in the solution for the other. We build on this idea by analyzing the mutual information between pixel locations near the face and the identity of the person to learn a global clothing mask. We segment the clothing region in each image using graph cuts based on a clothing model learned from one or multiple images believed to be the same person wearing the same clothing. We use facial features and clothing features to recognize individuals in other images. The results show that clothing segmentation provides a significant improvement in recognition accuracy for large image collections, and useful clothing masks are simultaneously produced. A further significant contribution is that we introduce a publicly available consumer image collection where each individual is identified. We hope this dataset allows the vision community to more easily compare results for tasks related to recognizing people in consumer image collections.",
"title": ""
},
{
"docid": "59d1d3073d2f56b35c6c54bc034d3f1a",
"text": "Nowadays, many new social networks offering specific services spring up overnight. In this paper, we want to detect communities for emerging networks. Community detection for emerging networks is very challenging as information in emerging networks is usually too sparse for traditional methods to calculate effective closeness scores among users and achieve good community detection results. Meanwhile, users nowadays usually join multiple social networks simultaneously, some of which are developed and can share common information with the emerging networks. Based on both link and attribution information across multiple networks, a new general closeness measure, intimacy, is introduced in this paper. With both micro and macro controls, an effective and efficient method, CAD (Cold stArt community Detector), is proposed to propagate information from developed network to calculate effective intimacy scores among users in emerging networks. Extensive experiments conducted on real-world social networks demonstrate that CAD can perform very well in addressing the emerging network community detection problem.",
"title": ""
},
{
"docid": "3c3ae987e018322ca45b280c3d01eba8",
"text": "Boundary prediction in images as well as video has been a very active topic of research and organizing visual information into boundaries and segments is believed to be a corner stone of visual perception. While prior work has focused on predicting boundaries for observed frames, our work aims at predicting boundaries of future unobserved frames. This requires our model to learn about the fate of boundaries and extrapolate motion patterns. We experiment on established realworld video segmentation dataset, which provides a testbed for this new task. We show for the first time spatio-temporal boundary extrapolation in this challenging scenario. Furthermore, we show long-term prediction of boundaries in situations where the motion is governed by the laws of physics. We successfully predict boundaries in a billiard scenario without any assumptions of a strong parametric model or any object notion. We argue that our model has with minimalistic model assumptions derived a notion of “intuitive physics” that can be applied to novel scenes.",
"title": ""
},
{
"docid": "9d700ef057eb090336d761ebe7f6acb0",
"text": "This article presents initial results on a supervised machine learning approach to determine the semantics of noun compounds in Dutch and Afrikaans. After a discussion of previous research on the topic, we present our annotation methods used to provide a training set of compounds with the appropriate semantic class. The support vector machine method used for this classification experiment utilizes a distributional lexical semantics representation of the compound’s constituents to make its classification decision. The collection of words that occur in the near context of the constituent are considered an implicit representation of the semantics of this constituent. Fscores were reached of 47.8% for Dutch and 51.1% for Afrikaans. Keywords—compound semantics; Afrikaans; Dutch; machine learning; distributional methods",
"title": ""
},
{
"docid": "0d5ba680571a9051e70ababf0c685546",
"text": "• Current deep RL techniques require large amounts of data to find a good policy • Once found, the policy remains a black box to practitioners • Practitioners cannot verify that the policy is making decisions based on reasonable information • MOREL (Motion-Oriented REinforcement Learning) automatically detects moving objects and uses the relevant information for action selection • We gather a dataset using a uniform random policy • Train a network without supervision to capture a structured representation of motion between frames • Network predicts object masks, object motion, and camera motion to warp one frame into the next Introduction Learning to Segment Moving Objects Experiments Visualization",
"title": ""
},
{
"docid": "e129a6d980ede2c637b4d2151725bf27",
"text": "A novel framework for automatic object segmentation is proposed that exploits depth information estimated from a single image as an additional cue. For example, suppose that we have an image containing an object and a background with a similar color or texture to the object. The proposed framework enables us to automatically extract the object from the image while eliminating the misleading background. Although our segmentation framework takes a form of a traditional formulation based on Markov random fields, the proposed method provides a novel scheme to integrate depth and color information, which derives objectness/backgroundness likelihood. We also employ depth estimation via supervised learning so that the proposed method can work even if it has only a single input image with no actual depth information. Experimental results with a dataset originally collected for the evaluation demonstrate the effectiveness of the proposed method against the baseline method and several existing methods for salient region detection.",
"title": ""
},
{
"docid": "0daa43669ae68a81e5eb71db900976c6",
"text": "Fertilizer plays an important role in maintaining soil fertility, increasing yields and improving harvest quality. However, a significant portion of fertilizers are lost, increasing agricultural cost, wasting energy and polluting the environment, which are challenges for the sustainability of modern agriculture. To meet the demands of improving yields without compromising the environment, environmentally friendly fertilizers (EFFs) have been developed. EFFs are fertilizers that can reduce environmental pollution from nutrient loss by retarding, or even controlling, the release of nutrients into soil. Most of EFFs are employed in the form of coated fertilizers. The application of degradable natural materials as a coating when amending soils is the focus of EFF research. Here, we review recent studies on materials used in EFFs and their effects on the environment. The major findings covered in this review are as follows: 1) EFF coatings can prevent urea exposure in water and soil by serving as a physical barrier, thereby reducing the urea hydrolysis rate and decreasing nitrogen oxide (NOx) and dinitrogen (N2) emissions, 2) EFFs can increase the soil organic matter content, 3) hydrogel/superabsorbent coated EFFs can buffer soil acidity or alkalinity and lead to an optimal pH for plants, and 4) hydrogel/superabsorbent coated EFFs can improve water-retention and water-holding capacity of soil. In conclusion, EFFs play an important role in enhancing nutrients efficiency and reducing environmental pollution.",
"title": ""
},
{
"docid": "03fbd0a9bca89e3967db29a0a03a01ba",
"text": "Many factors are believed to increase the vulnerability of software system; for example, the more widely deployed or popular is a software system the more likely it is to be attacked. Early identification of defects has been a widely investigated topic in software engineering research. Early identification of software vulnerabilities can help mitigate these attacks to a large degree by focusing better security verification efforts in these components. Predicting vulnerabilities is complicated by the fact that vulnerabilities are, most often, few in number and introduce significant bias by creating a sparse dataset in the population. As a result, vulnerability prediction can be thought of us preverbally “searching for a needle in a haystack.” In this paper, we present a large-scale empirical study on Windows Vista, where we empirically evaluate the efficacy of classical metrics like complexity, churn, coverage, dependency measures, and organizational structure of the company to predict vulnerabilities and assess how well these software measures correlate with vulnerabilities. We observed in our experiments that classical software measures predict vulnerabilities with a high precision but low recall values. The actual dependencies, however, predict vulnerabilities with a lower precision but substantially higher recall.",
"title": ""
},
{
"docid": "0ec7969da568af2e743d969f9805063d",
"text": "In this letter, a notched-band Vivaldi antenna with high-frequency selectivity is designed and investigated. To obtain two notched poles inside the stopband, an open-circuited half-wavelength resonator and a short-circuited stepped impedance resonator are properly introduced into the traditional Vivaldi antenna. By theoretically calculating the resonant frequencies of the two loaded resonators, the frequency locations of the two notched poles can be precisely determined, thus achieving a wideband antenna with a desired notched band. To validate the feasibility of this new approach, a notched band antenna with a fractional bandwidth of 145.8% is fabricated and tested. Results indicate that good frequency selectivity of the notched band from 4.9 to 6.6 GHz is realized, and the antenna exhibits good impedance match, high radiation gain, and excellent radiation directivity in the passband. Both the simulation and measurement results are provided with good agreement.",
"title": ""
},
{
"docid": "fc3b087bd2c0bd4e12f3cb86f6346c96",
"text": "This study investigated whether changes in the technological/social environment in the United States over time have resulted in concomitant changes in the multitasking skills of younger generations. One thousand, three hundred and nineteen Americans from three generations were queried to determine their at-home multitasking behaviors. An anonymous online questionnaire asked respondents to indicate which everyday and technology-based tasks they choose to combine for multitasking and to indicate how difficult it is to multitask when combining the tasks. Combining tasks occurred frequently, especially while listening to music or eating. Members of the ‘‘Net Generation” reported more multitasking than members of ‘‘Generation X,” who reported more multitasking than members of the ‘‘Baby Boomer” generation. The choices of which tasks to combine for multitasking were highly correlated across generations, as were difficulty ratings of specific multitasking combinations. The results are consistent with a greater amount of general multitasking resources in younger generations, but similar mental limitations in the types of tasks that can be multitasked. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f927b88e140c710f77f45d3f5e35904f",
"text": "Prosthetic components and control interfaces for upper limb amputees have barely changed in the past 40 years. Many transradial prostheses have been developed in the past, nonetheless most of them would be inappropriate if/when a large bandwidth human-machine interface for control and perception would be available, due to either their limited (or inexistent) sensorization or limited dexterity. SmartHand tackles this issue as is meant to be clinically experimented in amputees employing different neuro-interfaces, in order to investigate their effectiveness. This paper presents the design and on bench evaluation of the SmartHand. SmartHand design was bio-inspired in terms of its physical appearance, kinematics, sensorization, and its multilevel control system. Underactuated fingers and differential mechanisms were designed and exploited in order to fit all mechatronic components in the size and weight of a natural human hand. Its sensory system was designed with the aim of delivering significant afferent information to the user through adequate interfaces. SmartHand is a five fingered self-contained robotic hand, with 16 degrees of freedom, actuated by 4 motors. It integrates a bio-inspired sensory system composed of 40 proprioceptive and exteroceptive sensors and a customized embedded controller both employed for implementing automatic grasp control and for potentially delivering sensory feedback to the amputee. It is able to perform everyday grasps, count and independently point the index. The weight (530 g) and speed (closing time: 1.5 seconds) are comparable to actual commercial prostheses. It is able to lift a 10 kg suitcase; slippage tests showed that within particular friction and geometric conditions the hand is able to stably grasp up to 3.6 kg cylindrical objects. Due to its unique embedded features and human-size, the SmartHand holds the promise to be experimentally fitted on transradial amputees and employed as a bi-directional instrument for investigating -during realistic experiments- different interfaces, control and feedback strategies in neuro-engineering studies.",
"title": ""
},
{
"docid": "2f7a63571f8d695d402a546a457470c4",
"text": "Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of Deep learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pretraining: first search for a good generative model for the input samples, and repeat the process one layer at a time. We show deeper implications of this simple principle, by establishing a connection with the interplay of orbits and stabilizers of group actions. Although the neural networks themselves may not form groups, we show the existence of shadow groups whose elements serve as close approximations. Over the shadow groups, the pretraining step, originally introduced as a mechanism to better initialize a network, becomes equivalent to a search for features with minimal orbits. Intuitively, these features are in a way the simplest. Which explains why a deep learning network learns simple features first. Next, we show how the same principle, when repeated in the deeper layers, can capture higher order representations, and why representation complexity increases as the layers get deeper.",
"title": ""
},
{
"docid": "44f7dede57a762365e477e160da733e9",
"text": "Publishing reproducible analyses is a long-standing and widespread challenge [1] for the scientific community, funding bodies and publishers [2, 3, 4]. Although a definitive solution is still elusive [5], the problem is recognized to affect all disciplines [6, 7, 8] and lead to a critical system inefficiency [9]. Here, we propose a blockchain-based approach to enhance scientific reproducibility, with a focus on life science studies and precision medicine. While the interest of encoding permanently into an immutable ledger all the study key information–including endpoints, data and metadata, protocols, analytical methods and all findings–has been already highlighted, here we apply the blockchain approach to solve the issue of rewarding time and expertise of scientists that commit to verify reproducibility. Our mechanism builds a trustless ecosystem of researchers, funding bodies and publishers cooperating to guarantee digital and permanent access to information and reproducible results. As a natural byproduct, a procedure to quantify scientists’ and institutions’ reputation for ranking purposes is obtained.",
"title": ""
},
{
"docid": "bc269e27e99f8532c7bd41b9ad45ac9a",
"text": "There are millions of users who tag multimedia content, generating a large vocabulary of tags. Some tags are frequent, while other tags are rarely used following a long tail distribution. For frequent tags, most of the multimedia methods that aim to automatically understand audio-visual content, give excellent results. It is not clear, however, how these methods will perform on rare tags. In this paper we investigate what social tags constitute the long tail and how they perform on two multimedia retrieval scenarios, tag relevance and detector learning. We show common valuable tags within the long tail, and by augmenting them with semantic knowledge, the performance of tag relevance and detector learning improves substantially.",
"title": ""
},
{
"docid": "1c2043ac65c6d8a47bffb7dcbab42c54",
"text": "In the past three years, Emotion Recognition in the Wild (EmotiW) Grand Challenge has drawn more and more attention due to its huge potential applications. In the fourth challenge, aimed at the task of video based emotion recognition, we propose a multi-clue emotion fusion (MCEF) framework by modeling human emotion from three mutually complementary sources, facial appearance texture, facial action, and audio. To extract high-level emotion features from sequential face images, we employ a CNN-RNN architecture, where face image from each frame is first fed into the fine-tuned VGG-Face network to extract face feature, and then the features of all frames are sequentially traversed in a bidirectional RNN so as to capture dynamic changes of facial textures. To attain more accurate facial actions, a facial landmark trajectory model is proposed to explicitly learn emotion variations of facial components. Further, audio signals are also modeled in a CNN framework by extracting low-level energy features from segmented audio clips and then stacking them as an image-like map. Finally, we fuse the results generated from three clues to boost the performance of emotion recognition. Our proposed MCEF achieves an overall accuracy of 56.66% with a large improvement of 16.19% with respect to the baseline.",
"title": ""
},
{
"docid": "4561fbad61cb72cd7e631fd2f72de762",
"text": "Graphene has been hailed as a wonderful material in electronics, and recently, it is the rising star in photonics, as well. The wonderful optical properties of graphene afford multiple functions of signal emitting, transmitting, modulating, and detection to be realized in one material. In this paper, the latest progress in graphene photonics, plasmonics, and broadband optoelectronic devices is reviewed. Particular emphasis is placed on the ability to integrate graphene photonics onto the silicon platform to afford broadband operation in light routing and amplification, which involves components like polarizer, modulator, and photodetector. Other functions like saturable absorber and optical limiter are also reviewed.",
"title": ""
}
] |
scidocsrr
|
fe5597a76544a776519a5fbf9efe7ebf
|
Automatic identification of cited text spans: a multi-classifier approach over imbalanced dataset
|
[
{
"docid": "d38e5fa4adadc3e979c5de812599c78a",
"text": "The convergence properties of a nearest neighbor rule that uses an editing procedure to reduce the number of preclassified samples and to improve the performance of the rule are developed. Editing of the preclassified samples using the three-nearest neighbor rule followed by classification using the single-nearest neighbor rule with the remaining preclassified samples appears to produce a decision procedure whose risk approaches the Bayes' risk quite closely in many problems with only a few preclassified samples. The asymptotic risk of the nearest neighbor rules and the nearest neighbor rules using edited preclassified samples is calculated for several problems.",
"title": ""
},
{
"docid": "01055f9b1195cd7d03b404f3d530bb55",
"text": "In recent years there has been an increasing interest in approaches to scientific summarization that take advantage of the citations a research paper has received in order to extract its main contributions. In this context, the CL-SciSumm 2017 Shared Task has been proposed to address citation-based information extraction and summarization. In this paper we present several systems to address three of the CL-SciSumm tasks. Notably, unsupervised systems to match citing and cited sentences (Task 1A), a supervised approach to identify the type of information being cited (Task 1B), and a supervised citation-based summarizer (Task 2).",
"title": ""
},
{
"docid": "a13a50d552572d08b4d1496ca87ac160",
"text": "In recent years, mining with imbalanced data sets receives more and more attentions in both theoretical and practical aspects. This paper introduces the importance of imbalanced data sets and their broad application domains in data mining, and then summarizes the evaluation metrics and the existing methods to evaluate and solve the imbalance problem. Synthetic minority oversampling technique (SMOTE) is one of the over-sampling methods addressing this problem. Based on SMOTE method, this paper presents two new minority over-sampling methods, borderline-SMOTE1 and borderline-SMOTE2, in which only the minority examples near the borderline are over-sampled. For the minority class, experiments show that our approaches achieve better TP rate and F-value than SMOTE and random over-sampling methods.",
"title": ""
}
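To make the over-sampling idea in the passage above concrete, here is a minimal NumPy sketch of the basic SMOTE interpolation step: synthetic points are placed between a minority sample and one of its k nearest minority-class neighbors. The borderline variants additionally restrict this to minority samples near the class boundary; that selection step is omitted here, and the code is an illustration rather than the authors' implementation.

import numpy as np

def smote(X_min, n_synthetic, k=5, seed=0):
    """Generate synthetic minority samples by interpolating towards k-NN neighbors."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    neighbors = np.argsort(d, axis=1)[:, :k]    # k nearest minority neighbors of each sample

    synthetic = np.empty((n_synthetic, X_min.shape[1]))
    for s in range(n_synthetic):
        i = rng.integers(n)                     # pick a minority sample
        j = neighbors[i, rng.integers(k)]       # pick one of its minority neighbors
        gap = rng.random()                      # interpolation factor in [0, 1)
        synthetic[s] = X_min[i] + gap * (X_min[j] - X_min[i])
    return synthetic

# toy usage: 20 minority points in 2-D, create 40 synthetic ones
X_min = np.random.default_rng(2).normal(size=(20, 2))
print(smote(X_min, 40).shape)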
] |
[
{
"docid": "ffc239273a5e911dcc59559ef7c2c7f8",
"text": "Human-dominated marine ecosystems are experiencing accelerating loss of populations and species, with largely unknown consequences. We analyzed local experiments, long-term regional time series, and global fisheries data to test how biodiversity loss affects marine ecosystem services across temporal and spatial scales. Overall, rates of resource collapse increased and recovery potential, stability, and water quality decreased exponentially with declining diversity. Restoration of biodiversity, in contrast, increased productivity fourfold and decreased variability by 21%, on average. We conclude that marine biodiversity loss is increasingly impairing the ocean's capacity to provide food, maintain water quality, and recover from perturbations. Yet available data suggest that at this point, these trends are still reversible.",
"title": ""
},
{
"docid": "8cc9ab356aa8b0f88d244b2077816ddc",
"text": "Brain control of prehension is thought to rely on two specific brain circuits: a dorsomedial one (involving the areas of the superior parietal lobule and the dorsal premotor cortex) involved in the transport of the hand toward the object and a dorsolateral one (involving the inferior parietal lobule and the ventral premotor cortex) dealing with the preshaping of the hand according to the features of the object. The present study aimed at testing whether a pivotal component of the dorsomedial pathway (area V6A) is involved also in hand preshaping and grip formation to grasp objects of different shapes. Two macaque monkeys were trained to reach and grasp different objects. For each object, animals used a different grip: whole-hand prehension, finger prehension, hook grip, primitive precision grip, and advanced precision grip. Almost half of 235 neurons recorded from V6A displayed selectivity for a grip or a group of grips. Several experimental controls were used to ensure that neural modulation was attributable to grip only. These findings, in concert with previous studies demonstrating that V6A neurons are modulated by reach direction and wrist orientation, that lesion of V6A evokes reaching and grasping deficits, and that dorsal premotor cortex contains both reaching and grasping neurons, indicate that the dorsomedial parieto-frontal circuit may play a central role in all phases of reach-to-grasp action. Our data suggest new directions for the modeling of prehension movements and testable predictions for new brain imaging and neuropsychological experiments.",
"title": ""
},
{
"docid": "2be085910cbfd243ba85eba0a6521779",
"text": "BACKGROUND\nSuspension sutures are commonly used in numerous cosmetic surgical procedures. Several authors have described the use of such sutures as a part of classical rhinoplasty. On the other hand, it is not uncommon to see patients seeking nasal surgery for only a minimal hump deformity combined with an underrotated, underprojecting tip, which does not necessarily require all components of rhinoplasty. With the benefit of the suture suspension technique described here, such simple tip deformities can be reshaped percutaneously via minimal incisions.\n\n\nOBJECTIVE\nIn this study, the author describes an original technique based on the philosophy of vertical suspension lifts, achieving the suspension of the nasal tip with a percutaneous purse-string suture applied through small access punctures.\n\n\nPATIENTS AND METHODS\nBetween December 2005 and December 2008, 86 patients were selected to undergo rhinoplasty using the author's shuttle lifting technique. The procedure was performed with a double-sided needle or shuttle, smoothly anchoring the lower lateral cartilages in a vertical direction to the glabellar periosteum, excluding the skin envelope.\n\n\nRESULTS\nMean follow-up was 13 months, with a range of eight to 24 months. Outcomes were satisfactory in all but 12 cases, of which seven found the result inadequate; two of those patients underwent a definitive rhinoplasty operation. Five patients requested that the suture be detached because of an overexaggerated appearance. Operative time was less than 15 minutes in all patients, with an uneventful rapid recovery.\n\n\nCONCLUSIONS\nAs a minimally invasive nasal reshaping procedure, shuttle lifting is a good choice to achieve long-lasting, satisfactory results in selected patients with minimal hump deformity and an underrotated tip. The significance of this technique lies in the fact that it is one of very few office-based minimally invasive alternatives for aesthetic nasal surgery, with a recovery period of two to three days.",
"title": ""
},
{
"docid": "8f601e751650b56be81b069c42089640",
"text": "Inspired by the success of self attention mechanism and Transformer architecture in sequence transduction and image generation applications, we propose novel self attention-based architectures to improve the performance of adversarial latent codebased schemes in text generation. Adversarial latent code-based text generation has recently gained a lot of attention due to its promising results. In this paper, we take a step to fortify the architectures used in these setups, specifically AAE and ARAE. We benchmark two latent code-based methods (AAE and ARAE) designed based on adversarial setups. In our experiments, the Google sentence compression dataset is utilized to compare our method with these methods using various objective and subjective measures. The experiments demonstrate the proposed (self) attention-based models outperform the state-of-the-art in adversarial code-based text generation.",
"title": ""
},
{
"docid": "41c317b0e275592ea9009f3035d11a64",
"text": "We introduce a distribution based model to learn bilingual word embeddings from monolingual data. It is simple, effective and does not require any parallel data or any seed lexicon. We take advantage of the fact that word embeddings are usually in form of dense real-valued lowdimensional vector and therefore the distribution of them can be accurately estimated. A novel cross-lingual learning objective is proposed which directly matches the distributions of word embeddings in one language with that in the other language. During the joint learning process, we dynamically estimate the distributions of word embeddings in two languages respectively and minimize the dissimilarity between them through standard back propagation algorithm. Our learned bilingual word embeddings allow to group each word and its translations together in the shared vector space. We demonstrate the utility of the learned embeddings on the task of finding word-to-word translations from monolingual corpora. Our model achieved encouraging performance on data in both related languages and substantially different languages.",
"title": ""
},
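The passage above matches the distributions of word embeddings across languages. The paper's actual objective is not reproduced here; as a rough illustration of the idea, the sketch below scores how well a linear map aligns two embedding sets by comparing their first and second moments (means and covariances). The function name and the moment-based loss are assumptions made purely for illustration.

import numpy as np

def moment_mismatch(E_src, E_tgt, W):
    """Distribution mismatch between mapped source embeddings (E_src @ W) and target embeddings.

    Uses a simple proxy: squared difference of means plus squared Frobenius
    difference of covariances. E_src is (n1, d), E_tgt is (n2, d), W is (d, d).
    """
    M = E_src @ W
    mean_term = np.sum((M.mean(axis=0) - E_tgt.mean(axis=0)) ** 2)
    cov_term = np.sum((np.cov(M, rowvar=False) - np.cov(E_tgt, rowvar=False)) ** 2)
    return mean_term + cov_term

# toy usage: random "embeddings" for two languages, identity map as a starting point
rng = np.random.default_rng(3)
E_en, E_fr = rng.normal(size=(500, 50)), rng.normal(size=(400, 50))
print(moment_mismatch(E_en, E_fr, np.eye(50)))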
{
"docid": "9698bfe078a32244169cbe50a04ebb00",
"text": "Maximum power point tracking (MPPT) controllers play an important role in photovoltaic systems. They maximize the output power of a PV array for a given set of conditions. This paper presents an overview of the different MPPT techniques. Each technique is evaluated on its ability to detect multiple maxima, convergence speed, ease of implementation, efficiency over a wide output power range, and cost of implementation. The perturbation and observation (P & O), and incremental conductance (IC) algorithms are widely used techniques, with many variants and optimization techniques reported. For this reason, this paper evaluates the performance of these two common approaches from a dynamic and steady state perspective.",
"title": ""
},
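For readers unfamiliar with the perturb-and-observe idea discussed above, the sketch below shows its core loop: perturb the operating voltage, observe the change in power, keep the perturbation direction if power increased, otherwise reverse it. The PV-panel model used here is a made-up toy curve, and the step size and iteration count are illustrative.

def perturb_and_observe(measure_power, v_start=20.0, step=0.2, iterations=100):
    """Basic P&O MPPT loop. measure_power(v) returns the PV power at voltage v."""
    v, direction = v_start, +1.0
    p_prev = measure_power(v)
    for _ in range(iterations):
        v += direction * step              # perturb the operating voltage
        p = measure_power(v)               # observe the resulting power
        if p < p_prev:                     # power dropped: reverse the perturbation direction
            direction = -direction
        p_prev = p
    return v

# toy PV power curve with a single maximum near 26 V (purely illustrative)
toy_pv = lambda v: max(0.0, -0.5 * (v - 26.0) ** 2 + 150.0)
print(round(perturb_and_observe(toy_pv), 2))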
{
"docid": "8e9deb174bedff0a5b03e4286172cd36",
"text": "An ethnographic approach to the study of caregiver-assisted music events was employed with patients suffering from dementia or suspected dementia. The aim of this study was to illuminate the importance of music events and the reactions and social interactions of patients with dementia or suspected dementia and their caregivers before, during and after such events, including the remainder of the day. The results showed that the patients experienced an ability to sing, play instruments, perform body movements, and make puns during such music events. While singing familiar songs, some patients experienced the return of distant memories, which they seemed to find very pleasurable. During and after the music events, the personnel experienced bonding with the patients, who seemed easier to care for. Caregiver-assisted music events show a great potential for use in dementia care.",
"title": ""
},
{
"docid": "4e263764fd14f643f7b414bc12615565",
"text": "We present a superpixel method for full spatial phase and amplitude control of a light beam using a digital micromirror device (DMD) combined with a spatial filter. We combine square regions of nearby micromirrors into superpixels by low pass filtering in a Fourier plane of the DMD. At each superpixel we are able to independently modulate the phase and the amplitude of light, while retaining a high resolution and the very high speed of a DMD. The method achieves a measured fidelity F = 0.98 for a target field with fully independent phase and amplitude at a resolution of 8 × 8 pixels per diffraction limited spot. For the LG10 orbital angular momentum mode the calculated fidelity is F = 0.99993, using 768 × 768 DMD pixels. The superpixel method reduces the errors when compared to the state of the art Lee holography method for these test fields by 50% and 18%, with a comparable light efficiency of around 5%. Our control software is publicly available.",
"title": ""
},
{
"docid": "c2fee2767395b1e9d6490956c7a23268",
"text": "In this paper, we elaborate the advantages of combining two neural network methodologies, convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent neural networks, with the framework of hybrid hidden Markov models (HMM) for recognizing offline handwriting text. CNNs employ shift-invariant filters to generate discriminative features within neural networks. We show that CNNs are powerful tools to extract general purpose features that even work well for unknown classes. We evaluate our system on a Chinese handwritten text database and provide a GPU-based implementation that can be used to reproduce the experiments. All experiments were conducted with RWTH OCR, an open-source system developed at our institute.",
"title": ""
},
{
"docid": "457f2508c59daaae9af818f8a6a963d1",
"text": "Robotic systems hold great promise to assist with household, educational, and research tasks, but the difficulties of designing and building such robots often are an inhibitive barrier preventing their development. This paper presents a framework in which simple robots can be easily designed and then rapidly fabricated and tested, paving the way for greater proliferation of robot designs. The Python package presented in this work allows for the scripted generation of mechanical elements, using the principles of hierarchical structure and modular reuse to simplify the design process. These structures are then manufactured using an origami-inspired method in which precision cut sheets of plastic film are folded to achieve desired geometries. Using these processes, lightweight, low cost, rapidly built quadrotors were designed and fabricated. Flight tests compared the resulting robots against similar micro air vehicles (MAVs) generated using other processes. Despite lower tolerance and precision, robots generated using the process presented in this work took significantly less time and cost to design and build, and yielded lighter, lower power MAVs.",
"title": ""
},
{
"docid": "fbd05f764470b94af30c7799e94ff0f0",
"text": "Agent-based modeling of human social behavior is an increasingly important research area. A key factor in human social interaction is our beliefs about others, a theory of mind. Whether we believe a message depends not only on its content but also on our model of the communicator. How we act depends not only on the immediate effect but also on how we believe others will react. In this paper, we discuss PsychSim, an implemented multiagent-based simulation tool for modeling interactions and influence. While typical approaches to such modeling have used first-order logic, PsychSim agents have their own decision-theoretic model of the world, including beliefs about its environment and recursive models of other agents. Using these quantitative models of uncertainty and preferences, we have translated existing psychological theories into a decision-theoretic semantics that allow the agents to reason about degrees of believability in a novel way. We discuss PsychSim’s underlying architecture and describe its application to a school violence scenario for illustration.",
"title": ""
},
{
"docid": "5eb9e759ec8fc9ad63024130f753d136",
"text": "A 3-10 GHz broadband CMOS T/R switch for ultra-wideband (UWB) transceiver is presented. The broadband CMOS T/R switch is fabricated based on the 0.18 mu 1P6M standard CMOS process. On-chip measurement of the CMOS T/R switch is performed. The insertion loss of the proposed CMOS T/R Switch is about 3.1plusmn1.3dB. The return losses at both input and output terminals are higher than 14 dB. It is also characterized with 25-34dB isolation and 18-20 dBm input P1dB. The broadband CMOS T/R switch shows highly linear phase and group delay of 20plusmn10 ps from 10MHz to 15GHz. It can be easily integrated with other CMOS RFICs to form on-chip transceivers for various UWB applications",
"title": ""
},
{
"docid": "51b766b0a7f1e3bc1f49d16df04a69f7",
"text": "This study reports the results of a biometrical genetical analysis of scores on a personality inventory (The Eysenck Personality Questionnaire, or EPQ), which purports to measure psychoticism, neuroticism, extraversion and dissimulation (Lie Scale). The subjects were 544 pairs of twins, from the Maudsley Twin Register. The purpose of the study was to test the applicability of various genotypeenvironmental models concerning the causation of P scores. Transformation of the raw scores is required to secure a scale on which the effects of genes and environment are additive. On such a scale 51% of the variation in P is due to environmental differences within families, but the greater part (77%) of this environmental variation is due to random effects which are unlikely to be controllable. . The genetical consequences ot'assortative mating were too slight to be detectable in this study, and the genetical variation is consistent with the hypothesis that gene effects are additive. This is a general finding for traits which have been subjected to stabilizing selection. Our model for P is consistent with these advanced elsewhere to explain the origin of certain kinds of psychopathology. The data provide little support for the view that the \"family environment\" (including the environmental influence of parents) plays a major part in the determination of individual differences in P, though we cite evidence suggesting that sibling competition effects are producing genotypeenvironmental covariation for the determinants of P in males. The genetical and environmental determinants of the covariation of P with other personality dimensions are considered. Assumptions are discussed and tested where possible.",
"title": ""
},
{
"docid": "0eed7e3a9128b10f8c4711592b9628ee",
"text": "Visual defects, called mura in the field, sometimes occur during the manufacturing of the flat panel liquid crystal displays. In this paper we propose an automatic inspection method that reliably detects and quantifies TFT-LCD regionmura defects. The method consists of two phases. In the first phase we segment candidate region-muras from TFT-LCD panel images using the modified regression diagnostics and Niblack’s thresholding. In the second phase, based on the human eye’s sensitivity to mura, we quantify mura level for each candidate, which is used to identify real muras by grading them as pass or fail. Performance of the proposed method is evaluated on real TFT-LCD panel samples. key words: Machine vision, image segmentation, regression diagnostics, industrial inspection, visual perception.",
"title": ""
},
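The segmentation step above relies on Niblack's local thresholding. As a reminder of how that rule works, the sketch below computes the standard Niblack threshold T = local_mean + k * local_std over a sliding window; the window size, k value, and toy "panel image" are illustrative, and this is not the authors' modified pipeline.

import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(image, window=25, k=-0.2):
    """Binary segmentation with Niblack's rule: T(x, y) = mean(x, y) + k * std(x, y)."""
    img = image.astype(float)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))   # local standard deviation
    threshold = mean + k * std
    return img > threshold

# toy usage on a random "panel image" with a slightly darker square region
rng = np.random.default_rng(4)
panel = rng.normal(128, 5, size=(200, 200))
panel[80:120, 80:120] -= 10            # simulated low-contrast defect
print(niblack_threshold(panel).mean())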
{
"docid": "55aea20148423bdb7296addac847d636",
"text": "This paper describes an underwater sensor network with dual communication and support for sensing and mobility. The nodes in the system are connected acoustically for broadcast communication using an acoustic modem we developed. For higher point to point communication speed the nodes are networked optically using custom built optical modems. We describe the hardware details of the underwater sensor node and the communication and networking protocols. Finally, we present and discuss the results from experiments with this system.",
"title": ""
},
{
"docid": "daecaa40531dad2622d83aca90ff7185",
"text": "Advances in tourism economics have enabled us to collect massive amounts of travel tour data. If properly analyzed, this data could be a source of rich intelligence for providing real-time decision making and for the provision of travel tour recommendations. However, tour recommendation is quite different from traditional recommendations, because the tourist’s choice is affected directly by the travel costs, which includes both financial and time costs. To that end, in this article, we provide a focused study of cost-aware tour recommendation. Along this line, we first propose two ways to represent user cost preference. One way is to represent user cost preference by a two-dimensional vector. Another way is to consider the uncertainty about the cost that a user can afford and introduce a Gaussian prior to model user cost preference. With these two ways of representing user cost preference, we develop different cost-aware latent factor models by incorporating the cost information into the probabilistic matrix factorization (PMF) model, the logistic probabilistic matrix factorization (LPMF) model, and the maximum margin matrix factorization (MMMF) model, respectively. When applied to real-world travel tour data, all the cost-aware recommendation models consistently outperform existing latent factor models with a significant margin.",
"title": ""
},
{
"docid": "e81b4c01c2512f2052354402cd09522b",
"text": "...................................................................................................................... iii ACKNOWLEDGEMENTS .................................................................................................v CHAPTER",
"title": ""
},
{
"docid": "a9fa30e95bf31ea2061a66f5b4aaf210",
"text": "In the context of current concerns about replication in psychological science, we describe 10 findings from behavioral genetic research that have replicated robustly. These are \"big\" findings, both in terms of effect size and potential impact on psychological science, such as linearly increasing heritability of intelligence from infancy (20%) through adulthood (60%). Four of our top 10 findings involve the environment, discoveries that could have been found only with genetically sensitive research designs. We also consider reasons specific to behavioral genetics that might explain why these findings replicate.",
"title": ""
},
{
"docid": "4dc6f5768b43e6c491f0b08600acbea5",
"text": "Stochastic Dual Coordinate Ascent is a popular method for solving regularized loss minimization for the case of convex losses. We describe variants of SDCA that do not require explicit regularization and do not rely on duality. We prove linear convergence rates even if individual loss functions are non-convex, as long as the expected loss is strongly convex.",
"title": ""
}
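To give a feel for the dual-free coordinate updates mentioned above, here is a small sketch for L2-regularized squared loss. It maintains one pseudo-dual vector per example and keeps w equal to their scaled sum; the exact update form, step size, and loss choice are assumptions made for illustration and may differ from the paper's algorithm and analysis.

import numpy as np

def dual_free_sdca(X, y, lam=0.1, eta=0.005, epochs=50, seed=0):
    """Dual-free SDCA-style updates for min_w (1/n) * sum_i 0.5*(x_i.w - y_i)^2 + (lam/2)*||w||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros((n, d))             # one pseudo-dual vector per example
    w = alpha.sum(axis=0) / (lam * n)    # invariant: w = (1 / (lam * n)) * sum_i alpha_i
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad_i = (X[i] @ w - y[i]) * X[i]       # gradient of the i-th loss at the current w
            v = grad_i + alpha[i]
            alpha[i] -= eta * lam * n * v
            w -= eta * v                            # keeps the invariant above
    return w

# toy usage on synthetic linear data
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=200)
print(np.round(dual_free_sdca(X, y), 2))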
] |
scidocsrr
|
b0975ac88cbc489dac8ff98ae7401dfe
|
Active learning for regression using greedy sampling
|
[
{
"docid": "ef444570c043be67453317e26600972f",
"text": "In multiple regression it is shown that parameter estimates based on minimum residual sum of squares have a high probability of being unsatisfactory, if not incorrect, if the prediction vectors are not orthogonal. Proposed is an estimation procedure based on adding small positive quantities to the diagonal of X’X. Introduced is the ridge trace, a method for showing in two dimensions the effects of nonorthogonality. It is then shown how to augment X’X to obtain biased estimates with smaller mean square error.",
"title": ""
}
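The estimator described above adds a small positive constant to the diagonal of X'X. A minimal NumPy sketch of that estimator and of the ridge trace (coefficient paths over a grid of ridge constants, here simply printed rather than plotted) follows; the grid of k values and the toy data are arbitrary.

import numpy as np

def ridge_estimate(X, y, k):
    """Ridge estimator: beta_hat = (X'X + k*I)^(-1) X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(d), X.T @ y)

def ridge_trace(X, y, ks):
    """Coefficient paths over a grid of ridge constants k (the 'ridge trace')."""
    return np.array([ridge_estimate(X, y, k) for k in ks])

# toy usage with nearly collinear predictors, where the k = 0 (least squares) estimate is unstable
rng = np.random.default_rng(6)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])   # strongly non-orthogonal columns
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=100)
ks = [0.0, 0.01, 0.1, 1.0]
for k, beta in zip(ks, ridge_trace(X, y, ks)):
    print(k, np.round(beta, 3))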
] |
[
{
"docid": "b763ab2702a32f82b75af938cb352317",
"text": "The idea that memory is stored in the brain as physical alterations goes back at least as far as Plato, but further conceptualization of this idea had to wait until the 20(th) century when two guiding theories were presented: the \"engram theory\" of Richard Semon and Donald Hebb's \"synaptic plasticity theory.\" While a large number of studies have been conducted since, each supporting some aspect of each of these theories, until recently integrative evidence for the existence of engram cells and circuits as defined by the theories was lacking. In the past few years, the combination of transgenics, optogenetics, and other technologies has allowed neuroscientists to begin identifying memory engram cells by detecting specific populations of cells activated during specific learning epochs and by engineering them not only to evoke recall of the original memory, but also to alter the content of the memory.",
"title": ""
},
{
"docid": "e3218926a5a32d2c44d5aea3171085e2",
"text": "The present study sought to determine the effects of Mindful Sport Performance Enhancement (MSPE) on runners. Participants were 25 recreational long-distance runners openly assigned to either the 4-week intervention or to a waiting-list control group, which later received the same program. Results indicate that the MSPE group showed significantly more improvement in organizational demands (an aspect of perfectionism) compared with controls. Analyses of preto postworkshop change found a significant increase in state mindfulness and trait awareness and decreases in sport-related worries, personal standards perfectionism, and parental criticism. No improvements in actual running performance were found. Regression analyses revealed that higher ratings of expectations and credibility of the workshop were associated with lower postworkshop perfectionism, more years running predicted higher ratings of perfectionism, and more life stressors predicted lower levels of worry. Findings suggest that MSPE may be a useful mental training intervention for improving mindfulness, sport-anxiety related worry, and aspects of perfectionism in long-distance runners.",
"title": ""
},
{
"docid": "d67dec88b60988b385befb5653abef2b",
"text": "With the growing importance of networked embedded devices in the upcoming Internet of Things, new attacks targeting embedded OSes are emerging. ARM processors, which power over 60% of embedded devices, introduce a hardware security extension called TrustZone to protect secure applications in an isolated secure world that cannot be manipulated by a compromised OS in the normal world. Leveraging TrustZone technology, a number of memory integrity checking schemes have been proposed in the secure world to introspect malicious memory modification of the normal world. In this paper, we first discover and verify an ARM TrustZone cache incoherence behavior, which results in the cache contents of the two worlds, secure and non-secure, potentially being different even when they are mapped to the same physical address. Furthermore, code in one TrustZone world cannot access the cache content in the other world. Based on this observation, we develop a new rootkit called CacheKit that hides in the cache of the normal world and is able to evade memory introspection from the secure world. We implement a CacheKit prototype on Cortex-A8 processors after solving a number of challenges. First, we employ the Cache-as-RAM technique to ensure that the malicious code is only loaded into the CPU cache and not RAM. Thus, the secure world cannot detect the existence of the malicious code by examining the RAM. Second, we use the ARM processor's hardware support on cache settings to keep the malicious code persistent in the cache. Third, to evade introspection that flushes cache content back into RAM, we utilize physical addresses from the I/O address range that is not backed by any real I/O devices or RAM. The experimental results show that CacheKit can successfully evade memory introspection from the secure world and has small performance impacts on the rich OS. We discuss potential countermeasures to detect this type of rootkit attack.",
"title": ""
},
{
"docid": "3ff13bb873dd9a8deada0a7837c5eca4",
"text": "This work investigates the use of deep fully convolutional neural networks (DFCNN) for pixel-wise scene labeling of Earth Observation images. Especially, we train a variant of the SegNet architecture on remote sensing data over an urban area and study different strategies for performing accurate semantic segmentation. Our contributions are the following: 1) we transfer efficiently a DFCNN from generic everyday images to remote sensing images; 2) we introduce a multi-kernel convolutional layer for fast aggregation of predictions at multiple scales; 3) we perform data fusion from heterogeneous sensors (optical and laser) using residual correction. Our framework improves state-of-the-art accuracy on the ISPRS Vaihingen 2D Semantic Labeling dataset.",
"title": ""
},
{
"docid": "0277fd19009088f84ce9f94a7e942bc1",
"text": "These study it is necessary to can be used as a theoretical foundation upon which to base decision-making and strategic thinking about e-learning system. This paper proposes a new framework for assessing readiness of an organization to implement the e-learning system project on the basis of McKinsey 7S model using fuzzy logic analysis. The study considers 7 dimensions as approach to assessing the current situation of the organization prior to system implementation to identify weakness areas which may encounter the project with failure. Adopted was focus on Questionnaires and group interviews to specific data collection from three colleges in Mosul University in Iraq. This can be achieved success in building an e-learning system at the University of Mosul by readiness assessment according to the model of multidimensional based on the framework of 7S is selected by 23 factors, and thus can avoid failures or weaknesses facing the implementation process before the start of the project and a step towards enabling the administration to make decisions that achieve success in this area, as well as to avoid the high cost associated with the implementation process.",
"title": ""
},
{
"docid": "16fa2f02d0709c130cc35fce61793ae1",
"text": "Evaluating similarity between graphs is of major importance in several computer vision and pattern recognition problems, where graph representations are often used to model objects or interactions between elements. The choice of a distance or similarity metric is, however, not trivial and can be highly dependent on the application at hand. In this work, we propose a novel metric learning method to evaluate distance between graphs that leverages the power of convolutional neural networks, while exploiting concepts from spectral graph theory to allow these operations on irregular graphs. We demonstrate the potential of our method in the field of connectomics, where neuronal pathways or functional connections between brain regions are commonly modelled as graphs. In this problem, the definition of an appropriate graph similarity function is critical to unveil patterns of disruptions associated with certain brain disorders. Experimental results on the ABIDE dataset show that our method can learn a graph similarity metric tailored for a clinical application, improving the performance of a simple k-nn classifier by 11.9% compared to a traditional distance metric.",
"title": ""
},
{
"docid": "d6a6cadd782762e4591447b7dd2c870a",
"text": "OBJECTIVE\nThe objective of this study was to assess the effects of participation in a mindfulness meditation-based stress reduction program on mood disturbance and symptoms of stress in cancer outpatients.\n\n\nMETHODS\nA randomized, wait-list controlled design was used. A convenience sample of eligible cancer patients enrolled after giving informed consent and were randomly assigned to either an immediate treatment condition or a wait-list control condition. Patients completed the Profile of Mood States and the Symptoms of Stress Inventory both before and after the intervention. The intervention consisted of a weekly meditation group lasting 1.5 hours for 7 weeks plus home meditation practice.\n\n\nRESULTS\nNinety patients (mean age, 51 years) completed the study. The group was heterogeneous in type and stage of cancer. Patients' mean preintervention scores on dependent measures were equivalent between groups. After the intervention, patients in the treatment group had significantly lower scores on Total Mood Disturbance and subscales of Depression, Anxiety, Anger, and Confusion and more Vigor than control subjects. The treatment group also had fewer overall Symptoms of Stress; fewer Cardiopulmonary and Gastrointestinal symptoms; less Emotional Irritability, Depression, and Cognitive Disorganization; and fewer Habitual Patterns of stress. Overall reduction in Total Mood Disturbance was 65%, with a 31% reduction in Symptoms of Stress.\n\n\nCONCLUSIONS\nThis program was effective in decreasing mood disturbance and stress symptoms in both male and female patients with a wide variety of cancer diagnoses, stages of illness, and ages. cancer, stress, mood, intervention, mindfulness.",
"title": ""
},
{
"docid": "f3c6b42ed65b38708b12d46c48af4f0b",
"text": "Data are often labeled by many different experts with each expert only labeling a small fraction of the data and each data point being labeled by several experts. This reduces the workload on individual experts and also gives a better estimate of the unobserved ground truth. When experts disagree, the standard approaches are to treat the majority opinion as the correct label and to model the correct label as a distribution. These approaches, however, do not make any use of potentially valuable information about which expert produced which label. To make use of this extra information, we propose modeling the experts individually and then learning averaging weights for combining them, possibly in samplespecific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. Here we show that our approach leads to improvements in computeraided diagnosis of diabetic retinopathy. We also show that our method performs better than competing algorithms by Welinder and Perona (2010); Mnih and Hinton (2012). Our work offers an innovative approach for dealing with the myriad real-world settings that use expert opinions to define labels",
"title": ""
},
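The passage above learns averaging weights for individual expert models. The authors' learning scheme is not reproduced here; as a simple stand-in, the sketch below weights each expert's predicted probabilities by a softmax over its validation accuracy, which illustrates the general idea of trusting more reliable experts more. The temperature, accuracies, and shapes are illustrative assumptions.

import numpy as np

def expert_weights(val_accuracies, temperature=0.1):
    """Softmax weights over expert validation accuracies (higher accuracy -> larger weight)."""
    a = np.asarray(val_accuracies, dtype=float) / temperature
    e = np.exp(a - a.max())
    return e / e.sum()

def combine_experts(expert_probs, weights):
    """expert_probs: (n_experts, n_samples, n_classes) predicted probabilities; returns the weighted average."""
    return np.tensordot(weights, expert_probs, axes=1)

# toy usage: 3 experts, 5 samples, 2 classes
rng = np.random.default_rng(7)
probs = rng.dirichlet(np.ones(2), size=(3, 5))
w = expert_weights([0.92, 0.80, 0.65])
print(np.round(w, 3), combine_experts(probs, w).shape)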
{
"docid": "b00ce7fc3de34fcc31ada0f66042ef5e",
"text": "If you get the printed book in on-line book store, you may also find the same problem. So, you must move store to store and search for the available there. But, it will not happen here. The book that we will offer right here is the soft file concept. This is what make you can easily find and get this secure broadcast communication in wired and wireless networks by reading this site. We offer you the best product, always and always.",
"title": ""
},
{
"docid": "cc56bbfe498556acb317fd325d750cf9",
"text": "The goal of the current work is to evaluate semantic feature aggregation techniques in a task of gender classification of public social media texts in Russian. We collect Facebook posts of Russian-speaking users and apply them as a dataset for two topic modelling techniques and a distributional clustering approach. The output of the algorithms is applied as a feature aggregation method in a task of gender classification based on a smaller Facebook sample. The classification performance of the best model is favorably compared against the lemmas baseline and the state-of-the-art results reported for a different genre or language. The resulting successful features are exemplified, and the difference between the three techniques in terms of classification performance and feature contents are discussed, with the best technique clearly outperforming the others.",
"title": ""
},
{
"docid": "26b992f705ef29460c0b459d75a115a8",
"text": "Supply chain management creates value for companies, customers and stakeholders interacting throughout a supply chain. The strategic dimension of supply chains makes it paramount that their performances are measured. In today’s performance evaluation processes, companies tend to refer to several models that will differ in terms of corporate organization, the distribution of responsibilities and supply chain maturity. The present article analyzes various models used to assess supply chains by highlighting their specific characteristics and applicability in different contexts. It also offers an analytical grid breaking these models down into seven layers. This grid will help managers evolve towards a model that is more suitable for their needs.",
"title": ""
},
{
"docid": "56a72aaff0c955b79449035f2cccabbc",
"text": "This work aims to identify the main aspects of Web design responsible for eliciting specific emotions. For this purpose, we performed a user study with 40 participants testing a Web application designed by applying a set of criteria for stimulating various emotions. In particular, we considered six emotions (hate, anxiety, boredom, fun, serenity, love), and for each of them a specific set of design criteria was exploited. The purpose of the study was to reach a better understanding regarding what design techniques are most important to stimulate each emotion. We report on the results obtained and discuss their implications. Such results can inform the development of guidelines for Web applications able to stimulate users’ emotions.",
"title": ""
},
{
"docid": "01034189c9a4aa11bdff074e7470b3f8",
"text": "We introducea methodfor predictinga controlsignalfrom anotherrelatedsignal,and applyit to voice puppetry: Generatingfull facialanimationfrom expressi ve information in anaudiotrack. Thevoicepuppetlearnsa facialcontrolmodelfrom computervision of realfacialbehavior, automaticallyincorporatingvocalandfacialdynamicssuchascoarticulation. Animation is producedby usingaudioto drive themodel,which induces a probability distribution over the manifold of possiblefacial motions. We presenta linear-time closed-formsolution for the most probabletrajectoryover this manifold. The outputis a seriesof facial control parameters, suitablefor driving many different kindsof animationrangingfrom video-realisticimagewarpsto 3D cartooncharacters. This work may not be copiedor reproducedin whole or in part for any commercialpurpose.Permissionto copy in whole or in part without paymentof fee is grantedfor nonprofiteducationaland researchpurposesprovided that all suchwhole or partial copiesincludethe following: a noticethat suchcopying is by permissionof Mitsubishi Electric InformationTechnologyCenterAmerica;an acknowledgmentof the authorsandindividual contributionsto the work; andall applicableportionsof the copyright notice. Copying, reproduction,or republishingfor any otherpurposeshall requirea licensewith paymentof feeto MitsubishiElectricInformationTechnologyCenterAmerica.All rightsreserved. Copyright c MitsubishiElectricInformationTechnologyCenterAmerica,1999 201Broadway, Cambridge,Massachusetts 02139 Publication History:– 1. 9sep98first circulated. 2. 7jan99submittedto SIGGRAPH’99",
"title": ""
},
{
"docid": "72b080856124d39b62d531cb52337ce9",
"text": "Experimental and clinical studies have identified a crucial role of microcirculation impairment in severe infections. We hypothesized that mottling, a sign of microcirculation alterations, was correlated to survival during septic shock. We conducted a prospective observational study in a tertiary teaching hospital. All consecutive patients with septic shock were included during a 7-month period. After initial resuscitation, we recorded hemodynamic parameters and analyzed their predictive value on mortality. The mottling score (from 0 to 5), based on mottling area extension from the knees to the periphery, was very reproducible, with an excellent agreement between independent observers [kappa = 0.87, 95% CI (0.72–0.97)]. Sixty patients were included. The SOFA score was 11.5 (8.5–14.5), SAPS II was 59 (45–71) and the 14-day mortality rate 45% [95% CI (33–58)]. Six hours after inclusion, oliguria [OR 10.8 95% CI (2.9, 52.8), p = 0.001], arterial lactate level [<1.5 OR 1; between 1.5 and 3 OR 3.8 (0.7–29.5); >3 OR 9.6 (2.1–70.6), p = 0.01] and mottling score [score 0–1 OR 1; score 2–3 OR 16, 95% CI (4–81); score 4–5 OR 74, 95% CI (11–1,568), p < 0.0001] were strongly associated with 14-day mortality, whereas the mean arterial pressure, central venous pressure and cardiac index were not. The higher the mottling score was, the earlier death occurred (p < 0.0001). Patients whose mottling score decreased during the resuscitation period had a better prognosis (14-day mortality 77 vs. 12%, p = 0.0005). The mottling score is reproducible and easy to evaluate at the bedside. The mottling score as well as its variation during resuscitation is a strong predictor of 14-day survival in patients with septic shock.",
"title": ""
},
{
"docid": "6cb2e41787378eca0dbbc892f46274e5",
"text": "Both reviews and user-item interactions (i.e., rating scores) have been widely adopted for user rating prediction. However, these existing techniques mainly extract the latent representations for users and items in an independent and static manner. That is, a single static feature vector is derived to encode user preference without considering the particular characteristics of each candidate item. We argue that this static encoding scheme is incapable of fully capturing users’ preferences, because users usually exhibit different preferences when interacting with different items. In this article, we propose a novel context-aware user-item representation learning model for rating prediction, named CARL. CARL derives a joint representation for a given user-item pair based on their individual latent features and latent feature interactions. Then, CARL adopts Factorization Machines to further model higher order feature interactions on the basis of the user-item pair for rating prediction. Specifically, two separate learning components are devised in CARL to exploit review data and interaction data, respectively: review-based feature learning and interaction-based feature learning. In the review-based learning component, with convolution operations and attention mechanism, the pair-based relevant features for the given user-item pair are extracted by jointly considering their corresponding reviews. However, these features are only reivew-driven and may not be comprehensive. Hence, an interaction-based learning component further extracts complementary features from interaction data alone, also on the basis of user-item pairs. The final rating score is then derived with a dynamic linear fusion mechanism. Experiments on seven real-world datasets show that CARL achieves significantly better rating prediction accuracy than existing state-of-the-art alternatives. Also, with the attention mechanism, we show that the pair-based relevant information (i.e., context-aware information) in reviews can be highlighted to interpret the rating prediction for different user-item pairs.",
"title": ""
},
{
"docid": "c206399c6ebf96f3de3aa5fdb10db49d",
"text": "Canine monocytotropic ehrlichiosis (CME), caused by the rickettsia Ehrlichia canis, an important canine disease with a worldwide distribution. Diagnosis of the disease can be challenging due to its different phases and multiple clinical manifestations. CME should be suspected when a compatible history (living in or traveling to an endemic region, previous tick exposure), typical clinical signs and characteristic hematological and biochemical abnormalities are present. Traditional diagnostic techniques including hematology, cytology, serology and isolation are valuable diagnostic tools for CME, however a definitive diagnosis of E. canis infection requires molecular techniques. This article reviews the current literature covering the diagnosis of infection caused by E. canis.",
"title": ""
},
{
"docid": "4074b8cd9b869a7a57f2697b97139308",
"text": "The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points in a similarity space and concepts are represented by convex regions in this space. After pointing out a problem with the convexity requirement, we propose a formalization of conceptual spaces based on fuzzy star-shaped sets. Our formalization uses a parametric definition of concepts and extends the original framework by adding means to represent correlations between different domains in a geometric way. Moreover, we define various operations for our formalization, both for creating new concepts from old ones and for measuring relations between concepts. We present an illustrative toy-example and sketch a research project on concept formation that is based on both our formalization and its implementation.",
"title": ""
},
{
"docid": "559e5a5da1f0a924fc432e7f4c3548bd",
"text": "Deep learning is recently showing outstanding results for solving a wide variety of robotic tasks in the areas of perception, planning, localization, and control. Its excellent capabilities for learning representations from the complex data acquired in real environments make it extremely suitable for many kinds of autonomous robotic applications. In parallel, Unmanned Aerial Vehicles (UAVs) are currently being extensively applied for several types of civilian tasks in applications going from security, surveillance, and disaster rescue to parcel delivery or warehouse management. In this paper, a thorough review has been performed on recent reported uses and applications of deep learning forUAVs, including themost relevant developments as well as their performances and limitations. In addition, a detailed explanation of the main deep learning techniques is provided. We conclude with a description of the main challenges for the application of deep learning for UAV-based solutions.",
"title": ""
},
{
"docid": "be06f51778191cf3b4a97b25c367575e",
"text": "Wireless sensor networks are gaining more and more attention these days. They gave us the chance of collecting data from noisy environment. So it becomes possible to obtain precise and continuous monitoring of different phenomenons. However wireless Sensor Network (WSN) is affected by many anomalies that occur due to software or hardware problems. So various protocols are developed in order to detect and localize faults then distinguish the faulty node from the right one. In this paper we are concentrated on a specific type of faults in WSN which is the outlier. We are focus on the classification of data (outlier and normal) using three different methods of machine learning then we compare between them. These methods are validated using real data obtained from motes deployed in an actual living lab.",
"title": ""
},
{
"docid": "5898f4adaf86393972bcbf4c4ab91540",
"text": "This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistant Systems) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators, proposed in the literature, is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of indicators, derived from trials over a third generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and the best combinations of them are included, as well as the future works derived from this study.",
"title": ""
}
] |
scidocsrr
|
cead435001317fbeb3d78c5b1d4c5f5c
|
Website quality impact on customers' purchase intention through social commerce website
|
[
{
"docid": "7724384670e34c3492e563af9e2cad2b",
"text": "Social media have provided new opportunities to consumers to engage in social interaction on the internet. Consumers use social media, such as online communities, to generate content and to network with other users. The study of social media can also identify the advantages to be gained by business. A multidisciplinary model, building on the technology acceptance model and relevant literature on trust and social media, has been devised. The model has been validated by SEM-PLS, demonstrating the role of social media in the development of e-commerce into social commerce. The data emerging from a survey show how social media facilitate the social interaction of consumers, leading to increased trust and intention to buy. The results also show that trust has a significant direct effect on intention to buy. The perceived usefulness (PU) of a site is also identified as a contributory factor. At the end of the paper, the author discusses the results, along with implications, limitations and recommended future research directions.",
"title": ""
},
{
"docid": "0696f518544589e4f7dbee4b50886685",
"text": "This research was designed to theoretically address and empirically examine research issues related to customer’s satisfaction with social commerce. To investigate these research issues, data were collected using a written survey as part of a free simulation experiment. In this experiment, 136 participants were asked to evaluate two social commerce websites using an instrument designed to measure relationships between s-commerce website quality, customer psychological empowerment and customer satisfaction. A total of 278 usable s-commerce site evaluations were collected and analyzed. The results showed that customer satisfaction with social commerce is correlated with social commerce sites quality and customer psychological empowerment.",
"title": ""
}
] |
[
{
"docid": "5c46e5fc52797636bf389c8196deea86",
"text": "An efficient single-phase Transformerless grid-connected voltage source inverter topology by using the proposed active virtual ground (AVG) technique is presented. With the AVG, the conventional output L filter can be reconfigured to LCL structure without adding additional inductor. High-frequency differential mode current ripple can be significantly suppressed comparing to the available single-phase grid-connected inverter topologies. Additionally, strong attenuation to the high-frequency common-mode current is achieved. It is particularly important for some applications such as photovoltaic and motor drives. High efficiency can be achieved due to fewer components involved in the conduction loss. Cost of the magnetic device can be reduced since the required inductance of the filter becomes smaller. Performance of the proposed inverter has been evaluated analytically. Experimental verification is performed on a 1-kW, 400-V input, and 110-V/60-Hz output prototype.",
"title": ""
},
{
"docid": "7867544be1b36ffab85b02c63cb03922",
"text": "In this paper a general theory of multistage decimators and interpolators for sampling rate reduction and sampling rate increase is presented. A set of curves and the necessary relations for optimally designing multistage decimators is also given. It is shown that the processes of decimation and interpolation are duals and therefore the same set of design curves applies to both problems. Further, it is shown that highly efficient implementations of narrow-band finite impulse response (FIR) fiiters can be obtained by cascading the processes of decimation and interpolation. Examples show that the efficiencies obtained are comparable to those of recursive elliptic filter designs.",
"title": ""
},
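As a small usage illustration of the multistage idea described above, the snippet below decimates a signal by an overall factor of 50 in two stages (10, then 5) using SciPy's standard decimate routine with FIR filtering. The factors, sampling rate, and test signal are arbitrary examples, and the efficiency analysis from the paper is not reproduced.

import numpy as np
from scipy.signal import decimate

fs = 100_000                                   # original sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.default_rng(8).normal(size=t.size)

# two-stage decimation: overall factor 50 = 10 * 5
stage1 = decimate(x, 10, ftype='fir')          # anti-alias FIR filter + downsample by 10
stage2 = decimate(stage1, 5, ftype='fir')      # second stage runs at the already reduced rate

print(x.size, stage1.size, stage2.size)        # 100000 -> 10000 -> 2000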
{
"docid": "7150d210ad78110897c3b3f5078c935b",
"text": "Resolution in Magnetic Resonance (MR) is limited by diverse physical, technological and economical considerations. In conventional medical practice, resolution enhancement is usually performed with bicubic or B-spline interpolations, strongly affecting the accuracy of subsequent processing steps such as segmentation or registration. This paper presents a sparse-based super-resolution method, adapted for easily including prior knowledge, which couples up high and low frequency information so that a high-resolution version of a low-resolution brain MR image is generated. The proposed approach includes a whole-image multi-scale edge analysis and a dimensionality reduction scheme, which results in a remarkable improvement of the computational speed and accuracy, taking nearly 26 min to generate a complete 3D high-resolution reconstruction. The method was validated by comparing interpolated and reconstructed versions of 29 MR brain volumes with the original images, acquired in a 3T scanner, obtaining a reduction of 70% in the root mean squared error, an increment of 10.3 dB in the peak signal-to-noise ratio, and an agreement of 85% in the binary gray matter segmentations. The proposed method is shown to outperform a recent state-of-the-art algorithm, suggesting a substantial impact in voxel-based morphometry studies.",
"title": ""
},
{
"docid": "e3f847a7c815772b909fcccbafed4af3",
"text": "The contribution of tumorigenic stem cells to haematopoietic cancers has been established for some time, and cells possessing stem-cell properties have been described in several solid tumours. Although chemotherapy kills most cells in a tumour, it is believed to leave tumour stem cells behind, which might be an important mechanism of resistance. For example, the ATP-binding cassette (ABC) drug transporters have been shown to protect cancer stem cells from chemotherapeutic agents. Gaining a better insight into the mechanisms of stem-cell resistance to chemotherapy might therefore lead to new therapeutic targets and better anticancer strategies.",
"title": ""
},
{
"docid": "ec788f48207b0a001810e1eabf6b2312",
"text": "Maximum likelihood factor analysis provides an effective method for estimation of factor matrices and a useful test statistic in the likelihood ratio for rejection of overly simple factor models. A reliability coefficient is proposed to indicate quality of representation of interrelations among attributes in a battery by a maximum likelihood factor analysis. Usually, for a large sample of individuals or objects, the likelihood ratio statistic could indicate that an otherwise acceptable factor model does not exactly represent the interrelations among the attributes for a population. The reliability coefficient could indicate a very close representation in this case and be a better indication as to whether to accept or reject the factor solution.",
"title": ""
},
{
"docid": "dca8895967ae9b86979f428d77e84ae5",
"text": "This study examined how the frequency of positive and negative emotions is related to life satisfaction across nations. Participants were 8,557 people from 46 countries who reported on their life satisfaction and frequency of positive and negative emotions. Multilevel analyses showed that across nations, the experience of positive emotions was more strongly related to life satisfaction than the absence of negative emotions. Yet, the cultural dimensions of individualism and survival/self-expression moderated these relationships. Negative emotional experiences were more negatively related to life satisfaction in individualistic than in collectivistic nations, and positive emotional experiences had a larger positive relationship with life satisfaction in nations that stress self-expression than in nations that value survival. These findings show how emotional aspects of the good life vary with national culture and how this depends on the values that characterize one's society. Although to some degree, positive and negative emotions might be universally viewed as desirable and undesirable, respectively, there appear to be clear cultural differences in how relevant such emotional experiences are to quality of life.",
"title": ""
},
{
"docid": "722bb59033ea5722b762ccac5d032235",
"text": "In this paper, we provide a real nursing data set for mobile activity recognition that can be used for supervised machine learning, and big data combined the patient medical records and sensors attempted for 2 years, and also propose a method for recognizing activities for a whole day utilizing prior knowledge about the activity segments in a day. Furthermore, we demonstrate data mining by applying our method to the bigger data with additional hospital data. In the proposed method, we 1) convert a set of segment timestamps into a prior probability of the activity segment by exploiting the concept of importance sampling, 2) obtain the likelihood of traditional recognition methods for each local time window within the segment range, and, 3) apply Bayesian estimation by marginalizing the conditional probability of estimating the activities for the segment samples. By evaluating with the dataset, the proposed method outperformed the traditional method without using the prior knowledge by 25.81% at maximum by balanced classification rate. Moreover, the proposed method significantly reduces duration errors of activity segments from 324.2 seconds of the traditional method to 74.6 seconds at maximum. We also demonstrate the data mining by applying our method to bigger data in a hospital.",
"title": ""
},
{
"docid": "021d51e8152d2e2a9a834b5838139605",
"text": "Social networking sites (SNSs) have gained substantial popularity among youth in recent years. However, the relationship between the use of these Web-based platforms and mental health problems in children and adolescents is unclear. This study investigated the association between time spent on SNSs and unmet need for mental health support, poor self-rated mental health, and reports of psychological distress and suicidal ideation in a representative sample of middle and high school children in Ottawa, Canada. Data for this study were based on 753 students (55% female; Mage=14.1 years) in grades 7-12 derived from the 2013 Ontario Student Drug Use and Health Survey. Multinomial logistic regression was used to examine the associations between mental health variables and time spent using SNSs. Overall, 25.2% of students reported using SNSs for more than 2 hours every day, 54.3% reported using SNSs for 2 hours or less every day, and 20.5% reported infrequent or no use of SNSs. Students who reported unmet need for mental health support were more likely to report using SNSs for more than 2 hours every day than those with no identified unmet need for mental health support. Daily SNS use of more than 2 hours was also independently associated with poor self-rating of mental health and experiences of high levels of psychological distress and suicidal ideation. The findings suggest that students with poor mental health may be greater users of SNSs. These results indicate an opportunity to enhance the presence of health service providers on SNSs in order to provide support to youth.",
"title": ""
},
{
"docid": "a786837b12c07039d4eca34c02e5c7d6",
"text": "The wafer level package (WLP) is a cost-effective solution for electronic package, and it has been increasingly applied during recent years. In this study, a new packaging technology which retains the advantages of WLP, the panel level package (PLP) technology, is proposed to further obtain the capability of signals fan-out for the fine-pitched integrated circuit (IC). In the PLP, the filler material is selected to fill the trench around the chip and provide a smooth surface for the redistribution lines. Therefore, the solder bumps could be located on both the filler and the chip surface, and the pitch of the chip side is fanned-out. In our previous research, it was found that the lifetime of solder joints in PLP can easily pass 3,500 cycles. The outstanding performance is explained by the application of a soft filler and a lamination material. However, it is also learned that the deformation of the lamination material during thermal loading may affect the reliability of the adjacent metal trace. In this study, the material effects of the proposed PLP technology are investigated and discussed through finite element analysis (FEA). A factorial analysis with three levels and three factors (the chip carrier, the lamination, and the filler material) is performed to obtain sensitivity information. Based on the results, the suggested combinations of packaging material in the PLP are provided. The reliability of the metal trace can be effectively improved by means of wisely applying materials in the PLP, and therefore, the PLP technology is expected to have a high potential for various applications in the near future.",
"title": ""
},
{
"docid": "ee9c0e79b29fbe647e3e0ccb168532b5",
"text": "We propose an effective approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame-level and scores them with a combination of static and motion CNN features. It then tracks high-scoring proposals throughout the video using a tracking-by-detection approach. Our tracker relies simultaneously on instance-level and class-level detectors. The tracks are scored using a spatio-temporal motion histogram, a descriptor at the track level, in combination with the CNN features. Finally, we perform temporal localization of the action using a sliding-window approach at the track level. We present experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB and UCF-101 action localization datasets, where our approach outperforms the state of the art with a margin of 15%, 7% and 12% respectively in mAP.",
"title": ""
},
{
"docid": "caa252bbfad7ab5c989ae7687818f8ae",
"text": "Nowadays, GPU accelerators are widely used in areas with large data-parallel computations such as scientific computations or neural networks. Programmers can either write code in low-level CUDA/OpenCL code or use a GPU extension for a high-level programming language for better productivity. Most extensions focus on statically-typed languages, but many programmers prefer dynamically-typed languages due to their simplicity and flexibility. \n This paper shows how programmers can write high-level modular code in Ikra, a Ruby extension for array-based GPU computing. Programmers can compose GPU programs of multiple reusable parallel sections, which are subsequently fused into a small number of GPU kernels. We propose a seamless syntax for separating code regions that extensively use dynamic language features from those that are compiled for efficient execution. Moreover, we propose symbolic execution and a program analysis for kernel fusion to achieve performance that is close to hand-written CUDA code.",
"title": ""
},
{
"docid": "d3142d58a777bd86c460733011d27d3b",
"text": "Recent studies of distributional semantic models have set up a competition between word embeddings obtained from predictive neural networks and word vectors obtained from count-based models. This paper is an attempt to reveal the underlying contribution of additional training data and post-processing steps on each type of model in word similarity and relatedness inference tasks. We do so by designing an artificial language, training a predictive and a count-based model on data sampled from this grammar, and evaluating the resulting word vectors in paradigmatic and syntagmatic tasks defined with respect to the grammar.",
"title": ""
},
{
"docid": "5d424f550cb19265f68d24f22bbcd237",
"text": "We have succeeded in developing three techniques, a precise lens-alignment technique, low-loss built-in Spatial Multiplexing optics and a well-matched electrical connection for high-frequency signals, which are indispensable for realizing compact high-performance TOSAs and ROSAs employing hybrid integration technology. The lens position was controlled to within ±0.3 μm by high-power laser irradiation. All components comprising the multiplexing optics are bonded to a prism, enabling the insertion loss to be held down to 0.8 dB due to the dimensional accuracy of the prism. The addition of an FPC layer reduced the impedance mismatch at the junction between the FPC and PCB. We demonstrated a compact integrated four-lane 25 Gb/s TOSA (15.1 mm × 6.5 mm × 5.6 mm) and ROSA (17.0 mm × 12.0 mm × 7.0 mm) using the built-in spatial Mux/Demux optics with good transmission performance for 100 Gb/s Ethernet. These are respectively suitable for the QSFP28 and CFP2 form factors. key words: hybrid integration, optical sub-assembly, 100 Gb/s Ethernet",
"title": ""
},
{
"docid": "058bcdfd935b5906381d7c5b31a8b744",
"text": "BACKGROUND\nValproate was initially introduced as an antiepileptic agent in 1967, but has been used over the years to treat a variety of psychiatric disorders. Its use in the treatment of patients exhibiting aggressive and violent behaviors has been reported in the literature as far back as 1988. However, these reports are uncontrolled, which is in marked contrast to the actual wide and established use of valproate for the treatment of aggressive behaviors. The aim of this report is to critically review the available data on valproate's use in nonbipolar patients with aggressive and violent behaviors.\n\n\nDATA SOURCES\nThe MEDLINE and PsycLIT databases were searched for all reports published from 1987-1998 containing the keywords valproate, the names of all commercial preparations, aggression, and violence.\n\n\nSTUDY FINDINGS\nSeventeen reports with a total of 164 patients were located. Ten of these were case reports with a total of 31 patients. Three were retrospective chart reviews with 83 patients, and 3 were open-label prospective studies with a total of 34 patients. No double-blind, placebo-controlled study could be found. An overall response rate of 77.1% was calculated when response was defined as a 50% reduction of target behavior. Most frequent diagnoses recorded were dementia, organic brain syndromes, and mental retardation. The antiaggressive response usually occurred in conjunction with other psychotropic medication. The dose and plasma valproate level required for response appeared to be the same as in the treatment of seizure disorders.\n\n\nDISCUSSION\nWhile valproate's general antiaggressive effect is promising, in the absence of controlled data, conclusions are limited at this time. Specific recommendations for study design are given to obtain interpretable data for this indication.",
"title": ""
},
{
"docid": "f4d9190ad9123ddcf809f47c71225162",
"text": "Please cite this article in press as: Tseng, M Industrial Engineering (2009), doi:10.1016/ Selection of appropriate suppliers in supply chain management strategy (SCMS) is a challenging issue because it requires battery of evaluation criteria/attributes, which are characterized with complexity, elusiveness, and uncertainty in nature. This paper proposes a novel hierarchical evaluation framework to assist the expert group to select the optimal supplier in SCMS. The rationales for the evaluation framework are based upon (i) multi-criteria decision making (MCDM) analysis that can select the most appropriate alternative from a finite set of alternatives with reference to multiple conflicting criteria, (ii) analytic network process (ANP) technique that can simultaneously take into account the relationships of feedback and dependence of criteria, and (iii) choquet integral—a non-additive fuzzy integral that can eliminate the interactivity of expert subjective judgment problems. A case PCB manufacturing firm is studied and the results indicated that the proposed evaluation framework is simple and reasonable to identify the primary criteria influencing the SCMS, and it is effective to determine the optimal supplier even with the interactive and interdependent criteria/attributes. This hierarchical evaluation framework provides a complete picture in SCMS contexts to both researchers and practitioners. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c7e584bca061335c8cd085511f4abb3b",
"text": "The application of boosting technique to regression problems has received relatively little attention in contrast to research aimed at classification problems. This letter describes a new boosting algorithm, AdaBoost.RT, for regression problems. Its idea is in filtering out the examples with the relative estimation error that is higher than the preset threshold value, and then following the AdaBoost procedure. Thus, it requires selecting the suboptimal value of the error threshold to demarcate examples as poorly or well predicted. Some experimental results using the M5 model tree as a weak learning machine for several benchmark data sets are reported. The results are compared to other boosting methods, bagging, artificial neural networks, and a single M5 model tree. The preliminary empirical comparisons show higher performance of AdaBoost.RT for most of the considered data sets.",
"title": ""
},
{
"docid": "1c8ae6e8d46e95897a9bd76e09fd28aa",
"text": "Skin diseases are very common in our daily life. Due to the similar appearance of skin diseases, automatic classification through lesion images is quite a challenging task. In this paper, a novel multi-classification method based on convolutional neural network (CNN) is proposed for dermoscopy images. A CNN network with nested residual structure is designed first, which can learn more information than the original residual structure. Then, the designed network are trained through transfer learning. With the trained network, 6 kinds of lesion diseases are classified, including nevus, seborrheic keratosis, psoriasis, seborrheic dermatitis, eczema and basal cell carcinoma. The experiments are conducted on six-classification and two-classification tasks, and with the accuracies of 65.8% and 90% respectively, our method greatly outperforms other 4 state-of-the-art networks and the average of 149 professional dermatologists.",
"title": ""
},
{
"docid": "561615280956051346f55269054f3632",
"text": "Jasmonate is an important endogenous chemical signal that plays a role in modulation of plant defense responses. To understand its mechanisms in regulation of rice resistance against the fungal pathogen Magnaporthe oryzae, comparative phenotype and proteomic analyses were undertaken using two near-isogenic cultivars with different levels of disease resistance. Methyl-jasmonate (MeJA) treatment significantly enhanced the resistance against M. oryzae in both cultivars but the treated resistant cultivar maintained a higher level of resistance than the same treated susceptible cultivars. Proteomic analysis revealed 26 and 16 MeJA-modulated proteins in resistant and susceptible cultivars, respectively, and both cultivars shared a common set of 13 proteins. Cumulatively, a total of 29 unique MeJA-influenced proteins were identified with many of them known to be associated with plant defense response and ROS accumulation. Consistent with the findings of proteomic analysis, MeJA treatment increased ROS accumulation in both cultivars with the resistant cultivar showing higher levels of ROS production and cell membrane damage than the susceptible cultivar. Taken together, our data add a new insight into the mechanisms of overall MeJA-induced rice defense response and provide a molecular basis of using MeJA to enhance fungal disease resistance in resistant and susceptible rice cultivars.",
"title": ""
},
{
"docid": "80948f6534fd73a4a12af93cfff3084f",
"text": "The ubiquity of location enabled devices has resulted in a wide proliferation of location based applications and services. To handle the growing scale, database management systems driving such location based services (LBS) must cope with high insert rates for location updates of millions of devices, while supporting efficient real-time analysis on latest location. Traditional DBMSs, equipped with multi-dimensional index structures, can efficiently handle spatio-temporal data. However, popular open source relational database systems are overwhelmed by the high insertion rates, real-time querying requirements, and terabytes of data that these systems must handle. On the other hand, Key-value stores can effectively support large scale operation, but do not natively support multi-attribute accesses needed to support the rich querying functionality essential for the LBSs. We present MD-HBase, a scalable data management system for LBSs that bridges this gap between scale and functionality. Our approach leverages a multi-dimensional index structure layered over a Key-value store. The underlying Key-value store allows the system to sustain high insert throughput and large data volumes, while ensuring fault-tolerance, and high availability. On the other hand, the index layer allows efficient multi-dimensional query processing. We present the design of MD-HBase that builds two standard index structuresâ€\"the K-d tree and the Quad treeâ€\"over a range partitioned Key-value store. Our prototype implementation using HBase, a standard open-source Key-value store, can handle hundreds of thousands of inserts per second using a modest 16 node cluster, while efficiently processing multidimensional range queries and nearest neighbor queries in real-time with response times as low as hundreds of milliseconds.",
"title": ""
},
{
"docid": "8ca30cd6fd335024690837c137f0d1af",
"text": "Non-negative matrix factorization (NMF) is a recently deve loped technique for finding parts-based, linear representations of non-negative data. Although it h as successfully been applied in several applications, it does not always result in parts-based repr esentations. In this paper, we show how explicitly incorporating the notion of ‘sparseness’ impro ves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF a nd for our extension. Our hope is that this will further the application of these methods to olving novel data-analysis problems.",
"title": ""
}
] |
scidocsrr
|
50a10e9e0aa69ff54db368e6268ec580
|
Effective Social Graph Deanonymization Based on Graph Structure and Descriptive Information
|
[
{
"docid": "d67cd936448ea71c8f4f54edbc04c292",
"text": "Matching elements of two data schemas or two data instances plays a key role in data warehousing, e-business, or even biochemical applications. In this paper we present a matching algorithm based on a fixpoint computation that is usable across different scenarios. The algorithm takes two graphs (schemas, catalogs, or other data structures) as input, and produces as output a mapping between corresponding nodes of the graphs. Depending on the matching goal, a subset of the mapping is chosen using filters. After our algorithm runs, we expect a human to check and if necessary adjust the results. As a matter of fact, we evaluate the ‘accuracy’ of the algorithm by counting the number of needed adjustments. We conducted a user study, in which our accuracy metric was used to estimate the labor savings that the users could obtain by utilizing our algorithm to obtain an initial matching. Finally, we illustrate how our matching algorithm is deployed as one of several high-level operators in an implemented testbed for managing information models and mappings.",
"title": ""
}
] |
[
{
"docid": "438e934fd2b149c0c756bbf97216cb1f",
"text": "NoSQL databases manage the bulk of data produced by modern Web applications such as social networks. This stems from their ability to partition and spread data to all available nodes, allowing NoSQL systems to scale. Unfortunately, current solutions' scale out is oblivious to the underlying data access patterns, resulting in both highly skewed load across nodes and suboptimal node configurations.\n In this paper, we first show that judicious placement of HBase partitions taking into account data access patterns can improve overall throughput by 35%. Next, we go beyond current state of the art elastic systems limited to uninformed replica addition and removal by: i) reconfiguring existing replicas according to access patterns and ii) adding replicas specifically configured to the expected access pattern.\n MeT is a prototype for a Cloud-enabled framework that can be used alone or in conjunction with OpenStack for the automatic and heterogeneous reconfiguration of a HBase deployment. Our evaluation, conducted using the YCSB workload generator and a TPC-C workload, shows that MeT is able to i) autonomously achieve the performance of a manual configured cluster and ii) quickly reconfigure the cluster according to unpredicted workload changes.",
"title": ""
},
{
"docid": "bf932af43192818825b29d98ed32f35f",
"text": "Most software quality research has focused on identifying faults (i.e., information is incorrectly recorded in an artifact). Because software still exhibits incorrect behavior, a different approach is needed. This paper presents a systematic literature review to develop taxonomy of errors (i.e., the sources of faults) that may occur during the requirements phase of software lifecycle. This taxonomy is designed to aid developers during the requirement inspection process and to improve overall software quality. The review identified 149 papers from the software engineering, psychology and human cognition literature that provide information about the sources of requirements faults. A major result of this paper is a categorization of the sources of faults into a formal taxonomy that provides a starting point for future research into error-based approaches to improving software quality. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "296da9be6a4b3c6d111f875157e196c8",
"text": "Histopathology image analysis is a gold standard for cancer recognition and diagnosis. Automatic analysis of histopathology images can help pathologists diagnose tumor and cancer subtypes, alleviating the workload of pathologists. There are two basic types of tasks in digital histopathology image analysis: image classification and image segmentation. Typical problems with histopathology images that hamper automatic analysis include complex clinical representations, limited quantities of training images in a dataset, and the extremely large size of singular images (usually up to gigapixels). The property of extremely large size for a single image also makes a histopathology image dataset be considered large-scale, even if the number of images in the dataset is limited. In this paper, we propose leveraging deep convolutional neural network (CNN) activation features to perform classification, segmentation and visualization in large-scale tissue histopathology images. Our framework transfers features extracted from CNNs trained by a large natural image database, ImageNet, to histopathology images. We also explore the characteristics of CNN features by visualizing the response of individual neuron components in the last hidden layer. Some of these characteristics reveal biological insights that have been verified by pathologists. According to our experiments, the framework proposed has shown state-of-the-art performance on a brain tumor dataset from the MICCAI 2014 Brain Tumor Digital Pathology Challenge and a colon cancer histopathology image dataset. The framework proposed is a simple, efficient and effective system for histopathology image automatic analysis. We successfully transfer ImageNet knowledge as deep convolutional activation features to the classification and segmentation of histopathology images with little training data. CNN features are significantly more powerful than expert-designed features.",
"title": ""
},
{
"docid": "5006a2106f5cb5e97f2b4499fa9e2da5",
"text": "OpenEDGAR is an open source Python framework designed to rapidly construct research databases based on the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system operated by the US Securities and Exchange Commission (SEC). OpenEDGAR is built on the Django application framework, supports distributed compute across one or more servers, and includes functionality to (i) retrieve and parse index and filing data from EDGAR, (ii) build tables for key metadata like form type and filer, (iii) retrieve, parse, and update CIK to ticker and industry mappings, (iv) extract content and metadata from filing documents, and (v) search filing document contents. OpenEDGAR is designed for use in both academic research and industrial applications, and is distributed under MIT License at https://github.com/LexPredict/openedgar.",
"title": ""
},
{
"docid": "517d60646fd6a570a70d555f8046cff3",
"text": "In many visual classification tasks the spatial distribution of discriminative information is (i) non uniform e.g. person `reading' can be distinguished from `taking a photo' based on the area around the arms i.e. ignoring the legs and (ii) has intra class variations e.g. different readers may hold the books differently. Motivated by these observations, we propose to learn the discriminative spatial saliency of images while simultaneously learning a max margin classifier for a given visual classification task. Using the saliency maps to weight the corresponding visual features improves the discriminative power of the image representation. We treat the saliency maps as latent variables and allow them to adapt to the image content to maximize the classification score, while regularizing the change in the saliency maps. Our experimental results on three challenging datasets, for (i) human action classification, (ii) fine grained classification and (iii) scene classification, demonstrate the effectiveness and wide applicability of the method.",
"title": ""
},
{
"docid": "efb81d85abcf62f4f3747a58154c5144",
"text": "Visual signals in a video can be divided into content and motion. While content specifies which objects are in the video, motion describes their dynamics. Based on this prior, we propose the Motion and Content decomposed Generative Adversarial Network (MoCoGAN) framework for video generation. The proposed framework generates a video by mapping a sequence of random vectors to a sequence of video frames. Each random vector consists of a content part and a motion part. While the content part is kept fixed, the motion part is realized as a stochastic process. To learn motion and content decomposition in an unsupervised manner, we introduce a novel adversarial learning scheme utilizing both image and video discriminators. Extensive experimental results on several challenging datasets with qualitative and quantitative comparison to the state-of-the-art approaches, verify effectiveness of the proposed framework. In addition, we show that MoCoGAN allows one to generate videos with same content but different motion as well as videos with different content and same motion. Our code is available at https://github.com/sergeytulyakov/mocogan.",
"title": ""
},
{
"docid": "e524172ea4a9ea547aebe3dae1a2f47f",
"text": "In this paper, the built-in transformer voltage multiplier cell is inserted into each phase of the conventional interleaved boost converter to provide additional control freedom for the voltage gain extension without extreme duty cycle. The voltage multiplier cell is only composed of the built-in transformer windings, diodes and small capacitors. And additional active switches are not required to simplify the circuit configuration. Furthermore, the switch voltage stress and the diode peak current are also minimized due to the built-in transformer voltage multiplier cells to improve the conversion efficiency. Moreover, there is no reverse-recovery problem for the clamp diodes and the reverse-recovery current for the regenerative and output diodes are controlled by the leakage inductance of the built-in transformer to reduce the relative losses. In addition, the switch turn-off voltage spikes are suppressed effectively by the ingenious and inherent passive clamp scheme and zero current switch (ZCS) turn-on is realized for the switches, which can enhance the power device reliability. Finally, a 40 V-input 380 V-output 1 kW prototype is built to demonstrate the clear advantages of the proposed converter.",
"title": ""
},
{
"docid": "a2adeb9448c699bbcbb10d02a87e87a5",
"text": "OBJECTIVE\nTo quantify the presence of health behavior theory constructs in iPhone apps targeting physical activity.\n\n\nMETHODS\nThis study used a content analysis of 127 apps from Apple's (App Store) Health & Fitness category. Coders downloaded the apps and then used an established theory-based instrument to rate each app's inclusion of theoretical constructs from prominent behavior change theories. Five common items were used to measure 20 theoretical constructs, for a total of 100 items. A theory score was calculated for each app. Multiple regression analysis was used to identify factors associated with higher theory scores.\n\n\nRESULTS\nApps were generally observed to be lacking in theoretical content. Theory scores ranged from 1 to 28 on a 100-point scale. The health belief model was the most prevalent theory, accounting for 32% of all constructs. Regression analyses indicated that higher priced apps and apps that addressed a broader activity spectrum were associated with higher total theory scores.\n\n\nCONCLUSION\nIt is not unexpected that apps contained only minimal theoretical content, given that app developers come from a variety of backgrounds and many are not trained in the application of health behavior theory. The relationship between price and theory score corroborates research indicating that higher quality apps are more expensive. There is an opportunity for health and behavior change experts to partner with app developers to incorporate behavior change theories into the development of apps. These future collaborations between health behavior change experts and app developers could foster apps superior in both theory and programming possibly resulting in better health outcomes.",
"title": ""
},
{
"docid": "6b8329ef59c6811705688e48bf6c0c08",
"text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.",
"title": ""
},
{
"docid": "e70425a0b9d14ff4223f3553de52c046",
"text": "CUDA is a new general-purpose C language interface to GPU developed by NVIDIA. It makes full use of parallel of GPU and has been widely used now. 3D model reconstruction is a traditional and common technique which has been widely used in engineering experiments, CAD and computer graphics. In this paper, we present an algorithm of CUDA-based Poisson surface reconstruction. Our algorithm makes full use of parallel of GPU and runs entirely on GPU and is ten times faster than previous CPU algorithm.",
"title": ""
},
{
"docid": "572867885a16afc0af6a8ed92632a2a7",
"text": "We present an Efficient Log-based Troubleshooting(ELT) system for cloud computing infrastructures. ELT adopts a novel hybrid log mining approach that combines coarse-grained and fine-grained log features to achieve both high accuracy and low overhead. Moreover, ELT can automatically extract key log messages and perform invariant checking to greatly simplify the troubleshooting task for the system administrator. We have implemented a prototype of the ELT system and conducted an extensive experimental study using real management console logs of a production cloud system and a Hadoop cluster. Our experimental results show that ELT can achieve more efficient and powerful troubleshooting support than existing schemes. More importantly, ELT can find software bugs that cannot be detected by current cloud system management practice.",
"title": ""
},
{
"docid": "af5645e4c2b37d229b525ff3bbac505f",
"text": "PURPOSE OF REVIEW\nTo analyze the role of prepuce preservation in various disorders and discuss options available to reconstruct the prepuce.\n\n\nRECENT FINDINGS\nThe prepuce can be preserved in selected cases of penile degloving procedures, phimosis or hypospadias repair, and penile cancer resection. There is no clear evidence that debilitating and persistent preputial lymphedema develops after a prepuce-sparing penile degloving procedure. In fact, the prepuce can at times be preserved even if lymphedema develops. The prepuce can potentially be preserved in both phimosis and hypospadias repair. Penile cancer localized to the prepuce can be excised using Mohs' micrographic surgery without compromising survival. Reconstruction of the prepuce still remains a theoretical topic. There has been no study that has systematically evaluated efficacy of any reconstructive procedures.\n\n\nSUMMARY\nThe standard practice for preputial disorders remains circumcision. However, prepuce preservation is often technically feasible without compromising treatment. Preservative surgery combined with reconstruction may lead to better patient satisfaction and quality of life.",
"title": ""
},
{
"docid": "ecccd99ca44298ac58156adf14048c09",
"text": "String similarity search is a fundamental query that has been widely used for DNA sequencing, error-tolerant query auto-completion, and data cleaning needed in database, data warehouse, and data mining. In this paper, we study string similarity search based on edit distance that is supported by many database management systems such as <italic>Oracle </italic> and <italic>PostgreSQL</italic>. Given the edit distance, <inline-formula><tex-math notation=\"LaTeX\"> ${\\mathsf {ed}} (s,t)$</tex-math><alternatives><inline-graphic xlink:href=\"yu-ieq1-2756932.gif\"/></alternatives> </inline-formula>, between two strings, <inline-formula><tex-math notation=\"LaTeX\">$s$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq2-2756932.gif\"/></alternatives></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$t$</tex-math><alternatives><inline-graphic xlink:href=\"yu-ieq3-2756932.gif\"/></alternatives> </inline-formula>, the string similarity search is to find every string <inline-formula><tex-math notation=\"LaTeX\">$t$ </tex-math><alternatives><inline-graphic xlink:href=\"yu-ieq4-2756932.gif\"/></alternatives></inline-formula> in a string database <inline-formula><tex-math notation=\"LaTeX\">$D$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq5-2756932.gif\"/></alternatives></inline-formula> which is similar to a query string <inline-formula><tex-math notation=\"LaTeX\">$s$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq6-2756932.gif\"/></alternatives></inline-formula> such that <inline-formula> <tex-math notation=\"LaTeX\">${\\mathsf {ed}} (s, t) \\leq \\tau$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq7-2756932.gif\"/></alternatives></inline-formula> for a given threshold <inline-formula><tex-math notation=\"LaTeX\">$\\tau$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq8-2756932.gif\"/></alternatives></inline-formula>. In the literature, most existing work takes a filter-and-verify approach, where the filter step is introduced to reduce the high verification cost of two strings by utilizing an index built offline for <inline-formula><tex-math notation=\"LaTeX\">$D$</tex-math> <alternatives><inline-graphic xlink:href=\"yu-ieq9-2756932.gif\"/></alternatives></inline-formula>. The two up-to-date approaches are prefix filtering and local filtering. In this paper, we study string similarity search where strings can be either short or long. Our approach can support long strings, which are not well supported by the existing approaches due to the size of the index built and the time to build such index. We propose two new hash-based labeling techniques, named <inline-formula><tex-math notation=\"LaTeX\">$\\mathsf {OX}$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq10-2756932.gif\"/></alternatives></inline-formula> label and <inline-formula> <tex-math notation=\"LaTeX\">$\\mathsf {XX}$</tex-math><alternatives><inline-graphic xlink:href=\"yu-ieq11-2756932.gif\"/> </alternatives></inline-formula> label, for string similarity search. 
We assign a hash-label, <inline-formula> <tex-math notation=\"LaTeX\">${\\mathsf {H}} _s$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq12-2756932.gif\"/></alternatives></inline-formula>, to a string <inline-formula> <tex-math notation=\"LaTeX\">$s$</tex-math><alternatives><inline-graphic xlink:href=\"yu-ieq13-2756932.gif\"/> </alternatives></inline-formula>, and prune the dissimilar strings by comparing two hash-labels, <inline-formula> <tex-math notation=\"LaTeX\">${\\mathsf {H}} _s$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq14-2756932.gif\"/></alternatives></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${\\mathsf {H}} _t$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq15-2756932.gif\"/></alternatives></inline-formula>, for two strings <inline-formula> <tex-math notation=\"LaTeX\">$s$</tex-math><alternatives><inline-graphic xlink:href=\"yu-ieq16-2756932.gif\"/> </alternatives></inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$t$</tex-math><alternatives> <inline-graphic xlink:href=\"yu-ieq17-2756932.gif\"/></alternatives></inline-formula> in the filter step. The key idea is to take the dissimilar bit-patterns between two hash-labels. We discuss our hash-based approaches, address their pruning power, and give the algorithms. Our hash-based approaches achieve high efficiency, and keep its index size and index construction time one order of magnitude smaller than the existing approaches in our experiment at the same time.",
"title": ""
},
{
"docid": "86a622185eeffc4a7ea96c307aed225a",
"text": "Copyright © 2014 Massachusetts Medical Society. In light of the rapidly shifting landscape regarding the legalization of marijuana for medical and recreational purposes, patients may be more likely to ask physicians about its potential adverse and beneficial effects on health. The popular notion seems to be that marijuana is a harmless pleasure, access to which should not be regulated or considered illegal. Currently, marijuana is the most commonly used “illicit” drug in the United States, with about 12% of people 12 years of age or older reporting use in the past year and particularly high rates of use among young people.1 The most common route of administration is inhalation. The greenish-gray shredded leaves and flowers of the Cannabis sativa plant are smoked (along with stems and seeds) in cigarettes, cigars, pipes, water pipes, or “blunts” (marijuana rolled in the tobacco-leaf wrapper from a cigar). Hashish is a related product created from the resin of marijuana flowers and is usually smoked (by itself or in a mixture with tobacco) but can be ingested orally. Marijuana can also be used to brew tea, and its oil-based extract can be mixed into food products. The regular use of marijuana during adolescence is of particular concern, since use by this age group is associated with an increased likelihood of deleterious consequences2 (Table 1). Although multiple studies have reported detrimental effects, others have not, and the question of whether marijuana is harmful remains the subject of heated debate. Here we review the current state of the science related to the adverse health effects of the recreational use of marijuana, focusing on those areas for which the evidence is strongest.",
"title": ""
},
{
"docid": "a2da0b3dde5d54f68616d3ca78a17c08",
"text": "The increase in storage capacity and the progress in information technology today lead to a rapid growth in the amount of stored data. In increasing amounts of data, gaining insight becomes rapidly more difficult. Existing automatic analysis approaches are not sufficient for the analysis of the data. The problem that the amount of stored data increases faster than the computing power to analyse the data is called information overload phenomenon. Visual analytics is an approach to overcome this problem. It combines the strengths of computers to quickly identify re-occurring patterns and to process large amounts of data with human strengths such as flexibility, intuition, and contextual knowledge. In the process of visual analytics knowledge is applied by expert users to conduct the analysis. In many settings the expert users will apply the similar knowledge continuously in several iterations or across various comparable analytical tasks. This approach is time consuming, costly and possibly frustrating for the expert users. Therefore a demand for concepts and methods to prevent repetitive analysis steps can be identified. This thesis presents a reference architecture for knowledge-based visual analytics systems, the KnoVA RA, that provides concepts and methods to represent, extract and reapply knowledge in visual analytic systems. The basic idea of the reference architecture is to extract knowledge that was applied in the analysis process in order to enhance or to derive automated analysis steps. The objective is to reduce the work-load of the experts and to enhance the traceability and reproducibility of results. The KnoVA RA consist of four parts: a model of the analysis process, the KnoVA process model, a meta data model for knowledge-based visual analytics systems, the KnoVA meta model, concepts and algorithms for the extraction of knowledge and concepts and algorithms for the reapplication of knowledge. With these concepts, the reference architecture servers as a blueprint for knowledge-based visual analytics systems. To create the reference architecture, in this thesis, two real-world scenarios from different application domains (automotive and healthcare) are introduced. These scenarios provide requirements that lead to implications for the design of the reference architecture. On the example of the motivating scenarios the KnovA RA is implemented in two visual analytics applications: TOAD, for the analysis of message traces of in-car bus communication networks and CARELIS, for the aggregation of medical records on an interactive visual interface. These systems illustrate the applicability of the KnoVA RA across different analytical challenges and problem classes.",
"title": ""
},
{
"docid": "dcb07c9ad800fc82b97fda1d4aa5d298",
"text": "Reprojection error is a commonly used measure for comparing the quality of different camera calibrations, for example when choosing the best calibration from a set. While this measure is suitable for single cameras, we show that we can improve calibrations in a binocular or multi-camera setup by calibrating the cameras in pairs using a rectification error. The rectification error determines the mismatch in epipolar constraints between a pair of cameras, and it can be used to calibrate binocular camera setups more accurately than using the reprojection error. We provide a quantitative comparison of the reprojection and rectification errors, and also demonstrate our result with examples of binocular stereo reconstruction.",
"title": ""
},
{
"docid": "1bb5a42f2264c082b78a645eb9dd5bd5",
"text": "We introduce the structured projection of intermediate gradients optimization technique (SPIGOT), a new method for backpropagating through neural networks that include hard-decision structured predictions (e.g., parsing) in intermediate layers. SPIGOT requires no marginal inference, unlike structured attention networks (Kim et al., 2017) and some reinforcement learning-inspired solutions (Yogatama et al., 2017). Like socalled straight-through estimators (Hinton, 2012), SPIGOT defines gradient-like quantities associated with intermediate nondifferentiable operations, allowing backpropagation before and after them; SPIGOT’s proxy aims to ensure that, after a parameter update, the intermediate structure will remain well-formed. We experiment on two structured NLP pipelines: syntactic-then-semantic dependency parsing, and semantic parsing followed by sentiment classification. We show that training with SPIGOT leads to a larger improvement on the downstream task than a modularly-trained pipeline, the straight-through estimator, and structured attention, reaching a new state of the art on semantic dependency parsing.",
"title": ""
},
{
"docid": "63ab6c486aa8025c38bd5b7eadb68cfa",
"text": "The demands on a natural language understanding system used for spoken language differ somewhat from the demands of text processing. For processing spoken language, there is a tension between the system being as robust as necessary, and as constrained as possible. The robust system will a t tempt to find as sensible an interpretation as possible, even in the presence of performance errors by the speaker, or recognition errors by the speech recognizer. In contrast, in order to provide language constraints to a speech recognizer, a system should be able to detect that a recognized string is not a sentence of English, and disprefer that recognition hypothesis from the speech recognizer. If the coupling is to be tight, with parsing and recognition interleaved, then the parser should be able to enforce as many constraints as possible for partial utterances. The approach taken in Gemini is to tightly constrain language recognition to limit overgeneration, but to extend the language analysis to recognize certain characteristic patterns of spoken utterances (but not generally thought of as part of grammar) and to recognize specific types of performance errors by the speaker.",
"title": ""
},
{
"docid": "406e06e00799733c517aff88c9c85e0b",
"text": "Matrix rank minimization problem is in general NP-hard. The nuclear norm is used to substitute the rank function in many recent studies. Nevertheless, the nuclear norm approximation adds all singular values together and the approximation error may depend heavily on the magnitudes of singular values. This might restrict its capability in dealing with many practical problems. In this paper, an arctangent function is used as a tighter approximation to the rank function. We use it on the challenging subspace clustering problem. For this nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method. Extensive experiments on face clustering and motion segmentation show that the proposed method is effective for rank approximation.",
"title": ""
},
{
"docid": "5716a9b957281c93a98c0ced122fbd3b",
"text": "In black-box testing, one is interested in creating a suite of tests from requirements that adequately exercise the behavior of a software system without regard to the internal structure of the implementation. In current practice, the adequacy of black box test suites is inferred by examining coverage on an executable artifact, either source code or a software model.In this paper, we define structural coverage metrics directly on high-level formal software requirements. These metrics provide objective, implementation-independent measures of how well a black-box test suite exercises a set of requirements. We focus on structural coverage criteria on requirements formalized as LTL properties and discuss how they can be adapted to measure finite test cases. These criteria can also be used to automatically generate a requirements-based test suite. Unlike model or code-derived test cases, these tests are immediately traceable to high-level requirements. To assess the practicality of our approach, we apply it on a realistic example from the avionics domain.",
"title": ""
}
] |
scidocsrr
|
c948dafacf3bd2626a6a86f858604ff2
|
Monitoring endurance running performance using cardiac parasympathetic function
|
[
{
"docid": "4428705a7eab914db00a38a57fb9199e",
"text": "Physiological testing of elite athletes requires the correct identification and assessment of sports-specific underlying factors. It is now recognised that performance in long-distance events is determined by maximal oxygen uptake (V(2 max)), energy cost of exercise and the maximal fractional utilisation of V(2 max) in any realised performance or as a corollary a set percentage of V(2 max) that could be endured as long as possible. This later ability is defined as endurance, and more precisely aerobic endurance, since V(2 max) sets the upper limit of aerobic pathway. It should be distinguished from endurance ability or endurance performance, which are synonymous with performance in long-distance events. The present review examines methods available in the literature to assess aerobic endurance. They are numerous and can be classified into two categories, namely direct and indirect methods. Direct methods bring together all indices that allow either a complete or a partial representation of the power-duration relationship, while indirect methods revolve around the determination of the so-called anaerobic threshold (AT). With regard to direct methods, performance in a series of tests provides a more complete and presumably more valid description of the power-duration relationship than performance in a single test, even if both approaches are well correlated with each other. However, the question remains open to determine which systems model should be employed among the several available in the literature, and how to use them in the prescription of training intensities. As for indirect methods, there is quantitative accumulation of data supporting the utilisation of the AT to assess aerobic endurance and to prescribe training intensities. However, it appears that: there is no unique intensity corresponding to the AT, since criteria available in the literature provide inconsistent results; and the non-invasive determination of the AT using ventilatory and heart rate data instead of blood lactate concentration ([La(-)](b)) is not valid. Added to the fact that the AT may not represent the optimal training intensity for elite athletes, it raises doubt on the usefulness of this theory without questioning, however, the usefulness of the whole [La(-)](b)-power curve to assess aerobic endurance and predict performance in long-distance events.",
"title": ""
}
] |
[
{
"docid": "dbc66199d6873d990a8df18ce7adf01d",
"text": "Facebook has rapidly become the most popular Social Networking Site (SNS) among faculty and students in higher education institutions in recent years. Due to the various interactive and collaborative features Facebook supports, it offers great opportunities for higher education institutions to support student engagement and improve different aspects of teaching and learning. To understand the social aspects of Facebook use among students and how they perceive using it for academic purposes, an exploratory survey has been distributed to 105 local and international students at a large public technology university in Malaysia. Results reveal consistent patterns of usage compared to what has been reported in literature reviews in relation to the intent of use and the current use for educational purposes. A comparison was conducted of male and female, international and local, postgraduate and undergraduate students respectively, using nonparametric tests. The results indicate that the students’ perception of using Facebook for academic purposes is not significantly related to students’ gender or students’ background; while it is significantly related to study level and students’ experience. Moreover, based on the overall results of the survey and literature reviews, the paper presents recommendations and suggestions for further research of social networking in a higher education context.",
"title": ""
},
{
"docid": "c26abad7f3396faa798a74cfb23e6528",
"text": "Recent advances in seismic sensor technology, data acquisition systems, digital communications, and computer hardware and software make it possible to build reliable real-time earthquake information systems. Such systems provide a means for modern urban regions to cope effectively with the aftermath of major earthquakes and, in some cases, they may even provide warning, seconds before the arrival of seismic waves. In the long term these systems also provide basic data for mitigation strategies such as improved building codes.",
"title": ""
},
{
"docid": "2875373b63642ee842834a5360262f41",
"text": "Video stabilization techniques are essential for most hand-held captured videos due to high-frequency shakes. Several 2D-, 2.5D-, and 3D-based stabilization techniques have been presented previously, but to the best of our knowledge, no solutions based on deep neural networks had been proposed to date. The main reason for this omission is shortage in training data as well as the challenge of modeling the problem using neural networks. In this paper, we present a video stabilization technique using a convolutional neural network. Previous works usually propose an off-line algorithm that smoothes a holistic camera path based on feature matching. Instead, we focus on low-latency, real-time camera path smoothing that does not explicitly represent the camera path and does not use future frames. Our neural network model, called StabNet, learns a set of mesh-grid transformations progressively for each input frame from the previous set of stabilized camera frames and creates stable corresponding latent camera paths implicitly. To train the network, we collect a dataset of synchronized steady and unsteady video pairs via a specially designed hand-held hardware. Experimental results show that our proposed online method performs comparatively to the traditional off-line video stabilization methods without using future frames while running about 10 times faster. More importantly, our proposed StabNet is able to handle low-quality videos, such as night-scene videos, watermarked videos, blurry videos, and noisy videos, where the existing methods fail in feature extraction or matching.",
"title": ""
},
{
"docid": "2615f2f66adeaf1718d7afa5be3b32b1",
"text": "In this paper, an advanced design of an Autonomous Underwater Vehicle (AUV) is presented. The design is driven only by four water pumps. The different power combinations of the four motors provides the force and moment for propulsion and maneuvering. No control surfaces are needed in this design, which make the manufacturing cost of such a vehicle minimal and more reliable. Based on the propulsion method of the vehicle, a nonlinear AUV dynamic model is studied. This nonlinear model is linearized at the operation point. A control strategy of the AUV is proposed including attitude control and auto-pilot design. Simulation results for the attitude control loop are presented to validate this approach.",
"title": ""
},
{
"docid": "8b63800da2019180d266297647e3dbc0",
"text": "Most of the work in machine learning assume that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the class-probability distribution that generate the examples changes over time. We present a method for detection of changes in the probability distribution of examples. A central idea is the concept of context: a set of contiguous examples where the distribution is stationary. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error wil decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared, if in a sequence of examples, the error increases reaching the warning level at example kw, and the drift level at example kd. This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since kw. The method was tested with a set of eight artificial datasets and a real world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show a good performance detecting drift and also with learning the new concept. We also observe that the method is independent of the learning algorithm.",
"title": ""
},
{
"docid": "1f8115529218a17313032a88467ccc64",
"text": "s on Human Factors in Computing Systems (pp. 722–",
"title": ""
},
{
"docid": "27856dcc3b48bb86ca8bd3ca8b046385",
"text": "This paper provides evidence of the significant negative health externalities of traffic congestion. We exploit the introduction of electronic toll collection, or E-ZPass, which greatly reduced traffic congestion and emissions from motor vehicles in the vicinity of highway toll plazas. Specifically, we compare infants born to mothers living near toll plazas to infants born to mothers living near busy roadways but away from toll plazas with the idea that mothers living away from toll plazas did not experience significant reductions in local traffic congestion. We also examine differences in the health of infants born to the same mother, but who differ in terms of whether or not they were “exposed” to E-ZPass. We find that reductions in traffic congestion generated by E-ZPass reduced the incidence of prematurity and low birth weight among mothers within 2km of a toll plaza by 6.7-9.1% and 8.5-11.3% respectively, with larger effects for African-Americans, smokers, and those very close to toll plazas. There were no immediate changes in the characteristics of mothers or in housing prices in the vicinity of toll plazas that could explain these changes, and the results are robust to many changes in specification. The results suggest that traffic congestion is a significant contributor to poor health in affected infants. Estimates of the costs of traffic congestion should account for these important health externalities. * We are grateful to the MacArthur foundation for financial support. We thank Katherine Hempstead and Matthew Weinberg of the New Jersey Department of Health, and Craig Edelman of the Pennsylvania Department of Health for facilitating our access to the data. We are grateful to James MacKinnon and seminar participants at Harvard University, the University of Maryland, Queens University, Princeton University, the NBER Summer Institute, the SOLE/EALE 2010 meetings, Tulane University, and Uppsala University for helpful comments. All opinions and any errors are our own. Motor vehicles are a major source of air pollution. Nationally they are responsible for over 50% of carbon monoxide (CO), 34 percent of nitrogen oxide (NO2) and over 29 percent of hydrocarbon emissions in addition to as much as 10 percent of fine particulate matter emissions (Ernst et al., 2003). In urban areas, vehicles are the dominant source of these emissions. Furthermore, between 1980 and 2003 total vehicle miles traveled (VMT) in urban areas in the United States increased by 111% against an increase in urban lane-miles of only 51% (Bureau of Transportation Statistics, 2004). As a result, traffic congestion has steadily increased across the United States, causing 3.7 billion hours of delay by 2003 and wasting 2.3 billion gallons of motor fuel (Schrank and Lomax, 2005). Traditional estimates of the cost of congestion typically include delay costs (Vickrey, 1969), but they rarely address other congestion externalities such as the health effects of congestion. This paper seeks to provide estimates of the health effects of traffic congestion by examining the effect of a policy change that caused a sharp drop in congestion (and therefore in the level of local motor vehicle emissions) within a relatively short time frame at different sites across the northeastern United States. Engineering studies suggest that the introduction of electronic toll collection (ETC) technology, called E-ZPass in the Northeast, sharply reduced delays at toll plazas and pollution caused by idling, decelerating, and accelerating. 
We study the effect of E-ZPass, and thus the sharp reductions in local traffic congestion, on the health of infants born to mothers living near toll plazas. This question is of interest for three reasons. First, there is increasing evidence of the long-term effects of poor health at birth on future outcomes. For example, low birth weight has been linked to future health problems and lower educational attainment (see Currie (2009) for a summary of this research). The debate over the costs and benefits of emission controls and traffic congestion policies could be significantly impacted by evidence that traffic congestion has a deleterious effect on fetal health. Second, the study of newborns overcomes several difficulties in making the connection between pollution and health because, unlike adult diseases that may reflect pollution exposure that occurred many years ago, the link between cause and effect is immediate. Third, E-ZPass is an interesting policy experiment because, while pollution control was an important consideration for policy makers, the main motive for consumers to sign up for E-ZPass is to reduce travel time. Hence, E-ZPass offers an example of achieving reductions in pollution by bundling emissions reductions with something consumers perhaps value more highly such as reduced travel time. Our analysis improves upon much of the previous research linking air pollution to fetal health as well as on the somewhat smaller literature focusing specifically on the relationship between residential proximity to busy roadways and poor pregnancy outcomes. Since air pollution is not randomly assigned, studies that attempt to compare health outcomes for populations exposed to differing pollution levels may not be adequately controlling for confounding determinants of health. Since air quality is capitalized into housing prices (see Chay and Greenstone, 2003) families with higher incomes or preferences for cleaner air are likely to sort into locations with better air quality, and failure to account for this sorting will lead to overestimates of the effects of pollution. Alternatively, pollution levels are higher in urban areas where there are often more educated individuals with better access to health care, which can cause underestimates of the true effects of pollution on health. In the absence of a randomized trial, we exploit a policy change that created large local and persistent reductions in traffic congestion and traffic related air emissions for certain segments along a highway. We compare the infant health outcomes of those living near an electronic toll plaza before and after implementation of E-ZPass to those living near a major highway but further away from a toll plaza. Specifically, we compare mothers within 2 kilometers of a toll plaza to mothers who are between 2 and 10 km from a toll plaza but still within 3 kilometers of a major highway before and after the adoption of E-ZPass in New Jersey and Pennsylvania. New Jersey and Pennsylvania provide a compelling setting for our particular research design. First, both New Jersey and Pennsylvania are heavily populated, with New Jersey being the most densely populated state in the United States and Pennsylvania being the sixth most populous state in the country. As a result, these two states have some of the busiest interstate systems in the country, systems that also happen to be densely surrounded by residential housing. 
Furthermore, we know the exact addresses of mothers, in contrast to many observational studies which approximate the individual's location as the centroid of a geographic area or by computing average pollution levels within the geographic area. This information enables us to improve on the assignment of pollution exposure. Lastly, E-ZPass adoption and take-up was extremely quick, and the reductions in congestion spill over to all automobiles, not just those registered with E-ZPass (New Jersey Transit Authority, 2001). Our difference-in-differences research design relies on the assumption that the characteristics of mothers near a toll plaza change over time in a way that is comparable to those of other mothers who live further away from a plaza but still close to a major highway. We test this assumption by examining the way that observable characteristics of the two groups of mothers and housing prices change before and after E-ZPass adoption. We also estimate a range of alternative specifications in an effort to control for unobserved characteristics of mothers and neighborhoods that could confound our estimates. We find significant effects on infant health. The difference-in-differences models suggest that prematurity fell by 6.7-9.16% among mothers within 2km of a toll plaza, while the incidence of low birth weight fell by 8.5-11.3%. We argue that these are large but not implausible effects given previous studies. In contrast, we find that there are no significant effects of E-ZPass adoption on the demographic characteristics of mothers in the vicinity of a toll plaza. We also find no immediate effect on housing prices, suggesting that the composition of women giving birth near toll plazas shows little change in the immediate aftermath of E-ZPass adoption (though of course it might change more over time). The rest of the paper is laid out as follows: Section I provides necessary background. Section II describes our methods, while data are described in Section III. Section IV presents our results. Section V discusses the magnitude of the effects we find, and Section VI details our conclusions. I. Background Many studies suggest an association between air pollution and fetal health. Mattison et al. (2003) and Glinianaia et al. (2004) summarize much of the literature. For more recent papers see for example Currie et al. (2009); Dugandzic et al. (2006); Huynh et al. (2006); Karr et al. (2009); Lee et al. (2008); Leem et al. (2006); Liu et al. (2007); Parker et al. (2005); Salam et al. (2005); Ritz et al. (2006); Wilhelm and Ritz (2005); Woodruff et al. (2008). Since traffic is a major contributor to air pollution, several studies have focused specifically on the effects of exposure to motor vehicle exhaust (see Wilhelm and Ritz (2003); Ponce et al. (2005); Brauer et al.). There is also a large literature linking air pollution and child health, some of it focusing on the effects of traffic on child health; see Schwartz (2004) and Glinianaia et al. (2004b) for reviews. ",
"title": ""
},
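The difference-in-differences comparison described in this passage can be illustrated with a short regression sketch. The column names (near_plaza, post_ezpass, low_birth_weight, smoker) and the simulated data below are hypothetical stand-ins rather than the paper's actual variables or dataset; the only point is that the coefficient on the interaction term is the diff-in-diff estimate.

```python
# Illustrative difference-in-differences sketch with made-up data and column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "near_plaza": rng.integers(0, 2, n),    # within 2 km of a toll plaza
    "post_ezpass": rng.integers(0, 2, n),   # birth after E-ZPass adoption
    "smoker": rng.integers(0, 2, n),
})
# Simulate a small reduction in low birth weight for treated births.
p = 0.10 - 0.01 * df.near_plaza * df.post_ezpass
df["low_birth_weight"] = rng.binomial(1, p)

# The coefficient on near_plaza:post_ezpass is the diff-in-diff estimate.
model = smf.ols("low_birth_weight ~ near_plaza * post_ezpass + smoker", data=df).fit()
print(model.params["near_plaza:post_ezpass"])
```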
{
"docid": "fa404bb1a60c219933f1666552771ada",
"text": "A novel low voltage self-biased high swing cascode current mirror (SHCCM) employing bulk-driven NMOS transistors is proposed in this paper. The comparison with the conventional circuit reveals that the proposed bulk-driven circuit operates at lower voltages and provides enhanced bandwidth with improved output resistance. The proposed circuit is further modified by replacing the passive resistance by active MOS realization. Small signal analysis of the proposed and conventional SHCCM are carried out to show the improvement achieved through the proposed circuit. The circuits are simulated in standard SPICE 0.25 mm CMOS technology and simulated results are compared with the theoretically obtained results. To ensure robustness of the proposed SHCCM, simulation results of component tolerance and process variation have also been included. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "462f8689f7be66267bfb77f99352e93a",
"text": "Face recognition under variable pose and illumination is a challenging problem in computer vision tasks. In this paper, we solve this problem by proposing a new residual based deep face reconstruction neural network to extract discriminative pose-and-illumination-invariant (PII) features. Our deep model can change arbitrary pose and illumination face images to the frontal view with standard illumination. We propose a new triplet-loss training method instead of Euclidean loss to optimize our model, which has two advantages: a) The training triplets can be easily augmented by freely choosing combinations of labeled face images, in this way, overfitting can be avoided; b) The triplet-loss training makes the PII features more discriminative even when training samples have similar appearance. By using our PII features, we achieve 83.8% average recognition accuracy on MultiPIE face dataset which is competitive to the state-of-the-art face recognition methods.",
"title": ""
},
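A minimal sketch of the triplet-loss objective mentioned in this abstract, written in plain NumPy under the usual hinge formulation; the margin value, feature dimension, and random embeddings are illustrative assumptions rather than the authors' settings.

```python
# Minimal NumPy sketch of a hinge-style triplet loss (not the authors' code).
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss encouraging d(anchor, positive) + margin < d(anchor, negative)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

# Toy example: 4 triplets of 128-dimensional PII-style features.
rng = np.random.default_rng(1)
a, p, n = (rng.normal(size=(4, 128)) for _ in range(3))
print(triplet_loss(a, p, n))
```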
{
"docid": "edbbf1491e552346d42d39ebf90fc9fc",
"text": "The use of ICT in the classroom is very important for providing opportunities for students to learn to operate in an information age. Studying the obstacles to the use of ICT in education may assist educators to overcome these barriers and become successful technology adopters in the future. This paper provides a meta-analysis of the relevant literature that aims to present the perceived barriers to technology integration in science education. The findings indicate that teachers had a strong desire for to integrate ICT into education; but that, they encountered many barriers. The major barriers were lack of confidence, lack of competence, and lack of access to resources. Since confidence, competence and accessibility have been found to be the critical components of technology integration in schools, ICT resources including software and hardware, effective professional development, sufficient time, and technical support need to be provided to teachers. No one component in itself is sufficient to provide good teaching. However, the presence of all components increases the possibility of excellent integration of ICT in learning and teaching opportunities. Generally, this paper provides information and recommendation to those responsible for the integration of new technologies into science education.",
"title": ""
},
{
"docid": "eb4d350f389c6f046b81e4459fcb236c",
"text": "Customer relationship management (CRM) in business‐to‐business (B2B) e‐commerce Yun E. Zeng H. Joseph Wen David C. Yen Article information: To cite this document: Yun E. Zeng H. Joseph Wen David C. Yen, (2003),\"Customer relationship management (CRM) in business#to#business (B2B) e#commerce\", Information Management & Computer Security, Vol. 11 Iss 1 pp. 39 44 Permanent link to this document: http://dx.doi.org/10.1108/09685220310463722",
"title": ""
},
{
"docid": "fca372687a77fd27b8c56ed494a6628b",
"text": "Sentiment analysis is the computational study of opinions, sentiments, evaluations, attitudes, views and emotions expressed in text. It refers to a classification problem where the main focus is to predict the polarity of words and then classify them into positive or negative sentiment. Sentiment analysis over Twitter offers people a fast and effective way to measure the public's feelings towards their party and politicians. The primary issues in previous sentiment analysis techniques are classification accuracy, as they incorrectly classify most of the tweets with the biasing towards the training data. In opinion texts, lexical content alone also can be misleading. Therefore, here we adopt a lexicon based sentiment analysis method, which will exploit the sense definitions, as semantic indicators of sentiment. Here we propose a novel approach for accurate sentiment classification of twitter messages using lexical resources SentiWordNet and WordNet along with Word Sense Disambiguation. Thus we applied the SentiWordNet lexical resource and Word Sense Disambiguation for finding political sentiment from real time tweets. Our method also uses a negation handling as a pre-processing step in order to achieve high accuracy.",
"title": ""
},
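A hedged sketch of the lexicon lookup step using NLTK's SentiWordNet interface; it averages sense scores instead of performing the word sense disambiguation the passage describes, and the part-of-speech tag and example words are assumptions for illustration.

```python
# SentiWordNet lookup sketch via NLTK
# (requires: nltk.download("sentiwordnet") and nltk.download("wordnet")).
from nltk.corpus import sentiwordnet as swn

def word_polarity(word, pos="a"):
    """Average positive-minus-negative score over the word's senses (no WSD here)."""
    synsets = list(swn.senti_synsets(word, pos))
    if not synsets:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

print(word_polarity("good"), word_polarity("terrible"))
```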
{
"docid": "f1c5f6f2bdff251e91df1dbd1e2302b2",
"text": "In this paper, mathematical models for permutation flow shop scheduling and job shop scheduling problems are proposed. The first problem is based on a mixed integer programming model. As the problem is NP-complete, this model can only be used for smaller instances where an optimal solution can be computed. For large instances, another model is proposed which is suitable for solving the problem by stochastic heuristic methods. For the job shop scheduling problem, a mathematical model and its main representation schemes are presented. Keywords—Flow shop, job shop, mixed integer model, representation scheme.",
"title": ""
},
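For the heuristic-friendly representation mentioned in the scheduling passage above, a solution is simply a permutation of jobs; the sketch below evaluates the makespan of a permutation with the standard flow shop recursion and brute-forces a toy 4-job, 3-machine instance. The processing times are made up for illustration and are not from the paper.

```python
# Permutation flow shop: evaluate makespan of a job order
# (processing_times[j][m] = time of job j on machine m).
import itertools

def makespan(order, processing_times):
    n_machines = len(processing_times[0])
    completion = [0.0] * n_machines
    for job in order:
        for m in range(n_machines):
            start = max(completion[m], completion[m - 1] if m > 0 else 0.0)
            completion[m] = start + processing_times[job][m]
    return completion[-1]

# Toy instance: 4 jobs on 3 machines; brute-force the best permutation.
p = [[3, 2, 4], [2, 4, 1], [4, 1, 3], [1, 3, 2]]
best = min(itertools.permutations(range(4)), key=lambda o: makespan(o, p))
print(best, makespan(best, p))
```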
{
"docid": "63b2bc943743d5b8ef9220fd672df84f",
"text": "In multiagent systems, we often have a set of agents each of which have a preference ordering over a set of items and one would like to know these preference orderings for various tasks, for example, data analysis, preference aggregation, voting etc. However, we often have a large number of items which makes it impractical to ask the agents for their complete preference ordering. In such scenarios, we usually elicit these agents’ preferences by asking (a hopefully small number of) comparison queries — asking an agent to compare two items. Prior works on preference elicitation focus on unrestricted domain and the domain of single peaked preferences and show that the preferences in single peaked domain can be elicited by much less number of queries compared to unrestricted domain. We extend this line of research and study preference elicitation for single peaked preferences on trees which is a strict superset of the domain of single peaked preferences. We show that the query complexity crucially depends on the number of leaves, the path cover number, and the distance from path of the underlying single peaked tree, whereas the other natural parameters like maximum degree, diameter, pathwidth do not play any direct role in determining query complexity. We then investigate the query complexity for finding a weak Condorcet winner for preferences single peaked on a tree and show that this task has much less query complexity than preference elicitation. Here again we observe that the number of leaves in the underlying single peaked tree and the path cover number of the tree influence the query complexity of the problem.",
"title": ""
},
{
"docid": "6724f1e8a34a6d9f64a30061ce7f67c0",
"text": "Mental contrasting with implementation intentions (MCII) has been found to improve selfregulation across many life domains. The present research investigates whether MCII can benefit time management. In Study 1, we asked students to apply MCII to a pressing academic problem and assessed how they scheduled their time for the upcoming week. MCII participants scheduled more time than control participants who in their thoughts either reflected on similar contents using different cognitive procedures (content control group) or applied the same cognitive procedures on different contents (format control group). In Study 2, students were taught MCII as a metacognitive strategy to be used on any upcoming concerns of the subsequent week. As compared to the week prior to the training, students in the MCII (vs. format control) condition improved in self-reported time management. In Study 3, MCII (vs. format control) helped working mothers who enrolled in a vocational business program to attend classes more regularly. The findings suggest that performing MCII on one’s everyday concerns improves time management.",
"title": ""
},
{
"docid": "cdfec1296a168318f773bb7ef0bfb307",
"text": "Today service markets are becoming business reality as for example Amazon's EC2 spot market. However, current research focusses on simplified consumer-provider service markets only. Taxes are an important market element which has not been considered yet for service markets. This paper introduces and evaluates the effects of tax systems for IaaS markets which trade virtual machines. As a digital good with well defined characteristics like storage or processing power a virtual machine can be taxed by the tax authority using different tax systems. Currently the value added tax is widely used for taxing virtual machines only. The main contribution of the paper is the so called CloudTax component, a framework to simulate and evaluate different tax systems on service markets. It allows to introduce economical principles and phenomenons like the Laffer Curve or tax incidences. The CloudTax component is based on the CloudSim simulation framework using the Bazaar-Extension for comprehensive economic simulations. We show that tax mechanisms strongly influence the efficiency of negotiation processes in the Cloud market.",
"title": ""
},
{
"docid": "ca932a0b6b71f009f95bad6f2f3f8a38",
"text": "Page 13 Supply chain management is increasingly being recognized as the integration of key business processes across the supply chain. For example, Hammer argues that now that companies have implemented processes within the firm, they need to integrate them between firms: Streamlining cross-company processes is the next great frontier for reducing costs, enhancing quality, and speeding operations. It is where this decade’s productivity wars will be fought. The victors will be those companies that are able to take a new approach to business, working closely with partners to design and manage processes that extend across traditional corporate boundaries. They will be the ones that make the leap from efficiency to super efficiency [1]. Monczka and Morgan also focus on the importance of process integration in supply chain management [2]. The piece that seems to be missing from the literature is a comprehensive definition of the processes that constitute supply chain management. How can companies achieve supply chain integration if there is not a common understanding of the key business processes? It seems that in order to build links between supply chain members it is necessary for companies to implement a standard set of supply chain processes. Practitioners and educators need a common definition of supply chain management, and a shared understanding of the processes. We recommend the definition of supply chain management developed and used by The Global Supply Chain Forum: Supply Chain Management is the integration of key business processes from end user through original suppliers that provides products, services, and information that add value for customers and other stakeholders [3]. The Forum members identified eight key processes that need to be implemented within and across firms in the supply chain. To date, The Supply Chain Management Processes",
"title": ""
},
{
"docid": "c4d4cb398cfa5cbae37879c385a9a6ed",
"text": "Performing large-scale malware classification is increasingly becoming a critical step in malware analytics as the number and variety of malware samples is rapidly growing. Statistical machine learning constitutes an appealing method to cope with this increase as it can use mathematical tools to extract information out of large-scale datasets and produce interpretable models. This has motivated a surge of scientific work in developing machine learning methods for detection and classification of malicious executables. However, an optimal method for extracting the most informative features for different malware families, with the final goal of malware classification, is yet to be found. Fortunately, neural networks have evolved to the state that they can surpass the limitations of other methods in terms of hierarchical feature extraction. Consequently, neural networks can now offer superior classification accuracy in many domains such as computer vision and natural language processing. In this paper, we transfer the performance improvements achieved in the area of neural networks to model the execution sequences of disassembled malicious binaries. We implement a neural network that consists of convolutional and feedforward neural constructs. This architecture embodies a hierarchical feature extraction approach that combines convolution of n-grams of instructions with plain vectorization of features derived from the headers of the Portable Executable (PE) files. Our evaluation results demonstrate that our approach outperforms baseline methods, such as simple Feedforward Neural Networks and Support Vector Machines, as we achieve 93% on precision and recall, even in case of obfuscations in the data.",
"title": ""
},
{
"docid": "60bdd255a19784ed2d19550222e61b69",
"text": "Haptic feedback on touch-sensitive displays provides significant benefits in terms of reducing error rates, increasing interaction speed and minimizing visual distraction. This particularly holds true for multitasking situations such as the interaction with mobile devices or touch-based in-vehicle systems. In this paper, we explore how the interaction with tactile touchscreens can be modeled and enriched using a 2+1 state transition model. The model expands an approach presented by Buxton. We present HapTouch -- a force-sensitive touchscreen device with haptic feedback that allows the user to explore and manipulate interactive elements using the sense of touch. We describe the results of a preliminary quantitative study to investigate the effects of tactile feedback on the driver's visual attention, driving performance and operating error rate. In particular, we focus on how active tactile feedback allows the accurate interaction with small on-screen elements during driving. Our results show significantly reduced error rates and input time when haptic feedback is given.",
"title": ""
},
{
"docid": "3bde393992b3055083e7348d360f7ec5",
"text": "A new smart power switch for industrial, automotive and computer applications developed in BCD (Bipolar, CMOS, DMOS) technology is described. It consists of an on-chip 70 mΩ power DMOS transistor connected in high side configuration and its driver makes the device virtually indestructible and suitable to drive any kind of load with an output current of 2.5 A. If the load is inductive, an internal voltage clamp allows fast demagnetization down to 55 V under the supply voltage. The device includes novel structures for the driver, the fully integrated charge pump circuit and its oscillator. These circuits have specifically been designed to reduce ElectroMagnetic Interference (EMI) thanks to an accurate control of the output voltage slope and the reduction of the output voltage ripple caused by the charge pump itself (several patents pending). An innovative open load circuit allows the detection of the open load condition with high precision (2 to 4 mA within the temperature range and including process spreads). The quiescent current has also been reduced to 600 uA. Diagnostics for CPU feedback is available at the external connections of the chip when the following fault conditions occur: open load; output short circuit to supply voltage; overload or output short circuit to ground; over temperature; under voltage supply.",
"title": ""
}
] |
scidocsrr
|
a38492ed7d3a6ca0d75054765f346f6f
|
Personalized Prognostic Models for Oncology: A Machine Learning Approach
|
[
{
"docid": "a88c0d45ca7859c050e5e76379f171e6",
"text": "Cancer and other chronic diseases have constituted (and will do so at an increasing pace) a significant portion of healthcare costs in the United States in recent years. Although prior research has shown that diagnostic and treatment recommendations might be altered based on the severity of comorbidities, chronic diseases are still being investigated in isolation from one another in most cases. To illustrate the significance of concurrent chronic diseases in the course of treatment, this study uses SEER’s cancer data to create two comorbid data sets: one for breast and female genital cancers and another for prostate and urinal cancers. Several popular machine learning techniques are then applied to the resultant data sets to build predictive models. Comparison of the results shows that having more information about comorbid conditions of patients can improve models’ predictive power, which in turn, can help practitioners make better diagnostic and treatment decisions. Therefore, proper identification, recording, and use of patients’ comorbidity status can potentially lower treatment costs and ease the healthcare related economic challenges.",
"title": ""
}
] |
[
{
"docid": "30dffba83b24e835a083774aa91e6c59",
"text": "Wikipedia is one of the most popular sites on the Web, with millions of users relying on it to satisfy a broad range of information needs every day. Although it is crucial to understand what exactly these needs are in order to be able to meet them, little is currently known about why users visit Wikipedia. The goal of this paper is to fill this gap by combining a survey of Wikipedia readers with a log-based analysis of user activity. Based on an initial series of user surveys, we build a taxonomy of Wikipedia use cases along several dimensions, capturing users’ motivations to visit Wikipedia, the depth of knowledge they are seeking, and their knowledge of the topic of interest prior to visiting Wikipedia. Then, we quantify the prevalence of these use cases via a large-scale user survey conducted on live Wikipedia with almost 30,000 responses. Our analyses highlight the variety of factors driving users to Wikipedia, such as current events, media coverage of a topic, personal curiosity, work or school assignments, or boredom. Finally, we match survey responses to the respondents’ digital traces in Wikipedia’s server logs, enabling the discovery of behavioral patterns associated with specific use cases. For instance, we observe long and fast-paced page sequences across topics for users who are bored or exploring randomly, whereas those using Wikipedia for work or school spend more time on individual articles focused on topics such as science. Our findings advance our understanding of reader motivations and behavior on Wikipedia and can have implications for developers aiming to improve Wikipedia’s user experience, editors striving to cater to their readers’ needs, third-party services (such as search engines) providing access to Wikipedia content, and researchers aiming to build tools such as recommendation engines.",
"title": ""
},
{
"docid": "3aa4fd13689907ae236bd66c8a7ed8c8",
"text": "Biomedical named entity recognition(BNER) is a crucial initial step of information extraction in biomedical domain. The task is typically modeled as a sequence labeling problem. Various machine learning algorithms, such as Conditional Random Fields (CRFs), have been successfully used for this task. However, these state-of-the-art BNER systems largely depend on hand-crafted features. We present a recurrent neural network (RNN) framework based on word embeddings and character representation. On top of the neural network architecture, we use a CRF layer to jointly decode labels for the whole sentence. In our approach, contextual information from both directions and long-range dependencies in the sequence, which is useful for this task, can be well modeled by bidirectional variation and long short-term memory (LSTM) unit, respectively. Although our models use word embeddings and character embeddings as the only features, the bidirectional LSTM-RNN (BLSTM-RNN) model achieves state-of-the-art performance — 86.55% F1 on BioCreative II gene mention (GM) corpus and 73.79% F1 on JNLPBA 2004 corpus. Our neural network architecture can be successfully used for BNER without any manual feature engineering. Experimental results show that domain-specific pre-trained word embeddings and character-level representation can improve the performance of the LSTM-RNN models. On the GM corpus, we achieve comparable performance compared with other systems using complex hand-crafted features. Considering the JNLPBA corpus, our model achieves the best results, outperforming the previously top performing systems. The source code of our method is freely available under GPL at https://github.com/lvchen1989/BNER .",
"title": ""
},
{
"docid": "eb6572344dbaf8e209388f888fba1c10",
"text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.",
"title": ""
},
{
"docid": "955c7d91d4463fc50feb93320b7c370c",
"text": "The major problem in the use of the Web is that of searching for relevant information that meets the expectations of a user. This problem increases every day and especially with the emergence of web 2.0 or social web. Our paper, therefore, ignores the disadvantage of social web and operates it to rich user profile.",
"title": ""
},
{
"docid": "96d6173f58e36039577c8e94329861b2",
"text": "Reverse Turing tests, or CAPTCHAs, have become an ubiquitous defense used to protect open Web resources from being exploited at scale. An effective CAPTCHA resists existing mechanistic software solving, yet can be solved with high probability by a human being. In response, a robust solving ecosystem has emerged, reselling both automated solving technology and realtime human labor to bypass these protections. Thus, CAPTCHAs can increasingly be understood and evaluated in purely economic terms; the market price of a solution vs the monetizable value of the asset being protected. We examine the market-side of this question in depth, analyzing the behavior and dynamics of CAPTCHA-solving service providers, their price performance, and the underlying labor markets driving this economy.",
"title": ""
},
{
"docid": "1cbf55610014ef23e4015c07f5846619",
"text": "Variation of the system parameters and external disturbances always happen in the CNC servo system. With a traditional PID controller, it will cause large overshoot or poor stability. In this paper, a fuzzy-PID controller is proposed in order to improve the performance of the servo system. The proposed controller incorporates the advantages of PID control which can eliminate the steady-state error, and the advantages of fuzzy logic such as simple design, no need of an accurate mathematical model and some adaptability to nonlinearity and time-variation. The fuzzy-PID controller accepts the error (e) and error change(ec) as inputs ,while the parameters kp, ki, kd as outputs. Control rules of the controller are established based on experience so that self-regulation of the values of PID parameters is achieved. A simulation model of position servo system is constructed in Matlab/Simulink module based on a high-speed milling machine researched in our institute. By comparing the traditional PID controller and the fuzzy-PID controller, the simulation results show that the system has stronger robustness and disturbance rejection capability with the latter controller which can meet the performance requirements of the CNC position servo system better",
"title": ""
},
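A simplified sketch of the fuzzy-gain-scheduled PID idea from the passage above: the error and error change drive crude membership functions that rescale the base kp, ki, kd, which then feed an ordinary PID loop on a first-order plant. The membership shapes, gains, and plant model are illustrative assumptions, not the controller or Simulink model from the paper.

```python
# Fuzzy-style gain scheduling feeding a discrete PID loop (toy first-order plant).
def fuzzy_gains(e, ec, base=(2.5, 1.0, 0.1)):
    """Rescale base (kp, ki, kd) with coarse fuzzy-style rules on error e and error change ec."""
    big_e = min(1.0, abs(e))          # crude membership: "error is big"
    big_ec = min(1.0, abs(ec) / 10)   # crude membership: "error change is big"
    kp = base[0] * (1.0 + 0.5 * big_e)
    ki = base[1] * (1.0 - 0.5 * big_ec)
    kd = base[2] * (1.0 + 0.5 * big_ec)
    return kp, ki, kd

# Track a unit step on a first-order plant y' = (u - y) / tau.
setpoint, y, prev_y, prev_e, integral = 1.0, 0.0, 0.0, 0.0, 0.0
dt, tau = 0.01, 0.2
for _ in range(1000):
    e = setpoint - y
    ec = (e - prev_e) / dt
    kp, ki, kd = fuzzy_gains(e, ec)
    integral += e * dt
    u = kp * e + ki * integral - kd * (y - prev_y) / dt   # derivative on measurement
    prev_e, prev_y = e, y
    y += dt * (u - y) / tau
print(round(y, 3))   # approaches the setpoint
```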
{
"docid": "e146a0534b5a81ac6f332332056ae58c",
"text": "Paraphrase identification is an important topic in artificial intelligence and this task is often tackled as sequence alignment and matching. Traditional alignment methods take advantage of attention mechanism, which is a soft-max weighting technique. Weighting technique could pick out the most similar/dissimilar parts, but is weak in modeling the aligned unmatched parts, which are the crucial evidence to identify paraphrase. In this paper, we empower neural architecture with Hungarian algorithm to extract the aligned unmatched parts. Specifically, first, our model applies BiLSTM to parse the input sentences into hidden representations. Then, Hungarian layer leverages the hidden representations to extract the aligned unmatched parts. Last, we apply cosine similarity to metric the aligned unmatched parts for a final discrimination. Extensive experiments show that our model outperforms other baselines, substantially and significantly.",
"title": ""
},
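The core alignment idea can be sketched outside the network with SciPy's Hungarian solver: build a similarity matrix between the two sentences' hidden states, solve the assignment, and inspect the low-similarity aligned pairs (the "aligned unmatched parts"). The random vectors and the 0.2 threshold are assumptions for illustration; the paper builds this step into a Hungarian layer inside the network rather than a post-hoc call.

```python
# Hungarian-algorithm alignment of token representations (illustrative only).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
s1 = rng.normal(size=(5, 16))   # hidden states of sentence 1 (5 tokens)
s2 = rng.normal(size=(7, 16))   # hidden states of sentence 2 (7 tokens)

# Cosine similarity matrix; the assignment maximizes total similarity.
sim = s1 @ s2.T / (np.linalg.norm(s1, axis=1, keepdims=True) * np.linalg.norm(s2, axis=1))
rows, cols = linear_sum_assignment(-sim)    # negate to maximize similarity

aligned = list(zip(rows, cols))
weakly_matched = [(i, j) for i, j in aligned if sim[i, j] < 0.2]  # "aligned unmatched" parts
print(aligned, weakly_matched)
```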
{
"docid": "cd35c6e2763b634d23de1903a3261c59",
"text": "We investigate the Belousov-Zhabotinsky (BZ) reaction in an attempt to establish a basis for computation using chemical oscillators coupled via inhibition. The system consists of BZ droplets suspended in oil. Interdrop coupling is governed by the non-polar communicator of inhibition, Br2. We consider a linear arrangement of three droplets to be a NOR gate, where the center droplet is the output and the other two are inputs. Oxidation spikes in the inputs, which we define to be TRUE, cause a delay in the next spike of the output, which we read to be FALSE. Conversely, when the inputs do not spike (FALSE) there is no delay in the output (TRUE), thus producing the behavior of a NOR gate. We are able to reliably produce NOR gates with this behavior in microfluidic experiment.",
"title": ""
},
{
"docid": "35ac15f19cefd103f984519e046e407c",
"text": "This paper presents a highly sensitive sensor for crack detection in metallic surfaces. The sensor is inspired by complementary split-ring resonators which have dimensions much smaller than the excitation’s wavelength. The entire sensor is etched in the ground plane of a microstrip line and fabricated using printed circuit board technology. Compared to available microwave techniques, the sensor introduced here has key advantages including high sensitivity, increased dynamic range, spatial resolution, design simplicity, selectivity, and scalability. Experimental measurements showed that a surface crack having 200-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> width and 2-mm depth gives a shift in the resonance frequency of 1.5 GHz. This resonance frequency shift exceeds what can be achieved using other sensors operating in the low GHz frequency regime by a significant margin. In addition, using numerical simulation, we showed that the new sensor is able to resolve a 10-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>-wide crack (equivalent to <inline-formula> <tex-math notation=\"LaTeX\">$\\lambda $ </tex-math></inline-formula>/4000) with 180-MHz shift in the resonance frequency.",
"title": ""
},
{
"docid": "bde1d85da7f1ac9c9c30b0fed448aac6",
"text": "We survey temporal description logics that are based on standard temporal logics such as LTL and CTL. In particular, we concentrate on the computational complexity of the satisfiability problem and algorithms for deciding it.",
"title": ""
},
{
"docid": "1b790d2a5b9d8f6a911efee43ee2a9d2",
"text": "Content Centric Networking (CCN) represents an important change in the current operation of the Internet, prioritizing content over the communication between end nodes. Routers play an essential role in CCN, since they receive the requests for a given content and provide content caching for the most popular ones. They have their own forwarding strategies and caching policies for the most popular contents. Despite the number of works on this field, experimental evaluation of different forwarding algorithms and caching policies yet demands a huge effort in routers programming. In this paper we propose SDCCN, a SDN approach to CCN that provides programmable forwarding strategy and caching policies. SDCCN allows fast prototyping and experimentation in CCN. Proofs of concept were performed to demonstrate the programmability of the cache replacement algorithms and the Strategy Layer. Experimental results, obtained through implementation in the Mininet environment, are presented and evaluated.",
"title": ""
},
{
"docid": "9bf26d0e444ab8332ac55ce87d1b7797",
"text": "Toll like receptors (TLR)s have a central role in regulating innate immunity and in the last decade studies have begun to reveal their significance in potentiating autoimmune diseases such as rheumatoid arthritis (RA). Earlier investigations have highlighted the importance of TLR2 and TLR4 function in RA pathogenesis. In this review, we discuss the newer data that indicate roles for TLR5 and TLR7 in RA and its preclinical models. We evaluate the pathogenicity of TLRs in RA myeloid cells, synovial tissue fibroblasts, T cells, osteoclast progenitor cells and endothelial cells. These observations establish that ligation of TLRs can transform RA myeloid cells into M1 macrophages and that the inflammatory factors secreted from M1 and RA synovial tissue fibroblasts participate in TH-17 cell development. From the investigations conducted in RA preclinical models, we conclude that TLR-mediated inflammation can result in osteoclastic bone erosion by interconnecting the myeloid and TH-17 cell response to joint vascularization. In light of emerging unique aspects of TLR function, we summarize the novel approaches that are being tested to impair TLR activation in RA patients.",
"title": ""
},
{
"docid": "2afb992058eb720ff0baf4216e3a22c2",
"text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: Summary. — A longitudinal anthropological study of cotton farming in Warangal District of Andhra Pradesh, India, compares a group of villages before and after adoption of Bt cotton. It distinguishes \" field-level \" and \" farm-level \" impacts. During this five-year period yields rose by 18% overall, with greater increases among poor farmers with the least access to information. Insecticide sprayings dropped by 55%, although predation by non-target pests was rising. However shifting from the field to the historically-situated context of the farm recasts insect attacks as a symptom of larger problems in agricultural decision-making. Bt cotton's opponents have failed to recognize real benefits at the field level, while its backers have failed to recognize systemic problems that Bt cotton may exacerbate.",
"title": ""
},
{
"docid": "929f294583267ca8cb8616e803687f1e",
"text": "Recent systems for natural language understanding are strong at overcoming linguistic variability for lookup style reasoning. Yet, their accuracy drops dramatically as the number of reasoning steps increases. We present the first formal framework to study such empirical observations, addressing the ambiguity, redundancy, incompleteness, and inaccuracy that the use of language introduces when representing a hidden conceptual space. Our formal model uses two interrelated spaces: a conceptual meaning space that is unambiguous and complete but hidden, and a linguistic symbol space that captures a noisy grounding of the meaning space in the symbols or words of a language. We apply this framework to study the connectivity problem in undirected graphs---a core reasoning problem that forms the basis for more complex multi-hop reasoning. We show that it is indeed possible to construct a high-quality algorithm for detecting connectivity in the (latent) meaning graph, based on an observed noisy symbol graph, as long as the noise is below our quantified noise level and only a few hops are needed. On the other hand, we also prove an impossibility result: if a query requires a large number (specifically, logarithmic in the size of the meaning graph) of hops, no reasoning system operating over the symbol graph is likely to recover any useful property of the meaning graph. This highlights a fundamental barrier for a class of reasoning problems and systems, and suggests the need to limit the distance between the two spaces, rather than investing in multi-hop reasoning with\"many\"hops.",
"title": ""
},
{
"docid": "f5c04016ea72c94437cb5baeb556b01d",
"text": "This paper reports the design of a three pass stemmer STHREE for Malayalam. The language is rich in morphological variations but poor in linguistic computational resources. The system returns the meaningful root word of the input word in 97% of the cases when tested with 1040 words. This is a significant improvement over the reported accuracy of SILPA system, the only known stemmer for Malayalam, with the same test data sets.",
"title": ""
},
{
"docid": "427028ef819df3851e37734e5d198424",
"text": "The code that provides solutions to key software requirements, such as security and fault-tolerance, tends to be spread throughout (or cross-cut) the program modules that implement the “primary functionality” of a software system. Aspect-oriented programming is an emerging programming paradigm that supports implementing such cross-cutting requirements into named program units called “aspects”. To construct a system as an aspect-oriented program (AOP), one develops code for primary functionality in traditional modules and code for cross-cutting functionality in aspect modules. Compiling and running an AOP requires that the aspect code be “woven” into the code. Although aspect-oriented programming supports the separation of concerns into named program units, explicit and implicit dependencies of both aspects and traditional modules will result in systems with new testing challenges, which include new sources for program faults. This paper introduces a candidate fault model, along with associated testing criteria, for AOPs based on interactions that are unique to AOPs. The paper also identifies key issues relevant to the systematic testing of AOPs.",
"title": ""
},
{
"docid": "5de517f8ccdbf12228ca334173ecf797",
"text": "This paper describes the Chinese handwriting recognition competition held at the 12th International Conference on Document Analysis and Recognition (ICDAR 2013). This third competition in the series again used the CASIAHWDB/OLHWDB databases as the training set, and all the submitted systems were evaluated on closed datasets to report character-level correct rates. This year, 10 groups submitted 27 systems for five tasks: classification on extracted features, online/offline isolated character recognition, online/offline handwritten text recognition. The best results (correct rates) are 93.89% for classification on extracted features, 94.77% for offline character recognition, 97.39% for online character recognition, 88.76% for offline text recognition, and 95.03% for online text recognition, respectively. In addition to the test results, we also provide short descriptions of the recognition methods and brief discussions on the results. Keywords—Chinese handwriting recognition competition; isolated character recongition; handwritten text recognition; offline; online; CASIA-HWDB/OLHWDB database.",
"title": ""
},
{
"docid": "924dbc783bf8743a28c2cd4563d50de9",
"text": "This paper studies the off-policy evaluation problem, where one aims to estimate the value of a target policy based on a sample of observations collected by another policy. We first consider the multi-armed bandit case, establish a minimax risk lower bound, and analyze the risk of two standard estimators. It is shown, and verified in simulation, that one is minimax optimal up to a constant, while another can be arbitrarily worse, despite its empirical success and popularity. The results are applied to related problems in contextual bandits and fixed-horizon Markov decision processes, and are also related to semi-supervised learning.",
"title": ""
},
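One of the standard estimators the off-policy evaluation passage above analyzes is the importance-sampling (inverse propensity) estimator; the toy multi-armed bandit below shows it in a few lines of NumPy. The arm probabilities and reward means are made up, and this is only the textbook estimator, not the paper's analysis.

```python
# Importance-sampling estimate of a target policy's value from logged bandit data.
import numpy as np

rng = np.random.default_rng(0)
K = 3
mu = np.array([0.2, 0.5, 0.8])            # true mean reward of each arm
behavior = np.array([0.6, 0.3, 0.1])      # logging (behavior) policy
target = np.array([0.1, 0.2, 0.7])        # policy we want to evaluate

n = 100_000
arms = rng.choice(K, size=n, p=behavior)
rewards = rng.binomial(1, mu[arms])

weights = target[arms] / behavior[arms]   # inverse-propensity weights
v_hat = np.mean(weights * rewards)
print(v_hat, target @ mu)                 # estimate vs. true value
```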
{
"docid": "27ed0ab08b10935d12b59b6d24bed3f1",
"text": "A major stumbling block to progress in understanding basic human interactions, such as getting out of bed or opening a refrigerator, is lack of good training data. Most past efforts have gathered this data explicitly: starting with a laundry list of action labels, and then querying search engines for videos tagged with each label. In this work, we do the reverse and search implicitly: we start with a large collection of interaction-rich video data and then annotate and analyze it. We use Internet Lifestyle Vlogs as the source of surprisingly large and diverse interaction data. We show that by collecting the data first, we are able to achieve greater scale and far greater diversity in terms of actions and actors. Additionally, our data exposes biases built into common explicitly gathered data. We make sense of our data by analyzing the central component of interaction - hands. We benchmark two tasks: identifying semantic object contact at the video level and non-semantic contact state at the frame level. We additionally demonstrate future prediction of hands.",
"title": ""
},
{
"docid": "fe3a2ef6ffc3e667f73b19f01c14d15a",
"text": "The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.",
"title": ""
}
] |
scidocsrr
|
c2e0f5a2362d741cd300ba72025cf93b
|
Automatic detection of cyberbullying in social media text
|
[
{
"docid": "c447e34a5048c7fe2d731aaa77b87dd3",
"text": "Bullying, in both physical and cyber worlds, has been recognized as a serious health issue among adolescents. Given its significance, scholars are charged with identifying factors that influence bullying involvement in a timely fashion. However, previous social studies of bullying are handicapped by data scarcity. The standard psychological science approach to studying bullying is to conduct personal surveys in schools. The sample size is typically in the hundreds, and these surveys are often collected only once. On the other hand, the few computational studies narrowly restrict themselves to cyberbullying, which accounts for only a small fraction of all bullying episodes.",
"title": ""
},
{
"docid": "f91a507a9cb7bdee2e8c3c86924ced8d",
"text": "a r t i c l e i n f o It is often stated that bullying is a \" group process \" , and many researchers and policymakers share the belief that interventions against bullying should be targeted at the peer-group level rather than at individual bullies and victims. There is less insight into what in the group level should be changed and how, as the group processes taking place at the level of the peer clusters or school classes have not been much elaborated. This paper reviews the literature on the group involvement in bullying, thus providing insight into the individuals' motives for participation in bullying, the persistence of bullying, and the adjustment of victims across different peer contexts. Interventions targeting the peer group are briefly discussed and future directions for research on peer processes in bullying are suggested. Bullying is a subtype of aggressive behavior, in which an individual or a group of individuals repeatedly attacks, humiliates, and/or excludes a relatively powerless person. The majority of studies on the topic have been conducted in schools, focusing on bullying among the concept of bullying is used to refer to peer-to-peer bullying among school-aged children and youth, when not otherwise mentioned. It is known that a sizable minority of primary and secondary school students is involved in peer-to-peer bullying either as perpetrators or victims — or as both, being both bullied themselves and harassing others. In WHO's Health Behavior in School-Aged Children survey (HBSC, see Craig & Harel, 2004), the average prevalence of victims across the 35 countries involved was 11%, whereas bullies represented another 11%. Children who report both bullying others and being bullied by others (so-called bully–victims) were not identified in the HBSC study, but other studies have shown that approximately 4–6% of the children can be classified as bully–victims (Haynie et al., 2001; Nansel et al., 2001). Bullying constitutes a serious risk for the psychosocial and academic adjustment of both victims",
"title": ""
},
{
"docid": "f6df133663ab4342222d95a20cd09996",
"text": "Web 2.0 has led to the development and evolution of web-based communities and applications. These communities provide places for information sharing and collaboration. They also open the door for inappropriate online activities, such as harassment, in which some users post messages in a virtual community that are intentionally offensive to other members of the community. It is a new and challenging task to detect online harassment; currently few systems attempt to solve this problem. In this paper, we use a supervised learning approach for detecting harassment. Our technique employs content features, sentiment features, and contextual features of documents. The experimental results described herein show that our method achieves significant improvements over several baselines, including Term FrequencyInverse Document Frequency (TFIDF) approaches. Identification of online harassment is feasible when TFIDF is supplemented with sentiment and contextual feature attributes.",
"title": ""
}
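A minimal scikit-learn baseline in the spirit of the TFIDF approach the harassment-detection passage compares against; the toy posts and labels are invented, and the sentiment and contextual features that give the paper its improvement are deliberately omitted here.

```python
# TFIDF + logistic regression baseline for harassment detection (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["you are an idiot", "great point, thanks for sharing",
         "nobody wants you here", "see you at the meetup tonight"]
labels = [1, 0, 1, 0]                      # 1 = harassing, 0 = benign (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["you are so stupid"]))
```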
] |
[
{
"docid": "eb85cffda3aec56b77ae016ac6f73011",
"text": "This paper proposes a low-complexity word-level deep convolutional neural network (CNN) architecture for text categorization that can efficiently represent longrange associations in text. In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data. However, the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications. Moreover, it was shown recently that shallow word-level CNNs are more accurate and much faster than the state-of-the-art very deep nets such as character-level CNNs even in the setting of large training data. Motivated by these findings, we carefully studied deepening of word-level CNNs to capture global representations of text, and found a simple network architecture with which the best accuracy can be obtained by increasing the network depth without increasing computational cost by much. We call it deep pyramid CNN. The proposed model with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic categorization.",
"title": ""
},
{
"docid": "69d42340c09303b69eafb19de7170159",
"text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.",
"title": ""
},
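Since the report above is about modeling and initialization, the sketch below spells out a 1-D constant-velocity Kalman filter with an explicit, non-identity initial covariance; the noise levels, measurements, and initial values are illustrative assumptions, not the report's numbers.

```python
# Constant-velocity Kalman filter for 1-D translational motion (illustrative values).
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition: position, velocity
H = np.array([[1.0, 0.0]])                  # we only measure position
q, r = 0.01, 1.0                            # process / measurement noise levels
Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
R = np.array([[r]])

x = np.array([[0.0], [0.0]])                # initial state guess
P = np.diag([10.0, 10.0])                   # large initial uncertainty, not the identity

for z in [1.1, 2.0, 2.9, 4.2, 5.1]:         # noisy position measurements
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(x.ravel())                            # estimated position and velocity
```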
{
"docid": "d50550fe203ffe135ef90dd0b20cd975",
"text": "The problem of automatically matching composite sketches to facial photographs is addressed in this paper. Previous research on sketch recognition focused on matching sketches drawn by professional artists who either looked directly at the subjects (viewed sketches) or used a verbal description of the subject's appearance as provided by an eyewitness (forensic sketches). Unlike sketches hand drawn by artists, composite sketches are synthesized using one of the several facial composite software systems available to law enforcement agencies. We propose a component-based representation (CBR) approach to measure the similarity between a composite sketch and mugshot photograph. Specifically, we first automatically detect facial landmarks in composite sketches and face photos using an active shape model (ASM). Features are then extracted for each facial component using multiscale local binary patterns (MLBPs), and per component similarity is calculated. Finally, the similarity scores obtained from individual facial components are fused together, yielding a similarity score between a composite sketch and a face photo. Matching performance is further improved by filtering the large gallery of mugshot images using gender information. Experimental results on matching 123 composite sketches against two galleries with 10,123 and 1,316 mugshots show that the proposed method achieves promising performance (rank-100 accuracies of 77.2% and 89.4%, respectively) compared to a leading commercial face recognition system (rank-100 accuracies of 22.8% and 52.0%) and densely sampled MLBP on holistic faces (rank-100 accuracies of 27.6% and 10.6%). We believe our prototype system will be of great value to law enforcement agencies in apprehending suspects in a timely fashion.",
"title": ""
},
{
"docid": "db252efe7bde6cc0d58e337f8ad04271",
"text": "Social skills training is a well-established method to decrease human anxiety and discomfort in social interaction, and acquire social skills. In this paper, we attempt to automate the process of social skills training by developing a dialogue system named \"automated social skills trainer,\" which provides social skills training through human-computer interaction. The system includes a virtual avatar that recognizes user speech and language information and gives feedback to users to improve their social skills. Its design is based on conventional social skills training performed by human participants, including defining target skills, modeling, role-play, feedback, reinforcement, and homework. An experimental evaluation measuring the relationship between social skill and speech and language features shows that these features have a relationship with autistic traits. Additional experiments measuring the effect of performing social skills training with the proposed application show that most participants improve their skill by using the system for 50 minutes.",
"title": ""
},
{
"docid": "66451aa5a41ec7f9246d749c0983fa60",
"text": "A new method for automatically acquiring case frame patterns from large corpora is proposed. In particular, the problem of generalizing values of a case frame slot for a verb is viewed as that of estimating a conditional probability distribution over a partition of words, and a new generalization method based on the Minimum Description Length (MDL) principle is proposed. In order to assist with efficiency, the proposed method makes use of an existing thesaurus and restricts its attention to those partitions that are present as \"cuts\" in the thesaurus tree, thus reducing the generalization problem to that of estimating a \"tree cut model\" of the thesaurus tree. An efficient algorithm is given, which provably obtains the optimal tree cut model for the given frequency data of a case slot, in the sense of MDL. Case frame patterns obtained by the method were used to resolve PP-attachment ambiguity. Experimental results indicate that the proposed method improves upon or is at least comparable with existing methods.",
"title": ""
},
{
"docid": "c9acadfba9aa66ef6e7f4bc1d86943f6",
"text": "We propose a new saliency detection model by combining global information from frequency domain analysis and local information from spatial domain analysis. In the frequency domain analysis, instead of modeling salient regions, we model the nonsalient regions using global information; these so-called repeating patterns that are not distinctive in the scene are suppressed by using spectrum smoothing. In spatial domain analysis, we enhance those regions that are more informative by using a center-surround mechanism similar to that found in the visual cortex. Finally, the outputs from these two channels are combined to produce the saliency map. We demonstrate that the proposed model has the ability to highlight both small and large salient regions in cluttered scenes and to inhibit repeating objects. Experimental results also show that the proposed model outperforms existing algorithms in predicting objects regions where human pay more attention.",
"title": ""
},
{
"docid": "20ac5cea816906d595a65915680575f2",
"text": "A combination of distributed computation, positive feedback and constructive greedy heuristic is proposed as a new approach to stochastic optimization and problem solving. Positive feedback accounts for rapid discovery of very good solutions, distributed computation avoids premature convergence, and greedy heuristic helps the procedure to find acceptable solutions in the early stages of the search process. An application of the proposed methodology to the classical travelling salesman problem shows that the system can rapidly provide very good, if not optimal, solutions. We report on many simulation results and discuss the working of the algorithm. Some hints about how this approach can be applied to a variety of optimization problems are also given.",
"title": ""
},
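A compact ant-system style sketch for a small travelling salesman instance, showing the three ingredients named in the abstract above: distributed construction by many ants, a greedy heuristic bias via inverse distance, and positive feedback via pheromone reinforcement. Parameter values and the random instance are assumptions; this is not the original implementation.

```python
# Ant-system sketch for a small TSP instance (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n = 8
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)  # keep diagonal nonzero
tau = np.ones((n, n))                        # pheromone trails
alpha, beta, rho, n_ants, n_iter = 1.0, 2.0, 0.5, 20, 50

best_len, best_tour = np.inf, None
for _ in range(n_iter):
    tours = []
    for _ in range(n_ants):
        tour = [0]
        while len(tour) < n:
            i = tour[-1]
            mask = np.ones(n, dtype=bool)
            mask[tour] = False
            # Transition preference: pheromone^alpha * (1/distance)^beta.
            p = (tau[i, mask] ** alpha) * (dist[i, mask] ** -beta)
            nxt = rng.choice(np.flatnonzero(mask), p=p / p.sum())
            tour.append(int(nxt))
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= (1 - rho)                         # evaporation
    for length, tour in tours:               # positive feedback: reinforce short tours
        for k in range(n):
            tau[tour[k], tour[(k + 1) % n]] += 1.0 / length

print(best_len, best_tour)
```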
{
"docid": "b829049a8abf47f8f13595ca54eaa009",
"text": "This paper describes a face recognition-based people tracking and re-identification system for RGB-D camera networks. The system tracks people and learns their faces online to keep track of their identities even if they move out from the camera's field of view once. For robust people re-identification, the system exploits the combination of a deep neural network- based face representation and a Bayesian inference-based face classification method. The system also provides a predefined people identification capability: it associates the online learned faces with predefined people face images and names to know the people's whereabouts, thus, allowing a rich human-system interaction. Through experiments, we validate the re-identification and the predefined people identification capabilities of the system and show an example of the integration of the system with a mobile robot. The overall system is built as a Robot Operating System (ROS) module. As a result, it simplifies the integration with the many existing robotic systems and algorithms which use such middleware. The code of this work has been released as open-source in order to provide a baseline for the future publications in this field.",
"title": ""
},
{
"docid": "101554958aedffeaa26e429fca84e661",
"text": "Many healthcare reforms are to digitalize and integrate healthcare information systems. However, the disparity of business benefits in having an integrated healthcare information system (IHIS) varies with organizational fit factors. Critical success factors (CSFs) exist for hospitals to implement an IHIS successfully. This study investigated the relationship between the organizational fit and the system success. In addition, we examined the moderating effect of five CSFs -information systems adjustment, business process adjustment, organizational resistance, top management support, and the capability of key team members – in an IHIS implementation. Fifty-three hospitals that have successfully undertaken IHIS projects participated in this study. We used regression analysis to assess the relationships. The findings of this study provide a roadmap for hospitals to capitalize on the organizational fit and the five critical success factors in order to implement successful IHIS projects. Shin-Yuan Hung, Charlie Chen, Kuan-Hsiuang Wang (2014) \"Critical Success Factors For The Implementation Of Integrated Healthcare Information Systems Projects: An Organizational Fit Perspective\" Communication of the Association for Information Systems volume 34 Article 39 Version of record Available @ www.aisel.aisnet.org",
"title": ""
},
{
"docid": "fdd4c5fc773aa001da927ab3776559ae",
"text": "We treated a 65-year-old Japanese man with a giant penile lymphedema due to chronic penile strangulation with a rubber band. He was referred to our hospital with progressive penile swelling that had developed over a period of 2 years from chronic use of a rubber band placed around the penile base for prevention of urinary incontinence. Under a diagnosis of giant penile lymphedema, we performed resection of abnormal penile skin weighing 4.8 kg, followed by a penile plasty procedure. To the best of our knowledge, this is only the seventh report of such a case worldwide, with the present giant penile lymphedema the most reported.",
"title": ""
},
{
"docid": "624806aa09127fbca2e01c9d52b5764a",
"text": "Over the last few years, increased interest has arisen with respect to age-related tasks in the Computer Vision community. As a result, several \"in-the-wild\" databases annotated with respect to the age attribute became available in the literature. Nevertheless, one major drawback of these databases is that they are semi-automatically collected and annotated and thus they contain noisy labels. Therefore, the algorithms that are evaluated in such databases are prone to noisy estimates. In order to overcome such drawbacks, we present in this paper the first, to the best of knowledge, manually collected \"in-the-wild\" age database, dubbed AgeDB, containing images annotated with accurate to the year, noise-free labels. As demonstrated by a series of experiments utilizing state-of-the-art algorithms, this unique property renders AgeDB suitable when performing experiments on age-invariant face verification, age estimation and face age progression \"in-the-wild\".",
"title": ""
},
{
"docid": "2acb16f1e67f141220dc05b90ac23385",
"text": "By combining patch-clamp methods with two-photon microscopy, it is possible to target recordings to specific classes of neurons in vivo. Here we describe methods for imaging and recording from the soma and dendrites of neurons identified using genetically encoded probes such as green fluorescent protein (GFP) or functional indicators such as Oregon Green BAPTA-1. Two-photon targeted patching can also be adapted for use with wild-type brains by perfusing the extracellular space with a membrane-impermeable dye to visualize the cells by their negative image and target them for electrical recordings, a technique termed \"shadowpatching.\" We discuss how these approaches can be adapted for single-cell electroporation to manipulate specific cells genetically. These approaches thus permit the recording and manipulation of rare genetically, morphologically, and functionally distinct subsets of neurons in the intact nervous system.",
"title": ""
},
{
"docid": "bbedbe2d901f63e3f163ea0f24a2e2d7",
"text": "a r t i c l e i n f o a b s t r a c t The leader trait perspective is perhaps the most venerable intellectual tradition in leadership research. Despite its early prominence in leadership research, it quickly fell out of favor among leadership scholars. Thus, despite recent empirical support for the perspective, conceptual work in the area lags behind other theoretical perspectives. Accordingly, the present review attempts to place the leader trait perspective in the context of supporting intellectual traditions, including evolutionary psychology and behavioral genetics. We present a conceptual model that considers the source of leader traits, mediators and moderators of their effects on leader emergence and leadership effectiveness, and distinguish between perceived and actual leadership effectiveness. We consider both the positive and negative effects of specific \" bright side \" personality traits: the Big Five traits, core self-evaluations, intelligence, and charisma. We also consider the positive and negative effects of \" dark side \" leader traits: Narcissism, hubris, dominance, and Machiavellianism. If one sought to find singular conditions that existed across species, one might find few universals. One universal that does exist, at least those species that have brains and nervous systems, is leadership. From insects to reptiles to mammals, leadership exists as surely as collective activity exists. There is the queen bee, and there is the alpha male. Though the centrality of leadership may vary by species (it seems more important to mammals than, say, to avians and reptiles), it is fair to surmise that whenever there is social activity, a social structure develops, and one (perhaps the) defining characteristic of that structure is the emergence of a leader or leaders. The universality of leadership, however, does not deny the importance of individual differences — indeed the emergence of leadership itself is proof of individual differences. Moreover, even casual observation of animal (including human) collective behavior shows the existence of a leader. Among a herd of 100 cattle or a pride of 20 lions, one is able to detect a leadership structure (especially at times of eating, mating, and attack). One quickly wonders: What has caused this leadership structure to emerge? Why has one animal (the alpha) emerged to lead the collective? And how does this leadership cause this collective to flourish — or founder? Given these questions, it is of no surprise that the earliest conceptions of leadership focused on individual …",
"title": ""
},
{
"docid": "d906d31f32ad89a843645cad98eab700",
"text": "Deep Learning has led to a dramatic leap in SuperResolution (SR) performance in the past few years. However, being supervised, these SR methods are restricted to specific training data, where the acquisition of the low-resolution (LR) images from their high-resolution (HR) counterparts is predetermined (e.g., bicubic downscaling), without any distracting artifacts (e.g., sensor noise, image compression, non-ideal PSF, etc). Real LR images, however, rarely obey these restrictions, resulting in poor SR results by SotA (State of the Art) methods. In this paper we introduce \"Zero-Shot\" SR, which exploits the power of Deep Learning, but does not rely on prior training. We exploit the internal recurrence of information inside a single image, and train a small image-specific CNN at test time, on examples extracted solely from the input image itself. As such, it can adapt itself to different settings per image. This allows to perform SR of real old photos, noisy images, biological data, and other images where the acquisition process is unknown or non-ideal. On such images, our method outperforms SotA CNN-based SR methods, as well as previous unsupervised SR methods. To the best of our knowledge, this is the first unsupervised CNN-based SR method.",
"title": ""
},
{
"docid": "5d2c1095a34ee582f490f4b0392a3da0",
"text": "We study the problem of online learning to re-rank, where users provide feedback to improve the quality of displayed lists. Learning to rank has been traditionally studied in two settings. In the offline setting, rankers are typically learned from relevance labels of judges. These approaches have become the industry standard. However, they lack exploration, and thus are limited by the information content of offline data. In the online setting, an algorithm can propose a list and learn from the feedback on it in a sequential fashion. Bandit algorithms developed for this setting actively experiment, and in this way overcome the biases of offline data. But they also tend to ignore offline data, which results in a high initial cost of exploration. We propose BubbleRank, a bandit algorithm for re-ranking that combines the strengths of both settings. The algorithm starts with an initial base list and improves it gradually by swapping higher-ranked less attractive items for lower-ranked more attractive items. We prove an upper bound on the n-step regret of BubbleRank that degrades gracefully with the quality of the initial base list. Our theoretical findings are supported by extensive numerical experiments on a large real-world click dataset.",
"title": ""
},
{
"docid": "6442c9e4eb9034abf90fcd697c32a343",
"text": "With the increasing popularity and demand for mobile applications, there has been a significant increase in the number of mobile application development projects. Highly volatile requirements of mobile applications require adaptive software development methods. The Agile approach is seen as a natural fit for mobile application and there is a need to explore various Agile methodologies for the development of mobile applications. This paper evaluates how adopting various Agile approaches improves the development of mobile applications and if they can be used in order to provide more tailor-made process improvements within an organization. A survey related to mobile application development process improvement was developed. The use of various Agile approaches for success in mobile application development were evaluated by determining the significance of the most used Agile engineering paradigms such as XP, Scrum, and Lean. The findings of the study show that these Agile methods have the potential to help deliver enhanced speed and quality for mobile application development.",
"title": ""
},
{
"docid": "13fc420d1fa63445c29c4107734e2943",
"text": "As technology advances, more and more devices have Internet access. This gives rise to the Internet of Things. With all these new devices connected to the Internet, cybercriminals are undoubtedly trying to take advantage of these devices, especially when they have poor protection. These botnets will have a large amount of processing power in the near future. This paper will elaborate on how much processing power these IoT botnets can gain and to what extend cryptocurrencies will be influenced by it. This will be done through a literature study which is validated through an experiment.",
"title": ""
},
{
"docid": "74b163a2c2f149dce9850c6ff5d7f1f6",
"text": "The vast majority of cutaneous canine nonepitheliotropic lymphomas are of T cell origin. Nonepithelial Bcell lymphomas are extremely rare. The present case report describes a 10-year-old male Golden retriever that was presented with slowly progressive nodular skin lesions on the trunk and limbs. Histopathology of skin biopsies revealed small periadnexal dermal nodules composed of rather pleomorphic round cells with round or contorted nuclei. The diagnosis of nonepitheliotropic cutaneous B-cell lymphoma was based on histopathological morphology and case follow-up, and was supported immunohistochemically by CD79a positivity.",
"title": ""
},
{
"docid": "0cae8939c57ff3713d7321102c80816e",
"text": "In this paper, we propose using 3D Convolutional Neural Networks for large scale user-independent continuous gesture recognition. We have trained an end-to-end deep network for continuous gesture recognition (jointly learning both the feature representation and the classifier). The network performs three-dimensional (i.e. space-time) convolutions to extract features related to both the appearance and motion from volumes of color frames. Space-time invariance of the extracted features is encoded via pooling layers. The earlier stages of the network are partially initialized using the work of Tran et al. before being adapted to the task of gesture recognition. An earlier version of the proposed method, which was trained for 11,250 iterations, was submitted to ChaLearn 2016 Continuous Gesture Recognition Challenge and ranked 2nd with the Mean Jaccard Index Score of 0.269235. When the proposed method was further trained for 28,750 iterations, it achieved state-of-the-art performance on the same dataset, yielding a 0.314779 Mean Jaccard Index Score.",
"title": ""
},
{
"docid": "e44f67fec39390f215b5267c892d1a26",
"text": "Primary progressive aphasia (PPA) may be the onset of several neurodegenerative diseases. This study evaluates a cohort of patients with PPA to assess their progression to different clinical syndromes, associated factors that modulate this progression, and patterns of cerebral metabolism linked to different clinical evolutionary forms. Thirty-five patients meeting PPA criteria underwent a clinical and neuroimaging 18F-Fluorodeoxyglucose PET evaluation. Survival analysis was performed using time from clinical onset to the development of a non-language symptom or deficit (PPA-plus). Cerebral metabolism was analyzed using Statistical Parametric Mapping. Patients classified into three PPA variants evolved to atypical parkinsonism, behavioral disorder and motor neuron disease in the agrammatic variant; to behavioral disorder in the semantic; and to memory impairment in the logopenic. Median time from the onset of symptoms to PPA-plus was 36 months (31–40, 95 % confidence interval). Right laterality, and years of education were associated to a lower risk of progression, while logopenic variant to a higher risk. Different regions of hypometabolism were identified in agrammatic PPA with parkinsonism, motor neuron disease and logopenic PPA-plus. Clinical course of PPA differs according to each variant. Left anterior temporal and frontal medial hypometabolism in agrammatic variant is linked to motor neuron disease and atypical parkinsonism, respectively. PPA variant, laterality and education may be associated to the risk of progression. These results suggest the possibility that clinical and imaging data could help to predict the clinical course of PPA.",
"title": ""
}
] |
scidocsrr
|
1dc241d5a52b7bd7f17e80dddac7fa45
|
Quantum statistical mechanics over function fields
|
[
{
"docid": "19d2e8cfa7787a139ca8117a0522b044",
"text": "We give here a comprehensive treatment of the mathematical theory of per-turbative renormalization (in the minimal subtraction scheme with dimensional regularization), in the framework of the Riemann–Hilbert correspondence and motivic Galois theory. We give a detailed overview of the work of Connes– Kreimer [31], [32]. We also cover some background material on affine group schemes, Tannakian categories, the Riemann–Hilbert problem in the regular singular and irregular case, and a brief introduction to motives and motivic Ga-lois theory. We then give a complete account of our results on renormalization and motivic Galois theory announced in [35]. Our main goal is to show how the divergences of quantum field theory, which may at first appear as the undesired effect of a mathematically ill-formulated theory, in fact reveal the presence of a very rich deeper mathematical structure, which manifests itself through the action of a hidden \" cosmic Galois group \" 1 , which is of an arithmetic nature, related to motivic Galois theory. Historically, perturbative renormalization has always appeared as one of the most elaborate recipes created by modern physics, capable of producing numerical quantities of great physical relevance out of a priori meaningless mathematical expressions. In this respect, it is fascinating for mathematicians and physicists alike. The depth of its origin in quantum field theory and the precision with which it is confirmed by experiments undoubtedly make it into one of the jewels of modern theoretical physics. For a mathematician in quest of \" meaning \" rather than heavy formalism, the attempts to cast the perturbative renormalization technique in a conceptual framework were so far falling short of accounting for the main computational aspects, used for instance in QED. These have to do with the subtleties involved in the subtraction of infinities in the evaluation of Feynman graphs and do not fall under the range of \" asymptotically free theories \" for which constructive quantum field theory can provide a mathematically satisfactory formulation., where the conceptual meaning of the detailed computational devices used in perturbative renormalization is analysed. Their work shows that the recursive procedure used by physicists is in fact identical to a mathematical method of extraction of finite values known as the Birkhoff decomposition, applied to a loop γ(z) with values in a complex pro-unipotent Lie group G.",
"title": ""
}
] |
[
{
"docid": "f7d535f9a5eeae77defe41318d642403",
"text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.",
"title": ""
},
{
"docid": "91c0bd1c3faabc260277c407b7c6af59",
"text": "In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene rather than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution in this paper is introducing a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top 3 Convolutional Neural Networks (CNN) models in the feature extraction competitions and test their performance for autonomous driving. The experimental results showed that GoogLeNet performs the best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, and we refer to our algorithm as GoogLenet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle as compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles.",
"title": ""
},
{
"docid": "c08e9731b9a1135b7fb52548c5c6f77e",
"text": "Many geometry processing applications, such as morphing, shape blending, transfer of texture or material properties, and fitting template meshes to scan data, require a bijective mapping between two or more models. This mapping, or cross-parameterization, typically needs to preserve the shape and features of the parameterized models, mapping legs to legs, ears to ears, and so on. Most of the applications also require the models to be represented by compatible meshes, i.e. meshes with identical connectivity, based on the cross-parameterization. In this paper we introduce novel methods for shape preserving cross-parameterization and compatible remeshing. Our cross-parameterization method computes a low-distortion bijective mapping between models that satisfies user prescribed constraints. Using this mapping, the remeshing algorithm preserves the user-defined feature vertex correspondence and the shape correlation between the models. The remeshing algorithm generates output meshes with significantly fewer elements compared to previous techniques, while accurately approximating the input geometry. As demonstrated by the examples, the compatible meshes we construct are ideally suitable for morphing and other geometry processing applications.",
"title": ""
},
{
"docid": "f11a88cad05210e26940e79700b0ca11",
"text": "Agile software development methods provide great flexibility to adapt to changing requirements and rapidly market products. Sri Lankan software organizations too are embracing these methods to develop software products. Being an iterative an incremental software engineering methodology, agile philosophy promotes working software over comprehensive documentation and heavily relies on continuous customer collaboration throughout the life cycle of the product. Hence characteristics of the people involved with the project and their working environment plays an important role in the success of an agile project compared to any other software engineering methodology. This study investigated the factors that lead to the success of a project that adopts agile methodology in Sri Lanka. An online questionnaire was used to collect data to identify people and organizational factors that lead to project success. The sample consisted of Sri Lankan software professionals with several years of industry experience in developing projects using agile methods. According to the statistical data analysis, customer satisfaction, customer commitment, team size, corporate culture, technical competency, decision time, customer commitment and training and learning have a influence on the success of the project.",
"title": ""
},
{
"docid": "9c5711c68c7a9c7a4a8fc4d9dbcf145d",
"text": "Approximate set membership data structures (ASMDSs) are ubiquitous in computing. They trade a tunable, often small, error rate ( ) for large space savings. The canonical ASMDS is the Bloom filter, which supports lookups and insertions but not deletions in its simplest form. Cuckoo filters (CFs), a recently proposed class of ASMDSs, add deletion support and often use fewer bits per item for equal . This work introduces the Morton filter (MF), a novel ASMDS that introduces several key improvements to CFs. Like CFs, MFs support lookups, insertions, and deletions, but improve their respective throughputs by 1.3× to 2.5×, 0.9× to 15.5×, and 1.3× to 1.6×. MFs achieve these improvements by (1) introducing a compressed format that permits a logically sparse filter to be stored compactly in memory, (2) leveraging succinct embedded metadata to prune unnecessary memory accesses, and (3) heavily biasing insertions to use a single hash function. With these optimizations, lookups, insertions, and deletions often only require accessing a single hardware cache line from the filter. These improvements are not at a loss in space efficiency, as MFs typically use comparable to slightly less space than CFs for the same . PVLDB Reference Format: Alex D. Breslow and Nuwan S. Jayasena. Morton Filters: Faster, Space-Efficient Cuckoo Filters via Biasing, Compression, and Decoupled Logical Sparsity. PVLDB, 11(9): 1041-1055, 2018. DOI: https://doi.org/10.14778/3213880.3213884",
"title": ""
},
{
"docid": "bf2fbbfca758af3be4c6e84fb56ddf26",
"text": "Classification is important problem in data mining. Given a data set, classifier generates meaningful description for each class. Decision trees are most effective and widely used classification methods. There are several algorithms for induction of decision trees. These trees are first induced and then prune subtrees with subsequent pruning phase to improve accuracy and prevent overfitting. In this paper, various pruning methods are discussed with their features and also effectiveness of pruning is evaluated. Accuracy is measured for diabetes and glass dataset with various pruning factors. The experiments are shown for this two datasets for measuring accuracy and size of the tree.",
"title": ""
},
{
"docid": "6f45b4858c33d88472c131f379fd3edf",
"text": "Shadow maps are the current technique for generating high quality real-time dynamic shadows. This article gives a ‘practical’ introduction to shadow mapping (or projection mapping) with numerous simple examples and source listings. We emphasis some of the typical limitations and common pitfalls when implementing shadow mapping for the first time and how the reader can overcome these problems using uncomplicated debugging techniques. A scene without shadowing is life-less and flat objects seem decoupled. While different graphical techniques add a unique effect to the scene, shadows are crucial and when not present create a strange and mood-less aura.",
"title": ""
},
{
"docid": "beb7509b59f1bac8083ce5fbddb247e5",
"text": "Congestion in the Industrial, Scientific, and Medical (ISM) frequency band limits the expansion of the IEEE 802.11 Wireless Local Area Network (WLAN). Recently, due to the ‘digital switchover’ from analog to digital TV (DTV) broadcasting, a sizeable amount of bandwidth have been freed up in the conventional TV bands, resulting in the availability of TV white space (TVWS). The IEEE 802.11af is a standard for the WLAN technology that operates at the TVWS spectrum. TVWS operation must not cause harmful interference to the incumbent DTV service. This paper provides a method of computing the keep-out distance required between an IEEE 802.11af device and the DTV service contour, in order to keep the interference to a harmless level. The ITU-R P.1411-7 propagation model is used in the calculation. Four different DTV services are considered: Advanced Television Systems Committee (ATSC), Digital Video Broadcasting — Terrestrial (DVB-T), Integrated Services Digital Broadcasting — Terrestrial (ISDB-T), and Digital Terrestrial Multimedia Broadcasting (DTMB). The calculation results reveal that under many circumstances, allocating keep-out distance of 1 to 2.5 km is sufficient for the protection of DTV service.",
"title": ""
},
{
"docid": "d92b7ee3739843c2649d0f3f1e0ee5b2",
"text": "In this short note we observe that the Peikert-Vaikuntanathan-Waters (PVW) method of packing many plaintext elements in a single Regev-type ciphertext, can be used for performing SIMD homomorphic operations on packed ciphertext. This provides an alternative to the Smart-Vercauteren (SV) ciphertextpacking technique that relies on polynomial-CRT. While the SV technique is only applicable to schemes that rely on ring-LWE (or other hardness assumptions in ideal lattices), the PVW method can be used also for cryptosystems whose security is based on standard LWE (or more broadly on the hardness of “General-LWE”). Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with “general-LWE” schemes, suggesting yet another tradeoff that can be optimized for different settings. Acknowledgments The first author is sponsored by DARPA under agreement number FA8750-11-C-0096. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. The second and third authors are sponsored by DARPA and ONR under agreement number N00014-11C-0390. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, or the U.S. Government. Distribution Statement “A” (Approved for Public Release, Distribution Unlimited).",
"title": ""
},
{
"docid": "5a73be1c8c24958779272a1190a3df20",
"text": "We study how contract element extraction can be automated. We provide a labeled dataset with gold contract element annotations, along with an unlabeled dataset of contracts that can be used to pre-train word embeddings. Both datasets are provided in an encoded form to bypass privacy issues. We describe and experimentally compare several contract element extraction methods that use manually written rules and linear classifiers (logistic regression, SVMs) with hand-crafted features, word embeddings, and part-of-speech tag embeddings. The best results are obtained by a hybrid method that combines machine learning (with hand-crafted features and embeddings) and manually written post-processing rules.",
"title": ""
},
{
"docid": "e4a22b34510b28d1235fc987b97a8607",
"text": "Many regions of the globe are experiencing rapid urban growth, the location and intensity of which can have negative effects on ecological and social systems. In some locales, planners and policy makers have used urban growth boundaries to direct the location and intensity of development; however the empirical evidence for the efficacy of such policies is mixed. Monitoring the location of urban growth is an essential first step in understanding how the system has changed over time. In addition, if regulations purporting to direct urban growth to specific locales are present, it is important to evaluate if the desired pattern (or change in pattern) has been observed. In this paper, we document land cover and change across six dates (1986, 1991, 1995, 1999, 2002, and 2007) for six counties in the Central Puget Sound, Washington State, USA. We explore patterns of change by three different spatial partitions (the region, each county, 2000 U.S. Census Tracks), and with respect to urban growth boundaries implemented in the late 1990’s as part of the state’s Growth Management Act. Urban land cover increased from 8 to 19% of the study area between 1986 and 2007, while lowland deciduous and mixed forests decreased from 21 to 13% and grass and agriculture decreased from 11 to 8%. Land in urban classes outside of the urban growth boundaries increased more rapidly (by area and percentage of new urban land cover) than land within the urban growth boundaries, suggesting that the intended effect of the Growth Management Act to direct growth to within the urban growth boundaries may not have been accomplished by 2007. Urban sprawl, as estimated by the area of land per capita, increased overall within the region, with the more rural counties within commuting distance to cities having the highest rate of increase observed. Land cover data is increasingly available and can be used to rapidly evaluate urban development patterns over large areas. Such data are important inputs for policy makers, urban planners, and modelers alike to manage and plan for future population, land use, and land cover changes.",
"title": ""
},
{
"docid": "f1925c66ed41aa50838d115b235349f0",
"text": "Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution. It requires less adversarial information and can fool more types of networks. The results show that 68.36% of the natural images in CIFAR10 test dataset and 41.22% of the ImageNet (ILSVRC 2012) validation images can be perturbed to at least one target class by modifying just one pixel with 73.22% and 5.52% confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extreme limited scenario, showing that current DNNs are also vulnerable to such low dimension attacks.",
"title": ""
},
{
"docid": "d657085072f829db812a2735d0e7f41c",
"text": "Recently, increasing attention has been drawn to training semantic segmentation models using synthetic data and computer-generated annotation. However, domain gap remains a major barrier and prevents models learned from synthetic data from generalizing well to real-world applications. In this work, we take the advantage of additional geometric information from synthetic data, a powerful yet largely neglected cue, to bridge the domain gap. Such geometric information can be generated easily from synthetic data, and is proven to be closely coupled with semantic information. With the geometric information, we propose a model to reduce domain shift on two levels: on the input level, we augment the traditional image translation network with the additional geometric information to translate synthetic images into realistic styles; on the output level, we build a task network which simultaneously performs depth estimation and semantic segmentation on the synthetic data. Meanwhile, we encourage the network to preserve the correlation between depth and semantics by adversarial training on the output space. We then validate our method on two pairs of synthetic to real dataset: Virtual KITTI→KITTI, and SYNTHIA→Cityscapes, where we achieve a significant performance gain compared to the non-adaptive baseline and methods without using geometric information. This demonstrates the usefulness of geometric information from synthetic data for cross-domain semantic segmentation.",
"title": ""
},
{
"docid": "cfeb97c3be1c697fb500d54aa43af0e1",
"text": "The development of accurate and robust palmprint verification algorithms is a critical issue in automatic palmprint authentication systems. Among various palmprint verification approaches, the orientation based coding methods, such as competitive code (CompCode), palmprint orientation code (POC) and robust line orientation code (RLOC), are state-of-the-art ones. They extract and code the locally dominant orientation as features and could match the input palmprint in real-time and with high accuracy. However, using only one dominant orientation to represent a local region may lose some valuable information because there are cross lines in the palmprint. In this paper, we propose a novel feature extraction algorithm, namely binary orientation co-occurrence vector (BOCV), to represent multiple orientations for a local region. The BOCV can better describe the local orientation features and it is more robust to image rotation. Our experimental results on the public palmprint database show that the proposed BOCV outperforms the CompCode, POC and RLOC by reducing the equal error rate (EER) significantly. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "885a51f55d5dfaad7a0ee0c56a64ada3",
"text": "This paper presents a new method, Minimax Tree Optimization (MMTO), to learn a heuristic evaluation function of a practical alpha-beta search program. The evaluation function may be a linear or non-linear combination of weighted features, and the weights are the parameters to be optimized. To control the search results so that the move decisions agree with the game records of human experts, a well-modeled objective function to be minimized is designed. Moreover, a numerical iterative method is used to find local minima of the objective function, and more than forty million parameters are adjusted by using a small number of hyper parameters. This method was applied to shogi, a major variant of chess in which the evaluation function must handle a larger state space than in chess. Experimental results show that the large-scale optimization of the evaluation function improves the playing strength of shogi programs, and the new method performs significantly better than other methods. Implementation of the new method in our shogi program Bonanza made substantial contributions to the program’s first-place finish in the 2013 World Computer Shogi Championship. Additionally, we present preliminary evidence of broader applicability of our method to other two-player games such as chess.",
"title": ""
},
{
"docid": "86d58f4196ceb48e29cb143e6a157c22",
"text": "In this paper, we challenge a form of paragraph-to-question generation task. We propose a question generation system which can generate a set of comprehensive questions from a body of text. Besides the tree kernel functions to assess the grammatically of the generated questions, our goal is to rank them by using community-based question answering systems to calculate the importance of the generated questions. The main assumption behind our work is that each body of text is related to a topic of interest and it has a comprehensive information about the topic.",
"title": ""
},
{
"docid": "e9f9a7c506221bacf966808f54c4f056",
"text": "Reconfigurable antennas, with the ability to radiate more than one pattern at different frequencies and polarizations, are necessary in modern telecommunication systems. The requirements for increased functionality (e.g., direction finding, beam steering, radar, control, and command) within a confined volume place a greater burden on today's transmitting and receiving systems. Reconfigurable antennas are a solution to this problem. This paper discusses the different reconfigurable components that can be used in an antenna to modify its structure and function. These reconfiguration techniques are either based on the integration of radio-frequency microelectromechanical systems (RF-MEMS), PIN diodes, varactors, photoconductive elements, or on the physical alteration of the antenna radiating structure, or on the use of smart materials such as ferrites and liquid crystals. Various activation mechanisms that can be used in each different reconfigurable implementation to achieve optimum performance are presented and discussed. Several examples of reconfigurable antennas for both terrestrial and space applications are highlighted, such as cognitive radio, multiple-input-multiple-output (MIMO) systems, and satellite communication.",
"title": ""
},
{
"docid": "b127e63ac45c81ce9fa9aa6240ce5154",
"text": "This paper examines the use of social learning platforms in conjunction with the emergent pedagogy of the `flipped classroom'. In particular the attributes of the social learning platform “Edmodo” is considered alongside the changes in the way in which online learning environments are being implemented, especially within British education. Some observations are made regarding the use and usefulness of these platforms along with a consideration of the increasingly decentralized nature of education in the United Kingdom.",
"title": ""
},
{
"docid": "c77c6ea404d9d834ef1be5a1d7222e66",
"text": "We introduce the concepts of regular and totally regular bipolar fuzzy graphs. We prove necessary and sufficient condition under which regular bipolar fuzzy graph and totally bipolar fuzzy graph are equivalent. We introduce the notion of bipolar fuzzy line graphs and present some of their properties. We state a necessary and sufficient condition for a bipolar fuzzy graph to be isomorphic to its corresponding bipolar fuzzy line graph. We examine when an isomorphism between two bipolar fuzzy graphs follows from an isomorphism of their corresponding bipolar fuzzy line graphs.",
"title": ""
},
{
"docid": "61c73842d25b54f24ff974b439d55c64",
"text": "Many electrical vehicles have been developed recently, and one of them is the vehicle type with the self-balancing capability. Portability also one of issue related to the development of electric vehicles. This paper presents one wheeled self-balancing electric vehicle namely PENS-Wheel. Since it only consists of one motor as its actuator, it becomes more portable than any other self-balancing vehicle types. This paper discusses on the implementation of Kalman filter for filtering the tilt sensor used by the self-balancing controller, mechanical design, and fabrication of the vehicle. The vehicle is designed based on the principle of the inverted pendulum by utilizing motor's torque on the wheel to maintain its upright position. The sensor system uses IMU which combine accelerometer and gyroscope data to get the accurate pitch angle of the vehicle. The paper presents the effects of Kalman filter parameters including noise variance of the accelerometer, noise variance of the gyroscope, and the measurement noise to the response of the sensor output. Finally, we present the result of the proposed filter and compare it with proprietary filter algorithm from InvenSense, Inc. running on Digital Motion Processor (DMP) inside the MPU6050 chip. The result of the filter algorithm implemented in the vehicle shows that it is capable in delivering comparable performance with the proprietary one.",
"title": ""
}
] |
scidocsrr
|
218c89117ca7b9dd1e88e1922bae6c11
|
Quadrotor Helicopter Trajectory Tracking Control
|
[
{
"docid": "5c732e1b9ded9ce11a347b82683fb039",
"text": "This paper presents the design of an embedded-control architecture for a four-rotor unmanned air vehicle (UAV) to perform autonomous hover flight. A non-linear control law based on nested saturations technique is presented that stabilizes the state of the aircraft around the origin. The control law was implemented in a microcontroller to stabilize the aircraft in real time. In order to obtain experimental results we have built a low-cost on-board system which drives the aircraft in position and orientation. The nonlinear controller has been successfully tested experimentally",
"title": ""
}
] |
[
{
"docid": "78253b77b78c8e2b57b56e4d87c908ab",
"text": "OBJECTIVES\nThis study examines living arrangements of older adults across 43 developing countries and compares patterns by gender, world regions, and macro-level indicators of socioeconomic development.\n\n\nMETHODS\nData are from Demographic and Health Surveys. The country is the unit of analysis. Indicators include household size, headship, relationship to head, and coresidence with spouse, children, and others. Unweighted regional averages and ordinary least-squares regressions determine whether variations exist.\n\n\nRESULTS\nAverage household sizes are large, but a substantially greater proportion of older adults live alone than do individuals in other age groups. Females are more likely than males to live alone and are less likely to live with a spouse or head of a household. Heading a household and living in a large household and with young children is more prevalent in Africa than elsewhere. Coresidence with adult children is most common in Asia and least in Africa. Coresidence is more frequent with sons than with daughters in both Asia and Africa, but not in Latin America. As a country's level of schooling rises, most living arrangement indicators change with families becoming more nuclear. Urbanization and gross national product have no significant effects.\n\n\nDISCUSSION\nAlthough living arrangements differ across world regions and genders, within-region variations exist and are explained in part by associations between countrywide levels of education and household structure. These associations may be caused by a variety of intermediating factors, such as migration of children and preferences for privacy.",
"title": ""
},
{
"docid": "b6f05fcc1face0dcf4981e6578b0330e",
"text": "The importance of accurate and timely information describing the nature and extent of land resources and changes over time is increasing, especially in rapidly growing metropolitan areas. We have developed a methodology to map and monitor land cover change using multitemporal Landsat Thematic Mapper (TM) data in the seven-county Twin Cities Metropolitan Area of Minnesota for 1986, 1991, 1998, and 2002. The overall seven-class classification accuracies averaged 94% for the four years. The overall accuracy of land cover change maps, generated from post-classification change detection methods and evaluated using several approaches, ranged from 80% to 90%. The maps showed that between 1986 and 2002 the amount of urban or developed land increased from 23.7% to 32.8% of the total area, while rural cover types of agriculture, forest and wetland decreased from 69.6% to 60.5%. The results quantify the land cover change patterns in the metropolitan area and demonstrate the potential of multitemporal Landsat data to provide an accurate, economical means to map and analyze changes in land cover over time that can be used as inputs to land management and policy decisions. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "c0df4f379a3b54c4e6fa9855b1b6d372",
"text": "We present a novel optimization-based retraction algorithm to improve the performance of sample-based planners in narrow passages for 3D rigid robots. The retraction step is formulated as an optimization problem using an appropriate distance metric in the configuration space. Our algorithm computes samples near the boundary of C-obstacle using local contact analysis and uses those samples to improve the performance of RRT planners in narrow passages. We analyze the performance of our planner using Voronoi diagrams and show that the tree can grow closely towards any randomly generated sample. Our algorithm is general and applicable to all polygonal models. In practice, we observe significant speedups over prior RRT planners on challenging scenarios with narrow passages.",
"title": ""
},
{
"docid": "1c7131fcb031497b2c1487f9b25d8d4e",
"text": "Biases in information processing undoubtedly play an important role in the maintenance of emotion and emotional disorders. In an attentional cueing paradigm, threat words and angry faces had no advantage over positive or neutral words (or faces) in attracting attention to their own location, even for people who were highly state-anxious. In contrast, the presence of threatening cues (words and faces) had a strong impact on the disengagement of attention. When a threat cue was presented and a target subsequently presented in another location, high state-anxious individuals took longer to detect the target relative to when either a positive or a neutral cue was presented. It is concluded that threat-related stimuli affect attentional dwell time and the disengage component of attention, leaving the question of whether threat stimuli affect the shift component of attention open to debate.",
"title": ""
},
{
"docid": "5ff345f050ec14b02c749c41887d592d",
"text": "Testing multithreaded code is hard and expensive. Each multithreaded unit test creates two or more threads, each executing one or more methods on shared objects of the class under test. Such unit tests can be generated at random, but basic generation produces tests that are either slow or do not trigger concurrency bugs. Worse, such tests have many false alarms, which require human effort to filter out. We present BALLERINA, a novel technique for automatic generation of efficient multithreaded random tests that effectively trigger concurrency bugs. BALLERINA makes tests efficient by having only two threads, each executing a single, randomly selected method. BALLERINA increases chances that such a simple parallel code finds bugs by appending it to more complex, randomly generated sequential code. We also propose a clustering technique to reduce the manual effort in inspecting failures of automatically generated multithreaded tests. We evaluate BALLERINA on 14 real-world bugs from 6 popular codebases: Groovy, Java JDK, jFreeChart, Log4j, Lucene, and Pool. The experiments show that tests generated by BALLERINA can find bugs on average 2X-10X faster than various configurations of basic random generation, and our clustering technique reduces the number of inspected failures on average 4X-8X. Using BALLERINA, we found three previously unknown bugs in Apache Pool and Log4j, one of which was already confirmed and fixed.",
"title": ""
},
{
"docid": "c59e0968b2d4dc314e52c116b21c3659",
"text": "This document aims to clarify frequent questions on using the Accord.NET Framework to perform statistical analyses. Here, we reproduce all steps of the famous Lindsay's Tutorial on Principal Component Analysis, in an attempt to give the reader a complete hands-on overview on the framework's basics while also discussing some of the results and sources of divergence between the results generated by Accord.NET and by other software packages.",
"title": ""
},
{
"docid": "9bb970d7a6c4f1c0f566cca6bc26750c",
"text": "The science goals of the SKA specify a field of view which is far greater than what current radio telescopes provide. Two possible feed architectures for reflector antennas are clusters of horns or phased-array feeds. This memo compares these two alternatives and finds that the beams produced by horn clusters fall short of fully sampling the sky and require interleaved pointings, whereas phased-array feeds can provide complete sampling with a single pointing. Thus for a given focal-plane area horn clusters incur an equivalent system temperature penalty of ∼ 2× or more. The situation is worse for wide-band feeds since the spacing of the beams is constant while the beamwidth is inversely proportional to frequency, increasing the number of pointings for a fully-sampled map at the high-end of an operating band. These disadvantages, along with adaptive beamforming capabilities, provide a strong argument for the development of phased-array technology for wide-field and wide-band feeds.",
"title": ""
},
{
"docid": "4c861a25442ed6c177853626382b3aa8",
"text": "In this paper we present a user study evaluating the benefits of geometrically correct user-perspective rendering using an Augmented Reality (AR) magic lens. In simulation we compared a user-perspective magic lens against the common device-perspective magic lens on both phone-sized and tablet-sized displays. Our results indicate that a tablet-sized display allows for significantly faster performance of a selection task and that a user-perspective lens has benefits over a device-perspective lens for a selection task. Based on these promising results, we created a proof-of-concept prototype, engineered with current off-the-shelf devices and software. To our knowledge, this is the first geometrically correct user-perspective magic lens.",
"title": ""
},
{
"docid": "95d5229599fcf91b7ea302aa5dafee2a",
"text": "The more the telecom services marketing paradigm evolves, the more important it becomes to retain high value customers. Traditional customer segmentation methods based on experience or ARPU (Average Revenue per User) consider neither customers’ future revenue nor the cost of servicing customers of different types. Therefore, it is very difficult to effectively identify high-value customers. In this paper, we propose a novel customer segmentation method based on customer lifecycle, which includes five decision models, i.e. current value, historic value, prediction of long-term value, credit and loyalty. Due to the difficulty of quantitative computation of long-term value, credit and loyalty, a decision tree method is used to extract important parameters related to long-term value, credit and loyalty. Then a judgments matrix formulated on the basis of characteristics of data and the experience of business experts is presented. Finally a simple and practical customer value evaluation system is built. This model is applied to telecom operators in a province in China and good accuracy is achieved. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c1ddefd126c6d338c4cd9238e9067435",
"text": "Tensor networks are efficient representations of high-dimensional tensors which have been very successful for physics and mathematics applications. We demonstrate how algorithms for optimizing such networks can be adapted to supervised learning tasks by using matrix product states (tensor trains) to parameterize models for classifying images. For the MNIST data set we obtain less than 1% test set classification error. We discuss how the tensor network form imparts additional structure to the learned model and suggest a possible generative interpretation.",
"title": ""
},
{
"docid": "accebc4ebc062f9676977b375e0c4f32",
"text": "Microtask crowdsourcing organizes complex work into workflows, decomposing large tasks into small, relatively independent microtasks. Applied to software development, this model might increase participation in open source software development by lowering the barriers to contribu-tion and dramatically decrease time to market by increasing the parallelism in development work. To explore this idea, we have developed an approach to decomposing programming work into microtasks. Work is coordinated through tracking changes to a graph of artifacts, generating appropriate microtasks and propagating change notifications to artifacts with dependencies. We have implemented our approach in CrowdCode, a cloud IDE for crowd development. To evaluate the feasibility of microtask programming, we performed a small study and found that a small crowd of 12 workers was able to successfully write 480 lines of code and 61 unit tests in 14.25 person-hours of time.",
"title": ""
},
{
"docid": "a9f8f3946dd963066006f19a251eef7c",
"text": "Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and for distance education. Three-dimensional (3D) virtual worlds are a combination of desk-top interactive Virtual Reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe Atmosphere and the pedagogical affordances and constraints of the inscription tools, discourse tools, experiential tools, and resource tools of each application. The purpose of this review is to discuss the implications of using each application for educational initiatives by exploring how the various design features of each may support and enhance the design of interactive learning environments.",
"title": ""
},
{
"docid": "2b1adb51eafbcd50675513bc67e42140",
"text": "This text reviews the generic aspects of the central nervous system evolutionary development, emphasizing the developmental features of the brain structures related with behavior and with the cognitive functions that finally characterized the human being. Over the limbic structures that with the advent of mammals were developed on the top of the primitive nervous system of their ancestrals, the ultimate cortical development with neurons arranged in layers constituted the structural base for an enhanced sensory discrimination, for more complex motor activities, and for the development of cognitive and intellectual functions that finally characterized the human being. The knowledge of the central nervous system phylogeny allow us particularly to infer possible correlations between the brain structures that were developed along phylogeny and the behavior of their related beings. In this direction, without discussing its conceptual aspects, this review ends with a discussion about the central nervous system evolutionary development and the emergence of consciousness, in the light of its most recent contributions.",
"title": ""
},
{
"docid": "7e3cdead80a1d17b064b67ddacd5d8c1",
"text": "BACKGROUND\nThe aim of the study was to evaluate the relationship between depression and Internet addiction among adolescents.\n\n\nSAMPLING AND METHOD\nA total of 452 Korean adolescents were studied. First, they were evaluated for their severity of Internet addiction with consideration of their behavioral characteristics and their primary purpose for computer use. Second, we investigated correlations between Internet addiction and depression, alcohol dependence and obsessive-compulsive symptoms. Third, the relationship between Internet addiction and biogenetic temperament as assessed by the Temperament and Character Inventory was evaluated.\n\n\nRESULTS\nInternet addiction was significantly associated with depressive symptoms and obsessive-compulsive symptoms. Regarding biogenetic temperament and character patterns, high harm avoidance, low self-directedness, low cooperativeness and high self-transcendence were correlated with Internet addiction. In multivariate analysis, among clinical symptoms depression was most closely related to Internet addiction, even after controlling for differences in biogenetic temperament.\n\n\nCONCLUSIONS\nThis study reveals a significant association between Internet addiction and depressive symptoms in adolescents. This association is supported by temperament profiles of the Internet addiction group. The data suggest the necessity of the evaluation of the potential underlying depression in the treatment of Internet-addicted adolescents.",
"title": ""
},
{
"docid": "b678ca4c649a2e69637b84c3e35f88f5",
"text": "Induced expression of the Flock House virus in the soma of C. elegans results in the RNAi-dependent production of virus-derived, small-interfering RNAs (viRNAs), which in turn silence the viral genome. We show here that the viRNA-mediated viral silencing effect is transmitted in a non-Mendelian manner to many ensuing generations. We show that the viral silencing agents, viRNAs, are transgenerationally transmitted in a template-independent manner and work in trans to silence viral genomes present in animals that are deficient in producing their own viRNAs. These results provide evidence for the transgenerational inheritance of an acquired trait, induced by the exposure of animals to a specific, biologically relevant physiological challenge. The ability to inherit such extragenic information may provide adaptive benefits to an animal.",
"title": ""
},
{
"docid": "d3156738608e92d69b5ec7a5fa91af18",
"text": "Carotid intima-media thickness (CIMT) has been shown to predict cardiovascular (CV) risk in multiple large studies. Careful evaluation of CIMT studies reveals discrepancies in the comprehensiveness with which CIMT is assessed-the number of carotid segments evaluated (common carotid artery [CCA], internal carotid artery [ICA], or the carotid bulb), the type of measurements made (mean or maximum of single measurements, mean of the mean, or mean of the maximum for multiple measurements), the number of imaging angles used, whether plaques were included in the intima-media thickness (IMT) measurement, the report of adjusted or unadjusted models, risk association versus risk prediction, and the arbitrary cutoff points for CIMT and for plaque to predict risk. Measuring the far wall of the CCA was shown to be the least variable method for assessing IMT. However, meta-analyses suggest that CCA-IMT alone only minimally improves predictive power beyond traditional risk factors, whereas inclusion of the carotid bulb and ICA-IMT improves prediction of both cardiac risk and stroke risk. Carotid plaque appears to be a more powerful predictor of CV risk compared with CIMT alone. Quantitative measures of plaques such as plaque number, plaque thickness, plaque area, and 3-dimensional assessment of plaque volume appear to be progressively more sensitive in predicting CV risk than mere assessment of plaque presence. Limited data show that plaque characteristics including plaque vascularity may improve CV disease risk stratification further. IMT measurement at the CCA, carotid bulb, and ICA that allows inclusion of plaque in the IMT measurement or CCA-IMT measurement along with plaque assessment in all carotid segments is emerging as the focus of carotid artery ultrasound imaging for CV risk prediction.",
"title": ""
},
{
"docid": "06413e71fbbe809ee2ffbdb31dc8fe59",
"text": "This paper takes a critical look at the features used in the semantic role tagging literature and show that the information in the input, generally a syntactic parse tree, has yet to be fully exploited. We propose an additional set of features and our experiments show that these features lead to fairly significant improvements in the tasks we performed. We further show that different features are needed for different subtasks. Finally, we show that by using a Maximum Entropy classifier and fewer features, we achieved results comparable with the best previously reported results obtained with SVM models. We believe this is a clear indication that developing features that capture the right kind of information is crucial to advancing the stateof-the-art in semantic analysis.",
"title": ""
},
{
"docid": "d80fc668073878c476bdf3997b108978",
"text": "Automotive information services utilizing vehicle data are rapidly expanding. However, there is currently no data centric software architecture that takes into account the scale and complexity of data involving numerous sensors. To address this issue, the authors have developed an in-vehicle datastream management system for automotive embedded systems (eDSMS) as data centric software architecture. Providing the data stream functionalities to drivers and passengers are highly beneficial. This paper describes a vehicle embedded data stream processing platform for Android devices. The platform enables flexible query processing with a dataflow query language and extensible operator functions in the query language on the platform. The platform employs architecture independent of data stream schema in in-vehicle eDSMS to facilitate smoother Android application program development. This paper presents specifications and design of the query language and APIs of the platform, evaluate it, and discuss the results. Keywords—Android, automotive, data stream management system",
"title": ""
},
{
"docid": "1db6ecf2059b749f0ad640f9c53b1826",
"text": "U.S. hotel brands and international hotel brands headquartered in the United States have increasingly evolved away from being hotel operating companies to being brand management and franchise administration organizations. This trend has allowed for the accelerated growth and development of many major hotel brands, and the increasing growth of franchised hotels. There are numerous strategic implications related to this trend. This study seeks to analyze some of these strategic implications by evaluating longitudinal data regarding the performance of major hotel brands in the marketplace, both in terms of guest satisfaction and revenue indicators. Specifically, the authors test whether guest satisfaction at various U.S. and international brands influences both brand occupancy percentage and average daily room rate 3 years later. In addition, the authors investigate whether the percentage of franchised hotel properties influences both guest satisfaction and occupancy 3 years later. Also, they test whether overall brand size has a positive or detrimental effect on future hotel occupancy. Finally, whether the change in guest satisfaction for hotel brands effects the change in average daily rate during the same 3-year period is tested.",
"title": ""
},
{
"docid": "c467edcb0c490034776ba2dc2cde9d9e",
"text": "BACKGROUND\nPostoperative complications of blepharoplasty range from cutaneous changes to vision-threatening emergencies. Some of these can be prevented with careful preoperative evaluation and surgical technique. When complications arise, their significance can be diminished by appropriate management. This article addresses blepharoplasty complications based on the typical postoperative timeframe when they are encountered.\n\n\nMETHODS\nThe authors conducted a review article of major blepharoplasty complications and their treatment.\n\n\nRESULTS\nComplications within the first postoperative week include corneal abrasions and vision-threatening retrobulbar hemorrhage; the intermediate period (weeks 1 through 6) addresses upper and lower eyelid malpositions, strabismus, corneal exposure, and epiphora; and late complications (>6 weeks) include changes in eyelid height and contour along with asymmetries, scarring, and persistent edema.\n\n\nCONCLUSIONS\nA thorough knowledge of potential complications of blepharoplasty surgery is necessary for the practicing aesthetic surgeon. Within this article, current concepts and relevant treatment strategies are reviewed with the use of the most recent and/or appropriate peer-reviewed literature available.",
"title": ""
}
] |
scidocsrr
|
dc09c1afdec2f4438587ec9dfc5da30f
|
GST: GPU-decodable supercompressed textures
|
[
{
"docid": "911ca70346689d6ba5fd01b1bc964dbe",
"text": "We present a novel texture compression scheme, called iPACKMAN, targeted for hardware implementation. In terms of image quality, it outperforms the previous de facto standard texture compression algorithms in the majority of all cases that we have tested. Our new algorithm is an extension of the PACKMAN texture compression system, and while it is a bit more complex than PACKMAN, it is still very low in terms of hardware complexity.",
"title": ""
},
{
"docid": "90ca045940f1bc9517c64bd93fd33d37",
"text": "We present a new algorithm for encoding low dynamic range images into fixed-rate texture compression formats. Our approach provides orders of magnitude improvements in speed over existing publicly-available compressors, while generating high quality results. The algorithm is applicable to any fixed-rate texture encoding scheme based on Block Truncation Coding and we use it to compress images into the OpenGL BPTC format. The underlying technique uses an axis-aligned bounding box to estimate the proper partitioning of a texel block and performs a generalized cluster fit to compute the endpoint approximation. This approximation can be further refined using simulated annealing. The algorithm is inherently parallel and scales with the number of processor cores. We highlight its performance on low-frequency game textures and the high frequency Kodak Test Image Suite.",
"title": ""
}
] |
[
{
"docid": "cc9c9720b223ff1d433758bce11a373a",
"text": "or to skim the text of the article quickly, while academics are more likely to download and print the paper. Further research investigating the ratio between HTML views and PDF downloads could uncover interesting findings about how the public interacts with the open access (OA) research literature. Scholars In addition to tracking scholarly impacts on traditionally invisible audiences, altmetrics hold potential for tracking previously hidden scholarly impacts. Faculty of 1000 Faculty of 1000 (F1000) is a service publishing reviews of important articles, as adjudged by a core “faculty” of selected scholars. Wets, Weedon, and Velterop (2003) argue that F1000 is valuable because it assesses impact at the article level, and adds a human level assessment that statistical indicators lack. Others disagree (Nature Neuroscience, 2005), pointing to a very strong correlation (r = 0.93) between F1000 score and Journal Impact Factor. This said, the service has clearly demonstrated some value, as over two thirds of the world’s top research institutions pay the annual subscription fee to use F1000 (Wets et al., 2003). Moreover, F1000 has been to shown to spot valuable articles which “sole reliance on bibliometric indicators would have led [researchers] to miss” (Allen, Jones, Dolby, Lynn, & Walport, 2009, p. 1). In the PLoS dataset, F1000 recommendations were not closely associated with citation or other altmetrics counts, and formed their own factor in factor analysis, suggesting they track a relatively distinct sort of impact. Conversation (scholarly blogging) In this context, “scholarly blogging” is distinguished from its popular counterpart by the expertise and qualifications of the blogger. While a useful distinction, this is inevitably an imprecise one. One approach has been to limit the investigation to science-only aggregators like ResearchBlogging (Groth & Gurney, 2010; Shema & Bar-Ilan, 2011). Academic blogging has grown steadily in visibility; academics have blogged their dissertations (Efimova, 2009), and the ranks of academic bloggers contain several Fields Medalists, Nobel laureates, and other eminent scholars (Nielsen, 2009). Economist and Nobel laureate Paul Krugman (Krugman, 2012), himself a blogger, argues that blogs are replacing the working-paper culture that has in turn already replaced economics journals as distribution tools. Given its importance, there have been surprisingly few altmetrics studies of scholarly blogging. Extant research, however, has shown that blogging shares many of the characteristics of more formal communication, including a long-tail distribution of cited articles (Groth & Gurney, 2010; Shema & Bar-Ilan, 2011). Although science bloggers can write anonymously, most blog under their real names (Shema & Bar-Ilan, 2011). Conversation (Twitter) Scholars on Twitter use the service to support different activities, including teaching (Dunlap & Lowenthal, 2009; Junco, Heiberger, & Loken, 2011), participating in conferences (Junco et al., 2011; Letierce et al., 2010; Ross et al., 2011), citing scholarly articles (Priem & Costello, 2010; Weller, Dröge, & Puschmann, 2011), and engaging in informal communication (Ross et al., 2011; Zhao & Rosson, 2009). Citations from Twitter are a particularly interesting data source, since they capture the sort of informal discussion that accompanies early important work. There is, encouragingly, evidence that Tweeting scholars take citations from Twitter seriously, both in creating and reading them (Priem & Costello, 2010). 
The number of scholars on Twitter is growing steadily, as shown in Figure 1. The same study found that, in a sample of around 10,000 Ph.D. students and faculty members at five representative universities, 1 in 40 scholars had an active Twitter account. Although some have suggested that Twitter is only used by younger scholars, rank was not found to significantly associate with Twitter use, and in fact faculty members’ tweets were twice as likely to discuss their and others’ scholarly work. Conversation (article commenting) Following the lead of blogs and other social media platforms, many journals have added article-level commenting to their online platforms in the middle of the last decade. In theory, the discussion taking place in these threads is another valuable lens into the early impacts of scientific ideas. In practice, however, many commenting systems are virtual ghost towns. In a sample of top medical journals, fully half had commenting systems lying idle, completely unused by anyone (Schriger, Chehrazi, Merchant, & Altman, 2011). However, commenting was far from universally unsuccessful; several journals had comments on 50-76% of their articles. In a sample from the British Medical Journal, articles had, on average, nearly five comments each (Gotzsche, Delamothe, Godlee, & Lundh, 2010). Additionally, many articles may accumulate comments in other environments; the growing number of external comment sites allows users to post comments on journal articles published elsewhere. These have tended to appear and disappear quickly over the last few years. Neylon (2010) argues that online article commenting is thriving, particularly for controversial papers, but that \"...people are much more comfortable commenting in their own spaces” (para. 5), like their blogs and on Twitter. Reference managers Reference managers like Mendeley and CiteULike are very useful sources of altmetrics data and are currently among the most studied. Although scholars have used electronic reference managers for some time, this latest generation offers scientometricians the chance to query their datasets, offering a compelling glimpse into scholars’ libraries. It is worth summarizing three main points, though. First, the most important social reference managers are CiteULike and Mendeley. Another popular reference manager, Zotero, has received less study (but see Lucas, 2008). Papers and ReadCube are newer, smaller reference managers; Connotea and 2Collab both dealt poorly with spam; the latter has closed, and the former may follow. Second, the usage base of social reference managers—particularly Mendeley—is large and growing rapidly. Mendeley’s coverage, in particular, rivals that of commercial databases like Scopus and Web of Science (WoS) (Bar-Ilan et al., 2012; Haustein & Siebenlist, 2011; Li et al., 2011; Priem et al., 2012). Finally, inclusion in reference managers correlates to citation more strongly than most other altmetrics. Working with various datasets, researchers have reported correlations of .46 (Bar-Ilan, 2012), .56 (Li et al., 2011), and .5 (Priem et al., 2012) between inclusion in users’ Mendeley libraries, and WoS citations. This closer relationship is likely because of the importance of reference managers in the citation workflow. However, the lack of perfect or even strong correlation suggests that this altmetric, too, captures influence not reflected in the citation record.
There has been particular interest in using social bookmarking for recommendations (Bogers & van den Bosch, 2008; Jiang, He, & Ni, 2011). pdf downloads As discussed earlier, most research on downloads today does not distinguish between HTML views in PDF downloads. However there is a substantial and growing body of research investigating article downloads, and their relation to later citation. Several researchers have found that downloads predict or correlate with later citation (Perneger, 2004; Brody et al., 2006). The MESUR project is the largest of these studies to date, and used linked usage events to create a novel map of the connections between disciplines, as well as analyses of potential metrics using download and citation data in novel ways (Bollen, et al., 2009). Shuai, Pepe, and Bollen (2012) show that downloads and Twitter citations interact, with Twitter likely driving traffic to new papers, and also reflecting reader interest. Uses, limitations and future research Uses Several uses of altmetrics have been proposed, which aim to capitalize on their speed, breadth, and diversity, including use in evaluation, analysis, and prediction. Evaluation The breadth of altmetrics could support more holistic evaluation efforts; a range of altmetrics may help solve the reliability problems of individual measures by triangulating scores from easily-accessible “converging partial indicators” (Martin & Irvine, 1983, p. 1). Altmetrics could also support the evaluation of increasingly important, non-traditional scholarly products like datasets and software, which are currently underrepresented in the citation record (Howison & Herbsleb, 2011; Sieber & Trumbo, 1995). Research that impacts wider audiences could also be better rewarded; Neylon (2012) relates a compelling example of how tweets reveal clinical use of a research paper—use that would otherwise go undiscovered and unrewarded. The speed of altmetrics could also be useful in evaluation, particularly for younger scholars whose research has not yet accumulated many citations. Most importantly, altmetrics could help open a window on scholars’ “scientific ‘street cred’” (Cronin, 2001, p. 6), helping reward researchers whose subtle influences—in conversations, teaching, methods expertise, and so on— influence their colleagues without perturbing the citation record. Of course, potential evaluators must be strongly cautioned that while uncritical application of any metric is dangerous, this is doubly so with altmetrics, whose research base is not yet adequate to support high-stakes decisions.",
"title": ""
},
{
"docid": "42050d2d11a30e003b9d35fad12daa5e",
"text": "Document is unavailable: This DOI was registered to an article that was not presented by the author(s) at this conference. As per section 8.2.1.B.13 of IEEE's \"Publication Services and Products Board Operations Manual,\" IEEE has chosen to exclude this article from distribution. We regret any inconvenience.",
"title": ""
},
{
"docid": "33eeb883ae070fdc1b5a1eb656bce6b9",
"text": "Traffic Congestion is one of many serious global problems in all great cities resulted from rapid urbanization which always exert negative externalities upon society. The solution of traffic congestion is highly geocentric and due to its heterogeneous nature, curbing congestion is one of the hard tasks for transport planners. It is not possible to suggest unique traffic congestion management framework which could be absolutely applied for every great cities. Conversely, it is quite feasible to develop a framework which could be used with or without minor adjustment to deal with congestion problem. So, the main aim of this paper is to prepare a traffic congestion mitigation framework which will be useful for urban planners, transport planners, civil engineers, transport policy makers, congestion management researchers who are directly or indirectly involved or willing to involve in the task of traffic congestion management. Literature review is the main source of information of this study. In this paper, firstly, traffic congestion is defined on the theoretical point of view and then the causes of traffic congestion are briefly described. After describing the causes, common management measures, using worldwide, are described and framework for supply side and demand side congestion management measures are prepared.",
"title": ""
},
{
"docid": "2d34486ae54b2ed4795a8e85ce22ce57",
"text": "We collected a corpus of parallel text in 11 languages from the proceedings of the European Parliament, which are published on the web1. This corpus has found widespread use in the NLP community. Here, we focus on its acquisition and its application as training data for statistical machine translation (SMT). We trained SMT systems for 110 language pairs, which reveal interesting clues into the challenges ahead.",
"title": ""
},
{
"docid": "32f3396d7e843f75c504cd99b00944a0",
"text": "This paper aims to address the very challenging problem of efficient and accurate hand tracking from depth sequences, meanwhile to deform a high-resolution 3D hand model with geometric details. We propose an integrated regression framework to infer articulated hand pose, and regress high-frequency details from sparse high-resolution 3D hand model examples. Specifically, our proposed method mainly consists of four components: skeleton embedding, hand joint regression, skeleton alignment, and high-resolution details integration. Skeleton embedding is optimized via a wrinkle-based skeleton refinement method for faithful hand models with fine geometric details. Hand joint regression is based on a deep convolutional network, from which 3D hand joint locations are predicted from a single depth map, then a skeleton alignment stage is performed to recover fully articulated hand poses. Deformable fine-scale details are estimated from a nonlinear mapping between the hand joints and per-vertex displacements. Experiments on two challenging datasets show that our proposed approach can achieve accurate, robust, and real-time hand tracking, while preserve most high-frequency details when deforming a virtual hand.",
"title": ""
},
{
"docid": "59405c31da09ea58ef43a03d3fc55cf4",
"text": "The Quality of Service (QoS) management is one of the urgent problems in networking which doesn't have an acceptable solution yet. In the paper the approach to this problem based on multipath routing protocol in SDN is considered. The proposed approach is compared with other QoS management methods. A structural and operation schemes for its practical implementation is proposed.",
"title": ""
},
{
"docid": "7b02c36cef0c195d755b6cc1c7fbda2e",
"text": "Content based object retrieval across large scale surveillance video dataset is a significant and challenging task, in which learning an effective compact object descriptor plays a critical role. In this paper, we propose an efficient deep compact descriptor with bagging auto-encoders. Specifically, we take advantage of discriminative CNN to extract efficient deep features, which not only involve rich semantic information but also can filter background noise. Besides, to boost the retrieval speed, auto-encoders are used to map the high-dimensional real-valued CNN features into short binary codes. Considering the instability of auto-encoder, we adopt a bagging strategy to fuse multiple auto-encoders to reduce the generalization error, thus further improving the retrieval accuracy. In addition, bagging is easy for parallel computing, so retrieval efficiency can be guaranteed. Retrieval experimental results on the dataset of 100k visual objects extracted from multi-camera surveillance videos demonstrate the effectiveness of the proposed deep compact descriptor.",
"title": ""
},
{
"docid": "bc388488c5695286fe7d7e56ac15fa94",
"text": "In this paper a new parking guiding and information system is described. The system assists the user to find the most suitable parking space based on his/her preferences and learned behavior. The system takes into account parameters such as driver's parking duration, arrival time, destination, type preference, cost preference, driving time, and walking distance as well as time-varying parking rules and pricing. Moreover, a prediction algorithm is proposed to forecast the parking availability for different parking locations for different times of the day based on the real-time parking information, and previous parking availability/occupancy data. A novel server structure is used to implement the system. Intelligent parking assist system reduces the searching time for parking spots in urban environments, and consequently leads to a reduction in air pollutions and traffic congestion. On-street parking meters, off-street parking garages, as well as free parking spaces are considered in our system.",
"title": ""
},
{
"docid": "920748fbdcaf91346a40e3bf5ae53d42",
"text": "This sketch presents an improved formalization of automatic caricature that extends a standard approach to account for the population variance of facial features. Caricature is generally considered a rendering that emphasizes the distinctive features of a particular face. A formalization of this idea, which we term “Exaggerating the Difference from the Mean” (EDFM), is widely accepted among caricaturists [Redman 1984] and was first implemented in a groundbreaking computer program by [Brennan 1985]. Brennan’s “Caricature generator” program produced caricatures by manually defining a polyline drawing with topology corresponding to a frontal, mean, face-shape drawing, and then displacing the vertices by a constant factor away from the mean shape. Many psychological studies have applied the “Caricature Generator” or EDFM idea to investigate caricaturerelated issues in face perception [Rhodes 1997].",
"title": ""
},
{
"docid": "8d5d2f266181d456d4f71df26075a650",
"text": "Integrated architectures in the automotive and avionic domain promise improved resource utilization and enable a better tactic coordination of application subsystems compared to federated systems. In order to support safety-critical application subsystems, an integrated architecture needs to support fault-tolerant strategies that enable the continued operation of the system in the presence of failures. The basis for the implementation and validation of fault-tolerant strategies is a fault hypothesis that identifies the fault containment regions, specifies the failure modes and provides realistic failure rate assumptions. This paper describes a fault hypothesis for integrated architectures, which takes into account the collocation of multiple software components on shared node computers. We argue in favor of a differentiation of fault containment regions for hardware and software faults. In addition, the fault hypothesis describes the assumptions concerning the respective frequencies of transient and permanent failures in consideration of recent semiconductor trends",
"title": ""
},
{
"docid": "ab92c8ded0001d4103be4e7a8ee3a1f7",
"text": "Metabolic syndrome defines a cluster of interrelated risk factors for cardiovascular disease and diabetes mellitus. These factors include metabolic abnormalities, such as hyperglycemia, elevated triglyceride levels, low high-density lipoprotein cholesterol levels, high blood pressure, and obesity, mainly central adiposity. In this context, extracellular vesicles (EVs) may represent novel effectors that might help to elucidate disease-specific pathways in metabolic disease. Indeed, EVs (a terminology that encompasses microparticles, exosomes, and apoptotic bodies) are emerging as a novel mean of cell-to-cell communication in physiology and pathology because they represent a new way to convey fundamental information between cells. These microstructures contain proteins, lipids, and genetic information able to modify the phenotype and function of the target cells. EVs carry specific markers of the cell of origin that make possible monitoring their fluctuations in the circulation as potential biomarkers inasmuch their circulating levels are increased in metabolic syndrome patients. Because of the mixed components of EVs, the content or the number of EVs derived from distinct cells of origin, the mode of cell stimulation, and the ensuing mechanisms for their production, it is difficult to attribute specific functions as drivers or biomarkers of diseases. This review reports recent data of EVs from different origins, including endothelial, smooth muscle cells, macrophages, hepatocytes, adipocytes, skeletal muscle, and finally, those from microbiota as bioeffectors of message, leading to metabolic syndrome. Depicting the complexity of the mechanisms involved in their functions reinforce the hypothesis that EVs are valid biomarkers, and they represent targets that can be harnessed for innovative therapeutic approaches.",
"title": ""
},
{
"docid": "73c2874b381e49f9c36ae0b43d7e73fb",
"text": "Automatic abnormality detection in video sequences has recently gained an increasing attention within the research community. Although progress has been seen, there are still some limitations in current research. While most systems are designed at detecting specific abnormality, others which are capable of detecting more than two types of abnormalities rely on heavy computation. Therefore, we provide a framework for detecting abnormalities in video surveillance by using multiple features and cascade classifiers, yet achieve above real-time processing speed. Experimental results on two datasets show that the proposed framework can reliably detect abnormalities in the video sequence, outperforming the current state-of-the-art methods.",
"title": ""
},
{
"docid": "19f4de5f01f212bf146087d4695ce15e",
"text": "Reliable feature correspondence between frames is a critical step in visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM) algorithms. In comparison with existing VO and V-SLAM algorithms, semi-direct visual odometry (SVO) has two main advantages that lead to stateof-the-art frame rate camera motion estimation: direct pixel correspondence and efficient implementation of probabilistic mapping method. This paper improves the SVO mapping by initializing the mean and the variance of the depth at a feature location according to the depth prediction from a singleimage depth prediction network. By significantly reducing the depth uncertainty of the initialized map point (i.e., small variance centred about the depth prediction), the benefits are twofold: reliable feature correspondence between views and fast convergence to the true depth in order to create new map points. We evaluate our method with two outdoor datasets: KITTI dataset and Oxford Robotcar dataset. The experimental results indicate that the improved SVO mapping results in increased robustness and camera tracking accuracy.",
"title": ""
},
{
"docid": "59754857209f45ab7c3708fa413808a3",
"text": "Recent studies on the hippocampus and the prefrontal cortex have considerably advanced our understanding of the distinct roles of these brain areas in the encoding and retrieval of memories, and of how they interact in the prolonged process by which new memories are consolidated into our permanent storehouse of knowledge. These studies have led to a new model of how the hippocampus forms and replays memories and how the prefrontal cortex engages representations of the meaningful contexts in which related memories occur, as well as how these areas interact during memory retrieval. Furthermore, they have provided new insights into how interactions between the hippocampus and prefrontal cortex support the assimilation of new memories into pre-existing networks of knowledge, called schemas, and how schemas are modified in this process as the foundation of memory consolidation.",
"title": ""
},
{
"docid": "248a447eb07f0939fa479b0eb8778756",
"text": "The present study was done to determine the long-term success and survival of fixed partial dentures (FPDs) and to evaluate the risks for failures due to specific biological and technical complications. A MEDLINE search (PubMed) from 1966 up to March 2004 was conducted, as well as hand searching of bibliographies from relevant articles. Nineteen studies from an initial yield of 3658 titles were finally selected and data were extracted independently by three reviewers. Prospective and retrospective cohort studies with a mean follow-up time of at least 5 years in which patients had been examined clinically at the follow-up visits were included in the meta-analysis. Publications only based on patients records, questionnaires or interviews were excluded. Survival of the FPDs was analyzed according to in situ and intact failure risks. Specific biological and technical complications such as caries, loss of vitality and periodontal disease recurrence as well as loss of retention, loss of vitality, tooth and material fractures were also analyzed. The 10-year probability of survival for fixed partial dentures was 89.1% (95% confidence interval (CI): 81-93.8%) while the probability of success was 71.1% (95% CI: 47.7-85.2%). The 10-year risk for caries and periodontitis leading to FPD loss was 2.6% and 0.7%, respectively. The 10-year risk for loss of retention was 6.4%, for abutment fracture 2.1% and for material fractures 3.2%.",
"title": ""
},
{
"docid": "641049f7bdf194b3c326298c5679c469",
"text": "Acknowledgements Research in areas where there are many possible paths to follow requires a keen eye for crucial issues. The study of learning systems is such an area. Through the years of working with Andy Barto and Rich Sutton, I have observed many instances of \" fluff cutting \" and the exposure of basic issues. I thank both Andy and Rich for the insights that have rubbed off on me. I also thank Andy for opening up an infinite world of perspectives on learning, ranging from engineering principles to neural processing theories. I thank Rich for showing me the most important step in doing \" science \" —simplify your questions by isolating the issues. Several people contributed to the readability of this dissertation. Andy spent much time carefully reading several drafts. Through his efforts the clarity is much improved. I thank Paul Utgoff, Michael Arbib, and Bill Kilmer for reading drafts of this dissertation and providing valuable criticisms. Paul provided a non-connectionist perspective that widened my view considerably. He never hesitated to work out differences in terms and methodologies that have been developed through research with connectionist vs. symbolic representations. I thank for commenting on an early draft and for many interesting discussions. and the AFOSR for starting and maintaining the research project that supported the work reported in this dis-sertation. I thank Susan Parker for the skill with which she administered the project. And I thank the COINS Department at UMass and the RCF Staff for the maintenance of the research computing environment. Much of the computer graphics software used to generate figures of this dissertation is based on graphics tools provided by Rich Sutton and Andy Cromarty. Most importantly, I thank Stacey and Joseph for always being there to lift my spirits while I pursued distant milestones and to share my excitement upon reaching them. Their faith and confidence helped me maintain a proper perspective. The difficulties of learning in multilayered networks of computational units has limited the use of connectionist systems in complex domains. This dissertation elucidates the issues of learning in a network's hidden units, and reviews methods for addressing these issues that have been developed through the years. Issues of learning in hidden units are shown to be analogous to learning issues for multilayer systems employing symbolic representations. Comparisons of a number of algorithms for learning in hidden units are made by applying them in …",
"title": ""
},
{
"docid": "7af1da740fbff209987276bf0d765365",
"text": "A finite-difference method for solving the time-dependent NavierStokes equations for an incompressible fluid is introduced. This method uses the primitive variables, i.e. the velocities and the pressure, and is equally applicable to problems in two and three space dimensions. Test problems are solved, and an application to a three-dimensional convection problem is presented. Introduction. The equations of motion of an incompressible fluid are dtUi 4UjdjUi = — — dip + vV2Ui + Ei} ( V2 = Yl d2 ) , djUj = 0 , PO \\ 3 ' where Ui are the velocity components, p is the pressure, p0 is the density, Ei are the components of the external forces per unit mass, v is the coefficient of kinematic viscosity, t is the time, and the indices i, j refer to the space coordinates Xi, x¡, i, j = 1, 2, 3. d, denotes differentiation with respect to Xi, and dt differentiation with respect to the time t. The summation convention is used in writing the equations. We write , Uj , Xj , _ ( d \\ Ui u ' Xi \" d ' p \\povur",
"title": ""
},
{
"docid": "a57e470ad16c025f6b0aae99de25f498",
"text": "Purpose To establish the efficacy and safety of botulinum toxin in the treatment of Crocodile Tear Syndrome and record any possible complications.Methods Four patients with unilateral aberrant VII cranial nerve regeneration following an episode of facial paralysis consented to be included in this study after a comprehensive explanation of the procedure and possible complications was given. On average, an injection of 20 units of botulinum toxin type A (Dysport®) was given to the affected lacrimal gland. The effect was assessed with a Schirmer’s test during taste stimulation. Careful recording of the duration of the effect and the presence of any local or systemic complications was made.Results All patients reported a partial or complete disappearance of the reflex hyperlacrimation following treatment. Schirmer’s tests during taste stimulation documented a significant decrease in tear secretion. The onset of effect of the botulinum toxin was typically 24–48 h after the initial injection and lasted 4–5 months. One patient had a mild increase in his preexisting upper lid ptosis, but no other local or systemic side effects were experienced.Conclusions The injection of botulinum toxin type A into the affected lacrimal glands of patients with gusto-lacrimal reflex is a simple, effective and safe treatment.",
"title": ""
},
{
"docid": "9cdddf98d24d100c752ea9d2b368bb77",
"text": "Using predictive models to identify patterns that can act as biomarkers for different neuropathoglogical conditions is becoming highly prevalent. In this paper, we consider the problem of Autism Spectrum Disorder (ASD) classification where previous work has shown that it can be beneficial to incorporate a wide variety of meta features, such as socio-cultural traits, into predictive modeling. A graph-based approach naturally suits these scenarios, where a contextual graph captures traits that characterize a population, while the specific brain activity patterns are utilized as a multivariate signal at the nodes. Graph neural networks have shown improvements in inferencing with graph-structured data. Though the underlying graph strongly dictates the overall performance, there exists no systematic way of choosing an appropriate graph in practice, thus making predictive models non-robust. To address this, we propose a bootstrapped version of graph convolutional neural networks (G-CNNs) that utilizes an ensemble of weakly trained G-CNNs, and reduce the sensitivity of models on the choice of graph construction. We demonstrate its effectiveness on the challenging Autism Brain Imaging Data Exchange (ABIDE) dataset and show that our approach improves upon recently proposed graph-based neural networks. We also show that our method remains more robust to noisy graphs.",
"title": ""
},
{
"docid": "9c800a53208bf1ded97e963ed4f80b28",
"text": "We have developed a multi-material 3D printing platform that is high-resolution, low-cost, and extensible. The key part of our platform is an integrated machine vision system. This system allows for self-calibration of printheads, 3D scanning, and a closed-feedback loop to enable print corrections. The integration of machine vision with 3D printing simplifies the overall platform design and enables new applications such as 3D printing over auxiliary parts. Furthermore, our platform dramatically expands the range of parts that can be 3D printed by simultaneously supporting up to 10 different materials that can interact optically and mechanically. The platform achieves a resolution of at least 40 μm by utilizing piezoelectric inkjet printheads adapted for 3D printing. The hardware is low cost (less than $7,000) since it is built exclusively from off-the-shelf components. The architecture is extensible and modular -- adding, removing, and exchanging printing modules can be done quickly. We provide a detailed analysis of the system's performance. We also demonstrate a variety of fabricated multi-material objects.",
"title": ""
}
] |
scidocsrr
|
ad1f409ebcef4ddcf9b58c6dd80771ef
|
Investigation of forecasting methods for the hourly spot price of the day-ahead electric power markets
|
[
{
"docid": "508eb69a9e6b0194fbda681439e404c4",
"text": "Price forecasting is becoming increasingly relevant to producers and consumers in the new competitive electric power markets. Both for spot markets and long-term contracts, price forecasts are necessary to develop bidding strategies or negotiation skills in order to maximize benefit. This paper provides a method to predict next-day electricity prices based on the ARIMA methodology. ARIMA techniques are used to analyze time series and, in the past, have been mainly used for load forecasting due to their accuracy and mathematical soundness. A detailed explanation of the aforementioned ARIMA models and results from mainland Spain and Californian markets are presented.",
"title": ""
}
] |
[
{
"docid": "f85a8a7e11a19d89f2709cc3c87b98fc",
"text": "This paper presents novel store-and-forward packet routing algorithms for Wireless Body Area Networks (WBAN) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra short range radio links, unpredictable RF attenuation, and human postural mobility. On-body DTN routing protocols are then developed using a stochastic link cost formulation, capturing multi-scale topological localities in human postural movements. Performance of the proposed protocols are evaluated experimentally and via simulation, and are compared with a number of existing single-copy DTN routing protocols and an on-body packet flooding mechanism that serves as a performance benchmark with delay lower-bound. It is shown that via multi-scale modeling of the spatio-temporal locality of on-body link disconnection patterns, the proposed algorithms can provide better routing performance compared to a number of existing probabilistic, opportunistic, and utility-based DTN routing protocols in the literature.",
"title": ""
},
{
"docid": "d9aa5e0d687add02a6b31759c482489c",
"text": "This paper presents an accurate and fast algorithm for road segmentation using convolutional neural network (CNN) and gated recurrent units (GRU). For autonomous vehicles, road segmentation is a fundamental task that can provide the drivable area for path planning. The existing deep neural network based segmentation algorithms usually take a very deep encoder-decoder structure to fuse pixels, which requires heavy computations, large memory and long processing time. Hereby, a CNN-GRU network model is proposed and trained to perform road segmentation using data captured by the front camera of a vehicle. GRU network obtains a long spatial sequence with lower computational complexity, comparing to traditional encoderdecoder architecture. The proposed road detector is evaluated on the KITTI road benchmark and achieves high accuracy for road segmentation at real-time processing speed.",
"title": ""
},
{
"docid": "5b5d4c33a600d93b8b999a51318980da",
"text": "In this work, we focused on liveness detection for facial recognition system's spoofing via fake face movement. We have developed a pupil direction observing system for anti-spoofing in face recognition systems using a basic hardware equipment. Firstly, eye area is being extracted from real time camera by using Haar-Cascade Classifier with specially trained classifier for eye region detection. Feature points have extracted and traced for minimizing person's head movements and getting stable eye region by using Kanade-Lucas-Tomasi (KLT) algorithm. Eye area is being cropped from real time camera frame and rotated for a stable eye area. Pupils are extracted from eye area by using a new improved algorithm subsequently. After a few stable number of frames that has pupils, proposed spoofing algorithm selects a random direction and sends a signal to Arduino to activate that selected direction's LED on a square frame that has totally eight LEDs for each direction. After chosen LED has been activated, eye direction is observed whether pupil direction and LED's position matches. If the compliance requirement is satisfied, algorithm returns data that contains liveness information. Complete algorithm for liveness detection using pupil tracking is tested on volunteers and algorithm achieved high success ratio.",
"title": ""
},
{
"docid": "f25b9147e67bd8051852142ebd82cf20",
"text": "Fossil fuels currently supply most of the world's energy needs, and however unacceptable their long-term consequences, the supplies are likely to remain adequate for the next few generations. Scientists and policy makers must make use of this period of grace to assess alternative sources of energy and determine what is scientifically possible, environmentally acceptable and technologically promising.",
"title": ""
},
{
"docid": "2bc30693be1c5855a9410fb453128054",
"text": "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13, 164 images of 1, 360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.",
"title": ""
},
{
"docid": "a737511620632ac8920a20d566c93974",
"text": "Hidradenitis suppurativa (HS) is an inflammatory skin disease. Several observations imply that sex hormones may play a role in its pathogenesis. HS is more common in women, and the disease severity appears to vary in intensity according to the menstrual cycle. In addition, parallels have been drawn between HS and acne vulgaris, suggesting that sex hormones may play a role in the condition. The role of androgens and estrogens in HS has therefore been explored in numerous observational and some interventional studies; however, the studies have often reported conflicting results. This systematic review includes 59 unique articles and aims to give an overview of the available research. Articles containing information on natural variation, severity changes during menstruation and pregnancy, as well as articles on serum levels of hormones in patients with HS and the therapeutic options of hormonal manipulation therapy have all been included and are presented in this systematic review. Our results show that patients with HS do not seem to have increased levels of sex hormones and that their hormone levels lie within the normal range. While decreasing levels of progesterone and estrogen seem to coincide with disease flares in premenopausal women, the association is speculative and requires experimental confirmation. Antiandrogen treatment could be a valuable approach in treating HS, however randomized control trials are lacking.",
"title": ""
},
{
"docid": "7e8723331aaec6b4f448030a579fa328",
"text": "With the recent trend toward more non extraction treatment, several appliances have been advocated to distalize molars in the upper arch. Certain principles, as outlined by Burstone, must be borne in mind when designing such an appliance:",
"title": ""
},
{
"docid": "3d81867b694a7fa56383583d9ee2637f",
"text": "Elasticity is undoubtedly one of the most striking characteristics of cloud computing. Especially in the area of high performance computing (HPC), elasticity can be used to execute irregular and CPU-intensive applications. However, the on- the-fly increase/decrease in resources is more widespread in Web systems, which have their own IaaS-level load balancer. Considering the HPC area, current approaches usually focus on batch jobs or assumptions such as previous knowledge of application phases, source code rewriting or the stop-reconfigure-and-go approach for elasticity. In this context, this article presents AutoElastic, a PaaS-level elasticity model for HPC in the cloud. Its differential approach consists of providing elasticity for high performance applications without user intervention or source code modification. The scientific contributions of AutoElastic are twofold: (i) an Aging-based approach to resource allocation and deallocation actions to avoid unnecessary virtual machine (VM) reconfigurations (thrashing) and (ii) asynchronism in creating and terminating VMs in such a way that the application does not need to wait for completing these procedures. The prototype evaluation using OpenNebula middleware showed performance gains of up to 26 percent in the execution time of an application with the AutoElastic manager. Moreover, we obtained low intrusiveness for AutoElastic when reconfigurations do not occur.",
"title": ""
},
{
"docid": "3fe5ea7769bfd7e7ea0adcb9ae497dcf",
"text": "Working memory emerges in infancy and plays a privileged role in subsequent adaptive cognitive development. The neural networks important for the development of working memory during infancy remain unknown. We used diffusion tensor imaging (DTI) and deterministic fiber tracking to characterize the microstructure of white matter fiber bundles hypothesized to support working memory in 12-month-old infants (n=73). Here we show robust associations between infants' visuospatial working memory performance and microstructural characteristics of widespread white matter. Significant associations were found for white matter tracts that connect brain regions known to support working memory in older children and adults (genu, anterior and superior thalamic radiations, anterior cingulum, arcuate fasciculus, and the temporal-parietal segment). Better working memory scores were associated with higher FA and lower RD values in these selected white matter tracts. These tract-specific brain-behavior relationships accounted for a significant amount of individual variation above and beyond infants' gestational age and developmental level, as measured with the Mullen Scales of Early Learning. Working memory was not associated with global measures of brain volume, as expected, and few associations were found between working memory and control white matter tracts. To our knowledge, this study is among the first demonstrations of brain-behavior associations in infants using quantitative tractography. The ability to characterize subtle individual differences in infant brain development associated with complex cognitive functions holds promise for improving our understanding of normative development, biomarkers of risk, experience-dependent learning and neuro-cognitive periods of developmental plasticity.",
"title": ""
},
{
"docid": "28e9bb0eef126b9969389068b6810073",
"text": "This paper presents the task specifications for designing a novel Insertable Robotic Effectors Platform (IREP) with integrated stereo vision and surgical intervention tools for Single Port Access Surgery (SPAS). This design provides a compact deployable mechanical architecture that may be inserted through a single Ø15 mm access port. Dexterous surgical intervention and stereo vision are achieved via the use of two snake-like continuum robots and two controllable CCD cameras. Simulations and dexterity evaluation of our proposed design are compared to several design alternatives with different kinematic arrangements. Results of these simulations show that dexterity is improved by using an independent revolute joint at the tip of a continuum robot instead of achieving distal rotation by transmission of rotation about the backbone of the continuum robot. Further, it is shown that designs with two robotic continuum robots as surgical arms have diminished dexterity if the bases of these arms are close to each other. This result justifies our design and points to ways of improving the performance of existing designs that use continuum robots as surgical arms.",
"title": ""
},
{
"docid": "768ed187f94163727afd011817a306c6",
"text": "Although interest regarding the role of dispositional affect in job behaviors has surged in recent years, the true magnitude of affectivity's influence remains unknown. To address this issue, the authors conducted a qualitative and quantitative review of the relationships between positive and negative affectivity (PA and NA, respectively) and various performance dimensions. A series of meta-analyses based on 57 primary studies indicated that PA and NA predicted task performance in the hypothesized directions and that the relationships were strongest for subjectively rated versus objectively rated performance. In addition, PA was related to organizational citizenship behaviors but not withdrawal behaviors, and NA was related to organizational citizenship behaviors, withdrawal behaviors, counterproductive work behaviors, and occupational injury. Mediational analyses revealed that affect operated through different mechanisms in influencing the various performance dimensions. Regression analyses documented that PA and NA uniquely predicted task performance but that extraversion and neuroticism did not, when the four were considered simultaneously. Discussion focuses on the theoretical and practical implications of these findings. (PsycINFO Database Record (c) 2009 APA, all rights reserved).",
"title": ""
},
{
"docid": "09fdc74a146a876e44bec1eca1bf7231",
"text": "With more and more people around the world learning Chinese as a second language, the need of Chinese error correction tools is increasing. In the HSK dynamic composition corpus, word usage error (WUE) is the most common error type. In this paper, we build a neural network model that considers both target erroneous token and context to generate a correction vector and compare it against a candidate vocabulary to propose suitable corrections. To deal with potential alternative corrections, the top five proposed candidates are judged by native Chinese speakers. For more than 91% of the cases, our system can propose at least one acceptable correction within a list of five candidates. To the best of our knowledge, this is the first research addressing general-type Chinese WUE correction. Our system can help non-native Chinese learners revise their sentences by themselves. Title and Abstract in Chinese",
"title": ""
},
{
"docid": "8db3f92e38d379ab5ba644ff7a59544d",
"text": "Within American psychology, there has been a recent surge of interest in self-compassion, a construct from Buddhist thought. Self-compassion entails: (a) being kind and understanding toward oneself in times of pain or failure, (b) perceiving one’s own suffering as part of a larger human experience, and (c) holding painful feelings and thoughts in mindful awareness. In this article we review findings from personality, social, and clinical psychology related to self-compassion. First, we define self-compassion and distinguish it from other self-constructs such as self-esteem, self-pity, and self-criticism. Next, we review empirical work on the correlates of self-compassion, demonstrating that self-compassion has consistently been found to be related to well-being. These findings support the call for interventions that can raise self-compassion. We then review the theory and empirical support behind current interventions that could enhance self-compassion including compassionate mind training (CMT), imagery work, the gestalt two-chair technique, mindfulness based stress reduction (MBSR), dialectical behavior therapy (DBT), and acceptance and commitment therapy (ACT). Directions for future research are also discussed.",
"title": ""
},
{
"docid": "b0ac318eea1dc5f6feb9fdaf5f554752",
"text": "In this paper an RSA calculation architecture is proposed for FPGAs that addresses the issues of scalability, flexible performance, and silicon efficiency for the hardware acceleration of Public Key crypto systems. Using techniques based around Montgomery math for exponentiation, the proposed RSA calculation architecture is compared to existing FPGA-based solutions for speed, FPGA utilisation, and scalability. The paper will cover the RSA encryption algorithm, Montgomery math, basic FPGA technology, and the implementation details of the proposed RSA calculation architecture. Conclusions will be drawn, beyond the singular improvements over existing architectures, which highlight the advantages of a fully flexible & parameterisable design.",
"title": ""
},
{
"docid": "3dbedb4539ac6438e9befbad366d1220",
"text": "The main focus of this paper is to propose integration of dynamic and multiobjective algorithms for graph clustering in dynamic environments under multiple objectives. The primary application is to multiobjective clustering in social networks which change over time. Social networks, typically represented by graphs, contain information about the relations (or interactions) among online materials (or people). A typical social network tends to expand over time, with newly added nodes and edges being incorporated into the existing graph. We reflect these characteristics of social networks based on real-world data, and propose a suitable dynamic multiobjective evolutionary algorithm. Several variants of the algorithm are proposed and compared. Since social networks change continuously, the immigrant schemes effectively used in previous dynamic optimisation give useful ideas for new algorithms. An adaptive integration of multiobjective evolutionary algorithms outperformed other algorithms in dynamic social networks.",
"title": ""
},
{
"docid": "21bd78306fc5f899553246e08e4f3c0e",
"text": "In this paper, we present the system we have used for the Implicit WASSA 2018 Implicit Emotion Shared Task. The task is to predict the emotion of a tweet of which the explicit mentions of emotion terms have been removed. The idea is to come up with a model which has the ability to implicitly identify the emotion expressed given the context words. We have used a Gated Recurrent Neural Network (GRU) and a Capsule Network based model for the task. Pre-trained word embeddings have been utilized to incorporate contextual knowledge about words into the model. GRU layer learns latent representations using the input word embeddings. Subsequent Capsule Network layer learns high-level features from that hidden representation. The proposed model managed to achieve a macro-F1 score of 0.692.",
"title": ""
},
{
"docid": "98f76e0ea0f028a1423e1838bdebdccb",
"text": "An operational-transconductance-amplifier (OTA) design for ultra-low voltage ultra-low power applications is proposed. The input stage of the proposed OTA utilizes a bulk-driven pseudo-differential pair to allow minimum supply voltage while achieving a rail-to-rail input range. All the transistors in the proposed OTA operate in the subthreshold region. Using a novel self-biasing technique to bias the OTA obviates the need for extra biasing circuitry and enhances the performance of the OTA. The proposed technique ensures the OTA robustness to process variations and increases design feasibility under ultra-low-voltage conditions. Moreover, the proposed biasing technique significantly improves the common-mode and power-supply rejection of the OTA. To further enhance the bandwidth and allow the use of smaller compensation capacitors, a compensation network based on a damping-factor control circuit is exploited. The OTA is fabricated in a 65 nm CMOS technology. Measurement results show that the OTA provides a low-frequency gain of 46 dB and rail-to-rail input common-mode range with a supply voltage as low as 0.5 V. The dc gain of the OTA is greater than 42 dB for supply voltage as low as 0.35 V. The power dissipation is 182 μW at VDD=0.5 V and 17 μW at VDD=0.35 V.",
"title": ""
},
{
"docid": "09cffaca68a254f591187776e911d36e",
"text": "Signaling across cellular membranes, the 826 human G protein-coupled receptors (GPCRs) govern a wide range of vital physiological processes, making GPCRs prominent drug targets. X-ray crystallography provided GPCR molecular architectures, which also revealed the need for additional structural dynamics data to support drug development. Here, nuclear magnetic resonance (NMR) spectroscopy with the wild-type-like A2A adenosine receptor (A2AAR) in solution provides a comprehensive characterization of signaling-related structural dynamics. All six tryptophan indole and eight glycine backbone 15N-1H NMR signals in A2AAR were individually assigned. These NMR probes provided insight into the role of Asp522.50 as an allosteric link between the orthosteric drug binding site and the intracellular signaling surface, revealing strong interactions with the toggle switch Trp 2466.48, and delineated the structural response to variable efficacy of bound drugs across A2AAR. The present data support GPCR signaling based on dynamic interactions between two semi-independent subdomains connected by an allosteric switch at Asp522.50.",
"title": ""
},
{
"docid": "45dfa7f6b1702942b5abfb8de920d1c2",
"text": "Loneliness is a common condition in older adults and is associated with increased morbidity and mortality, decreased sleep quality, and increased risk of cognitive decline. Assessing loneliness in older adults is challenging due to the negative desirability biases associated with being lonely. Thus, it is necessary to develop more objective techniques to assess loneliness in older adults. In this paper, we describe a system to measure loneliness by assessing in-home behavior using wireless motion and contact sensors, phone monitors, and computer software as well as algorithms developed to assess key behaviors of interest. We then present results showing the accuracy of the system in detecting loneliness in a longitudinal study of 16 older adults who agreed to have the sensor platform installed in their own homes for up to 8 months. We show that loneliness is significantly associated with both time out-of-home (β = -0.88 andp <; 0.01) and number of computer sessions (β = 0.78 and p <; 0.05). R2 for the model was 0.35. We also show the model's ability to predict out-of-sample loneliness, demonstrating that the correlation between true loneliness and predicted out-of-sample loneliness is 0.48. When compared with the University of California at Los Angeles loneliness score, the normalized mean absolute error of the predicted loneliness scores was 0.81 and the normalized root mean squared error was 0.91. These results represent first steps toward an unobtrusive, objective method for the prediction of loneliness among older adults, and mark the first time multiple objective behavioral measures that have been related to this key health outcome.",
"title": ""
}
] |
scidocsrr
|
83f108f5bfb5b010755739fa5b05a995
|
Continuum robots for space applications based on layer-jamming scales with stiffening capability
|
[
{
"docid": "42ed573d8e3fbbb9e178c6cfceccc996",
"text": "We introduce a new method for synthesizing kinematic relationships for a general class of continuous backbone, or continuum , robots. The resulting kinematics enable real-time task and shape control by relating workspace (Cartesian) coordinates to actuator inputs, such as tendon lengths or pneumatic pressures, via robot shape coordinates. This novel approach, which carefully considers physical manipulator constraints, avoids artifacts of simplifying assumptions associated with previous approaches, such as the need to fit the resulting solutions to the physical robot. It is applicable to a wide class of existing continuum robots and models extension, as well as bending, of individual sections. In addition, this approach produces correct results for orientation, in contrast to some previously published approaches. Results of real-time implementations on two types of spatial multisection continuum manipulators are reported.",
"title": ""
},
{
"docid": "be1ac4321c710c325ed4ad5dae927b6c",
"text": "Current work at NASA's Johnson Space Center is focusing on the identification and design of novel robotic archetypes to fill roles complimentary to current space robots during in-space assembly and maintenance tasks. Tendril, NASA's latest robot designed for minimally invasive inspection, is one system born of this effort. Inspired by the biology of snakes, tentacles, and climbing plants, the Tendril robot is a long slender manipulator that can extend deep into crevasses and under thermal blankets to inspect areas largely inaccessible by conventional means. The design of the Tendril, with its multiple bending segments and 1 cm diameter, also serves as an initial step in exploring the whole body control known to continuum robots coupled with the small scale and dexterity found in medical and commercial minimally invasive devices. An overview of Tendril's design is presented along with preliminary results from testing that seeks to improve Tendril's performance through an iterative design process",
"title": ""
},
{
"docid": "be749e59367ee1033477bb88503032cf",
"text": "This paper describes the results of field trials and associated testing of the OctArm series of multi-section continuous backbone \"continuum\" robots. This novel series of manipulators has recently (Spring 2005) undergone a series of trials including open-air and in-water field tests. Outcomes of the trials, in which the manipulators demonstrated the ability for adaptive and novel manipulation in challenging environments, are described. Implications for the deployment of continuum robots in a variety of applications are discussed",
"title": ""
},
{
"docid": "8bb465b2ec1f751b235992a79c6f7bf1",
"text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.",
"title": ""
}
] |
[
{
"docid": "8954672b2e2b6351abfde0747fd5d61c",
"text": "Sentiment Analysis (SA), an application of Natural Language processing (NLP), has been witnessed a blooming interest over the past decade. It is also known as opinion mining, mood extraction and emotion analysis. The basic in opinion mining is classifying the polarity of text in terms of positive (good), negative (bad) or neutral (surprise). Mood Extraction automates the decision making performed by human. It is the important aspect for capturing public opinion about product preferences, marketing campaigns, political movements, social events and company strategies. In addition to sentiment analysis for English and other European languages, this task is applied on various Indian languages like Bengali, Hindi, Telugu and Malayalam. This paper describes the survey on main approaches for performing sentiment extraction.",
"title": ""
},
{
"docid": "5188032027c67f0e91ed0681d4a871b4",
"text": "This paper defines an advanced methodology for modeling applications based on Data Mining methods that represents a logical framework for development of Data Mining applications. Methodology suggested here for Data Mining modeling process has been applied and tested through Data Mining applications for predicting Prepaid users churn in the telecom industry. The main emphasis of this paper is defining of a successful model for prediction of potential Prepaid churners, in which the most important part is to identify the very set of input variables that are high enough to make the prediction model precise and reliable. Several models have been created and compared on the basis of different Data Mining methods and algorithms (neural networks, decision trees, logistic regression). For the modeling examples we used WEKA analysis tool.",
"title": ""
},
{
"docid": "8ac9d212ee98c8dea54ead0bdd43052d",
"text": "This paper discusses two analytical methods used in estimating the equivalent thermal conductivity of impregnated electrical windings constructed with Litz wire. Both methods are based on a double-homogenisation approach consecutively employing the individual winding conductors and wire bundles. The first method is suitable for Litz wire with round-profiled enamel-coated conductors and round-shaped bundles; whereas the second method is tailored for compacted Litz wires with conductors and/or bundles having square or rectangular profiles. The work conducted herein expands upon established methods for cylindrical conductor forms [1], and develops an equivalent lumped-parameter thermal network for rectangular forms. This network derives analytical formulae which represents the winding's equivalent thermal conductivity and directly accounts for any thermal anisotropy. The estimates of equivalent thermal conductivity from theoretical, analytical and finite element (FE) methods have been supplemented with experimental data using impregnated winding samples and are shown to have good correlation.",
"title": ""
},
{
"docid": "175890538c681d55dfce51918c8a1909",
"text": "We recently reported that the brain showed greater responsiveness to some cognitive demands following total sleep deprivation (TSD). Specifically, verbal learning led to increased cerebral activation following TSD while arithmetic resulted in decreased activation. Here we report data from a divided attention task that combined verbal learning and arithmetic. Thirteen normal control subjects performed the task while undergoing functional magnetic resonance imaging (FMRI) scans after a normal night of sleep and following 35 h TSD. Behaviourally, subjects showed only modest impairments following TSD. With respect to cerebral activation, the results showed (a) increased activation in the prefrontal cortex and parietal lobes, particularly in the right hemisphere, following TSD, (b) activation in left inferior frontal gyrus correlated with increased subjective sleepiness after TSD, and (c) activation in bilateral parietal lobes correlated with the extent of intact memory performance after TSD. Many of the brain regions showing a greater response after TSD compared with normal sleep are thought to be involved in control of attention. These data imply that the divided attention task required more attentional resources (specifically, performance monitoring and sustained attention) following TSD than after normal sleep. Other neuroimaging results may relate to the verbal learning and/or arithmetic demands of the task. This is the first study to examine divided attention performance after TSD with neuroimaging and supports our previous suggestion that the brain may be more plastic during cognitive performance following TSD than previously thought.",
"title": ""
},
{
"docid": "89d283980d5a6d95d56a675f89ea823c",
"text": "Desynchronization between the master clock in the brain, which is entrained by (day) light, and peripheral organ clocks, which are mainly entrained by food intake, may have negative effects on energy metabolism. Bile acid metabolism follows a clear day/night rhythm. We investigated whether in rats on a normal chow diet the daily rhythm of plasma bile acids and hepatic expression of bile acid metabolic genes is controlled by the light/dark cycle or the feeding/fasting rhythm. In addition, we investigated the effects of high caloric diets and time-restricted feeding on daily rhythms of plasma bile acids and hepatic genes involved in bile acid synthesis. In experiment 1 male Wistar rats were fed according to three different feeding paradigms: food was available ad libitum for 24 h (ad lib) or time-restricted for 10 h during the dark period (dark fed) or 10 h during the light period (light fed). To allow further metabolic phenotyping, we manipulated dietary macronutrient intake by providing rats with a chow diet, a free choice high-fat-high-sugar diet or a free choice high-fat (HF) diet. In experiment 2 rats were fed a normal chow diet, but food was either available in a 6-meals-a-day (6M) scheme or ad lib. During both experiments, we measured plasma bile acid levels and hepatic mRNA expression of genes involved in bile acid metabolism at eight different time points during 24 h. Time-restricted feeding enhanced the daily rhythm in plasma bile acid concentrations. Plasma bile acid concentrations are highest during fasting and dropped during the period of food intake with all diets. An HF-containing diet changed bile acid pool composition, but not the daily rhythmicity of plasma bile acid levels. Daily rhythms of hepatic Cyp7a1 and Cyp8b1 mRNA expression followed the hepatic molecular clock, whereas for Shp expression food intake was leading. Combining an HF diet with feeding in the light/inactive period annulled CYp7a1 and Cyp8b1 gene expression rhythms, whilst keeping that of Shp intact. In conclusion, plasma bile acids and key genes in bile acid biosynthesis are entrained by food intake as well as the hepatic molecular clock. Eating during the inactivity period induced changes in the plasma bile acid pool composition similar to those induced by HF feeding.",
"title": ""
},
{
"docid": "85b169515b4e4b86117abcdd83f002ea",
"text": "While Bitcoin (Peer-to-Peer Electronic Cash) [Nak]solved the double spend problem and provided work withtimestamps on a public ledger, it has not to date extendedthe functionality of a blockchain beyond a transparent andpublic payment system. Satoshi Nakamoto's original referenceclient had a decentralized marketplace service which was latertaken out due to a lack of resources [Deva]. We continued withNakamoto's vision by creating a set of commercial-grade ser-vices supporting a wide variety of business use cases, includinga fully developed blockchain-based decentralized marketplace,secure data storage and transfer, and unique user aliases thatlink the owner to all services controlled by that alias.",
"title": ""
},
{
"docid": "3fc94de55342ff7560ed0c13a18e526c",
"text": "Linear optics with photon counting is a prominent candidate for practical quantum computing. The protocol by Knill, Laflamme, and Milburn 2001, Nature London 409, 46 explicitly demonstrates that efficient scalable quantum computing with single photons, linear optical elements, and projective measurements is possible. Subsequently, several improvements on this protocol have started to bridge the gap between theoretical scalability and practical implementation. The original theory and its improvements are reviewed, and a few examples of experimental two-qubit gates are given. The use of realistic components, the errors they induce in the computation, and how these errors can be corrected is discussed.",
"title": ""
},
{
"docid": "3e24de04f0b1892b27fc60bb8a405d0d",
"text": "A power factor (PF) corrected single stage, two-switch isolated zeta converter is proposed for arc welding. This modified zeta converter is having two switches and two clamping diodes on the primary side of a high-frequency transformer. This, in turn, results in reduced switch stress. The proposed converter is designed to operate in a discontinuous inductor current mode (DICM) to achieve inherent PF correction at the utility. The DICM operation substantially reduces the complexity of the control and effectively regulates the output dc voltage. The proposed converter offers several features, such as inherent overload current limit and fast parametrical response, to the load and source voltage conditions. This, in turn, results in an improved performance in terms of power quality indices and an enhanced weld bead quality. The proposed modified zeta converter is designed and its performance is simulated in the MATLAB/Simulink environment. Simulated results are also verified experimentally on a developed prototype of the converter. The performance of the system is investigated in terms of its input PF, displacement PF, total harmonic distortion of ac mains current, voltage regulation, and robustness to prove its efficacy in overall performance.",
"title": ""
},
{
"docid": "f65e55d992bff2ce881aaf197a734adf",
"text": "hypervisor as a nondeterministic sequential program prove invariant properties of individual ϋobjects and compose them 14 Phase1 Startup Phase2 Intercept Phase3 Exception Proofs HW initiated concurrent execution Concurrent execution HW initiated sequential execution Sequential execution Intro. Motivating. Ex. Impl. Verif. Results Perf. Concl. Architecture",
"title": ""
},
{
"docid": "50d27a921703202a5fb329d6f615d19f",
"text": "This paper proposes an analytically-based approach for the design of a miniaturized single-band and dual-band two-way Wilkinson power divider. This miniaturization is achieved by realizing the power divider's impedance transformers using slow wave structures. These slow wave structures are designed by periodically loading transmission lines with capacitances, which reduces the phase velocity of the propagating waves and hence engender higher electric lengths using smaller physical lengths. The dispersive analysis of the slow wave structure used is included in the design approach to ensure a smooth nondispersive transmission line operation in the case of dual-band applications. The design methodology is validated with the design of a single-band, reduced size, two-way Wilkinson power divider at 850 and 620 MHz. An approximate length reduction of 25%-35% is achieved with this technique. For dual-band applications, this paper describes the design of a reduced size, two-way Wilkinson power divider for dual-band global system for mobile communications and code division multiple access applications at 850 and 1960 MHz, respectively. An overall reduction factor of 28%, in terms of chip area occupied by the circuit, is achieved. The electromagnetic simulation and experimental results validate the design approach. The circuit is realized with microstrip technology, which can be easily fabricated using conventional printed circuit processes.",
"title": ""
},
{
"docid": "60a0c63f6c1166970d440c1302ca0dbe",
"text": "In vehicle routing problems with time windows (VRPTW), a set of vehicles with limits on capacity and travel time are available to service a set of customers with demands and earliest and latest time for servicing. The objective is to minimize the cost of servicing the set of customers without being tardy or exceeding the capacity or travel time of the vehicles. As finding a feasible solution to the problem is NP-complete, search methods based upon heuristics are most promising for problems of practical size. In this paper we describe GIDEON, a genetic algorithm heuristic for solving the VRPTW. GIDEON consists of a global customer clustering method and a local post-optimization method. The global customer clustering method uses an adaptive search strategy based upon population genetics, to assign vehicles to customers. The best solution obtained from the clustering method is improved by a local post-optimization method. The synergy a between global adaptive clustering method and a local route optimization method produce better results than those obtained by competing heuristic search methods. On a standard set of 56 VRPTW problems obtained from the literature the GIDEON system obtained 41 new best known solutions.",
"title": ""
},
{
"docid": "b191b9829aac1c1e74022c33e2488bbd",
"text": "We investigated the normal and parallel ground reaction forces during downhill and uphill running. Our rationale was that these force data would aid in the understanding of hill running injuries and energetics. Based on a simple spring-mass model, we hypothesized that the normal force peaks, both impact and active, would increase during downhill running and decrease during uphill running. We anticipated that the parallel braking force peaks would increase during downhill running and the parallel propulsive force peaks would increase during uphill running. But, we could not predict the magnitude of these changes. Five male and five female subjects ran at 3m/s on a force treadmill mounted on the level and on 3 degrees, 6 degrees, and 9 degrees wedges. During downhill running, normal impact force peaks and parallel braking force peaks were larger compared to the level. At -9 degrees, the normal impact force peaks increased by 54%, and the parallel braking force peaks increased by 73%. During uphill running, normal impact force peaks were smaller and parallel propulsive force peaks were larger compared to the level. At +9 degrees, normal impact force peaks were absent, and parallel propulsive peaks increased by 75%. Neither downhill nor uphill running affected normal active force peaks. Combined with previous biomechanics studies, our normal impact force data suggest that downhill running substantially increases the probability of overuse running injury. Our parallel force data provide insight into past energetic studies, which show that the metabolic cost increases during downhill running at steep angles.",
"title": ""
},
{
"docid": "b5c2d3295cd563983c81e048e59d6541",
"text": "In this paper, a real-time Human-Computer Interaction (HCI) based on the hand data glove and K-NN classifier for gesture recognition is proposed. HCI is moving more and more natural and intuitive way to be used. One of the important parts of our body is our hand which is most frequently used for the Interaction in Digital Environment and thus complexity and flexibility of motion of hands are the research topics. To recognize these hand gestures more accurately and successfully data glove is used. Here, gloves are used to capture current position of the hand and the angles between the joints and then these features are used to classify the gestures using K-NN classifier. The gestures classified are categorized as clicking, rotating, dragging, pointing and ideal position. Recognizing these gestures relevant actions are taken, such as air writing and 3D sketching by tracking the path helpful in virtual augmented reality (VAR). The results show that glove used for interaction is better than normal static keyboard and mouse as the interaction process is more accurate and natural in dynamic environment with no distance limitations. Also it enhances the user’s interaction and immersion feeling.",
"title": ""
},
{
"docid": "e2280d602e8110dbaf512d6e187ecd9f",
"text": "There are problems in the delimitation/identification of Plectranthus species and this investigation aims to contribute toward solving such problems through structural and histochemical study of the trichomes. Considering the importance of P. zuluensis as restricted to semi-coastal forests of Natal that possess only two fertile stamens not four as the other species of this genus. The objective of this work was to study in detail the distribution, morphology and histochemistry of the foliar trichomes of this species using light and electron microscopy. Distribution and morphology of two types of non-glandular, capitate and peltate glandular trichomes are described on both leaf sides. This study provides a description of the different secretion modes of glandular trichomes. Results of histochemical tests showed a positive reaction to terpenoids, lipids, polysaccharides and phenolics in the glandular trichomes. We demonstrated that the presence, types and structure of glandular and non-glandular trichomes are important systematic criteria for the species delimitation in the genus.",
"title": ""
},
{
"docid": "37c2f0cface4943e6332f29d41ada5b0",
"text": "Although substantial research has explored the emergence of collective intelligence in real-time human-based collaborative systems, much of this work has focused on rigid scenarios such as the Prisoner’s Dilemma (PD). (Pinheiro et al., 2012; Santos et al., 2012). While such work is of great research value, there’s a growing need for a flexible real-world platform that fosters collective intelligence in authentic decision-making situations. This paper introduces a new platform called UNUM that allows groups of online users to collectively answer questions, make decisions, and resolve dilemmas by working together in unified dynamic systems. Modeled after biological swarms, the UNUM platform enables online groups to work in real-time synchrony, collaboratively exploring a decision-space and converging on preferred solutions in a matter of seconds. We call the process “social swarming” and early real-world testing suggests it has great potential for harnessing collective intelligence.",
"title": ""
},
{
"docid": "ee0d89ccd67acc87358fa6dd35f6b798",
"text": "Lessons learned from developing four graph analytics applications reveal good research practices and grand challenges for future research. The application domains include electric-power-grid analytics, social-network and citation analytics, text and document analytics, and knowledge domain analytics.",
"title": ""
},
{
"docid": "05cea038adce7f5ae2a09a7fd5e024a7",
"text": "The paper describes the use TMS320C5402 DSP for single channel active noise cancellation (ANC) in duct system. The canceller uses a feedback control topology and is designed to cancel narrowband periodic tones. The signal is processed with well-known filtered-X least mean square (filtered-X LMS) Algorithm in the digital signal processing. The paper describes the hardware and use chip support libraries for data streaming. The FXLMS algorithm is written in assembly language callable from C main program. The results obtained are compatible to the expected result in the literature available. The paper highlights the features of cancellation and analyzes its performance at different gain and frequency.",
"title": ""
},
{
"docid": "6a96678b14ec12cb4bb3db4e1c4c6d4e",
"text": "Emoticons are widely used to express positive or negative sentiment on Twitter. We report on a study with live users to determine whether emoticons are used to merely emphasize the sentiment of tweets, or whether they are the main elements carrying the sentiment. We found that the sentiment of an emoticon is in substantial agreement with the sentiment of the entire tweet. Thus, emoticons are useful as predictors of tweet sentiment and should not be ignored in sentiment classification. However, the sentiment expressed by an emoticon agrees with the sentiment of the accompanying text only slightly better than random. Thus, using the text accompanying emoticons to train sentiment models is not likely to produce the best results, a fact that we show by comparing lexicons generated using emoticons with others generated using simple textual features.",
"title": ""
},
{
"docid": "3fe2cb22ac6aa37d8f9d16dea97649c5",
"text": "The term biosensors encompasses devices that have the potential to quantify physiological, immunological and behavioural responses of livestock and multiple animal species. Novel biosensing methodologies offer highly specialised monitoring devices for the specific measurement of individual and multiple parameters covering an animal's physiology as well as monitoring of an animal's environment. These devices are not only highly specific and sensitive for the parameters being analysed, but they are also reliable and easy to use, and can accelerate the monitoring process. Novel biosensors in livestock management provide significant benefits and applications in disease detection and isolation, health monitoring and detection of reproductive cycles, as well as monitoring physiological wellbeing of the animal via analysis of the animal's environment. With the development of integrated systems and the Internet of Things, the continuously monitoring devices are expected to become affordable. The data generated from integrated livestock monitoring is anticipated to assist farmers and the agricultural industry to improve animal productivity in the future. The data is expected to reduce the impact of the livestock industry on the environment, while at the same time driving the new wave towards the improvements of viable farming techniques. This review focusses on the emerging technological advancements in monitoring of livestock health for detailed, precise information on productivity, as well as physiology and well-being. Biosensors will contribute to the 4th revolution in agriculture by incorporating innovative technologies into cost-effective diagnostic methods that can mitigate the potentially catastrophic effects of infectious outbreaks in farmed animals.",
"title": ""
},
{
"docid": "088d6f1cd3c19765df8a16cd1a241d18",
"text": "Legged robots need to be able to classify and recognize different terrains to adapt their gait accordingly. Recent works in terrain classification use different types of sensors (like stereovision, 3D laser range, and tactile sensors) and their combination. However, such sensor systems require more computing power, produce extra load to legged robots, and/or might be difficult to install on a small size legged robot. In this work, we present an online terrain classification system. It uses only a monocular camera with a feature-based terrain classification algorithm which is robust to changes in illumination and view points. For this algorithm, we extract local features of terrains using either Scale Invariant Feature Transform (SIFT) or Speed Up Robust Feature (SURF). We encode the features using the Bag of Words (BoW) technique, and then classify the words using Support Vector Machines (SVMs) with a radial basis function kernel. We compare this feature-based approach with a color-based approach on the Caltech-256 benchmark as well as eight different terrain image sets (grass, gravel, pavement, sand, asphalt, floor, mud, and fine gravel). For terrain images, we observe up to 90% accuracy with the feature-based approach. Finally, this online terrain classification system is successfully applied to our small hexapod robot AMOS II. The output of the system providing terrain information is used as an input to its neural locomotion control to trigger an energy-efficient gait while traversing different terrains.",
"title": ""
}
] |
scidocsrr
|
58c927fe85b57c6811b9b199b6d50023
|
Detect-SLAM: Making Object Detection and SLAM Mutually Beneficial
|
[
{
"docid": "091c57447d5a3c97d3ff1afb57ebb4e3",
"text": "We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.",
"title": ""
},
{
"docid": "fb1b80f1e7109b382994ca61b993ad71",
"text": "We present a novel approach to real-time dense visual SLAM. Our system is capable of capturing comprehensive dense globally consistent surfel-based maps of room scale environments explored using an RGB-D camera in an incremental online fashion, without pose graph optimisation or any postprocessing steps. This is accomplished by using dense frame-tomodel camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimisations as often as possible to stay close to the mode of the map distribution, while utilising global loop closure to recover from arbitrary drift and maintain global consistency.",
"title": ""
},
{
"docid": "5dac8ef81c7a6c508c603b3fd6a87581",
"text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.",
"title": ""
}
] |
[
{
"docid": "325e33bb763ed78b6b84deeb0b10453f",
"text": "The present study was conducted to identify possible acoustic cues of sarcasm. Native English speakers produced a variety of simple utterances to convey four different attitudes: sarcasm, humour, sincerity, and neutrality. Following validation by a separate naı̈ve group of native English speakers, the recorded speech was subjected to acoustic analyses for the following features: mean fundamental frequency (F0), F0 standard deviation, F0 range, mean amplitude, amplitude range, speech rate, harmonics-to-noise ratio (HNR, to probe for voice quality changes), and one-third octave spectral values (to probe resonance changes). The results of analyses indicated that sarcasm was reliably characterized by a number of prosodic cues, although one acoustic feature appeared particularly robust in sarcastic utterances: overall reductions in mean F0 relative to all other target attitudes. Sarcasm was also reliably distinguished from sincerity by overall reductions in HNR and in F0 standard deviation. In certain linguistic contexts, sarcasm could be differentiated from sincerity and humour through changes in resonance and reductions in both speech rate and F0 range. Results also suggested a role of language used by speakers in conveying sarcasm and sincerity. It was concluded that sarcasm in speech can be characterized by a specific pattern of prosodic cues in addition to textual cues, and that these acoustic characteristics can be influenced by language used by the speaker. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "94b9a33fc92d38ac7933e541e2d9ef3e",
"text": "Wearable health-monitoring systems are becoming very popular, especially in enabling the noninvasive diagnosis of vital functions of the human body. Besides typical singular heartbeat or perspiration sensors, which have been commercially available in recent years, the deployment of a series of body-worn sensors can enable an effective health-monitoring mechanism. The combined information obtained from such systems can either be relayed directly to any health-monitoring personnel in the case of emergencies or can be logged and analyzed as a part of preventive health measures. However, the deployment of on-body nodes for humans must be performed with care, as they may interfere with the patients' regular movements. This is especially challenging because the relationship between the electromagnetic waves is influenced by the patient's movements, distance from the nearest base station, operating environment, etc. Additional challenges to the deployment of such mechanisms are also faced in situations where the nodes require additional on-body space, impose additional weight, or are not conformal enough to the patient's body. On the hardware design aspect, the sensory and communication functions on the electronic node have to be designed using special materials to avoid reliability issues or damage due to repeated or intense movements. Finally, and perhaps the most important aspect that needs to be addressed concerning such systems, is their electromagnetic safety level, which is defined by their specific absorption rates (SARs). This article aims to review the latest developments in body-worn wireless health-monitoring systems and their current challenges and limitations and to discuss future trends for such worn devices for these applications.",
"title": ""
},
{
"docid": "0ff727ff06c02d2e371798ad657153c9",
"text": "Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.",
"title": ""
},
{
"docid": "252256527c17c21492e4de0ae50d9729",
"text": "Scribbles in scribble-based interactive segmentation such as graph-cut are usually assumed to be perfectly accurate, i.e., foreground scribble pixels will never be segmented as background in the final segmentation. However, it can be hard to draw perfectly accurate scribbles, especially on fine structures of the image or on mobile touch-screen devices. In this paper, we propose a novel ratio energy function that tolerates errors in the user input while encouraging maximum use of the user input information. More specifically, the ratio energy aims to minimize the graph-cut energy while maximizing the user input respected in the segmentation. The ratio energy function can be exactly optimized using an efficient iterated graph cut algorithm. The robustness of the proposed method is validated on the GrabCut dataset using both synthetic scribbles and manual scribbles. The experimental results show that the proposed algorithm is robust to the errors in the user input and preserves the \"anchoring\" capability of the user input.",
"title": ""
},
{
"docid": "c3b2372aea5faf4c2816b295d290095f",
"text": "This paper presents a physically based model for the metal–oxide–semiconductor (MOS) transistor suitable for analysis and design of analog integrated circuits. Static and dynamic characteristics of the MOS field-effect transistor are accurately described by single-piece functions of two saturation currents in all regions of operation. Simple expressions for the transconductance-to-current ratio, the drain-to-source saturation voltage, and the cutoff frequency in terms of the inversion level are given. The design of a common-source amplifier illustrates the application of the proposed model.",
"title": ""
},
{
"docid": "ba29062eee3fe640451a9d169e19acde",
"text": "For terrestrial free space optical (FSO) systems, we investigate the use of pulse position modulation (PPM) which has the interesting property of being average-energy efficient. We first discuss the upper bound on the information transmission rate for a Gaussian channel. Next, we consider the more practical aspect of channel coding and look for a suitable solution for the case of Q-ary PPM. Instead of using a non-binary channel code, we suggest to use a simple binary convolutional code and to perform iterative soft demodulation (demapping) and channel decoding at the receiver. We show that the proposed scheme is quite efficient against demodulation errors due to the receiver noise. Moreover, we propose a simple soft-demapping method of low complexity for the general case of Q-ary PPM. The receiver complexity remains then reasonable in view of implementation in a terrestrial FSO system.",
"title": ""
},
{
"docid": "89238dd77c0bf0994b53190078eb1921",
"text": "Several methods exist for a computer to generate music based on data including Markov chains, recurrent neural networks, recombinancy, and grammars. We explore the use of unit selection and concatenation as a means of generating music using a procedure based on ranking, where, we consider a unit to be a variable length number of measures of music. We first examine whether a unit selection method, that is restricted to a finite size unit library, can be sufficient for encompassing a wide spectrum of music. This is done by developing a deep autoencoder that encodes a musical input and reconstructs the input by selecting from the library. We then describe a generative model that combines a deep structured semantic model (DSSM) with an LSTM to predict the next unit, where units consist of four, two, and one measures of music. We evaluate the generative model using objective metrics including mean rank and accuracy and with a subjective listening test in which expert musicians are asked to complete a forcedchoiced ranking task. Our system is compared to a note-level generative baseline model that consists of a stacked LSTM trained to predict forward by one note.",
"title": ""
},
{
"docid": "250ef2e3df7577986ec96dce44f27132",
"text": "This review paper focuses on studies in healthy human subjects that examined the functional neuroanatomy and cerebral plasticity associated with the learning, consolidation and retention phases of motor skilled behaviors using modern brain imaging techniques. Evidence in support of a recent model proposed by Doyon and Ungerleider [Functional Anatomy of Motor Skill Learning. In: Squire LR, Schacter DL, editors. Neuropsychology of Memory. New York: Guilford Press, 2002.] is also discussed. The latter suggests that experience-dependent changes in the brain depend not only on the stage of learning, but also on whether subjects are required to learn a new sequence of movements (motor sequence learning) or learn to adapt to environmental perturbations (motor adaptation). This model proposes that the cortico-striatal and cortico-cerebellar systems contribute differentially to motor sequence learning and motor adaptation, respectively, and that this is most apparent during the slow learning phase (i.e. automatization) when subjects achieve asymptotic performance, as well as during reactivation of the new skilled behavior in the retention phase.",
"title": ""
},
{
"docid": "1023cd0b40e24429cb39b4d38477cada",
"text": "Organizations that migrate from identity-centric to role-based Identity Management face the initial task of defining a valid set of roles for their employees. Due to its capabilities of automated and fast role detection, role mining as a solution for dealing with this challenge has gathered a rapid increase of interest in the academic community. Research activities throughout the last years resulted in a large number of different approaches, each covering specific aspects of the challenge. In this paper, firstly, a survey of the research area provides insight into the development of the field, underlining the need for a comprehensive perspective on role mining. Consecutively, a generic process model for role mining including preand post-processing activities is introduced and existing research activities are classified according to this model. The goal is to provide a basis for evaluating potentially valuable combinations of those approaches in the future.",
"title": ""
},
{
"docid": "924d833125453fa4c525df5f607724e1",
"text": "Strong stubborn sets have recently been analyzed and successfully applied as a pruning technique for planning as heuristic search. Strong stubborn sets are defined declaratively as constraints over operator sets. We show how these constraints can be relaxed to offer more freedom in choosing stubborn sets while maintaining the correctness and optimality of the approach. In general, many operator sets satisfy the definition of stubborn sets. We study different strategies for selecting among these possibilities and show that existing approaches can be considerably improved by rather simple strategies, eliminating most of the overhead of the previous",
"title": ""
},
{
"docid": "fae8f50726c33390e0c49499af2509f0",
"text": "Abnormal bearer session release (i.e. bearer session drop) in cellular telecommunication networks may seriously impact the quality of experience of mobile users. The latest mobile technologies enable high granularity real-time reporting of all conditions of individual sessions, which gives rise to use data analytics methods to process and monetize this data for network optimization. One such example for analytics is Machine Learning (ML) to predict session drops well before the end of session. In this paper a novel ML method is presented that is able to predict session drops with higher accuracy than using traditional models. The method is applied and tested on live LTE data offline. The high accuracy predictor can be part of a SON function in order to eliminate the session drops or mitigate their effects.",
"title": ""
},
{
"docid": "6629711b532cde87fdb2d710178d5197",
"text": "Aligning information systems with organizational processes, goals and strategies is becoming increasingly important. Prior research has identified two dimensions of strategic alignment the social and intellectual. The former focuses primarily on the people involved in achieving alignment, whilsrihe latter is more likely to be associated with the investigation oj plans and planning methodologies. Until recently most research has concentrated on the intellectual dimension however the importance of the social dimension is being increasingly recognized. In most instances research is conducted on these two dimensions independently without consideration of the affect on the other. The research presented here, involving the creation of a causal-loop diagram by six senior IS/IT managers, presents a systemic view of the development of alignment within a typical organization and emphasizes the relationship between the social and intellectual dimensions. It indicates that practitioners understand that a high level of connection between IS/IT and business planning processes may be dependent on the level of integration between the IS/IT group and other sections of the organization. However, it appears that the culture oj many organizations is impeding the development of this integration.",
"title": ""
},
{
"docid": "5ad4b3c5905b7b716a806432b755e60b",
"text": "The formation of both germline cysts and the germinal epithelium is described during the ovary development in Cyprinus carpio. As in the undifferentiated gonad of mammals, cords of PGCs become oogonia when they are surrounded by somatic cells. Ovarian differentiation is triggered when oogonia proliferate and enter meiosis, becoming oocytes. Proliferation of single oogonium results in clusters of interconnected oocytes, the germline cysts, that are encompassed by somatic prefollicle cells and form cell nests. Both PGCs and cell nests are delimited by a basement membrane. Ovarian follicles originate from the germline cysts, about the time of meiotic arrest, as prefollicle cells surround oocytes, individualizing them. They synthesize a basement membrane and an oocyte forms a follicle. With the formation of the stroma, unspecialized mesenchymal cells differentiate, and encompass each follicle, forming the theca. The follicle, basement membrane, and theca constitute the follicle complex. Along the ventral region of the differentiating ovary, the epithelium invaginates to form the ovigerous lamellae whose developing surface epithelium, the germinal epithelium, is composed of epithelial cells, germline cysts with oogonia, oocytes, and developing follicles. The germinal epithelium rests upon a basement membrane. The follicles complexes are connected to the germinal epithelium by a shared portion of basement membrane. In the differentiated ovary, germ cell proliferation in the epithelium forms nests in which there are the germline cysts. Germline cysts, groups of cells that form from a single founder cell and are joined by intercellular bridges, are conserved throughout the vertebrates, as is the germinal epithelium.",
"title": ""
},
{
"docid": "4118cc1ed5ae11289029338c99964c1b",
"text": "The concept of t-designs in compact symmetric spaces of rank 1 is a generalization of the theory of classical t-designs. In this paper we obtain new lower bounds on the cardinality of designs in projective compact symmetric spaces of rank 1. With one exception our bounds are the first improvements of the classical bounds by more than one. We use the linear programming technique and follow the approach we have proposed for spherical codes and designs. Some examples are shown and compared with the classical bounds.",
"title": ""
},
{
"docid": "4e23abcd1746d23c54e36c51e0a59208",
"text": "Recognizing actions is one of the important challenges in computer vision with respect to video data, with applications to surveillance, diagnostics of mental disorders, and video retrieval. Compared to other data modalities such as documents and images, processing video data demands orders of magnitude higher computational and storage resources. One way to alleviate this difficulty is to focus the computations to informative (salient) regions of the video. In this paper, we propose a novel global spatio-temporal selfsimilarity measure to score saliency using the ideas of dictionary learning and sparse coding. In contrast to existing methods that use local spatio-temporal feature detectors along with descriptors (such as HOG, HOG3D, HOF, etc.), dictionary learning helps consider the saliency in a global setting (on the entire video) in a computationally efficient way. We consider only a small percentage of the most salient (least self-similar) regions found using our algorithm, over which spatio-temporal descriptors such as HOG and region covariance descriptors are computed. The ensemble of such block descriptors in a bag-of-features framework provides a holistic description of the motion sequence which can be used in a classification setting. Experiments on several benchmark datasets in video based action classification demonstrate that our approach performs competitively to the state of the art.",
"title": ""
},
{
"docid": "2a86c4904ef8059295f1f0a2efa546d8",
"text": "3D shape is a crucial but heavily underutilized cue in today’s computer vision system, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape model in the loop. Apart from object recognition on 2.5D depth maps, recovering these incomplete 3D shapes to full 3D is critical for analyzing shape variations. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses. It naturally supports joint object recognition and shape reconstruction from 2.5D depth maps, and further, as an additional application it allows active object recognition through view planning. We construct a largescale 3D CAD model dataset to train our model, and conduct extensive experiments to study our new representation.",
"title": ""
},
{
"docid": "38036ea0a6f79ff62027e8475859acb9",
"text": "The constantly increasing demand for nutraceuticals is paralleled by a more pronounced request for natural ingredients and health-promoting foods. The multiple functional properties of cactus pear fit well this trend. Recent data revealed the high content of some chemical constituents, which can give added value to this fruit on a nutritional and technological functionality basis. High levels of betalains, taurine, calcium, magnesium, and antioxidants are noteworthy.",
"title": ""
},
{
"docid": "fd55c59744ff3cdb65d3e752acb0086c",
"text": "The traffic classification problem has recently attracted the interest of both network operators and researchers. Several machine learning (ML) methods have been proposed in the literature as a promising solution to this problem. Surprisingly, very few works have studied the traffic classification problem with Sampled NetFlow data. However, Sampled NetFlow is a widely extended monitoring solution among network operators. In this paper we aim to fulfill this gap. First, we analyze the performance of current ML methods with NetFlow by adapting a popular ML-based technique. The results show that, although the adapted method is able to obtain similar accuracy than previous packet-based methods (≈90%), its accuracy degrades drastically in the presence of sampling. In order to reduce this impact, we propose a solution to network operators that is able to operate with Sampled NetFlow data and achieve good accuracy in the presence of sampling.",
"title": ""
},
{
"docid": "9c98dfb1e7df220edc4bc7cd57956b4b",
"text": "In this paper we present MATISSE 2.0, a microscopic multi-agent based simulation system for the specification and execution of simulation scenarios for Agent-based intelligent Transportation Systems (ATS). In MATISSE, each smart traffic element (e.g., vehicle, intersection control device) is modeled as a virtual agent which continuously senses its surroundings and communicates and collaborates with other agents. MATISSE incorporates traffic control strategies such as contraflow operations and dynamic traffic sign changes. Experimental results show the ability of MATISSE 2.0 to simulate traffic scenarios with thousands of agents on a single PC.",
"title": ""
},
{
"docid": "ae3d6467c0952a770956e8c0eed04c8d",
"text": "Many modern cities strive to integrate information technology into every aspect of city life to create so-called smart cities. Smart cities rely on a large number of application areas and technologies to realize complex interactions between citizens, third parties, and city departments. This overwhelming complexity is one reason why holistic privacy protection only rarely enters the picture. A lack of privacy can result in discrimination and social sorting, creating a fundamentally unequal society. To prevent this, we believe that a better understanding of smart cities and their privacy implications is needed. We therefore systematize the application areas, enabling technologies, privacy types, attackers, and data sources for the attacks, giving structure to the fuzzy term “smart city.” Based on our taxonomies, we describe existing privacy-enhancing technologies, review the state of the art in real cities around the world, and discuss promising future research directions. Our survey can serve as a reference guide, contributing to the development of privacy-friendly smart cities.",
"title": ""
}
] |
scidocsrr
|
975434c682886d981f6ec79602811241
|
Interest-based personalized search
|
[
{
"docid": "1272563e64ca327aba1be96f2e045c30",
"text": "Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient.",
"title": ""
}
] |
[
{
"docid": "62c71a412a8b715e2fda64cd8b6a2a66",
"text": "We study the design of local algorithms for massive graphs. A local graph algorithm is one that finds a solution containing or near a given vertex without looking at the whole graph. We present a local clustering algorithm. Our algorithm finds a good cluster—a subset of vertices whose internal connections are significantly richer than its external connections—near a given vertex. The running time of our algorithm, when it finds a nonempty local cluster, is nearly linear in the size of the cluster it outputs. The running time of our algorithm also depends polylogarithmically on the size of the graph and polynomially on the conductance of the cluster it produces. Our clustering algorithm could be a useful primitive for handling massive graphs, such as social networks and webgraphs. As an application of this clustering algorithm, we present a partitioning algorithm that finds an approximate sparsest cut with nearly optimal balance. Our algorithm takes time nearly linear in the number edges of the graph. Using the partitioning algorithm of this paper, we have designed a nearly linear time algorithm for constructing spectral sparsifiers of graphs, which we in turn use in a nearly linear time algorithm for solving linear systems in symmetric, diagonally dominant matrices. The linear system solver also leads to a nearly linear time algorithm for approximating the secondsmallest eigenvalue and corresponding eigenvector of the Laplacian matrix of a graph. These other results are presented in two companion papers.",
"title": ""
},
{
"docid": "9be252c72f5f11a391ea180baca6b6dd",
"text": "In a typical cloud computing diverse facilitating components like hardware, software, firmware, networking, and services integrate to offer different computational facilities, while Internet or a private network (or VPN) provides the required backbone to deliver the services. The security risks to the cloud system delimit the benefits of cloud computing like “on-demand, customized resource availability and performance management”. It is understood that current IT and enterprise security solutions are not adequate to address the cloud security issues. This paper explores the challenges and issues of security concerns of cloud computing through different standard and novel solutions. We propose analysis and architecture for incorporating different security schemes, techniques and protocols for cloud computing, particularly in Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) systems. The proposed architecture is generic in nature, not dependent on the type of cloud deployment, application agnostic and is not coupled with the underlying backbone. This would facilitate to manage the cloud system more effectively and provide the administrator to include the specific solution to counter the threat. We have also shown using experimental data how a cloud service provider can estimate the charging based on the security service it provides and security-related cost-benefit analysis can be estimated.",
"title": ""
},
{
"docid": "28ff3b1e9f29d7ae4b57f6565330cde5",
"text": "To identify the effects of core stabilization exercise on the Cobb angle and lumbar muscle strength of adolescent patients with idiopathic scoliosis. Subjects in the present study consisted of primary school students who were confirmed to have scoliosis on radiologic examination performed during their visit to the National Fitness Center in Seoul, Korea. Depending on whether they participated in a 12-week core stabilization exercise program, subjects were divided into the exercise (n=14, age 12.71±0.72 years) or control (n=15, age 12.80±0.86 years) group. The exercise group participated in three sessions of core stabilization exercise per week for 12 weeks. The Cobb angle, flexibility, and lumbar muscle strength tests were performed before and after core stabilization exercise. Repeated-measure two-way analysis of variance was performed to compare the treatment effects between the exercise and control groups. There was no significant difference in thoracic Cobb angle between the groups. The exercise group had a significant decrease in the lumbar Cobb angle after exercise compared to before exercise (P<0.001). The exercise group also had a significant increase in lumbar flexor and extensor muscles strength after exercise compared to before exercise (P<0.01 and P<0.001, respectively). Core stabilization exercise can be an effective therapeutic exercise to decrease the Cobb angle and improve lumbar muscle strength in adolescents with idiopathic scoliosis.",
"title": ""
},
{
"docid": "445b3f542e785425cd284ad556ef825a",
"text": "Despite the success of neural networks (NNs), there is still a concern among many over their “black box” nature. Why do they work? Yes, we have Universal Approximation Theorems, but these concern statistical consistency, a very weak property, not enough to explain the exceptionally strong performance reports of the method. Here we present a simple analytic argument that NNs are in fact essentially polynomial regression models, with the effective degree of the polynomial growing at each hidden layer. This view will have various implications for NNs, e.g. providing an explanation for why convergence problems arise in NNs, and it gives rough guidance on avoiding overfitting. In addition, we use this phenomenon to predict and confirm a multicollinearity property of NNs not previously reported in the literature. Most importantly, given this loose correspondence, one may choose to routinely use polynomial models instead of NNs, thus avoiding some major problems of the latter, such as having to set many tuning parameters and dealing with convergence issues. We present a number of empirical results; in each case, the accuracy of the polynomial approach matches or exceeds that of NN approaches. A many-featured, open-source software package, polyreg, is available. 1 ar X iv :1 80 6. 06 85 0v 2 [ cs .L G ] 2 9 Ju n 20 18 1 The Mystery of NNs Neural networks (NNs), especially in the currently popular form of many-layered deep learning networks (DNNs), have become many analysts’ go-to method for predictive analytics. Indeed, in the popular press, the term artificial intelligence has become virtually synonymous with NNs.1 Yet there is a feeling among many in the community that NNs are “black boxes”; just what is going on inside? Various explanations have been offered for the success of NNs, a prime example being [Shwartz-Ziv and Tishby(2017)]. However, the present paper will present significant new insights. 2 Contributions of This Paper The contribution of the present work will be as follows:2 (a) We will show that, at each layer of an NY, there is a rough correspondence to some fitted ordinary parametric polynomial regression (PR) model; in essence, NNs are a form of PR. We refer to this loose correspondence here as NNAEPR, Neural Nets Are Essentially Polynomial Models. (b) A very important aspect of NNAEPR is that the degree of the approximating polynomial increases with each hidden layer. In other words, our findings should not be interpreted as merely saying that the end result of an NN can be approximated by some polynomial. (c) We exploit NNAEPR to learn about general properties of NNs via our knowledge of the properties of PR. This will turn out to provide new insights into aspects such as the numbers of hidden layers and numbers of units per layer, as well as how convergence problems arise. For example, we use NNAEPR to predict and confirm a multicollinearity property of NNs not previous reported in the literature. (d) Property (a) suggests that in many applications, one might simply fit a polynomial model in the first place, bypassing NNs. This would have the advantage of avoiding the problems of choosing tuning parameters (the polynomial approach has just one, the degree), nonconvergence and so on. 1There are many different variants of NNs, but for the purposes of this paper, we can consider them as a group. 2 Author listing is alphabetical by surname. 
XC wrote the entire core code for the polyreg package; NM conceived of the main ideas underlying the work, developed the informal mathematical material and wrote support code; BK assembled the brain and kidney cancer data, wrote some of the support code, and provided domain expertise guidance for genetics applications; PM wrote extensive support code, including extending his kerasformula package, and provided specialized expertise on NNs. All authors conducted data experiments.",
"title": ""
},
{
"docid": "4a87e61106125ffdd49c42517ce78b87",
"text": "Due to network effects and switching costs, platform providers often become entrenched. To dislodge them, entrants generally must offer revolutionary products. We explore a second path to platform leadership change that does not rely on Schumpeterian creative destruction: platform envelopment. By leveraging common components and shared user relationships, one platform provider can move into another’s market, combining its own functionality with the target’s in a multi-platform bundle. Dominant firms otherwise sheltered from entry by standalone rivals may be vulnerable to an adjacent platform provider’s envelopment attack. We analyze conditions under which envelopment strategies are likely to succeed.",
"title": ""
},
{
"docid": "0075c4714b8e7bf704381d3a3722ab59",
"text": "This paper surveys the current state of the art in Natural Language Generation (nlg), defined as the task of generating text or speech from non-linguistic input. A survey of nlg is timely in view of the changes that the field has undergone over the past two decades, especially in relation to new (usually data-driven) methods, as well as new applications of nlg technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in nlg and the architectures adopted in which such tasks are organised; (b) highlight a number of recent research topics that have arisen partly as a result of growing synergies between nlg and other areas of artificial intelligence; (c) draw attention to the challenges in nlg evaluation, relating them to similar challenges faced in other areas of nlp, with an emphasis on different evaluation methods and the relationships between them.",
"title": ""
},
{
"docid": "e8eba986ab77d519ce8808b3d33b2032",
"text": "In this paper, an implementation of an extended target tracking filter using measurements from high-resolution automotive Radio Detection and Ranging (RADAR) is proposed. Our algorithm uses the Cartesian point measurements from the target's contour as well as the Doppler range rate provided by the RADAR to track a target vehicle's position, orientation, and translational and rotational velocities. We also apply a Gaussian Process (GP) to model the vehicle's shape. To cope with the nonlinear measurement equation, we implement an Extended Kalman Filter (EKF) and provide the necessary derivatives for the Doppler measurement. We then evaluate the effectiveness of incorporating the Doppler rate on simulations and on 2 sets of real data.",
"title": ""
},
{
"docid": "741dbabfa94b787f31bccf12471724a4",
"text": "In this paper is proposed a Takagi-Sugeno Fuzzy controller (TSF) applied to the direct torque control scheme with space vector modulation. In conventional DTC-SVM scheme, two PI controllers are used to generate the reference stator voltage vector. To improve the drawback of this conventional DTC-SVM scheme is proposed the TSF controller to substitute both PI controllers. The proposed controller calculates the reference quadrature components of the stator voltage vector. The rule base for the proposed controller is defined in function of the stator flux error and the electromagnetic torque error using trapezoidal and triangular membership functions. Constant switching frequency and low torque ripple are obtained using space vector modulation technique. Performance of the proposed DTC-SVM with TSF controller is analyzed in terms of several performance measures such as rise time, settling time and torque ripple considering different operating conditions. The simulation results shown that the proposed scheme ensure fast torque response and low torque ripple validating the proposed scheme.",
"title": ""
},
{
"docid": "c2177b7e3cdca3800b3d465229835949",
"text": "BACKGROUND\nIn 2010, the World Health Organization published benchmarks for training in osteopathy in which osteopathic visceral techniques are included. The purpose of this study was to identify and critically appraise the scientific literature concerning the reliability of diagnosis and the clinical efficacy of techniques used in visceral osteopathy.\n\n\nMETHODS\nDatabases MEDLINE, OSTMED.DR, the Cochrane Library, Osteopathic Research Web, Google Scholar, Journal of American Osteopathic Association (JAOA) website, International Journal of Osteopathic Medicine (IJOM) website, and the catalog of Académie d'ostéopathie de France website were searched through December 2017. Only inter-rater reliability studies including at least two raters or the intra-rater reliability studies including at least two assessments by the same rater were included. For efficacy studies, only randomized-controlled-trials (RCT) or crossover studies on unhealthy subjects (any condition, duration and outcome) were included. Risk of bias was determined using a modified version of the quality appraisal tool for studies of diagnostic reliability (QAREL) in reliability studies. For the efficacy studies, the Cochrane risk of bias tool was used to assess their methodological design. Two authors performed data extraction and analysis.\n\n\nRESULTS\nEight reliability studies and six efficacy studies were included. The analysis of reliability studies shows that the diagnostic techniques used in visceral osteopathy are unreliable. Regarding efficacy studies, the least biased study shows no significant difference for the main outcome. The main risks of bias found in the included studies were due to the absence of blinding of the examiners, an unsuitable statistical method or an absence of primary study outcome.\n\n\nCONCLUSIONS\nThe results of the systematic review lead us to conclude that well-conducted and sound evidence on the reliability and the efficacy of techniques in visceral osteopathy is absent.\n\n\nTRIAL REGISTRATION\nThe review is registered PROSPERO 12th of December 2016. Registration number is CRD4201605286 .",
"title": ""
},
{
"docid": "a10a51d1070396e1e8a8b186af18f87d",
"text": "An upcoming trend for automobile manufacturers is to provide firmware updates over the air (FOTA) as a service. Since the firmware controls the functionality of a vehicle, security is important. To this end, several secure FOTA protocols have been developed. However, the secure FOTA protocols only solve the security for the transmission of the firmware binary. Once the firmware is downloaded, an attacker could potentially modify its contents before it is flashed to the corresponding ECU'S ROM. Thus, there is a need to extend the flashing procedure to also verify that the correct firmware has been flashed to the ECU. We present a framework for self-verification of firmware updates over the air. We include a verification code in the transmission to the vehicle, and after the firmware has been flashed, the integrity of the memory contents can be verified using the verification code. The verification procedure entails only simple hash functions and is thus suitable for the limited resources in the vehicle. Virtualization techniques are employed to establish a trusted computing base in the ECU, which is then used to perform the verification. The proposed framework allows the ECU itself to perform self-verification and can thus ensure the successful flashing of the firmware.",
"title": ""
},
{
"docid": "bad5040a740421b3079c3fa7bf598d71",
"text": "Deep Convolutional Neural Networks (CNNs) are a special type of Neural Networks, which have shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNN is largely achieved with the use of multiple non-linear feature extraction stages that can automatically learn hierarchical representation from the data. Availability of a large amount of data and improvements in the hardware processing units have accelerated the research in CNNs and recently very interesting deep CNN architectures are reported. The recent race in deep CNN architectures for achieving high performance on the challenging benchmarks has shown that the innovative architectural ideas, as well as parameter optimization, can improve the CNN performance on various vision-related tasks. In this regard, different ideas in the CNN design have been explored such as use of different activation and loss functions, parameter optimization, regularization, and restructuring of processing units. However, the major improvement in representational capacity is achieved by the restructuring of the processing units. Especially, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in the recently reported CNN architectures and consequently, classifies the recent innovations in CNN architectures into seven different categories. These seven categories are based on spatial exploitation, depth, multipath, width, feature map exploitation, channel boosting and attention. Additionally, it covers the elementary understanding of the CNN components and sheds light on the current challenges and applications of CNNs.",
"title": ""
},
{
"docid": "c7059c650323a08ac7453ad4185e6c4f",
"text": "Transfer learning is aimed to make use of valuable knowledge in a source domain to help model performance in a target domain. It is particularly important to neural networks, which are very likely to be overfitting. In some fields like image processing, many studies have shown the effectiveness of neural network-based transfer learning. For neural NLP, however, existing studies have only casually applied transfer learning, and conclusions are inconsistent. In this paper, we conduct systematic case studies and provide an illuminating picture on the transferability of neural networks in NLP.1",
"title": ""
},
{
"docid": "c02e7ece958714df34539a909c2adb7d",
"text": "Despite the growing evidence of the association between shame experiences and eating psychopathology, the specific effect of body image-focused shame memories on binge eating remains largely unexplored. The current study examined this association and considered current body image shame and self-criticism as mediators. A multi-group path analysis was conducted to examine gender differences in these relationships. The sample included 222 women and 109 men from the Portuguese general and college student populations who recalled an early body image-focused shame experience and completed measures of the centrality of the shame memory, current body image shame, binge eating symptoms, depressive symptoms, and self-criticism. For both men and women, the effect of the centrality of shame memories related to body image on binge eating symptoms was fully mediated by body image shame and self-criticism. In women, these effects were further mediated by self-criticism focused on a sense of inadequacy and also on self-hatred. In men, only the form of self-criticism focused on a sense of inadequacy mediated these associations. The present study has important implications for the conceptualization and treatment of binge eating symptoms. Findings suggest that, in both genders, body image-focused shame experiences are associated with binge eating symptoms via their effect on current body image shame and self-criticism.",
"title": ""
},
{
"docid": "d5d2e1feeb2d0bf2af49e1d044c9e26a",
"text": "ISSN: 2167-0811 (Print) 2167-082X (Online) Journal homepage: http://www.tandfonline.com/loi/rdij20 Algorithmic Transparency in the News Media Nicholas Diakopoulos & Michael Koliska To cite this article: Nicholas Diakopoulos & Michael Koliska (2016): Algorithmic Transparency in the News Media, Digital Journalism, DOI: 10.1080/21670811.2016.1208053 To link to this article: http://dx.doi.org/10.1080/21670811.2016.1208053",
"title": ""
},
{
"docid": "a6e18aa7f66355fb8407798a37f53f45",
"text": "We review some of the recent advances in level-set methods and their applications. In particular, we discuss how to impose boundary conditions at irregular domains and free boundaries, as well as the extension of level-set methods to adaptive Cartesian grids and parallel architectures. Illustrative applications are taken from the physical and life sciences. Fast sweeping methods are briefly discussed.",
"title": ""
},
{
"docid": "69fb4deab14bd651e20209695c6b50a2",
"text": "An impediment to Web-based retail sales is the impersonal nature of Web-based shopping. A solution to this problem is to use an avatar to deliver product information. An avatar is a graphic representation that can be animated by means of computer technology. Study 1 shows that using an avatar sales agent leads to more satisfaction with the retailer, a more positive attitude toward the product, and a greater purchase intention. Study 2 shows that an attractive avatar is a more effective sales agent at moderate levels of product involvement, but an expert avatar is a more effective sales agent at high levels of product involvement.",
"title": ""
},
{
"docid": "4a85e3b10ecc4c190c45d0dfafafb388",
"text": "The number of malicious applications targeting the Android system has literally exploded in recent years. While the security community, well aware of this fact, has proposed several methods for detection of Android malware, most of these are based on permission and API usage or the identification of expert features. Unfortunately, many of these approaches are susceptible to instruction level obfuscation techniques. Previous research on classic desktop malware has shown that some high level characteristics of the code, such as function call graphs, can be used to find similarities between samples while being more robust against certain obfuscation strategies. However, the identification of similarities in graphs is a non-trivial problem whose complexity hinders the use of these features for malware detection. In this paper, we explore how recent developments in machine learning classification of graphs can be efficiently applied to this problem. We propose a method for malware detection based on efficient embeddings of function call graphs with an explicit feature map inspired by a linear-time graph kernel. In an evaluation with 12,158 malware samples our method, purely based on structural features, outperforms several related approaches and detects 89% of the malware with few false alarms, while also allowing to pin-point malicious code structures within Android applications.",
"title": ""
},
{
"docid": "edd8ac16c7eaebf5b5b06964eacb6e8c",
"text": "The authors examined White and Black participants' emotional, physiological, and behavioral responses to same-race or different-race evaluators, following rejecting social feedback or accepting social feedback. As expected, in ingroup interactions, the authors observed deleterious responses to social rejection and benign responses to social acceptance. Deleterious responses included cardiovascular (CV) reactivity consistent with threat states and poorer performance, whereas benign responses included CV reactivity consistent with challenge states and better performance. In intergroup interactions, however, a more complex pattern of responses emerged. Social rejection from different-race evaluators engendered more anger and activational responses, regardless of participants' race. In contrast, social acceptance produced an asymmetrical race pattern--White participants responded more positively than did Black participants. The latter appeared vigilant and exhibited threat responses. Discussion centers on implications for attributional ambiguity theory and potential pathways from discrimination to health outcomes.",
"title": ""
},
{
"docid": "a564d62de4afc7e6e5c76f1955809b61",
"text": "The implementation of a polycrystalline silicon solar cell as a microwave groundplane in a low-profile, reduced-footprint microstrip patch antenna design for autonomous communication applications is reported. The effects on the antenna/solar performances due to the integration, different electrical conductivities in the silicon layer and variation in incident light intensity are investigated. The antenna sensitivity to the orientation of the anisotropic solar cell geometry is discussed.",
"title": ""
},
{
"docid": "3f72e02928b5fcc6e8a9155f0344e6e1",
"text": "Due to the limitations of power amplifiers or loudspeakers, the echo signals captured in the microphones are not in a linear relationship with the far-end signals even when the echo path is perfectly linear. The nonlinear components of the echo cannot be successfully removed by a linear acoustic echo canceller. Residual echo suppression (RES) is a technique to suppress the remained echo after acoustic echo suppression (AES). Conventional approaches compute RES gain using Wiener filter or spectral subtraction method based on the estimated statistics on related signals. In this paper, we propose a deep neural network (DNN)-based RES gain estimation based on both the far-end and the AES output signals in all frequency bins. A DNN architecture, which is suitable to model a complicated nonlinear mapping between high-dimensional vectors, is employed as a regression function from these signals to the optimal RES gain. The proposed method can suppress the residual components without any explicit double-talk detectors. The experimental results show that our proposed approach outperforms a conventional method in terms of the echo return loss enhancement (ERLE) for single-talk periods and the perceptual evaluation of speech quality (PESQ) score for double-talk periods.",
"title": ""
}
] |
scidocsrr
|
1bcec79362fa439d41c7719fe2abba72
|
Detecting code clones in binary executables
|
[
{
"docid": "5d377a17d3444d6137be582cbbc6c1db",
"text": "Next generation malware will by be characterized by the intense use of polymorphic and metamorphic techniques aimed at circumventing the current malware detectors, based on pattern matching. In order to deal with this new kind of threat novel techniques have to be devised for the realization of malware detectors. Recent papers started to address such issue and this paper represents a further contribution in such a field. More precisely in this paper we propose a strategy for the detection of malicious codes that adopt the most evolved self-mutation techniques; we also provide experimental data supporting the validity of",
"title": ""
}
] |
[
{
"docid": "1286a39cec0d00f269c7490fb38f422b",
"text": "BACKGROUND\nAttention-deficit/hyperactivity disorder (ADHD) is one of the most common developmental disorders experienced in childhood and can persist into adulthood. The disorder has early onset and is characterized by a combination of overactive, poorly modulated behavior with marked inattention. In the long term it can impair academic performance, vocational success and social-emotional development. Meditation is increasingly used for psychological conditions and could be used as a tool for attentional training in the ADHD population.\n\n\nOBJECTIVES\nTo assess the effectiveness of meditation therapies as a treatment for ADHD.\n\n\nSEARCH STRATEGY\nOur extensive search included: CENTRAL, MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, C2-SPECTR, dissertation abstracts, LILACS, Virtual Health Library (VHL) in BIREME, Complementary and Alternative Medicine specific databases, HSTAT, Informit, JST, Thai Psychiatric databases and ISI Proceedings, plus grey literature and trial registries from inception to January 2010.\n\n\nSELECTION CRITERIA\nRandomized controlled trials that investigated the efficacy of meditation therapy in children or adults diagnosed with ADHD.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo authors extracted data independently using a pre-designed data extraction form. We contacted study authors for additional information required. We analyzed data using mean difference (MD) to calculate the treatment effect. The results are presented in tables, figures and narrative form.\n\n\nMAIN RESULTS\nFour studies, including 83 participants, are included in this review. Two studies used mantra meditation while the other two used yoga compared with drugs, relaxation training, non-specific exercises and standard treatment control. Design limitations caused high risk of bias across the studies. Only one out of four studies provided data appropriate for analysis. For this study there was no statistically significant difference between the meditation therapy group and the drug therapy group on the teacher rating ADHD scale (MD -2.72, 95% CI -8.49 to 3.05, 15 patients). Likewise, there was no statistically significant difference between the meditation therapy group and the standard therapy group on the teacher rating ADHD scale (MD -0.52, 95% CI -5.88 to 4.84, 17 patients). There was also no statistically significant difference between the meditation therapy group and the standard therapy group in the distraction test (MD -8.34, 95% CI -107.05 to 90.37, 17 patients).\n\n\nAUTHORS' CONCLUSIONS\nAs a result of the limited number of included studies, the small sample sizes and the high risk of bias, we are unable to draw any conclusions regarding the effectiveness of meditation therapy for ADHD. The adverse effects of meditation have not been reported. More trials are needed.",
"title": ""
},
{
"docid": "785b9b8522d2957fc5ecf53bf3c408e0",
"text": "Clinical trial investigators often record a great deal of baseline data on each patient at randomization. When reporting the trial's findings such baseline data can be used for (i) subgroup analyses which explore whether there is evidence that the treatment difference depends on certain patient characteristics, (ii) covariate-adjusted analyses which aim to refine the analysis of the overall treatment difference by taking account of the fact that some baseline characteristics are related to outcome and may be unbalanced between treatment groups, and (iii) baseline comparisons which compare the baseline characteristics of patients in each treatment group for any possible (unlucky) differences. This paper examines how these issues are currently tackled in the medical journals, based on a recent survey of 50 trial reports in four major journals. The statistical ramifications are explored, major problems are highlighted and recommendations for future practice are proposed. Key issues include: the overuse and overinterpretation of subgroup analyses; the underuse of appropriate statistical tests for interaction; inconsistencies in the use of covariate-adjustment; the lack of clear guidelines on covariate selection; the overuse of baseline comparisons in some studies; the misuses of significance tests for baseline comparability, and the need for trials to have a predefined statistical analysis plan for all these uses of baseline data.",
"title": ""
},
{
"docid": "e7664a3c413f86792b98912a0241a6ac",
"text": "Seq2seq learning has produced promising results on summarization. However, in many cases, system summaries still struggle to keep the meaning of the original intact. They may miss out important words or relations that play critical roles in the syntactic structure of source sentences. In this paper, we present structure-infused copy mechanisms to facilitate copying important words and relations from the source sentence to summary sentence. The approach naturally combines source dependency structure with the copy mechanism of an abstractive sentence summarizer. Experimental results demonstrate the effectiveness of incorporating source-side syntactic information in the system, and our proposed approach compares favorably to state-of-the-art methods.",
"title": ""
},
{
"docid": "1ceab925041160f17163940360354c55",
"text": "A complete reconstruction of D.H. Lehmer’s ENIAC set-up for computing the exponents of p modulo 2 is given. This program served as an early test program for the ENIAC (1946). The reconstruction illustrates the difficulties of early programmers to find a way between a man operated and a machine operated computation. These difficulties concern both the content level (the algorithm) and the formal level (the logic of sequencing operations).",
"title": ""
},
{
"docid": "457ba37bf69b870db2653b851d271b0b",
"text": "This paper presents a unified approach to local trajectory planning and control for the autonomous ground vehicle driving along a rough predefined path. In order to cope with the unpredictably changing environment reactively and reason about the global guidance, we develop an efficient sampling-based model predictive local path generation approach to generate a set of kinematically-feasible trajectories aligning with the reference path. A discrete optimization scheme is developed to select the best path based on a specified objective function, then followed by the velocity profile generation. As for the low-level control, to achieve high performance of control, two degree of freedom control architecture is employed by combining the feedforward control with the feedback control. The simulation results demonstrate the capability of the proposed approach to track the curvature-discontinuous reference path robustly, while avoiding collisions with static obstacles.",
"title": ""
},
{
"docid": "d86eb65183f059a4ca7cb0ad9190a0ca",
"text": "Different short circuits, load growth, generation shortage, and other faults which disturb the voltage and frequency stability are serious threats to the system security. The frequency and voltage instability causes dispersal of a power system into sub-systems, and leads to blackout as well as heavy damages of the system equipment. This paper presents a fast and optimal adaptive load shedding method, for isolated power system using Artificial Neural Networks (ANN). The proposed method is able to determine the necessary load shedding in all steps simultaneously and is much faster than conventional methods. This method has been tested on the New-England power system. The simulation results show that the proposed algorithm is fast, robust and optimal values of load shedding in different loading scenarios are obtained in comparison with conventional method.",
"title": ""
},
{
"docid": "c42f395adaee401acdf31a1211d225f3",
"text": "In recent years, research efforts seeking to provide more natural, human-centered means of interacting with computers have gained growing interest. A particularly important direction is that of perceptive user interfaces, where the computer is endowed with perceptive capabilities that allow it to acquire both implicit and explicit information about the user and the environment. Vision has the potential of carrying a wealth of information in a non-intrusive manner and at a low cost, therefore it constitutes a very attractive sensing modality for developing perceptive user interfaces. Proposed approaches for vision-driven interactive user interfaces resort to technologies such as head tracking, face and facial expression recognition, eye tracking and gesture recognition. In this paper, we focus our attention to vision-based recognition of hand gestures. The first part of the paper provides an overview of the current state of the art regarding the recognition of hand gestures as these are observed and recorded by typical video cameras. In order to make the review of the related literature tractable, this paper does not discuss:",
"title": ""
},
{
"docid": "d36eec03e4fe2d491e22a758c5675c1f",
"text": "The large-scale deployment of modern phishing attacks relies on the automatic exploitation of vulnerable websites in the wild, to maximize profit while hindering attack traceability, detection and blacklisting. To the best of our knowledge, this is the first work that specifically leverages this adversarial behavior for detection purposes. We show that phishing webpages can be accurately detected by highlighting HTML code and visual differences with respect to other (legitimate) pages hosted within a compromised website. Our system, named DeltaPhish, can be installed as part of a web application firewall, to detect the presence of anomalous content on a website after compromise, and eventually prevent access to it. DeltaPhish is also robust against adversarial attempts in which the HTML code of the phishing page is carefully manipulated to evade detection. We empirically evaluate it on more than 5,500 webpages collected in the wild from compromised websites, showing that it is capable of detecting more than 99% of phishing webpages, while only misclassifying less than 1% of legitimate pages. We further show that the detection rate remains higher than 70% even under very sophisticated attacks carefully designed to evade our system. ∗Preprint version of the work accepted for publication at ESORICS 2017.",
"title": ""
},
{
"docid": "f8058d7c6fa5d7b442e3ca0a445e2c6d",
"text": "The second generation of the Digital Video Broadcasting standard for Satellite transmission, DVB-S2, is the evolution of the highly successful DVB-S satellite distribution technology. DVB-S2 has benefited from the latest progress in channel coding and modulation such as Low Density Parity Check Codes and higher order constellations to achieve performance that approaches Shannon¿s theoretical limit. We present a cross-layer design for Quality-of-Service (QoS) provision of interactive services, which is not specified in the standard. Our cross-layer approach exploits the satellite channel characteristics of space-time correlation via a cross-layer queueing architecture and an adaptive cross-layer scheduling policy. We show that our approach not only allows system load control but also rate adaptation to channel conditions and traffic demands on the coverage area. We also present the extension of our cross-layer design for mobile gateways focusing on the railway scenario. We illustrate the trade-off between system-wide and individual throughput by means of simulation, and that this trade-off could be a key metric in measuring the service level of DVB-S2 Broadband Service.",
"title": ""
},
{
"docid": "72a6a7fe366def9f97ece6d1ddc46a2e",
"text": "Our work in this paper presents a prediction of quality of experience based on full reference parametric (SSIM, VQM) and application metrics (resolution, bit rate, frame rate) in SDN networks. First, we used DCR (Degradation Category Rating) as subjective method to build the training model and validation, this method is based on not only the quality of received video but also the original video but all subjective methods are too expensive, don't take place in real time and takes much time for example our method takes three hours to determine the average MOS (Mean Opinion Score). That's why we proposed novel method based on machine learning algorithms to obtain the quality of experience in an objective manner. Previous researches in this field help us to use four algorithms: Decision Tree (DT), Neural Network, K nearest neighbors KNN and Random Forest RF thanks to their efficiency. We have used two metrics recommended by VQEG group to assess the best algorithm: Pearson correlation coefficient r and Root-Mean-Square-Error RMSE. The last part of the paper describes environment based on: Weka to analyze ML algorithms, MSU tool to calculate SSIM and VQM and Mininet for the SDN simulation.",
"title": ""
},
{
"docid": "e36a5ffed8bafcfc750e811041a3696b",
"text": "In this paper, inkjet-printed microwave circuits fabricated on paper-based substrates were investigated, as a system-level solution for ultra-low-cost mass production. The RF characteristics of the paper-based substrate were studied by using the cavity resonator method and the Transmission Line method in order to characterize the dielectric constant (epsivr) and loss tangent (tandelta) of the substrate. A UHF RFID tag module was then developed with the inkjet-printing technology which could function as a technology for much simpler and faster fabrication on/in paper. Simulation and well-agreed measurement results verify a good performance of the tag module. In addition, for the first time, the possibility of paper-based substrate for multilayer microwave structures was explored, and a 2.4 GHz multilayer patch resonator bandpass filter with insertion loss < 0.6dB was demonstrated. These results show that the paper material can serve for the purpose of economical multilayer structures for telecommunication and sensing applications.",
"title": ""
},
{
"docid": "84ad547eb8a3435b214ed1a192fa96a9",
"text": "We present the first known case of somatic PTEN mosaicism causing features of Cowden syndrome (CS) and inheritance in the subsequent generation. A 20-year-old woman presented for genetics evaluation with multiple ganglioneuromas of the colon. On examination, she was found to have a thyroid goiter, macrocephaly, and tongue papules, all suggestive of CS. However, her reported family history was not suspicious for CS. A deleterious PTEN mutation was identified in blood lymphocytes, 966A>G, 967delA. Genetic testing was recommended for her parents. Her 48-year-old father was referred for evaluation and was found to have macrocephaly and a history of Hashimoto’s thyroiditis, but no other features of CS. Site-specific genetic testing carried out on blood lymphocytes showed mosaicism for the same PTEN mutation identified in his daughter. Identifying PTEN mosaicism in the proband’s father had significant implications for the risk assessment/genetic testing plan for the rest of his family. His result also provides impetus for somatic mosaicism in a parent to be considered when a de novo PTEN mutation is suspected.",
"title": ""
},
{
"docid": "617fa45a68d607a4cb169b1446aa94bd",
"text": "The Draganflyer is a radio-controlled helicopter. It is powered by 4 rotors and is capable of motion in air in 6 degrees of freedom and of stable hovering. For flying it requires a high degree of skill, with the operator continually making small adjustments. In this paper, we do a theoretical analysis of the dynamics of the Draganflyer in order to develop a model of it from which we can develop a computer control system for stable hovering and indoor flight.",
"title": ""
},
{
"docid": "1ba1b3bb1ef0fb0b6b10b8f4dcaa6716",
"text": "Lichen sclerosus et atrophicus (LSA) is a chronic inflammatory scarring disease with a predilection for the anogenital area; however, 15%-20% of LSA cases are extragenital. The folliculocentric variant is rarely reported and less well understood. The authors report a rare case of extragenital, folliculocentric LSA in a 10-year-old girl. The patient presented to the dermatology clinic for evaluation of an asymptomatic eruption of the arms and legs, with no vaginal or vulvar involvement. Physical examination revealed the presence of numerous 2-4 mm, mostly perifollicular, hypopigmented, slightly atrophic papules and plaques. Many of the lesions had a central keratotic plug. Cutaneous histopathological examination showed features of LSA. Based on clinical and histological findings, folliculocentric extragenital LSA was diagnosed.",
"title": ""
},
{
"docid": "eb3d82a85c8a9c3f815f0f62b6ae55cd",
"text": "In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. Previous work has demonstrated the effectiveness of data augmentation through simple techniques, such as cropping, rotating, and flipping input images. We artificially constrain our access to data to a small subset of the ImageNet dataset, and compare each data augmentation technique in turn. One of the more successful data augmentations strategies is the traditional transformations mentioned above. We also experiment with GANs to generate images of different styles. Finally, we propose a method to allow a neural net to learn augmentations that best improve the classifier, which we call neural augmentation. We discuss the successes and shortcomings of this method on various datasets.",
"title": ""
},
{
"docid": "cd49e83f91bcb03c5d1f5e8629b48b3b",
"text": "Toward implementation of fashion recommendation system based on photos taken with mobile phones, we propose a framework to recognize hierarchical categories of a fashion item. To classify an arbitrary photo of clothes robustly, (1) we collected two kind of dataset: (I) 120K datasets of clothes images on EC sites to train the classifier and (II) the clothing image set composed of photos taken by participants with their mobile phones. (2) we proposed Layered Deep Convolutional Neural Networks (LDCNNs) which is specialized in classifying images into hierarchical categories: hoodie is a lower in hierarchy in tops category. Experimental result shows proposed LDCNNs obtained mean accuracy of 92.7% for datasets from EC sites and 96.9% for those from mobile phones. This results are better than those (84.9%, 90.6 % respectively) for MLR+CNN in classification accuracy.",
"title": ""
},
{
"docid": "27f6a0f6eedba454c7385499a81a59a3",
"text": "In this paper we compare and evaluate the effectiveness of the brute force methodology using dataset of known password. It is a known fact that user chosen passwords are easily recognizable and crackable, by using several password recovery techniques; Brute force attack is one of them. For rescuing such attacks several organizations proposed the password creation rules which stated that password must include number and special characters for strengthening it and protecting against various password cracking attacks such as Dictionary attack, brute force attack etc. The result of this paper and proposed methodology helps in evaluating the system and account security for measuring the degree of authentication by estimating the password strength. The experiment is conducted on our proposed dataset (TG-DATASET) that contain an iterative procedure for creating the alphanumeric password string like a*, b*, c* and so on. The proposed dataset is prepared due to non-availability of iterative password in any existing password data sets.",
"title": ""
},
{
"docid": "39ebc7cc1a2cb50fb362804b6ae0f768",
"text": "We model a dependency graph as a book, a particular kind of topological space, for semantic dependency parsing. The spine of the book is made up of a sequence of words, and each page contains a subset of noncrossing arcs. To build a semantic graph for a given sentence, we design new Maximum Subgraph algorithms to generate noncrossing graphs on each page, and a Lagrangian Relaxation-based algorithm to combine pages into a book. Experiments demonstrate the effectiveness of the book embedding framework across a wide range of conditions. Our parser obtains comparable results with a state-of-the-art transition-based parser.",
"title": ""
},
{
"docid": "a0852f31be3791d7ce52b99930ea95d1",
"text": "Stock trading system to assist decision-making is an emerging research area and has great commercial potentials. Successful trading operations should occur near the reversal points of price trends. Traditional technical analysis, which usually appears as various trading rules, does aim to look for peaks and bottoms of trends and is widely used in stock market. Unfortunately, it is not convenient to directly apply technical analysis since it depends on person’s experience to select appropriate rules for individual share. In this paper, we enhance conventional technical analysis with Genetic Algorithms by learning trading rules from history for individual stock and then combine different rules together with Echo State Network to provide trading suggestions. Numerous experiments on S&P 500 components demonstrate that whether in bull or bear market, our system significantly outperforms buy-and-hold strategy. Especially in bear market where S&P 500 index declines a lot, our system still profits. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
47bb7c744642a9af905bc728025b3552
|
FatCBST: Concurrent Binary Search Tree with Fatnodes
|
[
{
"docid": "3311ef081d181ce715713dacf735d644",
"text": "The advent of multicore processors as the standard computing platform will force major changes in software design.",
"title": ""
},
{
"docid": "3f0b2f3739a6b9fdf3681dd4296405e6",
"text": "One approach to achieving high performance in a database management system is to store the database in main memorv rather than on disk. -One can then design new data structures aid algorithms oriented towards making eflicient use of CPU cycles and memory space rather than minimizing disk accesses and &ing disk space efliciently. In this paper we present some results on index structures from an ongoing study of main memory database management systems. We propose a new index structure, the T Tree, and we compare it to existing index structures in a main memory database environment. Our results indicate that the T Tree provides good overall performance in main memory.",
"title": ""
}
] |
[
{
"docid": "16186ff81d241ecaea28dcf5e78eb106",
"text": "Different kinds of people use computers now than several decades ago, but operating systems have not fully kept pace with this change. It is true that we have point-and-click GUIs now instead of command line interfaces, but the expectation of the average user is different from what it used to be, because the user is different. Thirty or 40 years ago, when operating systems began to solidify into their current form, almost all computer users were programmers, scientists, engineers, or similar professionals doing heavy-duty computation, and they cared a great deal about speed. Few teenagers and even fewer grandmothers spent hours a day behind their terminal. Early users expected the computer to crash often; reboots came as naturally as waiting for the neighborhood TV repairman to come replace the picture tube on their home TVs. All that has changed and operating systems need to change with the times.",
"title": ""
},
{
"docid": "5e2b8d3ed227b71869550d739c61a297",
"text": "Dairy cattle experience a remarkable shift in metabolism after calving, after which milk production typically increases so rapidly that feed intake alone cannot meet energy requirements (Bauman and Currie, 1980; Baird, 1982). Cows with a poor adaptive response to negative energy balance may develop hyperketonemia (ketosis) in early lactation. Cows that develop ketosis in early lactation lose milk yield and are at higher risk for other postpartum diseases and early removal from the herd.",
"title": ""
},
{
"docid": "6bbc6a3f4f8d6f050f4317837cf30144",
"text": "Characterizing driving styles of human drivers using vehicle sensor data, e.g., GPS, is an interesting research problem and an important real-world requirement from automotive industries. A good representation of driving features can be highly valuable for autonomous driving, auto insurance, and many other application scenarios. However, traditional methods mainly rely on handcrafted features, which limit machine learning algorithms to achieve a better performance. In this paper, we propose a novel deep learning solution to this problem, which could be the first attempt of studying deep learning for driving behavior analysis. The proposed approach can effectively extract high level and interpretable features describing complex driving patterns from GPS data. It also requires significantly less human experience and work. The power of the learned driving style representations are validated through the driver identification problem using a large real dataset.",
"title": ""
},
{
"docid": "7d8617c12c24e61b7ef003a5055fbf2f",
"text": "We present the first approximation algorithms for a large class of budgeted learning problems. One classicexample of the above is the budgeted multi-armed bandit problem. In this problem each arm of the bandithas an unknown reward distribution on which a prior isspecified as input. The knowledge about the underlying distribution can be refined in the exploration phase by playing the arm and observing the rewards. However, there is a budget on the total number of plays allowed during exploration. After this exploration phase,the arm with the highest (posterior) expected reward is hosen for exploitation. The goal is to design the adaptive exploration phase subject to a budget constraint on the number of plays, in order to maximize the expected reward of the arm chosen for exploitation. While this problem is reasonably well understood in the infinite horizon discounted reward setting, the budgeted version of the problem is NP-Hard. For this problem and several generalizations, we provide approximate policies that achieve a reward within constant factor of the reward optimal policy. Our algorithms use a novel linear program rounding technique based on stochastic packing.",
"title": ""
},
{
"docid": "c95da5ee6fde5cf23b551375ff01e709",
"text": "The 3GPP has introduced the LTE-M and NB-IoT User Equipment categories and made amendments to LTE release 13 to support the cellular Internet of Things. The contribution of this paper is to analyze the coverage probability, the number of supported devices, and the device battery life in networks equipped with either of the newly standardized technologies. The study is made for a site specific network deployment of a Danish operator, and the simulation is calibrated using drive test measurements. The results show that LTE-M can provide coverage for 99.9 % of outdoor and indoor devices, if the latter is experiencing 10 dB additional loss. However, for deep indoor users NB-IoT is required and provides coverage for about 95 % of the users. The cost is support for more than 10 times fewer devices and a 2-6 times higher device power consumption. Thus both LTE-M and NB- IoT provide extended support for the cellular Internet of Things, but with different trade- offs.",
"title": ""
},
{
"docid": "354b35bb1c51442a7e855824ab7b91e0",
"text": "Educational games and intelligent tutoring systems (ITS) both support learning by doing, although often in different ways. The current classroom experiment compared a popular commercial game for equation solving, DragonBox and a research-based ITS, Lynnette with respect to desirable educational outcomes. The 190 participating 7th and 8th grade students were randomly assigned to work with either system for 5 class periods. We measured out-of-system transfer of learning with a paper and pencil pre- and post-test of students’ equation-solving skill. We measured enjoyment and accuracy of self-assessment with a questionnaire. The students who used DragonBox solved many more problems and enjoyed the experience more, but the students who used Lynnette performed significantly better on the post-test. Our analysis of the design features of both systems suggests possible explanations and spurs ideas for how the strengths of the two systems might be combined. The study shows that intuitions about what works, educationally, can be fallible. Therefore, there is no substitute for rigorous empirical evaluation of educational technologies.",
"title": ""
},
{
"docid": "e72fa6412ba935448c7a7b7a00d64ec2",
"text": "This Critical Review on environmental concerns of desalination plants suggests that planning and monitoring stages are critical aspects of successful management and operation of plants. The site for the desalination plants should be selected carefully and should be away from residential areas particularly for forward planning for possible future expansions. The concerning issues identified are noise pollution, visual pollution, reduction in recreational fishing and swimming areas, emission of materials into the atmosphere, the brine discharge and types of disposal methods used are the main cause of pollution. The reverse osmosis (RO) method is the preferred option in modern times especially when fossil fuels are becoming expensive. The RO has other positives such as better efficiency (30-50%) when compared with distillation type plants (10-30%). However, the RO membranes are susceptible to fouling and scaling and as such they need to be cleaned with chemicals regularly that may be toxic to receiving waters. The input and output water in desalination plants have to be pre and post treated, respectively. This involves treating for pH, coagulants, Cl, Cu, organics, CO(2), H(2)S and hypoxia. The by-product of the plant is mainly brine with concentration at times twice that of seawater. This discharge also includes traces of various chemicals used in cleaning including any anticorrosion products used in the plant and has to be treated to acceptable levels of each chemical before discharge but acceptable levels vary depending on receiving waters and state regulations. The discharge of the brine is usually done by a long pipe far into the sea or at the coastline. Either way the high density of the discharge reaches the bottom layers of receiving waters and may affect marine life particularly at the bottom layers or boundaries. The longer term effects of such discharge concentrate has not been documented but it is possible that small traces of toxic substances used in the cleaning of RO membranes may be harmful to marine life and ecosystem. The plants require saline water and thus the construction of input and discharge output piping is vital. The piping are often lengthy and underground as it is in Tugun (QLD, Australia), passing below the ground. Leakage of the concentrate via cracks in rocks to aquifers is a concern and therefore appropriate monitoring quality is needed. Leakage monitoring devices ought to be attached to such piping during installation. The initial environment impact assessment should identify key parameters for monitoring during discharge processes and should recommend ongoing monitoring with devices attached to structures installed during construction of plants.",
"title": ""
},
{
"docid": "45447ab4e0a8bd84fcf683ac482f5497",
"text": "Most of the current learning analytic techniques have as starting point the data recorded by Learning Management Systems (LMS) about the interactions of the students with the platform and among themselves. But there is a tendency on students to rely less on the functionality offered by the LMS and use more applications that are freely available on the net. This situation is magnified in studies in which students need to interact with a set of tools that are easily installed on their personal computers. This paper shows an approach using Virtual Machines by which a set of events occurring outside of the LMS are recorded and sent to a central server in a scalable and unobtrusive manner.",
"title": ""
},
{
"docid": "6018c84c0e5666b5b4615766a5bb98a9",
"text": "We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.",
"title": ""
},
{
"docid": "2c93fcf96c71c7c0a8dcad453da53f81",
"text": "Production cars are designed to understeer and rarely do they oversteer. If a car could automatically compensate for an understeer/oversteer problem, the driver would enjoy nearly neutral steering under varying operating conditions. Four-wheel steering is a serious effort on the part of automotive design engineers to provide near-neutral steering. Also in situations like low speed cornering, vehicle parking and driving in city conditions with heavy traffic in tight spaces, driving would be very difficult due to vehicle’s larger wheelbase and track width. Hence there is a requirement of a mechanism which result in less turning radius and it can be achieved by implementing four wheel steering mechanism instead of regular two wheel steering. In this project Maruti Suzuki 800 is considered as a benchmark vehicle. The main aim of this project is to turn the rear wheels out of phase to the front wheels. In order to achieve this, a mechanism which consists of two bevel gears and intermediate shaft which transmit 100% torque as well turns rear wheels in out of phase was developed. The mechanism was modelled using CATIA and the motion simulation was done using ADAMS. A physical prototype was realised. The prototype was tested for its cornering ability through constant radius test and was found 50% reduction in turning radius and the vehicle was operated at low speed of 10 kmph.",
"title": ""
},
{
"docid": "38cccac8ee9371c55a54b2b43c25e2d9",
"text": "Blepharophimosis-ptosis-epicanthus inversus syndrome (BPES) is a rare autosomal dominant disorder whose main features are the abnormal shape, position and alignment of the eyelids. Type I refers to BPES with female infertility from premature ovarian failure while type II is limited to the ocular features. A causative gene, FOXL2, has been localized to 3q23. We report a black female who carried a de novo chromosomal translocation and 3.13 Mb deletion at 3q23, 1.2 Mb 5' to FOXL2. This suggests the presence of distant cis regulatory elements at the extended FOXL2 locus. In spite of 21 protein coding genes in the 3.13 Mb deleted segment, the patient had no other malformation and a strictly normal psychomotor development at age 2.5 years. Our observation confirms panethnicity of BPES and adds to the knowledge of the complex cis regulation of human FOXL2 gene expression.",
"title": ""
},
{
"docid": "2b086723a443020118b7df7f4021b4d9",
"text": "Random undersampling and oversampling are simple but well-known resampling methods applied to solve the problem of class imbalance. In this paper we show that the random oversampling method can produce better classification results than the random undersampling method, since the oversampling can increase the minority class recognition rate by sacrificing less amount of majority class recognition rate than the undersampling method. However, the random oversampling method would increase the computational cost associated with the SVM training largely due to the addition of new training examples. In this paper we present an investigation carried out to develop efficient resampling methods that can produce comparable classification results to the random oversampling results, but with the use of less amount of data. The main idea of the proposed methods is to first select the most informative data examples located closer to the class boundary region by using the separating hyperplane found by training an SVM model on the original imbalanced dataset, and then use only those examples in resampling. We demonstrate that it would be possible to obtain comparable classification results to the random oversampling results through two sets of efficient resampling methods which use 50% less amount of data and 75% less amount of data, respectively, compared to the sizes of the datasets generated by the random oversampling method.",
"title": ""
},
{
"docid": "c88c4097b0cf90031bbf3778d25bb87a",
"text": "In this paper we introduce a new data set consisting of user comments posted to the website of a German-language Austrian newspaper. Professional forum moderators have annotated 11,773 posts according to seven categories they considered crucial for the efficient moderation of online discussions in the context of news articles. In addition to this taxonomy and annotated posts, the data set contains one million unlabeled posts. Our experimental results using six methods establish a first baseline for predicting these categories. The data and our code are available for research purposes from https://ofai.github.io/million-post-corpus.",
"title": ""
},
{
"docid": "32059170608532d89b2d20724f282f4a",
"text": "Functional near infrared spectroscopy (fNIRS) is a rapidly developing neuroimaging modality for exploring cortical brain behaviour. Despite recent advances, the quality of fNIRS experimentation may be compromised in several ways: firstly, by altering the optical properties of the tissues encountered in the path of light; secondly, through adulteration of the recovered biological signals (noise) and finally, by modulating neural activity. Currently, there is no systematic way to guide the researcher regarding these factors when planning fNIRS studies. Conclusions extracted from fNIRS data will only be robust if appropriate methodology and analysis in accordance with the research question under investigation are employed. In order to address these issues and facilitate the quality control process, a taxonomy of factors influencing fNIRS data have been established. For each factor, a detailed description is provided and previous solutions are reviewed. Finally, a series of evidence-based recommendations are made with the aim of improving consistency and quality of fNIRS research.",
"title": ""
},
{
"docid": "abdf1edfb2b93b3991d04d5f6d3d63d3",
"text": "With the rapid growing of internet and networks applications, data security becomes more important than ever before. Encryption algorithms play a crucial role in information security systems. In this paper, we have a study of the two popular encryption algorithms: DES and Blowfish. We overviewed the base functions and analyzed the security for both algorithms. We also evaluated performance in execution speed based on different memory sizes and compared them. The experimental results show the relationship between function run speed and memory size.",
"title": ""
},
{
"docid": "0d1da055e444a90ec298a2926de9fe7b",
"text": "Cryptocurrencies have experienced recent surges in interest and price. It has been discovered that there are time intervals where cryptocurrency prices and certain online and social media factors appear related. In addition it has been noted that cryptocurrencies are prone to experience intervals of bubble-like price growth. The hypothesis investigated here is that relationships between online factors and price are dependent on market regime. In this paper, wavelet coherence is used to study co-movement between a cryptocurrency price and its related factors, for a number of examples. This is used alongside a well-known test for financial asset bubbles to explore whether relationships change dependent on regime. The primary finding of this work is that medium-term positive correlations between online factors and price strengthen significantly during bubble-like regimes of the price series; this explains why these relationships have previously been seen to appear and disappear over time. A secondary finding is that short-term relationships between the chosen factors and price appear to be caused by particular market events (such as hacks / security breaches), and are not consistent from one time interval to another in the effect of the factor upon the price. In addition, for the first time, wavelet coherence is used to explore the relationships between different cryptocurrencies.",
"title": ""
},
{
"docid": "444bcff9a7fdcb80041aeb01b8724eed",
"text": "The morphologic anatomy of the liver is described as 2 main and 2 accessory lobes. The more recent functional anatomy of the liver is based on the distribution of the portal pedicles and the location of the hepatic veins. The liver is divided into 4 sectors, some of them composed of 2 segments. In all, there are 8 segments. According to the anatomy, typical hepatectomies (or “réglées”) are those which are performed along anatomical scissurae. The 2 main technical conceptions of typical hepatectomies are those with preliminary vascular control (Lortat-Jacob's technique) and hepatectomies with primary parenchymatous transection (Ton That Tung's technique). A good knowledge of the anatomy of the liver is a prerequisite for anatomical surgery of this organ. L'anatomie morphologique du foie permet d'individualiser 2 lobes principaux et 2 lobes accessoires. L'anatomie fonctionnelle du foie, plus récemment décrite, est fondée sur la distribution des pédicules portaux et sur la localisation des veines sus-hépatiques. Le foie est divisé en 4 secteurs, eux-mÊmes composés en général de 2 segments. Au total, il y a 8 segments. Selon les données anatomiques, les hépatectomies typiques (ou réglées) sont celles qui sont réalisées le long des scissures anatomiques. Les deux conceptions principales des exérèses hépatiques typiques sont, du point de vue technique, les hépatectomies avec contrÔle vasculaire préalable (technique de Lortat-Jacob) et les hépatectomies avec abord transparenchymateux premier (technique de Ton That Tung). Une connaissance approfondie de l'anatomie du foie est une condition préalable à la réalisation d'une chirurgie anatomique de cet organe.",
"title": ""
},
{
"docid": "a4c80a334a6f9cd70fe5c7000740c18f",
"text": "CMOS SRAM cell is very less power consuming and have less read and write time. Higher cell ratios can decrease the read and write time and improve stability. PMOS transistor with less width reduces the power consumption. This paper implements 6T SRAM cell with reduced read and write time, area and power consumption. It has been noticed often that increased memory capacity increases the bit-line parasitic capacitance which in turn slows down voltage sensing and make bit-line voltage swings energy expensive. This result in slower and more energy hungry memories.. In this paper Two SRAM cell is being designed for 4 Kb of memory core with supply voltage 1.8 V. A technique of global bit line is used for reducing the power consumption and increasing the memory capacity.",
"title": ""
},
{
"docid": "d5a9f4e5cf1f15a7e39e0b49e571b936",
"text": "Article history: With the growth and evolu First received in February 6, 2005 and was under review for 9 months",
"title": ""
},
{
"docid": "c32c1c16aec9bc6dcfb5fa8fb4f25140",
"text": "Logo detection is a challenging task with many practical applications in our daily life and intellectual property protection. The two main obstacles here are lack of public logo datasets and effective design of logo detection structure. In this paper, we first manually collected and annotated 6,400 images and mix them with FlickrLogo-32 dataset, forming a larger dataset. Secondly, we constructed Faster R-CNN frameworks with several widely used classification models for logo detection. Furthermore, the transfer learning method was introduced in the training process. Finally, clustering was used to guarantee suitable hyper-parameters and more precise anchors of RPN. Experimental results show that the proposed framework outper-forms the state of-the-art methods with a noticeable margin.",
"title": ""
}
] |
scidocsrr
|
4ed911e2e310b5d42e2b0e8d97de00e0
|
Towards New Human-Humanoid Communication: Listening During Speaking by Using Ultrasonic Directional Speaker
|
[
{
"docid": "aa4de4dce2a7d7b0630e91ab4cf6f692",
"text": "This paper presents part of an on-going project to integrate perception, attention, drives, emotions, behavior arbitration, and expressive acts for a robot designed to interact socially with humans. We present the design of a visual attention system based on a model of human visual search behavior from Wolfe (1994). The attention system integrates perceptions (motion detection, color saliency, and face popouts) with habituation effects and influences from the robot’s motivational and behavioral state to create a context-dependent attention activation map. This activation map is used to direct eye movements and to satiate the drives of the motivational system.",
"title": ""
}
] |
[
{
"docid": "7b4400c6ef5801e60a6f821810538381",
"text": "A CMOS self-biased fully differential amplifier is presented. Due to the self-biasing structure of the amplifier and its associated negative feedback, the amplifier is compensated to achieve low sensitivity to process, supply voltage and temperature (PVT) variations. The output common-mode voltage of the amplifier is adjusted through the same biasing voltages provided by the common-mode feedback (CMFB) circuit. The amplifier core is based on a simple structure that uses two CMOS inverters to amplify the input differential signal. Despite its simple structure, the proposed amplifier is attractive to a wide range of applications, specially those requiring low power and small silicon area. As two examples, a sample-and-hold circuit and a second order multi-bit sigma-delta modulator either employing the proposed amplifier are presented. Besides these application examples, a set of amplifier performance parameters is given.",
"title": ""
},
{
"docid": "356a72153f61311546f6ff874ee79bb4",
"text": "In this paper, an object cosegmentation method based on shape conformability is proposed. Different from the previous object cosegmentation methods which are based on the region feature similarity of the common objects in image set, our proposed SaCoseg cosegmentation algorithm focuses on the shape consistency of the foreground objects in image set. In the proposed method, given an image set where the implied foreground objects may be varied in appearance but share similar shape structures, the implied common shape pattern in the image set can be automatically mined and regarded as the shape prior of those unsatisfactorily segmented images. The SaCoseg algorithm mainly consists of four steps: 1) the initial Grabcut segmentation; 2) the shape mapping by coherent point drift registration; 3) the common shape pattern discovery by affinity propagation clustering; and 4) the refinement by Grabcut with common shape constraint. To testify our proposed algorithm and establish a benchmark for future work, we built the CoShape data set to evaluate the shape-based cosegmentation. The experiments on CoShape data set and the comparison with some related cosegmentation algorithms demonstrate the good performance of the proposed SaCoseg algorithm.",
"title": ""
},
{
"docid": "4fb0803aa12b7dfb2b3661822ea67c2b",
"text": "In this paper we present a broad overview of the last 40 years of research on cognitive architectures. Although the number of existing architectures is nearing several hundred, most of the existing surveys do not reflect this growth and focus on a handful of well-established architectures. Thus, in this survey we wanted to shift the focus towards a more inclusive and high-level overview of the research on cognitive architectures. Our final set of 85 architectures includes 49 that are still actively developed, and borrow from a diverse set of disciplines, spanning areas from psychoanalysis to neuroscience. To keep the length of this paper within reasonable limits we discuss only the core cognitive abilities, such as perception, attention mechanisms, action selection, memory, learning and reasoning. In order to assess the breadth of practical applications of cognitive architectures we gathered information on over 900 practical projects implemented using the cognitive architectures in our list. We use various visualization techniques to highlight overall trends in the development of the field. In addition to summarizing the current state-of-the-art in the cognitive architecture research, this survey describes a variety of methods and ideas that have been tried and their relative success in modeling human cognitive abilities, as well as which aspects of cognitive behavior need more research with respect to their mechanistic counterparts and thus can further inform how cognitive science might progress.",
"title": ""
},
{
"docid": "17de31cccc12b401a949ff5660d4f4c6",
"text": "In this paper we propose a system that automates the whole process of taking attendance and maintaining its records in an academic institute. Managing people is a difficult task for most of the organizations, and maintaining the attendance record is an important factor in people management. When considering academic institutes, taking the attendance of students on daily basis and maintaining the records is a major task. Manually taking the attendance and maintaining it for a long time adds to the difficulty of this task as well as wastes a lot of time. For this reason an efficient system is designed. This system takes attendance electronically with the help of a fingerprint sensor and all the records are saved on a computer server. Fingerprint sensors and LCD screens are placed at the entrance of each room. In order to mark the attendance, student has to place his/her finger on the fingerprint sensor. On identification student’s attendance record is updated in the database and he/she is notified through LCD screen. No need of all the stationary material and special personal for keeping the records. Furthermore an automated system replaces the manual system.",
"title": ""
},
{
"docid": "98ead4f3cee84b4db8be568ec125c786",
"text": "This paper assesses the potential impact of FinTech on the finance industry, focusing on financial stability and access to services. I document first that financial services remain surprisingly expensive, which explains the emergence of new entrants. I then argue that the current regulatory approach is subject to significant political economy and coordination costs, and therefore unlikely to deliver much structural change. FinTech, on the other hand, can bring deep changes but is likely to create significant regulatory challenges.",
"title": ""
},
{
"docid": "2b8b06965cca346f3714cbaa1704ab83",
"text": "Visual question answering (Visual QA) has attracted a lot of attention lately, seen essentially as a form of (visual) Turing test that artificial intelligence should strive to achieve. In this paper, we study a crucial component of this task: how can we design good datasets for the task? We focus on the design of multiplechoice based datasets where the learner has to select the right answer from a set of candidate ones including the target (i.e. the correct one) and the decoys (i.e. the incorrect ones). Through careful analysis of the results attained by state-of-the-art learning models and human annotators on existing datasets, we show that the design of the decoy answers has a significant impact on how and what the learning models learn from the datasets. In particular, the resulting learner can ignore the visual information, the question, or both while still doing well on the task. Inspired by this, we propose automatic procedures to remedy such design deficiencies. We apply the procedures to re-construct decoy answers for two popular Visual QA datasets as well as to create a new Visual QA dataset from the Visual Genome project, resulting in the largest dataset for this task. Extensive empirical studies show that the design deficiencies have been alleviated in the remedied datasets and the performance on them is likely a more faithful indicator of the difference among learning models. The datasets are released and publicly available via http://www.teds. usc.edu/website_vqa/.",
"title": ""
},
{
"docid": "4bbe3b4512ff5bf18aa17d54b6645049",
"text": "The aim of this study is to find a minimal size of text samples for authorship attribution that would provide stable results independent of random noise. A few controlled tests for different sample lengths, languages and genres are discussed and compared. Although I focus on Delta methodology, the results are valid for many other multidimensional methods relying on word frequencies and \"nearest neighbor\" classifications.",
"title": ""
},
{
"docid": "f11a80fc33b3c0a5b6aa4893e32ee045",
"text": "Assessment plays important role in learning process in higher education institutions. However, poorly designed exams can fail to achieve the intended learning outcomes of a specific course, which can also have a bad impact on the programs and educational institutes. One of the possible solutions is to standardize the exams based on educational taxonomies. However, this is not an easy process for educators. With the recent technologies, the assessment approaches have been improved by automatically generating exams based on educational taxonomies. This paper presents a framework that allow educators to map questions to intended learning outcomes based on Bloom’s taxonomy. Furthermore, it elaborates on the principles and requirements for generating exams automatically. It also report on a prototype implementation of an authoring tool for generating exams to evaluate the achievements of intended learning outcomes.",
"title": ""
},
{
"docid": "2cd54d9d7f65d6346db31d67a3529e20",
"text": "This paper proposes a modification in the maximum power point tracking (MPPT) by using model predictive control (MPC). The modification scheme of the MPPT control is based on the perturb and observe algorithm (P&O). This modified control is implemented on the dc-dc multilevel boost converter (MLBC) to increase the response of the controller to extract the maximum power from the photovoltaic (PV) module and to boost a small dc voltage of it. The total system consisting of a PV model, a MLBC and the modified MPPT has been analyzed and then simulated with changing the solar radiation and the temperature. The proposed control scheme is implemented under program MATLAB/SIMULINK and the obtained results are validated with real time simulation using dSPACE 1103 ControlDesk. The real time simulation results have been provided for principle validation.",
"title": ""
},
{
"docid": "c0d63538c27c83c8027c2feeeb34eb05",
"text": "Blockchain is an emerging technology that is perceived as groundbreaking. However, blockchain presents incumbent organizations with significant challenges. How should they respond to the advent of this innovative technology, and how can they build the capabilities that are necessary to successfully engage with blockchain? In this case study, we analyze how an incumbent bank deals with the radical innovation of blockchain. We find that blockchain as an innovation is unique, because its transaction cost-lowering nature requires cooperation not only on an intra-organizational, but also on an inter-organizational level to fully leverage the technology. We develop a framework illustrating how the process of discovering, incubating, and accelerating with blockchain can look like. Our research is one of the first case studies in the area; shedding light on the organizational challenges of incumbents as they engage with blockchain. The paper provides a blueprint for business executives in their endeavor of embracing blockchain technology.",
"title": ""
},
{
"docid": "4bac21b34aad0ec96d0548fc6451335b",
"text": "Models of human motor behavior are well known as an aid in the design of user interfaces (UIs). Most current models apply primarily to desktop interaction, but with the development of non-desktop UIs, new types of motor behaviors need to be modeled. Distal pointing—pointing directly at a target that is remotely situated with respect to the input device—is such a motor behavior. A model of distal pointing would be particularly useful in the comparison of different interaction techniques, because the performance of such techniques is highly dependent on user strategy, making controlled studies difficult to perform. Inspired by Fitts’ law, we studied four possible models and concluded that movement time for a distal pointing task is best described as a function of the angular amplitude of movement and the angular size of the target. Contrary to Fitts’ law, our model shows that the angular size has a much larger effect on movement time than the angular amplitude and that the growth in the difficulty of the tasks is quadratic, rather than linear. We estimated the model’s parameters experimentally with a correlation coefficient of 96%. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3f0a9507d6538827faa5a42e87dc2115",
"text": "Traditional machine learning requires data to be described by attributes prior to applying a learning algorithm. In text classification tasks, many feature engineering methodologies have been proposed to extract meaningful features, however, no best practice approach has emerged. Traditional methods of feature engineering have inherent limitations due to loss of information and the limits of human design. An alternative is to use deep learning to automatically learn features from raw text data. One promising deep learning approach is to use convolutional neural networks. These networks can learn abstract text concepts from character representations and be trained to perform discriminate tasks, such as classification. In this paper, we propose a new approach to encoding text for use with convolutional neural networks that greatly reduces memory requirements and training time for learning from character-level text representations. Additionally, this approach scales well with alphabet size allowing us to preserve more information from the original text, potentially enhancing classification performance. By training tweet sentiment classifiers, we demonstrate that our approach uses less computational resources, allows faster training for networks and achieves similar, or better performance compared to the previous method of character encoding.",
"title": ""
},
{
"docid": "c8e321ac8b32643ac9cbe151bb9e5f8f",
"text": "The most expressive way humans display emotions is through facial expressions. In this work we report on several advances we have made in building a system for classification of facial expressions from continuous video input. We introduce and test different Bayesian network classifiers for classifying expressions from video, focusing on changes in distribution assumptions, and feature dependency structures. In particular we use Naive–Bayes classifiers and change the distribution from Gaussian to Cauchy, and use Gaussian Tree-Augmented Naive Bayes (TAN) classifiers to learn the dependencies among different facial motion features. We also introduce a facial expression recognition from live video input using temporal cues. We exploit the existing methods and propose a new architecture of hidden Markov models (HMMs) for automatically segmenting and recognizing human facial expression from video sequences. The architecture performs both segmentation and recognition of the facial expressions automatically using a multi-level architecture composed of an HMM layer and a Markov model layer. We explore both person-dependent and person-independent recognition of expressions and compare the different methods. 2003 Elsevier Inc. All rights reserved. * Corresponding author. E-mail addresses: iracohen@ifp.uiuc.edu (I. Cohen), nicu@science.uva.nl (N. Sebe), ashutosh@ us.ibm.com (A. Garg), lawrence.chen@kodak.com (L. Chen), huang@ifp.uiuc.edu (T.S. Huang). 1077-3142/$ see front matter 2003 Elsevier Inc. All rights reserved. doi:10.1016/S1077-3142(03)00081-X I. Cohen et al. / Computer Vision and Image Understanding 91 (2003) 160–187 161",
"title": ""
},
{
"docid": "e343f97f18c9cd2b52ca8abdf40051df",
"text": "Due to the increased demand of animal protein in developing countries, intensive farming is instigated, which results in antibiotic residues in animal-derived products, and eventually, antibiotic resistance. Antibiotic resistance is of great public health concern because the antibiotic-resistant bacteria associated with the animals may be pathogenic to humans, easily transmitted to humans via food chains, and widely disseminated in the environment via animal wastes. These may cause complicated, untreatable, and prolonged infections in humans, leading to higher healthcare cost and sometimes death. In the said countries, antibiotic resistance is so complex and difficult, due to irrational use of antibiotics both in the clinical and agriculture settings, low socioeconomic status, poor sanitation and hygienic status, as well as that zoonotic bacterial pathogens are not regularly cultured, and their resistance to commonly used antibiotics are scarcely investigated (poor surveillance systems). The challenges that follow are of local, national, regional, and international dimensions, as there are no geographic boundaries to impede the spread of antibiotic resistance. In addition, the information assembled in this study through a thorough review of published findings, emphasized the presence of antibiotics in animal-derived products and the phenomenon of multidrug resistance in environmental samples. This therefore calls for strengthening of regulations that direct antibiotic manufacture, distribution, dispensing, and prescription, hence fostering antibiotic stewardship. Joint collaboration across the world with international bodies is needed to assist the developing countries to implement good surveillance of antibiotic use and antibiotic resistance.",
"title": ""
},
{
"docid": "3472ffbc39fce27a2878c6564a99e1fe",
"text": "This paper tests for evidence of contagion between the financial markets of Thailand, Malaysia, Indonesia, Korea, and the Philippines. We find that correlations in currency and sovereign spreads increase significantly during the crisis period, whereas the equity market correlations offer mixed evidence. We construct a set of dummy variables using daily news to capture the impact of own-country and cross-border news on the markets. We show that after controlling for owncountry news and other fundamentals, there is evidence of cross-border contagion in the currency and equity markets. [JEL F30, F40, G15]",
"title": ""
},
{
"docid": "d67cd936448ea71c8f4f54edbc04c292",
"text": "Matching elements of two data schemas or two data instances plays a key role in data warehousing, e-business, or even biochemical applications. In this paper we present a matching algorithm based on a fixpoint computation that is usable across different scenarios. The algorithm takes two graphs (schemas, catalogs, or other data structures) as input, and produces as output a mapping between corresponding nodes of the graphs. Depending on the matching goal, a subset of the mapping is chosen using filters. After our algorithm runs, we expect a human to check and if necessary adjust the results. As a matter of fact, we evaluate the ‘accuracy’ of the algorithm by counting the number of needed adjustments. We conducted a user study, in which our accuracy metric was used to estimate the labor savings that the users could obtain by utilizing our algorithm to obtain an initial matching. Finally, we illustrate how our matching algorithm is deployed as one of several high-level operators in an implemented testbed for managing information models and mappings.",
"title": ""
},
{
"docid": "b23230f0386f185b7d5eb191034d58ec",
"text": "Risk management in global information technology (IT) projects is becoming a critical area of concern for practitioners. Global IT projects usually span multiple locations involving various culturally diverse groups that use multiple standards and technologies. These multiplicities cause dynamic risks through interactions among internal (i.e., people, process, and technology) and external elements (i.e., business and natural environments) of global IT projects. This study proposes an agile risk-management framework for global IT project settings. By analyzing the dynamic interactions among multiplicities (e.g., multi-locations, multi-cultures, multi-groups, and multi-interests) embedded in the project elements, we identify the dynamic risks threatening the success of a global IT project. Adopting the principles of service-oriented architecture (SOA), we further propose a set of agile management strategies for mitigating the dynamic risks. The mitigation strategies are conceptually validated. The proposed framework will help practitioners understand the potential risks in their global IT projects and resolve their complex situations when certain types of dynamic risks arise.",
"title": ""
},
{
"docid": "2f994630f8fc709381dcc760d830cce7",
"text": "7 Addressing the issue of SVMs parameters optimization, this study proposes an efficient 8 memetic algorithm based on Particle Swarm Optimization algorithm (PSO) and Pattern Search 9 (PS). In the proposed memetic algorithm, PSO is responsible for exploration of the search space 10 and the detection of the potential regions with optimum solutions, while pattern search (PS) is 11 used to produce an effective exploitation on the potential regions obtained by PSO. Moreover, a 12 novel probabilistic selection strategy is proposed to select the appropriate individuals among the 13 current population to undergo local refinement, keeping a well balance between exploration and 14 exploitation. Experimental results confirm that the local refinement with PS and our proposed 15 selection strategy are effective, and finally demonstrate effectiveness and robustness of the 16 proposed PSO-PS based MA for SVMs parameters optimization. 17 18",
"title": ""
},
{
"docid": "d72fac3211873790abdb9fb4cbd56cf8",
"text": "With the advent of new technology paradigm, SMAC (Social media, Mobile, Analytics and Cloud) the information network generates an infinite ocean of data spreading faster and larger than earlier. A high quality information extracted from this massive volume of data, named as big data, urges the development of an efficient and effective decision support system and powerful strategic tools in the area of government intelligence. This pool of information can be explored for the benefit of an organization/system and to better understand its stakeholder needs by collecting, mining opinions about every point or subject of interest. Digitally intelligent and smart governance has been identified as a dynamic field with new studies being reported at various research avenues. The need to review, analyze and evaluate research studies across literature is thus fostered motivating us to identify existing trends, research gaps and potential directions of future work within this domain. This paper intends to provide a systematic literature review within the promising area of opinion mining and its application to the area of government",
"title": ""
},
{
"docid": "aafda1cab832f1fe92ce406676e3760f",
"text": "In this paper, we present MADAMIRA, a system for morphological analysis and disambiguation of Arabic that combines some of the best aspects of two previously commonly used systems for Arabic processing, MADA (Habash and Rambow, 2005; Habash et al., 2009; Habash et al., 2013) and AMIRA (Diab et al., 2007). MADAMIRA improves upon the two systems with a more streamlined Java implementation that is more robust, portable, extensible, and is faster than its ancestors by more than an order of magnitude. We also discuss an online demo (see http://nlp.ldeo.columbia.edu/madamira/) that highlights these aspects.",
"title": ""
}
] |
scidocsrr
|
2da136409224c2ce789d77a012334dca
|
Argument Mining from Speech: Detecting Claims in Political Debates
|
[
{
"docid": "b69686c780d585d6b53fe7ec37e22b80",
"text": "In written dialog, discourse participants need to justify claims they make, to convince the reader the claim is true and/or relevant to the discourse. This paper presents a new task (with an associated corpus), namely detecting such justifications. We investigate the nature of such justifications, and observe that the justifications themselves often contain discourse structure. We therefore develop a method to detect the existence of certain types of discourse relations, which helps us classify whether a segment is a justification or not. Our task is novel, and our work is novel in that it uses a large set of connectives (which we call indicators), and in that it uses a large set of discourse relations, without choosing among them.",
"title": ""
},
{
"docid": "236bef55b95e62e3ad3d5b1de8449abb",
"text": "In this paper, we argue that an annotation scheme for argumentation mining is a function of the task requirements and the corpus properties. There is no one-sizefits-all argumentation theory to be applied to realistic data on the Web. In two annotation studies, we experiment with 80 German newspaper editorials from the Web and about one thousand English documents from forums, comments, and blogs. Our example topics are taken from the educational domain. To formalize the problem of annotating arguments, in the first case, we apply a Claim-Premise scheme, and in the second case, we modify Toulmin’s scheme. We find that the choice of the argument components to be annotated strongly depends on the register, the length of the document, and inherently on the literary devices and structures used for expressing argumentation. We hope that these findings will facilitate the creation of reliably annotated argumentation corpora for a wide range of tasks and corpus types and will help to bridge the gap between argumentation theories and actual application needs.",
"title": ""
},
{
"docid": "c3ce19739dd1220288dfecd18d1be64a",
"text": "While discussing a concrete controversial topic, most humans will find it challenging to swiftly raise a diverse set of convincing and relevant claims that should set the basis of their arguments. Here, we formally define the challenging task of automatic claim detection in a given context and discuss its associated unique difficulties. Further, we outline a preliminary solution to this task, and assess its performance over annotated real world data, collected specifically for that purpose over hundreds of Wikipedia articles. We report promising results of a supervised learning approach, which is based on a cascade of classifiers designed to properly handle the skewed data which is inherent to the defined task. These results demonstrate the viability of the introduced task.",
"title": ""
}
] |
[
{
"docid": "ab19cf426f56ee1c3bf47418f3815b9e",
"text": "The paper ‘Cognitive load predicts point-of-care ultrasound simulator performance’ by Aldekhyl, Cavalcanti, and Naismith, in this issue of Perspectives on Medical Education [1], is an important paper that adds to work on cognitive load theory and medical education [2–4]. The implications of the findings of this paper extend substantially beyond the confines of medical practice that is the focus of the work. In this commentary, I will discuss issues associated with obtaining measures of cognitive load independently of content task performance during instruction. I will begin with a brief history of attempts to provide independent measures of cognitive load. In the 1980s, cognitive load was used as a theoretical construct to explain experimental results with very little attempt to directly measure load [5]. The theory was used to predict differential learning using particular instructional designs. Randomized controlled trials were run to test the predictions and if the hypothesized results were obtained they were attributed to cognitive load factors. The distinction between extraneous and intrinsic cognitive load had not been specified but the results were due to what was called and continues to be called extraneous cognitive load. Cognitive load was an assumed rather than a measured construct. At that time, the only attempt to provide an independent indicator of load was to use computational models [6] with quantitative differences between models used as cognitive load proxies. The first rating scale measure of cognitive load was introduced in the early 1990s by Fred Paas [7]. The Paas scale continues to be the most popular measure of cognitive load and was used by Aldekhyl et al. to validate alternative measures of load. It is very easy to use and requires no more than a minute or so of a participant’s time. Used primarily to measure extraneous cognitive load it has repeatedly indicated that instructional designs hypothesized to decrease",
"title": ""
},
{
"docid": "c3a7d3fa13bed857795c4cce2e992b87",
"text": "Healthcare consumers, researchers, patients and policy makers increasingly use systematic reviews (SRs) to aid their decision-making process. However, the conduct of SRs can be a time-consuming and resource-intensive task. Often, clinical practice guideline developers or other decision-makers need to make informed decisions in a timely fashion (e.g. outbreaks of infection, hospital-based health technology assessments). Possible approaches to address the issue of timeliness in the production of SRs are to (a) implement process parallelisation, (b) adapt and apply innovative technologies, and/or (c) modify SR processes (e.g. study eligibility criteria, search sources, data extraction or quality assessment). Highly parallelised systematic reviewing requires substantial resources to support a team of experienced information specialists, reviewers and methodologists working alongside with clinical content experts to minimise the time for completing individual review steps while maximising the parallel progression of multiple steps. Effective coordination and management within the team and across external stakeholders are essential elements of this process. Emerging innovative technologies have a great potential for reducing workload and improving efficiency of SR production. The most promising areas of application would be to allow automation of specific SR tasks, in particular if these tasks are time consuming and resource intensive (e.g. language translation, study selection, data extraction). Modification of SR processes involves restricting, truncating and/or bypassing one or more SR steps, which may risk introducing bias to the review findings. Although the growing experiences in producing various types of rapid reviews (RR) and the accumulation of empirical studies exploring potential bias associated with specific SR tasks have contributed to the methodological development for expediting SR production, there is still a dearth of research examining the actual impact of methodological modifications and comparing the findings between RRs and SRs. This evidence would help to inform as to which SR tasks can be accelerated or truncated and to what degree, while maintaining the validity of review findings. Timely delivered SRs can be of value in informing healthcare decisions and recommendations, especially when there is practical urgency and there is no other relevant synthesised evidence.",
"title": ""
},
{
"docid": "961cc1dc7063706f8f66fc136da41661",
"text": "From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible \"statistical\" properties that are the object of learning. Much less attention has been given to defining what \"learning\" is in the context of \"statistical learning.\" One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures to our theory of SL.",
"title": ""
},
{
"docid": "d36c3839127ecee4f22e846a91b32d6c",
"text": "Michelangelo Buonarroti (1475-1564) was a master anatomist as well as an artistic genius. He dissected numerous cadavers and developed a profound understanding of human anatomy. Among his best-known artworks are the frescoes painted on the ceiling of the Sistine Chapel (1508-1512), in Rome. Currently, there is some debate over whether the frescoes merely represent the teachings of the Catholic Church at the time or if there are other meanings hidden in the images. In addition, there is speculation regarding the image of the brain embedded in the fresco known as \"The Creation of Adam,\" which contains anatomic features of the midsagittal and lateral surfaces of the brain. Within this context, we report our use of Image Pro Plus Software 6.0 to demonstrate mathematical evidence that Michelangelo painted \"The Creation of Adam\" using the Divine Proportion/Golden Ratio (GR) (1.6). The GR is classically associated with greater structural efficiency and is found in biological structures and works of art by renowned artists. Thus, according to the evidence shown in this article, we can suppose that the beauty and harmony recognized in all Michelangelo's works may not be based solely on his knowledge of human anatomical proportions, but that the artist also probably knew anatomical structures that conform to the GR display greater structural efficiency. It is hoped that this report will at least stimulate further scientific and scholarly contributions to this fascinating topic, as the study of these works of art is essential for the knowledge of the history of Anatomy.",
"title": ""
},
{
"docid": "cdced5f45620aa620cde9a937692a823",
"text": "Due to a rapid advancement in the electronic commerce technology, the use of credit cards has dramatically increased. As credit card becomes the most popular mode of payment for both online as well as regular purchase, cases of fraud associated with it are also rising. In this paper, we model the sequence of operations in credit card transaction processing using a hidden Markov model (HMM) and show how it can be used for the detection of frauds. An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected. We present detailed experimental results to show the effectiveness of our approach and compare it with other techniques available in the literature.",
"title": ""
},
{
"docid": "14b6b544144d6c14cb283fd0ac8308d8",
"text": "Disrupted daily or circadian rhythms of lung function and inflammatory responses are common features of chronic airway diseases. At the molecular level these circadian rhythms depend on the activity of an autoregulatory feedback loop oscillator of clock gene transcription factors, including the BMAL1:CLOCK activator complex and the repressors PERIOD and CRYPTOCHROME. The key nuclear receptors and transcription factors REV-ERBα and RORα regulate Bmal1 expression and provide stability to the oscillator. Circadian clock dysfunction is implicated in both immune and inflammatory responses to environmental, inflammatory, and infectious agents. Molecular clock function is altered by exposomes, tobacco smoke, lipopolysaccharide, hyperoxia, allergens, bleomycin, as well as bacterial and viral infections. The deacetylase Sirtuin 1 (SIRT1) regulates the timing of the clock through acetylation of BMAL1 and PER2 and controls the clock-dependent functions, which can also be affected by environmental stressors. Environmental agents and redox modulation may alter the levels of REV-ERBα and RORα in lung tissue in association with a heightened DNA damage response, cellular senescence, and inflammation. A reciprocal relationship exists between the molecular clock and immune/inflammatory responses in the lungs. Molecular clock function in lung cells may be used as a biomarker of disease severity and exacerbations or for assessing the efficacy of chronotherapy for disease management. Here, we provide a comprehensive overview of clock-controlled cellular and molecular functions in the lungs and highlight the repercussions of clock disruption on the pathophysiology of chronic airway diseases and their exacerbations. Furthermore, we highlight the potential for the molecular clock as a novel chronopharmacological target for the management of lung pathophysiology.",
"title": ""
},
{
"docid": "c0559cebfad123a67777868990d40c7e",
"text": "One of the attractive methods for providing natural human-computer interaction is the use of the hand as an input device rather than the cumbersome devices such as keyboards and mice, which need the user to be located in a specific location to use these devices. Since human hand is an articulated object, it is an open issue to discuss. The most important thing in hand gesture recognition system is the input features, and the selection of good features representation. This paper presents a review study on the hand postures and gesture recognition methods, which is considered to be a challenging problem in the human-computer interaction context and promising as well. Many applications and techniques were discussed here with the explanation of system recognition framework and its main phases.",
"title": ""
},
{
"docid": "bf8f46e4c85f7e45879cee4282444f78",
"text": "Influence of culture conditions such as light, temperature and C/N ratio was studied on growth of Haematococcus pluvialis and astaxanthin production. Light had significant effect on astaxanthin production and it varied with its intensity and direction of illumination and effective culture ratio (ECR, volume of culture medium/volume of flask). A 6-fold increase in astaxanthin production (37 mg/L) was achieved with 5.1468·107 erg·m−2·s−1 light intensity (high light, HL) at effective culture ratio of 0.13 compared to that at 0.52 ECR, while the difference in the astaxanthin production was less than 2 — fold between the effective culture ratios at 1.6175·107 erg·m−2·s−1 light intensity (low light, LL). Multidirectional (three-directional) light illumination considerably enhanced the astaxanthin production (4-fold) compared to unidirectional illumination. Cell count was high at low temperature (25 °C) while astaxanthin content was high at 35 °C in both autotrophic and heterotrophic media. In a heterotrophic medium at low C/N ratio H. pluvialis growth was higher with prolonged vegetative phase, while high C/N ratio favoured early encystment and higher astaxanthin formation.",
"title": ""
},
{
"docid": "ad2029825dd61a7f19815db1a59e4232",
"text": "An EMG signal shows almost one-to-one relationship with the corresponding muscle. Therefore, each joint motion can be estimated relatively easily based on the EMG signals to control wearable robots. However, necessary EMG signals are not always able to be measured with every user. On the other hand, an EEG signal is one of the strongest candidates for the additional input signals to control wearable robots. Since the EEG signals are available with almost all people, an EEG based method can be applicable to many users. However, it is more difficult to estimate the user's motion intention based on the EEG signals compared with the EMG signals. In this paper, a user's motion estimation method is proposed to control the wearable robots based on the user's motion intention. In the proposed method, the motion intention of the user is estimated based on the user's EMG and EEG signals. The EMG signals are used as main input signals because the EMG signals have higher correlation with the motion. Furthermore, the EEG signals are used to estimate the part of the motion which is not able to be estimated based on EMG signals because of the muscle unavailability.",
"title": ""
},
{
"docid": "8ed6c9e82c777aa092a78959391a37b2",
"text": "The trie data structure has many properties which make it especially attractive for representing large files of data. These properties include fast retrieval time, quick unsuccessful search determination, and finding the longest match to a given identifier. The main drawback is the space requirement. In this paper the concept of trie compaction is formalized. An exact algorithm for optimal trie compaction and three algorithms for approximate trie compaction are given, and an analysis of the three algorithms is done. The analysis indicate that for actual tries, reductions of around 70 percent in the space required by the uncompacted trie can be expected. The quality of the compaction is shown to be insensitive to the number of nodes, while a more relevant parameter is the alphabet size of the key.",
"title": ""
},
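To make the data structure in the trie-compaction abstract above concrete, here is a minimal Python sketch of an ordinary (uncompacted) trie with the operations the passage mentions: lookup, quick unsuccessful-search detection, and longest-prefix matching, plus a node count, which is the quantity compaction aims to reduce. This is an illustration only, not the paper's exact or approximate compaction algorithms, and all names are hypothetical.

```python
# Minimal uncompacted trie sketch (illustrative only; not the paper's
# compaction algorithms). The node count is what compaction would reduce.
class TrieNode:
    def __init__(self):
        self.children = {}     # maps a character to a child TrieNode
        self.terminal = False  # True if a stored key ends here

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key: str) -> None:
        node = self.root
        for ch in key:
            node = node.children.setdefault(ch, TrieNode())
        node.terminal = True

    def contains(self, key: str) -> bool:
        node = self.root
        for ch in key:
            if ch not in node.children:
                return False   # quick unsuccessful-search determination
            node = node.children[ch]
        return node.terminal

    def longest_prefix(self, query: str) -> str:
        """Longest stored key that is a prefix of `query`."""
        node, best = self.root, ""
        for i, ch in enumerate(query):
            if ch not in node.children:
                break
            node = node.children[ch]
            if node.terminal:
                best = query[: i + 1]
        return best

    def node_count(self) -> int:
        stack, count = [self.root], 0
        while stack:
            node = stack.pop()
            count += 1
            stack.extend(node.children.values())
        return count

trie = Trie()
for word in ["car", "card", "care", "cat"]:
    trie.insert(word)
print(trie.contains("card"), trie.longest_prefix("careful"), trie.node_count())
```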
{
"docid": "0506a7f5dddf874487c90025dff0bc7d",
"text": "This paper presents a low-power decision-feedback equalizer (DFE) receiver front-end and a two-step minimum bit-error-rate (BER) adaptation algorithm. A high energy efficiency of 0.46 mW/Gbps is made possible by the combination of a direct-feedback finite-impulse-response (FIR) DFE, an infinite-impulse-response (IIR) DFE, and a clock-and-data recovery (CDR) circuit with adjustable timing offsets. Based on this architecture, the power-hungry stages used in prior DFE receivers such as the continuous-time linear equalizer (CTLE), the current-mode summing circuit for a multitap DFE, and the fast selection logic for a loop-unrolling DFE can all be removed. A two-step adaptation algorithm that finds the equalizer coefficients minimizing the BER is described. First, an extra data sampler with adjustable voltage and timing offsets measures the single-bit response (SBR) of the channel and coarsely tunes the initial coefficient values in the foreground. Next, the same circuit measures the eye-opening and bit-error rates and fine tunes the coefficients in background using a stochastic hill-climbing algorithm. A prototype DFE receiver fabricated in a 65-nm LP/RF CMOS dissipates 2.3 mW and demonstrates measured eye-opening values of 174 mV pp and 0.66 UIpp while operating at 5 Gb/s with a -15-dB loss channel.",
"title": ""
},
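The DFE abstract above mentions a background stochastic hill-climbing step that fine-tunes equalizer coefficients against measured bit-error rates. As a hedged illustration of that generic idea only (not the paper's mixed-signal implementation), the Python sketch below perturbs one tap at a time and keeps a move only if a black-box BER measurement does not get worse; `measure_ber`, the stand-in BER model, and all constants are assumptions.

```python
import random

def stochastic_hill_climb(taps, measure_ber, step=0.01, iters=200, seed=0):
    """Perturb one coefficient at a time; keep the change only if the
    measured bit-error rate does not get worse (generic sketch)."""
    rng = random.Random(seed)
    best = measure_ber(taps)
    for _ in range(iters):
        i = rng.randrange(len(taps))
        delta = step if rng.random() < 0.5 else -step
        taps[i] += delta
        ber = measure_ber(taps)
        if ber <= best:
            best = ber            # accept the move
        else:
            taps[i] -= delta      # revert the move
    return taps, best

# Stand-in BER model: quadratic bowl around hypothetical optimum taps,
# with a little measurement noise. Purely illustrative.
OPT = [0.35, -0.12, 0.05]

def noisy_ber(taps):
    return sum((a - b) ** 2 for a, b in zip(taps, OPT)) + 1e-6 * random.random()

taps, ber = stochastic_hill_climb([0.0, 0.0, 0.0], noisy_ber)
print([round(t, 3) for t in taps], round(ber, 6))
```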
{
"docid": "9a8b397bb95b9123a8d41342a850a456",
"text": "We present a novel task: the chronological classification of Hafez’s poems (ghazals). We compiled a bilingual corpus in digital form, with consistent idiosyncratic properties. We have used Hooman’s labeled ghazals in order to train automatic classifiers to classify the remaining ghazals. Our classification framework uses a Support Vector Machine (SVM) classifier with similarity features based on Latent Dirichlet Allocation (LDA). In our analysis of the results we use the LDA topics’ main terms that are passed on to a Principal Component Analysis (PCA) module.",
"title": ""
},
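The Hafez classification passage above combines LDA-based topic features with an SVM classifier. A minimal sketch of that kind of pipeline, assuming scikit-learn is available and using toy stand-in ghazal texts and period labels (none of which come from the paper), might look like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-ins for labeled ghazals; the period labels are hypothetical.
docs = ["wine tavern beloved night", "garden rose nightingale dawn",
        "mystic path truth wine", "sorrow exile candle night"]
labels = ["early", "early", "late", "late"]

# Bag-of-words -> LDA topic proportions -> SVM trained on the topic features.
pipeline = make_pipeline(
    CountVectorizer(),
    LatentDirichletAllocation(n_components=2, random_state=0),
    SVC(kernel="linear"),
)
pipeline.fit(docs, labels)
print(pipeline.predict(["rose garden dawn"]))
```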
{
"docid": "d3444b0cee83da2a94f4782c79e0ce48",
"text": "Predicting student academic performance plays an important role in academics. Classifying st udents using conventional techniques cannot give the desired lev l of accuracy, while doing it with the use of soft computing techniques may prove to be beneficial. A student can be classi fied into one of the available categories based on his behavioral and qualitative features. The paper presents a Neural N etwork model fused with Fuzzy Logic to model academi c profile of students. The model mimics teacher’s ability to deal with imprecise information representing student’s characteristics in linguistic form. The suggested model is developed in MATLAB which takes into consideration various features of students under study. The input to the model consists of dat of students studying in any faculty. A combination of Fuzzy Logic ARTMAP Neural Network results into a model useful for management of educational institutes for improving the quality of education. A good prediction of student’s success ione way to be in the competition in education sys tem. The use of Soft Computing methodology is justified for its real-time applicability in education system.",
"title": ""
},
{
"docid": "34511b4ab4a5e3b8cfa914cb8b943dac",
"text": "Hybrid automata model systems with both digital and analog components, such az embedded control programs. Many verification tasks for such programs can be expressed as reachability problems for hybrid automata. By improving on previous decidability and undecidability results, we identify the precise boundary between decidability and undecidability of the reachability problem for hybrid automata. On the positive side, we give an (optimal) PSPACE reachability algorithm for the case of initialized rectangular automata, where all analog variables follow trajectories within piecewise-linear envelopes and are reinitialized whenever the envelope changes. Our algorithm is based on a translation of an initialized rectangular automaton into a timed automaton that defines the same timed language. The translation has practical significance for verification, because it guarantees the termination of symbolic procedures for the reachability analysis of initialized rectangular automata. On the negative side, we show that several slight generalizations of initialized rectangular automata lead to an undecidable reachability problem. In particular, we prove that the reachability problem is undecidable for timed automata with a single stopwatch.",
"title": ""
},
{
"docid": "61243568f7d06ee7791307df31310ae2",
"text": "As data represent a key asset for today’s organizations, the problem of how to protect this data from theft and misuse is at the forefront of these organizations’ minds. Even though today several data security techniques are available to protect data and computing infrastructures, many such techniques—such as firewalls and network security tools—are unable to protect data from attacks posed by those working on an organization’s “inside.” These “insiders” usually have authorized access to relevant information systems, making it extremely challenging to block the misuse of information while still allowing them to do their jobs. This book discusses several techniques that can provide effective protection against attacks posed by people working on the inside of an organization. Chapter 1 introduces the notion of insider threat and reports some data about data breaches due to insider threats. Chapter 2 covers authentication and access control techniques, and Chapter 3 shows how these general security techniques can be extended and used in the context of protection from insider threats. Chapter 4 addresses anomaly detection techniques that are used to determine anomalies in data accesses by insiders. These anomalies are often indicative of potential insider data attacks and therefore play an important role in protection from these attacks. Security information and event management (SIEM) tools and fine-grained auditing are discussed in Chapter 5. These tools aim at collecting, analyzing, and correlating—in real-time—any information and event that may be relevant for the security of an organization. As such, they can be a key element in finding a solution to such undesirable insider threats. Chapter 6 goes on to provide a survey of techniques for separation-of-duty (SoD). SoD is an important principle that, when implemented in systems and tools, can strengthen data protection from malicious insiders. However, to date, very few approaches have been proposed for implementing SoD in systems. In Chapter 7, a short survey of a commercial product is presented, which provides different techniques for protection from malicious users with system privileges—such as a DBA in database management systems. Finally, in Chapter 8, the book concludes with a few remarks and additional research directions.",
"title": ""
},
{
"docid": "5d11188bf08cc7abc057241837b263bb",
"text": "This paper presents the design and development of a sensorized soft robotic glove based on pneumatic soft-and-rigid hybrid actuators for providing continuous passive motion (CPM) in hand rehabilitation. This hybrid actuator is comprised of bellow-type soft actuator sections connected through block-shaped semi-rigid sections to form robotic digits. The actuators were designed to satisfy the anatomical range of motion for each joint. Each digit was sensorized at the tip with an inertial measurement unit sensor in order to track the rotation of the distal end. A pneumatic feedback control system was developed to control the motion of the soft robotic digit in following desired trajectories. The performance of the soft robotic glove and the associated control system were examined on an able-bodied subject during flexion and extension to show the glove's applicability to CPM applications.",
"title": ""
},
{
"docid": "0ec0a91eeb68f74aa112b6e8960876c3",
"text": "Nowadays soccer is the most practiced sport in the world and moves a multimillionaire market. Therefore, a club that is able to recruit and develop talented players to theirs fullest potential has a lot of advantages and economic benefits. However, in most clubs the players are selected through scouts and coaches recommendation, with predictive success based mostly on intuition than other objective criteria. In addition, it is known that talent development and identification is a multifactorial process involving many characteristics. To this end, this paper proposes the creation of performance indicators based on multivariate statistical analysis. Usual principal components and factor analysis are performed to construct physical, technical and general score and copula modeling is proposed to create the consistency index, which generalizes the Z score method. With these indicators, a web-oriented expert system for analyzing sport data in real time via R software is proposed as a powerful tool for talent identification in soccer. This system, the so called iSports, allows the monitoring and continuous comparison of athletes in a simple and efficient way, taking into account essentials aspects, as well as identifying candidate talented that have above the average performance, that is, who stand out from the studied population of soccer players. In order to promote and popularize the access of information and the statistical science applied in the sports context, the iSports system can be used in any training center of the country, impacting the increase of knowledge of the athletes in training phase at any school, city or region. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
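The iSports passage above builds composite performance indicators from multivariate statistics (principal components, factor analysis, Z scores). The sketch below shows only the simplest version of that idea, a Z-score standardization followed by a first-principal-component "general score", on made-up player measurements; it does not implement the paper's copula-based consistency index, and the sign of a principal component is arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Toy per-player measurements (rows = players); all numbers are made up.
# Columns: sprint time (s), key passes per match, duels won (%).
X = np.array([[12.1, 7.3, 55.0],
              [11.8, 9.1, 61.0],
              [12.6, 6.0, 48.0],
              [11.5, 8.7, 66.0]])

Z = StandardScaler().fit_transform(X)                 # classic Z-score indicator
general_score = PCA(n_components=1).fit_transform(Z).ravel()

# Rank players by the first principal component ("general score").
for rank, idx in enumerate(np.argsort(-general_score), start=1):
    print(rank, f"player_{idx}", round(float(general_score[idx]), 3))
```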
{
"docid": "2445d7f1f7e22ace1b8e84d4766b5ab1",
"text": "Voice assistants are software agents that can interpret human speech and respond via synthesized voices. Apple's Siri, Amazon's Alexa, Microsoft's Cortana, and Google's Assistant are the most popular voice assistants and are embedded in smartphones or dedicated home speakers. Users can ask their assistants questions, control home automation devices and media playback via voice, and manage other basic tasks such as email, to-do lists, and calendars with verbal commands. This column will explore the basic workings and common features of today's voice assistants. It will also discuss some of the privacy and security issues inherent to voice assistants and some potential future uses for these devices. As voice assistants become more widely used, librarians will want to be familiar with their operation and perhaps consider them as a means to deliver library services and materials.",
"title": ""
},
{
"docid": "1bbd89df0a0b1ec6155361aa15eccc73",
"text": "BACKGROUND\nIn dentistry, allergic reactions to Ti implants have not been studied, nor considered by professionals. Placing permanent metal dental implants in allergic patients can provoke type IV or I reactions. Several symptoms have been described, from skin rashes and implant failure, to non-specific immune suppression.\n\n\nOBJECTIVE\nOur objective was to evaluate the presence of titanium allergy by the anamnesis and examination of patients, together with the selective use of cutaneous and epicutaneous testing, in patients treated with or intending to receive dental implants of such material.\n\n\nMATERIAL AND METHODS\nThirty-five subjects out of 1500 implant patients treated and/or examined (2002-2004) were selected for Ti allergy analysis. Sixteen presented allergic symptoms after implant placement or unexplained implant failures [allergy compatible response group (ACRG)], while 19 had a history of other allergies, or were heavily Ti exposed during implant surgeries or had explained implant failures [predisposing factors group (PFG)]. Thirty-five controls were randomly selected (CG) in the Allergy Centre. Cutaneous and epicutaneous tests were carried out.\n\n\nRESULTS\nNine out of the 1500 patients displayed positive (+) reactions to Ti allergy tests (0.6%): eight in the ACRG (50%), one in the PFG (5.3%)(P=0.009) and zero in the control group. Five positives were unexplained implant failures (five out of eight).\n\n\nCONCLUSIONS\nTi allergy can be detected in dental implant patients, even though its estimated prevalence is low (0.6%). A significantly higher risk of positive allergic reaction was found in patients showing post-op allergy compatible response (ACRG), in which cases allergy tests could be recommended.",
"title": ""
},
{
"docid": "737dfbd7637337c294ee70c05c62acb1",
"text": "T he Pirogoff amputation, removal of the forefoot and talus followed by calcaneotibial arthrodesis, produces a lower extremity with a minimum loss of length that is capable of bearing full weight. Although the technique itself is not new, patients who have already undergone amputation of the contralateral leg may benefit particularly from this littleused amputation. Painless weight-bearing is essential for the patient who needs to retain the ability to make indoor transfers independently of helpers or a prosthesis. As the number of patients with peripheral vascular disease continues to increase, this amputation should be in the armamentarium of the treating orthopaedic surgeon. Our primary indication for a Pirogoff amputation is a forefoot lesion that is too extensive for reconstruction or nonoperative treatment because of gangrene or infection, as occurs in patients with diabetes or arteriosclerosis. Other causes, such as trauma, malignancy, osteomyelitis, congenital abnormalities, and rare cases of frostbite, are also considered. To enhance the success rate, we only perform surgery if four criteria are met: (1) the blood supply to the soft tissues and the calcaneal region should support healing, (2) there should be no osteomyelitis of the distal part of the tibia or the calcaneus, (3) the heel pad should be clinically viable and painless, and (4) the patient should be able to walk with two prostheses after rehabilitation. Warren mentioned uncontrolled diabetes mellitus, severe Charcot arthropathy of the foot, and smoking as relative contraindications. There are other amputation options. In developed countries, the most common indication for transtibial amputation is arteriosclerosis (>90%). Although the results of revascularization operations and interventional radiology are promising, amputation remains the only option for 40% of all patients with severe ischemia. Various types of amputation of the lower extremity have been described. The advantages and disadvantages have to be considered and discussed with the patient. For the Syme ankle disarticulation, amputation is performed at the level of the talocrural joint and the plantar fat pad is dissected from the calcaneus and is preserved. Woundhealing and proprioception are good, but patients have an inconvenient leg-length discrepancy and in some cases the heel is not pain-free on weight-bearing. Prosthetic fitting can be difficult because of a bulbous distal end or shift of the plantar fat pad. However, the latter complication can be prevented in most cases by anchoring the heel pad to the distal aspect of",
"title": ""
}
] |
scidocsrr
|
c4ea876b5e385a6a37bcd2274bff570c
|
Reducing drift in visual odometry by inferring sun direction using a Bayesian Convolutional Neural Network
|
[
{
"docid": "b2ba17cb2e2e2ef878bd87f657e3dd5e",
"text": "We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 3 degrees accuracy for large scale outdoor scenes and 0.5m and 5 degrees accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show that the PoseNet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples.",
"title": ""
},
{
"docid": "3b6e3884a9d3b09d221d06f3dea20683",
"text": "Convolutional neural networks (CNNs) work well on large datasets. But labelled data is hard to collect, and in some applications larger amounts of data are not available. The problem then is how to use CNNs with small data – as CNNs overfit quickly. We present an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches. This is by placing a probability distribution over the CNN’s kernels. We approximate our model’s intractable posterior with Bernoulli variational distributions, requiring no additional model parameters. On the theoretical side, we cast dropout network training as approximate inference in Bayesian neural networks. This allows us to implement our model using existing tools in deep learning with no increase in time complexity, while highlighting a negative result in the field. We show a considerable improvement in classification accuracy compared to standard techniques and improve on published state-of-theart results for CIFAR-10.",
"title": ""
},
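The Bayesian CNN passage above casts dropout training as approximate Bayesian inference, which in practice is commonly realized by keeping dropout active at test time and averaging several stochastic forward passes (Monte Carlo dropout). The following PyTorch sketch illustrates that test-time procedure on a toy CNN; the architecture, dropout rate, and number of samples are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallBayesianCNN(nn.Module):
    """Tiny CNN whose dropout stays active at test time (MC dropout sketch)."""
    def __init__(self, n_classes=10, p=0.5):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 8 * 8, n_classes)
        self.p = p

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.dropout(x, self.p, training=True)   # Bernoulli mask kept on
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = F.dropout(x, self.p, training=True)
        x = torch.flatten(x, 1)
        return self.fc(x)

model = SmallBayesianCNN()
x = torch.randn(4, 3, 32, 32)                      # e.g. CIFAR-10-sized input
with torch.no_grad():
    samples = torch.stack([model(x).softmax(dim=1) for _ in range(20)])

mean_prob = samples.mean(dim=0)    # predictive mean over stochastic passes
uncertainty = samples.var(dim=0)   # simple per-class predictive variance
print(mean_prob.shape, uncertainty.shape)
```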
{
"docid": "2353942ce5857a8d7163fce6cb00d509",
"text": "Here, we present a general framework for combining visual odometry and lidar odometry in a fundamental and first principle method. The method shows improvements in performance over the state of the art, particularly in robustness to aggressive motion and temporary lack of visual features. The proposed on-line method starts with visual odometry to estimate the ego-motion and to register point clouds from a scanning lidar at a high frequency but low fidelity. Then, scan matching based lidar odometry refines the motion estimation and point cloud registration simultaneously.We show results with datasets collected in our own experiments as well as using the KITTI odometry benchmark. Our proposed method is ranked #1 on the benchmark in terms of average translation and rotation errors, with a 0.75% of relative position drift. In addition to comparison of the motion estimation accuracy, we evaluate robustness of the method when the sensor suite moves at a high speed and is subject to significant ambient lighting changes.",
"title": ""
}
] |
[
{
"docid": "0868a18f156526ab4f5f1a2648bd3093",
"text": "BACKGROUND\nThe correlation between noninvasive markers with endoscopic activity according to the modified Baron Index in patients with ulcerative colitis (UC) is unknown. We aimed to evaluate the correlation between endoscopic activity and fecal calprotectin (FC), C-reactive protein (CRP), hemoglobin, platelets, blood leukocytes, and the Lichtiger Index (clinical score).\n\n\nMETHODS\nUC patients undergoing complete colonoscopy were prospectively enrolled and scored clinically and endoscopically. Samples from feces and blood were analyzed in UC patients and controls.\n\n\nRESULTS\nWe enrolled 228 UC patients and 52 healthy controls. Endoscopic disease activity correlated best with FC (Spearman's rank correlation coefficient r = 0.821), followed by the Lichtiger Index (r = 0.682), CRP (r = 0.556), platelets (r = 0.488), blood leukocytes (r = 0.401), and hemoglobin (r = -0.388). FC was the only marker that could discriminate between different grades of endoscopic activity (grade 0, 16 [10-30] μg/g; grade 1, 35 [25-48] μg/g; grade 2, 102 [44-159] μg/g; grade 3, 235 [176-319] μg/g; grade 4, 611 [406-868] μg/g; P < 0.001 for discriminating the different grades). FC with a cutoff of 57 μg/g had a sensitivity of 91% and a specificity of 90% to detect endoscopically active disease (modified Baron Index ≥ 2).\n\n\nCONCLUSIONS\nFC correlated better with endoscopic disease activity than clinical activity, CRP, platelets, hemoglobin, and blood leukocytes. The strong correlation with endoscopic disease activity suggests that FC represents a useful biomarker for noninvasive monitoring of disease activity in UC patients.",
"title": ""
},
{
"docid": "799912616c6978f63938bfac6b21b1ec",
"text": "Friction stir welding is a solid state joining process. High strength aluminum alloys are widely used in aircraft and marine industries. Generally, the mechanical properties of fusion welded aluminum joints are poor. As friction stir welding occurs in solid state, no solidification structures are created thereby eliminating the brittle and eutectic phases common in fusion welding of high strength aluminum alloys. In this review the process parameters, microstructural evolution, and effect of friction stir welding on the properties of weld specific to aluminum alloys have been discussed. Keywords—Aluminum alloys, Friction stir welding (FSW), Microstructure, Properties.",
"title": ""
},
{
"docid": "2013fc509f8f6d3fa2966d7d76169f43",
"text": "Graphene, whose discovery won the 2010 Nobel Prize in physics, has been a shining star in the material science in the past few years. Owing to its interesting electrical, optical, mechanical and chemical properties, graphene has found potential applications in a wide range of areas, including biomedicine. In this article, we will summarize the latest progress of using graphene for various biomedical applications, including drug delivery, cancer therapies and biosensing, and discuss the opportunities and challenges in this emerging field.",
"title": ""
},
{
"docid": "23a5152da5142048332c09164bade40f",
"text": "Knowledge bases extracted automatically from the Web present new opportunities for data mining and exploration. Given a large, heterogeneous set of extracted relations, new tools are needed for searching the knowledge and uncovering relationships of interest. We present WikiTables, a Web application that enables users to interactively explore tabular knowledge extracted from Wikipedia.\n In experiments, we show that WikiTables substantially outperforms baselines on the novel task of automatically joining together disparate tables to uncover \"interesting\" relationships between table columns. We find that a \"Semantic Relatedness\" measure that leverages the Wikipedia link structure accounts for a majority of this improvement. Further, on the task of keyword search for tables, we show that WikiTables performs comparably to Google Fusion Tables despite using an order of magnitude fewer tables. Our work also includes the release of a number of public resources, including over 15 million tuples of extracted tabular data, manually annotated evaluation sets, and public APIs.",
"title": ""
},
{
"docid": "b634d8eb5016f93604ed460cebe07468",
"text": "The basis of science is the hypothetico-deductive method and the recording of experiments in sufficient detail to enable reproducibility. We report the development of Robot Scientist \"Adam,\" which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation. We have confirmed Adam's conclusions through manual experiments. To describe Adam's research, we have developed an ontology and logical language. The resulting formalization involves over 10,000 different research units in a nested treelike structure, 10 levels deep, that relates the 6.6 million biomass measurements to their logical description. This formalization describes how a machine contributed to scientific knowledge.",
"title": ""
},
{
"docid": "73300c22cc92eac1133d84cdad0d00e7",
"text": "BACKGROUND\nVideo-games are becoming a common tool to guide patients through rehabilitation because of their power of motivating and engaging their users. Video-games may also be integrated into an infrastructure that allows patients, discharged from the hospital, to continue intensive rehabilitation at home under remote monitoring by the hospital itself, as suggested by the recently funded Rewire project.\n\n\nOBJECTIVE\nGoal of this work is to describe a novel low cost platform, based on video-games, targeted to neglect rehabilitation.\n\n\nMETHODS\nThe patient is guided to explore his neglected hemispace by a set of specifically designed games that ask him to reach targets, with an increasing level of difficulties. Visual and auditory cues helped the patient in the task and are progressively removed. A controlled randomization of scenarios, targets and distractors, a balanced reward system and music played in the background, all contribute to make rehabilitation more attractive, thus enabling intensive prolonged treatment.\n\n\nRESULTS\nResults from our first patient, who underwent rehabilitation for half an hour, for five days a week for one month, showed on one side a very positive attitude of the patient towards the platform for the whole period, on the other side a significant improvement was obtained. Importantly, this amelioration was confirmed at a follow up evaluation five months after the last rehabilitation session and generalized to everyday life activities.\n\n\nCONCLUSIONS\nSuch a system could well be integrated into a home based rehabilitation system.",
"title": ""
},
{
"docid": "476aa14f6b71af480e8ab4747849d7e3",
"text": "The present study explored the relationship between risky cybersecurity behaviours, attitudes towards cybersecurity in a business environment, Internet addiction, and impulsivity. 538 participants in part-time or full-time employment in the UK completed an online questionnaire, with responses from 515 being used in the data analysis. The survey included an attitude towards cybercrime and cybersecurity in business scale, a measure of impulsivity, Internet addiction and a 'risky' cybersecurity behaviours scale. The results demonstrated that Internet addiction was a significant predictor for risky cybersecurity behaviours. A positive attitude towards cybersecurity in business was negatively related to risky cybersecurity behaviours. Finally, the measure of impulsivity revealed that both attentional and motor impulsivity were both significant positive predictors of risky cybersecurity behaviours, with non-planning being a significant negative predictor. The results present a further step in understanding the individual differences that may govern good cybersecurity practices, highlighting the need to focus directly on more effective training and awareness mechanisms.",
"title": ""
},
{
"docid": "35b668eeecb71fc1931e139a90f2fd1f",
"text": "In this article we present novel learning methods for estimating the quality of results returned by a search engine in response to a query. Estimation is based on the agreement between the top results of the full query and the top results of its sub-queries. We demonstrate the usefulness of quality estimation for several applications, among them improvement of retrieval, detecting queries for which no relevant content exists in the document collection, and distributed information retrieval. Experiments on TREC data demonstrate the robustness and the effectiveness of our learning algorithms.",
"title": ""
},
{
"docid": "a5cd7d46dc74d15344e2f3e9b79388a3",
"text": "A number of differences have emerged between modern and classic approaches to constituency parsing in recent years, with structural components like grammars and featurerich lexicons becoming less central while recurrent neural network representations rise in popularity. The goal of this work is to analyze the extent to which information provided directly by the model structure in classical systems is still being captured by neural methods. To this end, we propose a high-performance neural model (92.08 F1 on PTB) that is representative of recent work and perform a series of investigative experiments. We find that our model implicitly learns to encode much of the same information that was explicitly provided by grammars and lexicons in the past, indicating that this scaffolding can largely be subsumed by powerful general-purpose neural machinery.",
"title": ""
},
{
"docid": "1f8be01ff656d9414a8bd1e12111081d",
"text": "Gaining an architectural level understanding of a software system is important for many reasons. When the description of a system's architecture does not exist, attempts must be made to recover it. In recent years, researchers have explored the use of clustering for recovering a software system's architecture, given only its source code. The main contributions of this paper are given as follows. First, we review hierarchical clustering research in the context of software architecture recovery and modularization. Second, to employ clustering meaningfully, it is necessary to understand the peculiarities of the software domain, as well as the behavior of clustering measures and algorithms in this domain. To this end, we provide a detailed analysis of the behavior of various similarity and distance measures that may be employed for software clustering. Third, we analyze the clustering process of various well-known clustering algorithms by using multiple criteria, and we show how arbitrary decisions taken by these algorithms during clustering affect the quality of their results. Finally, we present an analysis of two recently proposed clustering algorithms, revealing close similarities in their apparently different clustering approaches. Experiments on four legacy software systems provide insight into the behavior of well-known clustering algorithms and their characteristics in the software domain.",
"title": ""
},
{
"docid": "4f6979ca99ec7fb0010fd102e7796248",
"text": "Cryptographic systems are essential for computer and communication security, for instance, RSA is used in PGP Email clients and AES is employed in full disk encryption. In practice, the cryptographic keys are loaded and stored in RAM as plain-text, and therefore vulnerable to physical memory attacks (e.g., cold-boot attacks). To tackle this problem, we propose Copker, which implements asymmetric cryptosystems entirely within the CPU, without storing plain-text private keys in the RAM. In its active mode, Copker stores kilobytes of sensitive data, including the private key and the intermediate states, only in onchip CPU caches (and registers). Decryption/signing operations are performed without storing sensitive information in system memory. In the suspend mode, Copker stores symmetrically encrypted private keys in memory, while employs existing solutions to keep the key-encryption key securely in CPU registers. Hence, Copker releases the system resources in the suspend mode. In this paper, we implement Copker with the most common asymmetric cryptosystem, RSA, with the support of multiple private keys. We show that Copker provides decryption/signing services that are secure against physical memory attacks. Meanwhile, with intensive experiments, we demonstrate that our implementation of Copker is secure and requires reasonable overhead. Keywords—Cache-as-RAM; cold-boot attack; key management; asymmetric cryptography implementation.",
"title": ""
},
{
"docid": "e7d5dd2926238db52cf406f20947f90e",
"text": "The development of the capital markets is changing the relevance and empirical validity of the efficient market hypothesis. The dynamism of capital markets determines the need for efficiency research. The authors analyse the development and the current status of the efficient market hypothesis with an emphasis on the Baltic stock market. Investors often fail to earn an excess profit, but yet stock market anomalies are observed and market prices often deviate from their intrinsic value. The article presents an analysis of the concept of efficient market. Also, the market efficiency evolution is reviewed and its current status is analysed. This paper presents also an examination of stock market efficiency in the Baltic countries. Finally, the research methods are reviewed and the methodology of testing the weak-form efficiency in a developing market is suggested.",
"title": ""
},
{
"docid": "ec90e30c0ae657f25600378721b82427",
"text": "We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.",
"title": ""
},
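The mitosis-detection abstract above classifies each pixel using a patch centred on it. Purely as an illustration of that patch-wise scanning scheme (with a trivial stand-in function in place of the trained max-pooling CNN), a sketch could look like this:

```python
import numpy as np

def patch_probability_map(image, classify_patch, patch=32, stride=8):
    """Slide a window over the image and record, at each window centre,
    the classifier's probability that the centre pixel is mitotic."""
    h, w = image.shape[:2]
    half = patch // 2
    prob = np.zeros((h, w), dtype=np.float32)
    padded = np.pad(image, ((half, half), (half, half)), mode="reflect")
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            window = padded[y:y + patch, x:x + patch]   # patch centred on (y, x)
            prob[y, x] = classify_patch(window)
    return prob

# Stand-in "network": bright local neighbourhoods get a high score.
def fake_classifier(window):
    return float(window.mean() > 0.7)

img = np.random.rand(64, 64).astype(np.float32)   # toy single-channel image
heatmap = patch_probability_map(img, fake_classifier)
print(heatmap.shape, heatmap.max())
```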
{
"docid": "6be88914654c736c8e1575aeb37532a3",
"text": "Coding EMRs with diagnosis and procedure codes is an indispensable task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. While coding errors could lead to more patient-side financial burden and mis-interpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. In this paper, we present a new neural network architecture that combines ideas from few-shot learning matching networks, multi-label loss functions, and convolutional neural networks for text classification to significantly outperform other state-of-the-art models. Our evaluations are conducted using a well known deidentified EMR dataset (MIMIC) with a variety of multi-label performance measures.",
"title": ""
},
{
"docid": "caa660feb6bb35ad92f6da6293cb0279",
"text": "Our ability to express and accurately assess emotional states is central to human life. The present study examines how people express and detect emotions during text-based communication, an environment that eliminates the nonverbal cues typically associated with emotion. The results from 40 dyadic interactions suggest that users relied on four strategies to express happiness versus sadness, including disagreement, negative affect terms, punctuation, and verbosity. Contrary to conventional wisdom, communication partners readily distinguished between positive and negative valence emotional communicators in this text-based context. The results are discussed with respect to the Social Information Processing model of strategic relational adaptation in mediated communication.",
"title": ""
},
{
"docid": "d29cca7c16b0e5b43c85e1a8701d735f",
"text": "The sparse matrix solver by LU factorization is a serious bottleneck in Simulation Program with Integrated Circuit Emphasis (SPICE)-based circuit simulators. The state-of-the-art Graphics Processing Units (GPU) have numerous cores sharing the same memory, provide attractive memory bandwidth and compute capability, and support massive thread-level parallelism, so GPUs can potentially accelerate the sparse solver in circuit simulators. In this paper, an efficient GPU-based sparse solver for circuit problems is proposed. We develop a hybrid parallel LU factorization approach combining task-level and data-level parallelism on GPUs. Work partitioning, number of active thread groups, and memory access patterns are optimized based on the GPU architecture. Experiments show that the proposed LU factorization approach on NVIDIA GTX580 attains an average speedup of 7.02× (geometric mean) compared with sequential PARDISO, and 1.55× compared with 16-threaded PARDISO. We also investigate bottlenecks of the proposed approach by a parametric performance model. The performance of the sparse LU factorization on GPUs is constrained by the global memory bandwidth, so the performance can be further improved by future GPUs with larger memory bandwidth.",
"title": ""
},
{
"docid": "1e464db177e96b6746f8f827c582cc31",
"text": "In order to respond correctly to a free form factual question given a large collection of text data, one needs to understand the question to a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought after answer and may even suggest using different strategies when looking for and verifying a candidate answer. This work presents the first work on a machine learning approach to question classification. Guided by a layered semantic hierarchy of answer types, we develop a hierarchical classifier that classifies questions into fine-grained classes. This work also performs a systematic study of the use of semantic information sources in natural language classification tasks. It is shown that, in the context of question classification, augmenting the input of the classifier with appropriate semantic category information results in significant improvements to classification accuracy. We show accurate results on a large collection of free-form questions used in TREC 10 and 11.",
"title": ""
},
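The question-classification passage above uses a hierarchical classifier: a coarse answer-type decision followed by a fine-grained one within the predicted coarse class. A minimal two-level sketch with scikit-learn, on a toy question set whose labels merely mimic a coarse/fine scheme (they are not the paper's taxonomy or feature set), is shown below.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy questions with (coarse, fine) labels in the spirit of the task.
data = [("What city hosts the Olympics ?", "LOC", "LOC:city"),
        ("Who wrote Hamlet ?",             "HUM", "HUM:individual"),
        ("What country borders Spain ?",   "LOC", "LOC:country"),
        ("Who founded the Red Cross ?",    "HUM", "HUM:individual")]
questions = [q for q, _, _ in data]

vec = TfidfVectorizer().fit(questions)
X = vec.transform(questions)

coarse_clf = LogisticRegression(max_iter=1000).fit(X, [c for _, c, _ in data])

# One fine-grained classifier per coarse class (skipped if only one fine label).
fine_clf = {}
for coarse in set(c for _, c, _ in data):
    rows = [i for i, (_, c, _) in enumerate(data) if c == coarse]
    fines = [data[i][2] for i in rows]
    if len(set(fines)) > 1:
        fine_clf[coarse] = LogisticRegression(max_iter=1000).fit(X[rows], fines)

def classify(question):
    x = vec.transform([question])
    coarse = coarse_clf.predict(x)[0]
    if coarse in fine_clf:
        return coarse, fine_clf[coarse].predict(x)[0]
    # fall back to the only fine label observed for this coarse class
    return coarse, next(f for _, c, f in data if c == coarse)

print(classify("What city is the Eiffel Tower in ?"))
```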
{
"docid": "019d5deed0ed1e5b50097d5dc9121cb6",
"text": "Within interactive narrative research, agency is largely considered in terms of a player's autonomy in a game, defined as theoretical agency. Rather than in terms of whether or not the player feels they have agency, their perceived agency. An effective interactive narrative needs to provide a player a level of agency that satisfies their desires and must do that without compromising its own structure. Researchers frequently turn to techniques for increasing theoretical agency to accomplish this. This paper proposes an approach to categorize and explore techniques in which a player's level of perceived agency is affected without requiring more or less theoretical agency.",
"title": ""
}
] |
scidocsrr
|
0e6595fab79e8416cc1ec921230b09c6
|
Gosig: Scalable Byzantine Consensus on Adversarial Wide Area Network for Blockchains
|
[
{
"docid": "6cf51a180d846abc8c96b1106c38a905",
"text": "We introduce a short signature scheme based on the Computational Diffie-Hellman assumption on certain elliptic and hyper-elliptic curves. For standard security parameters, the signature length is about half that of a DSA signature with a similar level of security. Our short signature scheme is designed for systems where signatures are typed in by a human or are sent over a low-bandwidth channel. We survey a number of properties of our signature scheme such as signature aggregation and batch verification.",
"title": ""
},
{
"docid": "4fc67f5a4616db0906b943d7f13c856d",
"text": "Overview. A blockchain is best understood in the model of state-machine replication [8], where a service maintains some state and clients invoke operations that transform the state and generate outputs. A blockchain emulates a “trusted” computing service through a distributed protocol, run by nodes connected over the Internet. The service represents or creates an asset, in which all nodes have some stake. The nodes share the common goal of running the service but do not necessarily trust each other for more. In a “permissionless” blockchain such as the one underlying the Bitcoin cryptocurrency, anyone can operate a node and participate through spending CPU cycles and demonstrating a “proof-of-work.” On the other hand, blockchains in the “permissioned” model control who participates in validation and in the protocol; these nodes typically have established identities and form a consortium. A report of Swanson compares the two models [9].",
"title": ""
}
] |
[
{
"docid": "e11a1e3ef5093aa77797463b7b8994ea",
"text": "Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human–robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results with an average precision of 91% and a mean computation time of 0.1 s per test frame detection indicate that our study is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.",
"title": ""
},
{
"docid": "3f1369558528c2962fb4bec65b165bf0",
"text": "This article presents a review on how software usab ility could be increased for users with less comput er literacy. The literature was reviewed to extract us er interface design principles by identifying the s imilar problems of this group of users. There are differen t groups of users with less computer literacy. Howe ver, based on the literature three groups of them need s p cial attention from software designers. The first group is elderly users, as users with lack of computer ba ckground. The second group is children, as novice u sers and the third group is users with mental or physica l disorders. Therefore, this study intends to focus on the mentioned groups, followed by a comparison between pr vious researches in the field, which reveals tha t some commonalities exist between the needs of these users. These commonalities were used to extract us er interface design principles such as (a) reducing th e number of features available at any given time, ( b) avoiding using computer terms, (c) putting customiz ation ability for font, color, size and (d) using appropriate graphical objects such as avatar or ico n. Taking these principles into account can solve s oftware usability problems and increase satisfaction of use rs with less computer literacy.",
"title": ""
},
{
"docid": "fe59da1f9d7d6d700ee7b3f65462560b",
"text": "Sea–land segmentation and ship detection are two prevalent research domains for optical remote sensing harbor images and can find many applications in harbor supervision and management. As the spatial resolution of imaging technology improves, traditional methods struggle to perform well due to the complicated appearance and background distributions. In this paper, we unify the above two tasks into a single framework and apply the deep convolutional neural networks to predict pixelwise label for an input. Specifically, an edge aware convolutional network is proposed to parse a remote sensing harbor image into three typical objects, e.g., sea, land, and ship. Two innovations are made on top of the deep structure. First, we design a multitask model by simultaneously training the segmentation and edge detection networks. Hierarchical semantic features from the segmentation network are extracted to learn the edge network. Second, the outputs of edge pipeline are further employed to refine entire model by adding an edge aware regularization, which helps our method to yield very desirable results that are spatially consistent and well boundary located. It also benefits the segmentation of docked ships that are quite challenging for many previous methods. Experimental results on two datasets collected from Google Earth have demonstrated the effectiveness of our approach both in quantitative and qualitative performance compared with state-of-the-art methods.",
"title": ""
},
{
"docid": "f2b552e97cd929d5780fae80223ae179",
"text": "Blockchains are distributed data structures that are used to achieve consensus in systems for cryptocurrencies (like Bitcoin) or smart contracts (like Ethereum). Although blockchains gained a lot of popularity recently, there are only few logic-based models for blockchains available. We introduce BCL, a dynamic logic to reason about blockchain updates, and show that BCL is sound and complete with respect to a simple blockchain model.",
"title": ""
},
{
"docid": "ea1d408c4e4bfe69c099412da30949b0",
"text": "The amount of scientific papers in the Molecular Biology field has experienced an enormous growth in the last years, prompting the need of developing automatic Information Extraction (IE) systems. This work is a first step towards the ontology-based domain-independent generalization of a system that identifies Escherichia coli regulatory networks. First, a domain ontology based on the RegulonDB database was designed and populated. After that, the steps of the existing IE system were generalized to use the knowledge contained in the ontology, so that it could be potentially applied to other domains. The resulting system has been tested both with abstract and full articles that describe regulatory interactions for E. coli, obtaining satisfactory results. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8ae559f0ffdeb162a5c7f4eac287464b",
"text": "Shadows have long been a problem to computer vision algorithms. Removing shadow can significantly improve the performance of several vision task such as object detection and image segmentation. Various methods of shadow removal in images had been developed. In this paper, we discussed the characteristic of shadowed region with single texture statistically, based on an illumination model, and then developed a simple and fast way of removal shadow using local histogram matching in different illumination reduction level.",
"title": ""
},
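The shadow-removal abstract above relies on local histogram matching between a shadowed region and a lit region of the same texture. The sketch below shows plain CDF-based histogram matching on synthetic single-texture patches; the illumination model, data, and parameters are assumptions for illustration only and not the paper's method in full.

```python
import numpy as np

def match_histogram(source, reference):
    """Map the intensities of `source` so its histogram matches `reference`
    (classic CDF-based matching on a single-channel patch)."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt).astype(np.float64) / source.size
    r_cdf = np.cumsum(r_cnt).astype(np.float64) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # quantile-to-quantile lookup
    return mapped[s_idx].reshape(source.shape)

# Toy single-texture patches: a lit region vs. a darker "shadowed" copy of it.
rng = np.random.default_rng(0)
lit = rng.normal(150, 12, size=(64, 64))
shadowed = 0.45 * lit + rng.normal(0, 2, size=lit.shape)   # illumination drop

recovered = match_histogram(shadowed, lit)
print(round(shadowed.mean(), 1), round(recovered.mean(), 1), round(lit.mean(), 1))
```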
{
"docid": "7014aed05a8f518b4171abbdfaaa86c5",
"text": "In this paper, we observe that there exists a second dimension to the relation extraction (RE) problem that is orthogonal to the relation type dimension. We show that most of these second dimensional structures are relatively constrained and not difficult to identify. We propose a novel algorithmic approach to RE that starts by first identifying these structures and then, within these, identifying the semantic type of the relation. In the real RE problem where relation arguments need to be identified, exploiting these structures also allows reducing pipelined propagated errors. We show that this RE framework provides significant improvement in RE performance.",
"title": ""
},
{
"docid": "a036dd162a23c5d24125d3270e22aaf7",
"text": "1 Problem Description This work is focused on the relationship between the news articles (breaking news) and stock prices. The student will design and develop methods to analyze how and when the news articles influence the stock market. News articles about Norwegian oil related companies and stock prices from \" BW Offshore Limited \" (BWO), \" DNO International \" (DNO), \" Frontline \" (FRO), \" Petroleum Geo-Services \" (PGS), \" Seadrill \" (SDRL), \" Sevan Marine \" (SEVAN), \" Siem Offshore \" (SIOFF), \" Statoil \" (STL) and \" TGS-NOPEC Geophysical Company \" (TGS) will be crawled, preprocessed and the important features in the text will be extracted to effectively represent the news in a form that allows the application of computational techniques. This data will then be used to train text sense classifiers. A prototype system that employs such classifiers will be developed to support the trader in taking sell/buy decisions. Methods will be developed for automaticall sense-labeling of news that are informed by the correlation between the changes in the stock prices and the breaking news. Performance of the prototype decision support system will be compared with a chosen baseline method for trade-related decision making. Abstract This thesis investigates the prediction of possible stock price changes immediately after news article publications. This is done by automatic analysis of these news articles. Some background information about financial trading theory and text mining is given in addition to an overview of earlier related research in the field of automatic news article analyzes with the purpose of predicting future stock prices. In this thesis a system is designed and implemented to predict stock price trends for the time immediately after the publication of news articles. This system consists mainly of four components. The first component gathers news articles and stock prices automatically from internet. The second component prepares the news articles by sending them to some document preprocessing steps and finding relevant features before they are sent to a document representation process. The third component categorizes the news articles into predefined categories, and finally the fourth component applies appropriate trading strategies depending on the category of the news article. This system requires a labeled data set to train the categorization component. This data set is labeled automatically on the basis of the price trends directly after the news article publication. An additional label refining step using clustering is added in an …",
"title": ""
},
{
"docid": "07ffad18ed2f35e1690547d5a999ab37",
"text": "This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy, as well as requires low computation. The underlying idea of this proposed work is to apply random forests to classify the natural image patch space into numerous subspaces and learn a linear regression model for each subspace to map the low-resolution image patch to high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results as NARM while only takes its 0.3% computational time.",
"title": ""
},
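The FIRF abstract above learns a mapping from interpolated low-resolution patches to high-resolution detail using random forests (with per-leaf linear regressors in the paper). As a simplified, hedged stand-in rather than the paper's algorithm, the sketch below trains a plain RandomForestRegressor to predict a patch's true centre value from a smoothed version of the patch, on synthetic data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training pairs: a 5x5 smoothed ("interpolated") patch -> its true
# centre pixel. A real setup would use bicubic-upscaled / ground-truth images.
def make_pairs(n=2000, patch=5):
    X, y = [], []
    for _ in range(n):
        base = rng.random((patch, patch))
        smooth = (base + np.roll(base, 1, 0) + np.roll(base, 1, 1)) / 3.0
        X.append(smooth.ravel())
        y.append(base[patch // 2, patch // 2])  # detail the forest must recover
    return np.array(X), np.array(y)

X_train, y_train = make_pairs()
forest = RandomForestRegressor(n_estimators=30, max_depth=12, random_state=0)
forest.fit(X_train, y_train)

X_test, y_test = make_pairs(200)
pred = forest.predict(X_test)
print("mean abs error:", round(float(np.abs(pred - y_test).mean()), 4))
```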
{
"docid": "bd6470c57f9066c5902385e5fddaea28",
"text": "BACKGROUND\nHandwriting difficulties are among the most common reasons for referral of children to occupational therapy.\n\n\nPURPOSE\nTo determine the effectiveness of handwriting interventions.\n\n\nMETHODS\nA systematic review was carried out. Included studies were randomized or nonrandomized controlled trials of interventions that could be used by an occupational therapist to improve written output (printing or writing) among school-aged children identified as having difficulties with handwriting. Electronic searches of relevant databases were conducted up to January 2010.\n\n\nFINDINGS\nEleven studies met the inclusion criteria. These studies tested (1) relaxation and practice with or without EMG, (2) sensory-based training without handwriting practice, and (3) handwriting-based practice (including sensory-focused or cognitive focused handwriting practice). Regardless of treatment type, interventions that did not include handwriting practice and those that included less than 20 practice sessions were ineffective.\n\n\nIMPLICATIONS\nEffective occupational therapy for improving handwriting must include adequate handwriting practice.",
"title": ""
},
{
"docid": "f160e297ece985bd23b72cc5eef1b11d",
"text": "We propose to exploit reconstruction as a layer-local training signal for deep learning. Reconstructions can be propagated in a form of target propagation playing a role similar to back-propagation but helping to reduce the reliance on derivatives in order to perform credit assignment across many levels of possibly strong nonlinearities (which is difficult for back-propagation). A regularized auto-encoder tends produce a reconstruction that is a more likely version of its input, i.e., a small move in the direction of higher likelihood. By generalizing gradients, target propagation may also allow to train deep networks with discrete hidden units. If the auto-encoder takes both a representation of input and target (or of any side information) in input, then its reconstruction of input representation provides a target towards a representation that is more likely, conditioned on all the side information. A deep auto-encoder decoding path generalizes gradient propagation in a learned way that can could thus handle not just infinitesimal changes but larger, discrete changes, hopefully allowing credit assignment through a long chain of non-linear operations. In addition to each layer being a good auto-encoder, the encoder also learns to please the upper layers by transforming the data into a space where it is easier to model by them, flattening manifolds and disentangling factors. The motivations and theoretical justifications for this approach are laid down in this paper, along with conjectures that will have to be verified either mathematically or experimentally, including a hypothesis stating that such auto-encoder mediated target propagation could play in brains the role of credit assignment through many non-linear, noisy and discrete transformations.",
"title": ""
},
{
"docid": "2b71cfacf2b1e0386094711d8b326ff7",
"text": "In-car navigation systems are designed with effectiveness and efficiency (e.g., guiding accuracy) in mind. However, finding a way and discovering new places could also be framed as an adventurous, stimulating experience for the driver and passengers. Inspired by Gaver and Martin's (2000) notion of \"ambiguity and detour\" and Hassenzahl's (2010) Experience Design, we built ExplorationRide, an in-car navigation system to foster exploration. An empirical in situ exploration demonstrated the system's ability to create an exploration experience, marked by a relaxed at-mosphere, a loss of sense of time, excitement about new places and an intensified relationship with the landscape.",
"title": ""
},
{
"docid": "82335fb368198a2cf7e3021627449058",
"text": "While cancer treatments are constantly advancing, there is still a real risk of relapse after potentially curative treatments. At the risk of adverse side effects, certain adjuvant treatments can be given to patients that are at high risk of recurrence. The challenge, however, is in finding the best tradeoff between these two extremes. Patients that are given more potent treatments, such as chemotherapy, radiation, or systemic treatment, can suffer unnecessary consequences, especially if the cancer does not return. Predictive modeling of recurrence can help inform patients and practitioners on a case-by-case basis, personalized for each patient. For large-scale predictive models to be built, structured data must be captured for a wide range of diverse patients. This paper explores current methods for building cancer recurrence risk models using structured clinical patient data.",
"title": ""
},
{
"docid": "6c68bccf376da1f963aaa8ec5e08b646",
"text": "The composition of the gut microbiota is in constant flow under the influence of factors such as the diet, ingested drugs, the intestinal mucosa, the immune system, and the microbiota itself. Natural variations in the gut microbiota can deteriorate to a state of dysbiosis when stress conditions rapidly decrease microbial diversity and promote the expansion of specific bacterial taxa. The mechanisms underlying intestinal dysbiosis often remain unclear given that combinations of natural variations and stress factors mediate cascades of destabilizing events. Oxidative stress, bacteriophages induction and the secretion of bacterial toxins can trigger rapid shifts among intestinal microbial groups thereby yielding dysbiosis. A multitude of diseases including inflammatory bowel diseases but also metabolic disorders such as obesity and diabetes type II are associated with intestinal dysbiosis. The characterization of the changes leading to intestinal dysbiosis and the identification of the microbial taxa contributing to pathological effects are essential prerequisites to better understand the impact of the microbiota on health and disease.",
"title": ""
},
{
"docid": "dbe6b892a0c1dd4a7082c1220d4b88b2",
"text": "This paper describes the participation of our MCAT search system in the NTCIR-12 MathIR Task. We introduce three granularity levels of textual information, new approach for generating dependency graph of math expressions, score normalization, cold-start weights, and unification. We find that these modules, except the cold-start weights, have a very good impact on the search performance of our system. The use of dependency graph significantly improves precision of our system, i.e., up to 24.52% and 104.20% relative improvements in the Main and Simto subtasks of the arXiv task, respectively. In addition, the implementation of unification delivers up to 2.90% and 57.14% precision improvements in the Main and Simto subtasks, respectively. Overall, our best submission achieves P@5 of 0.5448 in the Main subtask and 0.5500 in the Simto subtask. In the Wikipedia task, our system also performs well at the MathWikiFormula subtask. At the MathWiki subtask, however, due to a problem with handling queries formed as questions that contain many stop words, our system finishes second.",
"title": ""
},
{
"docid": "0562b3b1692f07060cf4eeb500ea6cca",
"text": "As the volume of medicinal information stored electronically increase, so do the need to enhance how it is secured. The inaccessibility to patient record at the ideal time can prompt death toll and also well degrade the level of health care services rendered by the medicinal professionals. Criminal assaults in social insurance have expanded by 125% since 2010 and are now the leading cause of medical data breaches. This study therefore presents the combination of 3DES and LSB to improve security measure applied on medical data. Java programming language was used to develop a simulation program for the experiment. The result shows medical data can be stored, shared, and managed in a reliable and secure manner using the combined model. Keyword: Information Security; Health Care; 3DES; LSB; Cryptography; Steganography 1.0 INTRODUCTION In health industries, storing, sharing and management of patient information have been influenced by the current technology. That is, medical centres employ electronical means to support their mode of service in order to deliver quality health services. The importance of the patient record cannot be over emphasised as it contributes to when, where, how, and how lives can be saved. About 91% of health care organizations have encountered no less than one data breach, costing more than $2 million on average per organization [1-3]. Report also shows that, medical records attract high degree of importance to hoodlums compare to Mastercard information because they infer more cash base on the fact that bank",
"title": ""
},
{
"docid": "40e0b3cfe54b69dce5977f6bc22c2bd6",
"text": "This paper links the direct-sequence code-division multiple access (DS-CDMA) multiuser separation-equalization-detection problem to the parallel factor (PARAFAC) model, which is an analysis tool rooted in psychometrics and chemometrics. Exploiting this link, it derives a deterministic blind PARAFAC DS-CDMA receiver with performance close to nonblind minimum mean-squared error (MMSE). The proposed PARAFAC receiver capitalizes on code, spatial, and temporal diversity-combining, thereby supporting small sample sizes, more users than sensors, and/or less spreading than users. Interestingly, PARAFAC does not require knowledge of spreading codes, the specifics of multipath (interchip interference), DOA-calibration information, finite alphabet/constant modulus, or statistical independence/whiteness to recover the information-bearing signals. Instead, PARAFAC relies on a fundamental result regarding the uniqueness of low-rank three-way array decomposition due to Kruskal (and generalized herein to the complex-valued case) that guaranteesidentifiability of all relevant signals and propagation parameters. These and other issues are also demonstrated in pertinent simulation experiments.",
"title": ""
},
{
"docid": "489127100b00493d81dc7644648732ad",
"text": "This paper presents a software tool - called Fractal Nature - that provides a set of fractal and physical based methods for creating realistic terrains called Fractal Nature. The output of the program can be used for creating content for video games and serious games. The approach for generating the terrain is based on noise filters, such as Gaussian distribution, capable of rendering highly realistic environments. It is demonstrated how a random terrain can change its shape and visual appearance containing artefacts such as steep slopes and smooth riverbeds. Moreover, two interactive erosion systems, hydraulic and thermal, were implemented. An initial evaluation with 12 expert users provided useful feedback for the applicability of the algorithms in video games as well as insights for future improvements.",
"title": ""
},
{
"docid": "33915af49384d028a591d93336feffd6",
"text": "This paper presents a new approach for recognition of 3D objects that are represented as 3D point clouds. We introduce a new 3D shape descriptor called Intrinsic Shape Signature (ISS) to characterize a local/semi-local region of a point cloud. An intrinsic shape signature uses a view-independent representation of the 3D shape to match shape patches from different views directly, and a view-dependent transform encoding the viewing geometry to facilitate fast pose estimation. In addition, we present a highly efficient indexing scheme for the high dimensional ISS shape descriptors, allowing for fast and accurate search of large model databases. We evaluate the performance of the proposed algorithm on a very challenging task of recognizing different vehicle types using a database of 72 models in the presence of sensor noise, obscuration and scene clutter.",
"title": ""
},
{
"docid": "fff6fe0a87a750e83745428b630149d2",
"text": "From 1960 through 1987, 89 patients with stage I (44 patients) or II (45 patients) vaginal carcinoma (excluding melanomas) were treated primarily at the Mayo Clinic. Treatment consisted of surgery alone in 52 patients, surgery plus radiation in 14, and radiation alone in 23. The median duration of follow-up was 4.4 years. The 5-year survival (Kaplan-Meier method) was 82% for patients with stage I disease and 53% for those with stage II disease (p = 0.009). Analysis of survival according to treatment did not show statistically significant differences. This report is consistent with previous studies showing that stage is an important prognostic factor and that treatment can be individualized, including surgical treatment for primary early-stage vaginal cancer.",
"title": ""
}
] |
scidocsrr
|
97e9c7b3d490dc41be471334ed63a541
|
The e-puck , a Robot Designed for Education in Engineering
|
[
{
"docid": "aaba5dc8efc9b6a62255139965b6f98d",
"text": "The interaction of an autonomous mobile robot with the real world critically depends on the robots morphology and on its environment. Building a model of these aspects is extremely complex, making simulation insu cient for accurate validation of control algorithms. If simulation environments are often very e cient, the tools for experimenting with real robots are often inadequate. The traditional programming languages and tools seldom provide enought support for realtime experiments, thus hindering the understanding of the control algorithms and making the experimentation complex and time-consuming. A miniature robot is presented: it has a cylindrical shape measuring 55 mm in diameter and 30 mm in height. Due to its small size, experiments can be performed quickly and cost-e ectively in a small working area. Small peripherals can be designed and connected to the basic module and can take advantage of a versatile communication scheme. A serial-link is provided to run control algorithms on a workstation during debugging, thereby giving the user the opportunity of employing all available graphical tools. Once debugged, the algorithm can be downloaded to the robot and run on its own processor. Experimentation with groups of robots is hardly possible with commercially available hardware. The size and the price of the described robot open the way to cost-e ective investigations into collective behaviour. This aspect of research drives the design of the robot described in this paper. Experiments with some twenty units are planned for the near future.",
"title": ""
}
] |
[
{
"docid": "f1b691a8072eaaaaf0e540a2d24445fa",
"text": "We describe a framework for finding and tracking “trails” for autonomous outdoor robot navigation. Through a combination of visual cues and ladar-derived structural information, the algorithm is able to follow paths which pass through multiple zones of terrain smoothness, border vegetation, tread material, and illumination conditions. Our shape-based visual trail tracker assumes that the approaching trail region is approximately triangular under perspective. It generates region hypotheses from a learned distribution of expected trail width and curvature variation, and scores them using a robust measure of color and brightness contrast with flanking regions. The structural component analogously rewards hypotheses which correspond to empty or low-density regions in a groundstrike-filtered ladar obstacle map. Our system's performance is analyzed on several long sequences with diverse appearance and structural characteristics. Ground-truth segmentations are used to quantify performance where available, and several alternative algorithms are compared on the same data.",
"title": ""
},
{
"docid": "27465b2c8ce92ccfbbda6c802c76838f",
"text": "Nonlinear hyperelastic energies play a key role in capturing the fleshy appearance of virtual characters. Real-world, volume-preserving biological tissues have Poisson’s ratios near 1/2, but numerical simulation within this regime is notoriously challenging. In order to robustly capture these visual characteristics, we present a novel version of Neo-Hookean elasticity. Our model maintains the fleshy appearance of the Neo-Hookean model, exhibits superior volume preservation, and is robust to extreme kinematic rotations and inversions. We obtain closed-form expressions for the eigenvalues and eigenvectors of all of the system’s components, which allows us to directly project the Hessian to semipositive definiteness, and also leads to insights into the numerical behavior of the material. These findings also inform the design of more sophisticated hyperelastic models, which we explore by applying our analysis to Fung and Arruda-Boyce elasticity. We provide extensive comparisons against existing material models.",
"title": ""
},
{
"docid": "02a6e024c1d318862ad4c17b9a56ca36",
"text": "Artificial food colors (AFCs) have not been established as the main cause of attention-deficit hyperactivity disorder (ADHD), but accumulated evidence suggests that a subgroup shows significant symptom improvement when consuming an AFC-free diet and reacts with ADHD-type symptoms on challenge with AFCs. Of children with suspected sensitivities, 65% to 89% reacted when challenged with at least 100 mg of AFC. Oligoantigenic diet studies suggested that some children in addition to being sensitive to AFCs are also sensitive to common nonsalicylate foods (milk, chocolate, soy, eggs, wheat, corn, legumes) as well as salicylate-containing grapes, tomatoes, and orange. Some studies found \"cosensitivity\" to be more the rule than the exception. Recently, 2 large studies demonstrated behavioral sensitivity to AFCs and benzoate in children both with and without ADHD. A trial elimination diet is appropriate for children who have not responded satisfactorily to conventional treatment or whose parents wish to pursue a dietary investigation.",
"title": ""
},
{
"docid": "1ebdcfe9c477e6a29bfce1ddeea960aa",
"text": "Bitcoin—a cryptocurrency built on blockchain technology—was the first currency not controlled by a single entity.1 Initially known to a few nerds and criminals,2 bitcoin is now involved in hundreds of thousands of transactions daily. Bitcoin has achieved values of more than US$15,000 per coin (at the end of 2017), and this rising value has attracted attention. For some, bitcoin is digital fool’s gold. For others, its underlying blockchain technology heralds the dawn of a new digital era. Both views could be right. The fortunes of cryptocurrencies don’t define blockchain. Indeed, the biggest effects of blockchain might lie beyond bitcoin, cryptocurrencies, or even the economy. Of course, the technical questions about blockchain have not all been answered. We still struggle to overcome the high levels of processing intensity and energy use. These questions will no doubt be confronted over time. If the technology fails, the future of blockchain will be different. In this article, I’ll assume technical challenges will be solved, and although I’ll cover some technical issues, these aren’t the main focus of this paper. In a 2015 article, “The Trust Machine,” it was argued that the biggest effects of blockchain are on trust.1 The article referred to public trust in economic institutions, that is, that such organizations and intermediaries will act as expected. When they don’t, trust deteriorates. Trust in economic institutions hasn’t recovered from the recession of 2008.3 Technology can exacerbate distrust: online trades with distant counterparties can make it hard to settle disputes face to face. Trusted intermediaries can be hard to find, and that’s where blockchain can play a part. Permanent record-keeping that can be sequentially updated but not erased creates visible footprints of all activities conducted on the chain. This reduces the uncertainty of alternative facts or truths, thus creating the “trust machine” The Economist describes. As trust changes, so too does governance.4 Vitalik Buterin of the Ethereum blockchain platform calls blockchain “a magic computer” to which anyone can upload self-executing programs.5 All states of every Beyond Bitcoin: The Rise of Blockchain World",
"title": ""
},
{
"docid": "bc3f2f0c2e33668668714dcebe1365a2",
"text": "Our dexterous hand is a fundmanetal human feature that distinguishes us from other animals by enabling us to go beyond grasping to support sophisticated in-hand object manipulation. Our aim was the design of a dexterous anthropomorphic robotic hand that matches the human hand's 24 degrees of freedom, under-actuated by seven motors. With the ability to replicate human hand movements in a naturalistic manner including in-hand object manipulation. Therefore, we focused on the development of a novel thumb and palm articulation that would facilitate in-hand object manipulation while avoiding mechanical design complexity. Our key innovation is the use of a tendon-driven ball joint as a basis for an articulated thumb. The design innovation enables our under-actuated hand to perform complex in-hand object manipulation such as passing a ball between the fingers or even writing text messages on a smartphone with the thumb's end-point while holding the phone in the palm of the same hand. We then proceed to compare the dexterity of our novel robotic hand design to other designs in prosthetics, robotics and humans using simulated and physical kinematic data to demonstrate the enhanced dexterity of our novel articulation exceeding previous designs by a factor of two. Our innovative approach achieves naturalistic movement of the human hand, without requiring translation in the hand joints, and enables teleoperation of complex tasks, such as single (robot) handed messaging on a smartphone without the need for haptic feedback. Our simple, under-actuated design outperforms current state-of-the-art prostheses or robotic and prosthetic hands regarding abilities that encompass from grasps to activities of daily living which involve complex in-hand object manipulation.",
"title": ""
},
{
"docid": "60736095287074c8a81c9ce5afa93f75",
"text": "The visualization of high-quality isosurfaces at interactive rates is an important tool in many simulation and visualization applications. Today, isosurfaces are most often visualized by extracting a polygonal approximation that is then rendered via graphics hardware or by using a special variant of preintegrated volume rendering. However, these approaches have a number of limitations in terms of the quality of the isosurface, lack of performance for complex data sets, or supported shading models. An alternative isosurface rendering method that does not suffer from these limitations is to directly ray trace the isosurface. However, this approach has been much too slow for interactive applications unless massively parallel shared-memory supercomputers have been used. In this paper, we implement interactive isosurface ray tracing on commodity desktop PCs by building on recent advances in real-time ray tracing of polygonal scenes and using those to improve isosurface ray tracing performance as well. The high performance and scalability of our approach will be demonstrated with several practical examples, including the visualization of highly complex isosurface data sets, the interactive rendering of hybrid polygonal/isosurface scenes, including high-quality ray traced shading effects, and even interactive global illumination on isosurfaces.",
"title": ""
},
{
"docid": "467c538a696027d92f1b510d6179f73f",
"text": "We investigated the acute and chronic effects of low-intensity concentric or eccentric resistance training with blood flow restriction (BFR) on muscle size and strength. Ten young men performed 30% of concentric one repetition maximal dumbbell curl exercise (four sets, total 75 reps) 3 days/week for 6 weeks. One arm was randomly chosen for concentric BFR (CON-BFR) exercise only and the other arm performed eccentric BFR (ECC-BFR) exercise only at the same exercise load. During the exercise session, iEMG for biceps brachii muscles increased progressively during CON-BFR, which was greater (p<0.05) than that of the ECC-BFR. Immediately after the exercise, muscle thickness (MTH) of the elbow flexors acutely increased (p<0.01) with both CON-BFR and ECC-BFR, but was greater with CON-BFR (11.7%) (p<0.01) than ECC-BFR (3.9%) at 10-cm above the elbow joint. Following 6-weeks of training, MRI-measured muscle cross-sectional area (CSA) at 10-cm position and mid-upper arm (12.0% and 10.6%, respectively) as well as muscle volume (12.5%) of the elbow flexors were increased (p<0.01) with CON-BFR. Increases in muscle CSA and volume were lower in ECC-BFR (5.1%, 0.8% and 2.9%, respectively) than in the CON-BFR and only muscle CSA at 10-cm position increased significantly (p<0.05) after the training. Maximal voluntary isometric strength of elbow flexors was increased (p<0.05) in CON-BFR (8.6%), but not in ECC (3.8%). These results suggest that CON-BFR training leads to pronounced acute changes in muscle size, an index of muscle cell swelling, the response to which may be an important factor for promoting muscle hypertrophy with BFR resistance training.",
"title": ""
},
{
"docid": "c4a895af5fe46e91f599f71403948a2b",
"text": "The rise in popularity of the Android platform has resulted in an explosion of malware threats targeting it. As both Android malware and the operating system itself constantly evolve, it is very challenging to design robust malware mitigation techniques that can operate for long periods of time without the need for modifications or costly re-training. In this paper, we present MAMADROID, an Android malware detection system that relies on app behavior. MAMADROID builds a behavioral model, in the form of a Markov chain, from the sequence of abstracted API calls performed by an app, and uses it to extract features and perform classification. By abstracting calls to their packages or families, MAMADROID maintains resilience to API changes and keeps the feature set size manageable. We evaluate its accuracy on a dataset of 8.5K benign and 35.5K malicious apps collected over a period of six years, showing that it not only effectively detects malware (with up to 99% F-measure), but also that the model built by the system keeps its detection capabilities for long periods of time (on average, 87% and 73% F-measure, respectively, one and two years after training). Finally, we compare against DROIDAPIMINER, a state-of-the-art system that relies on the frequency of API calls performed by apps, showing that MAMADROID significantly outperforms it.",
"title": ""
},
{
"docid": "c8dc06de68e4706525e98f444e9877e4",
"text": "This study used two field trials with 5 and 34 years of liming histories, respectively, and aimed to elucidate the long-term effect of liming on soil organic C (SOC) in acid soils. It was hypothesized that long-term liming would increase SOC concentration, macro-aggregate stability and SOC concentration within aggregates. Surface soils (0–10 cm) were sampled and separated into four aggregate-size classes: large macro-aggregates (>2 mm), small macro-aggregates (0.25–2 mm), micro-aggregates (0.053–0.25 mm) and silt and clay fraction (<0.053 mm) by wet sieving, and the SOC concentration of each aggregate-size was quantified. Liming decreased SOC in the bulk soil and in aggregates as well as macro-aggregate stability in the low-input and cultivated 34-year-old trial. In contrast, liming did not significantly change the concentration of SOC in the bulk soil or in aggregates but improved macro-aggregate stability in the 5-year-old trial under undisturbed unimproved pastures. Furthermore, the single application of lime to the surface soil increased pH in both topsoil (0–10 cm) and subsurface soil (10–20 cm) and increased K2SO4-extractable C, microbial biomass C (Cmic) and basal respiration (CO2) in both soil layers of both lime trials. Liming increased the percentage of SOC present as microbial biomass C (Cmic/Corg) and decreased the respiration rate per unit biomass (qCO2). The study concludes that despite long-term liming decreased total SOC in the low-input systems, it increased labile C pools and the percentage of SOC present as microbial biomass C.",
"title": ""
},
{
"docid": "cea0f4b7409729fd310024d2e9a31b71",
"text": "Relative ranging between Wireless Sensor Network (WSN) nod es is considered to be an important requirement for a number of dis tributed applications. This paper focuses on a two-way, time of flight (ToF) te chnique which achieves good accuracy in estimating the point-to-point di s ance between two wireless nodes. The underlying idea is to utilize a two-way t ime transfer approach in order to avoid the need for clock synchronization b etween the participating wireless nodes. Moreover, by employing multipl e ToF measurements, sub-clock resolution is achieved. A calibration stage is us ed to estimate the various delays that occur during a message exchange and require subtraction from the initial timed value. The calculation of the range betwee n the nodes takes place on-node making the proposed scheme suitable for distribute d systems. Care has been taken to exclude the erroneous readings from the set of m easurements that are used in the estimation of the desired range. The two-way T oF technique has been implemented on commercial off-the-self (COTS) device s without the need for additional hardware. The system has been deployed in var ous experimental locations both indoors and outdoors and the obtained result s reveal that accuracy between 1m RMS and 2.5m RMS in line-of-sight conditions over a 42m range can be achieved.",
"title": ""
},
{
"docid": "9e84bd8c033bf04592b732e6c6a604c6",
"text": "In recent years, endomicroscopy has become increasingly used for diagnostic purposes and interventional guidance. It can provide intraoperative aids for real-time tissue characterization and can help to perform visual investigations aimed for example to discover epithelial cancers. Due to physical constraints on the acquisition process, endomicroscopy images, still today have a low number of informative pixels which hampers their quality. Post-processing techniques, such as Super-Resolution (SR), are a potential solution to increase the quality of these images. SR techniques are often supervised, requiring aligned pairs of low-resolution (LR) and high-resolution (HR) images patches to train a model. However, in our domain, the lack of HR images hinders the collection of such pairs and makes supervised training unsuitable. For this reason, we propose an unsupervised SR framework based on an adversarial deep neural network with a physically-inspired cycle consistency, designed to impose some acquisition properties on the super-resolved images. Our framework can exploit HR images, regardless of the domain where they are coming from, to transfer the quality of the HR images to the initial LR images. This property can be particularly useful in all situations where pairs of LR/HR are not available during the training. Our quantitative analysis, validated using a database of 238 endomicroscopy video sequences from 143 patients, shows the ability of the pipeline to produce convincing super-resolved images. A Mean Opinion Score (MOS) study also confirms this quantitative image quality assessment.",
"title": ""
},
{
"docid": "25bddb3111da2485c341eec1d7fdf7c0",
"text": "Security protocols are building blocks in secure communications. Security protocols deploy some security mechanisms to provide certain security services. Security protocols are considered abstract when analyzed. They might involve more vulnerabilities when implemented. This manuscript provides a holistic study on security protocols. It reviews foundations of security protocols, taxonomy of attacks on security protocols and their implementations, and different methods and models for security analysis of protocols. Specifically, it clarifies differences between information-theoretic and computational security, and computational and symbolic models. Furthermore, a survey on computational security models for authenticated key exchange (AKE) and passwordauthenticated key exchange (PAKE) protocols, as the most important and well-studied type of security protocols, is provided.",
"title": ""
},
{
"docid": "e01d5be587c73aaa133acb3d8aaed996",
"text": "This paper presents a new optimization-based method to control three micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. Our control strategy arises from physics that apply force in the negative direction of states errors. The objective is to regulate the inter-agent spacing, heading and position of the set of agents, for motion in two dimensions, while the system is inherently underactuated. Simulation results on three agents and a proof-of-concept experiment on two agents show the feasibility of the idea to shed light on future micro/nanoscale multi-agent explorations. Average tracking error of less than 50 micrometers and 1.85 degrees is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical spherical-shape agents with nominal radius less than of 250 micrometers operating within several body-lengths of each other.",
"title": ""
},
{
"docid": "c3f1a534afe9f5c48aac88812a51ab09",
"text": "We propose a novel method MultiModal Pseudo Relevance Feedback (MMPRF) for event search in video, which requires no search examples from the user. Pseudo Relevance Feedback has shown great potential in retrieval tasks, but previous works are limited to unimodal tasks with only a single ranked list. To tackle the event search task which is inherently multimodal, our proposed MMPRF takes advantage of multiple modalities and multiple ranked lists to enhance event search performance in a principled way. The approach is unique in that it leverages not only semantic features, but also non-semantic low-level features for event search in the absence of training data. Evaluated on the TRECVID MEDTest dataset, the approach improves the baseline by up to 158% in terms of the mean average precision. It also significantly contributes to CMU Team's final submission in TRECVID-13 Multimedia Event Detection.",
"title": ""
},
{
"docid": "4d9a4cb23ad4ac56a3fbfece57fb6647",
"text": "Gene therapy refers to a rapidly growing field of medicine in which genes are introduced into the body to treat or prevent diseases. Although a variety of methods can be used to deliver the genetic materials into the target cells and tissues, modified viral vectors represent one of the more common delivery routes because of its transduction efficiency for therapeutic genes. Since the introduction of gene therapy concept in the 1970s, the field has advanced considerably with notable clinical successes being demonstrated in many clinical indications in which no standard treatment options are currently available. It is anticipated that the clinical success the field observed in recent years can drive requirements for more scalable, robust, cost effective, and regulatory-compliant manufacturing processes. This review provides a brief overview of the current manufacturing technologies for viral vectors production, drawing attention to the common upstream and downstream production process platform that is applicable across various classes of viral vectors and their unique manufacturing challenges as compared to other biologics. In addition, a case study of an industry-scale cGMP production of an AAV-based gene therapy product performed at 2,000 L-scale is presented. The experience and lessons learned from this largest viral gene therapy vector production run conducted to date as discussed and highlighted in this review should contribute to future development of commercial viable scalable processes for vial gene therapies.",
"title": ""
},
{
"docid": "f92087a8e81c45cd8bedc12fddd682fc",
"text": "This paper presented a novel power conversion method of realizing the galvanic isolation by dual safety capacitors (Y-cap) instead of conventional transformer. With limited capacitance of the Y capacitor, series resonant is proposed to achieve the power transfer. The basic concept is to control the power path impedance, which blocks the dominant low-frequency part of touch current and let the high-frequency power flow freely. Conceptual analysis, simulation and design considerations are mentioned in this paper. An 85W AC/AC prototype is designed and verified to substitute the isolation transformer of a CCFL LCD TV backlight system. Compared with the conventional transformer isolation, the new method is proved to meet the function and safety requirements of its specification while has higher efficiency and smaller size.",
"title": ""
},
{
"docid": "a88e52b2aff5d30a5b4314d59392910e",
"text": "The design and implementation of a compact monopole antenna with broadband circular polarization is presented in this letter. The proposed antenna consists of a simple C-shaped patch and a modified ground plane with the overall size of 0.33 λ × 0.37 λ. By properly embedding a slit in the C-shaped patch and improving the ground plane with two triangular stubs, the measured broadband 3-dB axial-ratio bandwidth of 104.7% (2.05–6.55 GHz) is obtained, while the measured impedance bandwidth of 106.3% (2.25–7.35 GHz), defined by –10-dB return loss, is achieved. The performance for different parameters is analyzed. The proposed antenna is a good candidate for the application of various wireless communication systems.",
"title": ""
},
{
"docid": "1ef0a2569a1e6a4f17bfdc742ad30a7f",
"text": "Internet of Things (IoT) is becoming more and more popular. Increasingly, European projects (CityPulse, IoT.est, IoT-i and IERC), standard development organizations (ETSI M2M, oneM2M and W3C) and developers are involved in integrating Semantic Web technologies to Internet of Things. All of them design IoT application uses cases which are not necessarily interoperable with each other. The main innovative research challenge is providing a unified system to build interoperable semantic-based IoT applications. In this paper, to overcome this challenge, we design the Semantic Web of Things (SWoT) generator to assist IoT projects and developers in: (1) building interoperable Semantic Web of Things (SWoT) applications by providing interoperable semantic-based IoT application templates, (2) easily inferring high-level abstractions from sensor measurements thanks to the rules provided by the template, (3) designing domain-specific or inter-domain IoT applications thanks to the interoperable domain knowledge provided by the template, and (4) encouraging to reuse as much as possible the background knowledge already designed. We demonstrate the usefulness of our contribution though three use cases: (1) cloud-based IoT developers, (2) mobile application developers, and (3) assisting IoT projects. A proof-of concept for providing Semantic Web of Things application templates is available at http://www.sensormeasurement.appspot.com/?p=m3api.",
"title": ""
},
{
"docid": "0ef2a90669c0469df0dc2281a414cf37",
"text": "Web Intelligence is a direction for scientific research that explores practical applications of Artificial Intelligence to the next generation of Web-empowered systems. In this paper, we present a Web-based intelligent tutoring system for computer programming. The decision making process conducted in our intelligent system is guided by Bayesian networks, which are a formal framework for uncertainty management in Artificial Intelligence based on probability theory. Whereas many tutoring systems are static HTML Web pages of a class textbook or lecture notes, our intelligent system can help a student navigate through the online course materials, recommend learning goals, and generate appropriate reading sequences.",
"title": ""
},
{
"docid": "fe2bc36e704b663c8b9a72e7834e6c7e",
"text": "Driven by deep learning, there has been a surge of specialized processors for matrix multiplication, referred to as Tensor Core Units (TCUs). These TCUs are capable of performing matrix multiplications on small matrices (usually 4× 4 or 16×16) to accelerate the convolutional and recurrent neural networks in deep learning workloads. In this paper we leverage NVIDIA’s TCU to express both reduction and scan with matrix multiplication and show the benefits — in terms of program simplicity, efficiency, and performance. Our algorithm exercises the NVIDIA TCUs which would otherwise be idle, achieves 89%− 98% of peak memory copy bandwidth, and is orders of magnitude faster (up to 100× for reduction and 3× for scan) than state-of-the-art methods for small segment sizes — common in machine learning and scientific applications. Our algorithm achieves this while decreasing the power consumption by up to 22% for reduction and 16% for scan.",
"title": ""
}
] |
scidocsrr
|
64a356081758b34c9f3bce1b948f32bb
|
Development of a Small Legged Wall Climbing Robot with Passive Suction Cups
|
[
{
"docid": "08fcc60aad5e9183c9c9440698317bcd",
"text": "This paper proposes a small-scale agile wall climbing robot able to navigate on smooth surfaces of any orientation, including vertical and inverted surfaces, which uses adhesive elastomer materials for attachment. Using two actuated legs with rotary motion and two passive revolute joints at each foot the robot can climb and steer in any orientation. Due to its compact design, a high degree of miniaturization is possible. It has onboard power, sensing, computing, and wireless communication which allow for semi-autonomous operation. Various aspects of a functioning prototype design and performance are discussed in detail, including leg and feet design and gait control. The current prototype can climb 90deg slopes at a speed of 6 cm/s and steer to any angle. This robot is intended for inspection and surveillance applications and, ultimately, space missions",
"title": ""
}
] |
[
{
"docid": "d42ed4f231d51cacaf1f42de1c723c31",
"text": "A stepped circular waveguide dual-mode (SCWDM) filter is fully investigated in this paper, from its basic characteristic to design formula. As compared to a conventional circular waveguide dual-mode (CWDM) filter, it provides more freedoms for shifting and suppressing the spurious modes in a wide frequency band. This useful attribute can be used for a broadband waveguide contiguous output multiplexer (OMUX) in satellite payloads. The scaling factor for relating coupling value M to its corresponding impedance inverter K in a stepped cavity is derived for full-wave EM design. To validate the design technique, four design examples are presented. One challenging example is a wideband 17-channel Ku-band contiguous multiplexer with two SCWDM channel filters. A triplexer hardware covering the same included bandwidth is also designed and measured. The measurement results show excellent agreement with those of the theoretical EM designs, justifying the effectiveness of full-wave EM modal analysis. Comparing to the best possible design of conventional CWDM filters, at least 30% more spurious-free range in both Ku-band and C-band can be achieved by using SCWDM filters.",
"title": ""
},
{
"docid": "63c815c9aa92acec6664c0865f1856e1",
"text": "We examined the role of kisspeptin and its receptor, the G-protein-coupled receptor GPR54, in governing the onset of puberty in the mouse. In the adult male and female mouse, kisspeptin (10-100 nM) evoked a remarkably potent, long-lasting depolarization of >90% of gonadotropin-releasing hormone (GnRH)-green fluorescent protein neurons in situ. In contrast, in juvenile [postnatal day 8 (P8) to P19] and prepubertal (P26-P33) male mice, kisspeptin activated only 27 and 44% of GnRH neurons, respectively. This developmental recruitment of GnRH neurons into a kisspeptin-responsive pool was paralleled by an increase in the ability of centrally administered kisspeptin to evoke luteinizing hormone secretion in vivo. To learn more about the mechanisms through which kisspeptin-GPR54 signaling at the GnRH neuron may change over postnatal development, we performed quantitative in situ hybridization for kisspeptin and GPR54 transcripts. Approximately 90% of GnRH neurons were found to express GPR54 mRNA in both juvenile and adult mice, without a detectable difference in the mRNA content between the age groups. In contrast, the expression of KiSS-1 mRNA increased dramatically across the transition from juvenile to adult life in the anteroventral periventricular nucleus (AVPV; p < 0.001). These results demonstrate that kisspeptin exerts a potent depolarizing effect on the excitability of almost all adult GnRH neurons and that the responsiveness of GnRH neurons to kisspeptin increases over postnatal development. Together, these observations suggest that activation of GnRH neurons by kisspeptin at puberty reflects a dual process involving an increase in kisspeptin input from the AVPV and a post-transcriptional change in GPR54 signaling within the GnRH neuron.",
"title": ""
},
{
"docid": "fd2d04af3b259a433eb565a41b11ffbd",
"text": "OVERVIEW • We develop novel orthogonality regularizations on training deep CNNs, by borrowing ideas and tools from sparse optimization. • These plug-and-play regularizations can be conveniently incorporated into training almost any CNN without extra hassle. • The proposed regularizations can consistently improve the performances of baseline deep networks on CIFAR-10/100, ImageNet and SVHN datasets, based on intensive empirical experiments, as well as accelerate/stabilize the training curves. • The proposed orthogonal regularizations outperform existing competitors.",
"title": ""
},
{
"docid": "96a79bc015e34db18e32a31bfaaace36",
"text": "We consider social media as a promising tool for public health, focusing on the use of Twitter posts to build predictive models about the forthcoming influence of childbirth on the behavior and mood of new mothers. Using Twitter posts, we quantify postpartum changes in 376 mothers along dimensions of social engagement, emotion, social network, and linguistic style. We then construct statistical models from a training set of observations of these measures before and after the reported childbirth, to forecast significant postpartum changes in mothers. The predictive models can classify mothers who will change significantly following childbirth with an accuracy of 71%, using observations about their prenatal behavior, and as accurately as 80-83% when additionally leveraging the initial 2-3 weeks of postnatal data. The study is motivated by the opportunity to use social media to identify mothers at risk of postpartum depression, an underreported health concern among large populations, and to inform the design of low-cost, privacy-sensitive early-warning systems and intervention programs aimed at promoting wellness postpartum.",
"title": ""
},
{
"docid": "6660bcfd564726421d9eaaa696549454",
"text": "When building intelligent spaces, the knowledge representation for encapsulating rooms, users, groups, roles, and other information is a fundamental design question. We present a semantic network as such a representation, and demonstrate its utility as a basis for ongoing work.",
"title": ""
},
{
"docid": "2b471e61a6b95221d9ca9c740660a726",
"text": "We propose a low-overhead sampling infrastructure for gathering information from the executions experienced by a program's user community. Several example applications illustrate ways to use sampled instrumentation to isolate bugs. Assertion-dense code can be transformed to share the cost of assertions among many users. Lacking assertions, broad guesses can be made about predicates that predict program errors and a process of elimination used to whittle these down to the true bug. Finally, even for non-deterministic bugs such as memory corruption, statistical modeling based on logistic regression allows us to identify program behaviors that are strongly correlated with failure and are therefore likely places to look for the error.",
"title": ""
},
{
"docid": "99bf50d4a382d9ed8548b3be3d91acd4",
"text": "We present a new descriptor for tactile 3D object classification. It is invariant to object movement and simple to construct, using only the relative geometry of points on the object surface. We demonstrate successful classification of 185 objects in 10 categories, at sparse to dense surface sampling rate in point cloud simulation, with an accuracy of 77.5% at the sparsest and 90.1% at the densest. In a physics-based simulation, we show that contact clouds resembling the object shape can be obtained by a series of gripper closures using a robotic hand equipped with sparse tactile arrays. Despite sparser sampling of the object's surface, classification still performs well, at 74.7%. On a real robot, we show the ability of the descriptor to discriminate among different object instances, using data collected by a tactile hand.",
"title": ""
},
{
"docid": "34c441bfb1394ac9f4f561cb19c3ace7",
"text": "Deep reinforcement learning methods attain super-human performance in a wide range of environments. Such methods are grossly inefficient, often taking orders of magnitudes more data than humans to achieve reasonable performance. We propose Neural Episodic Control: a deep reinforcement learning agent that is able to rapidly assimilate new experiences and act upon them. Our agent uses a semi-tabular representation of the value function: a buffer of past experience containing slowly changing state representations and rapidly updated estimates of the value function. We show across a wide range of environments that our agent learns significantly faster than other state-of-the-art, general purpose deep reinforcement learning agents.",
"title": ""
},
{
"docid": "bb361bc0ce796ab9435c281720ce2ae1",
"text": "Developers typically rely on the information submitted by end-users to resolve bugs. We conducted a survey on information needs and commonly faced problems with bug reporting among several hundred developers and users of the APACHE, ECLIPSE and MOZILLA projects. In this paper, we present the results of a card sort on the 175 comments sent back to us by the responders of the survey. The card sort revealed several hurdles involved in reporting and resolving bugs, which we present in a collection of recommendations for the design of new bug tracking systems. Such systems could provide contextual assistance, reminders to add information, and most important, assistance to collect and report crucial information to developers.",
"title": ""
},
{
"docid": "34901b8e3e7667e3a430b70a02595f69",
"text": "In the previous NTCIR8-GeoTime task, ABRIR (Appropriate Boolean query Reformulation for Information Retrieval) proved to be one of the most effective systems for retrieving documents with Geographic and Temporal constraints. However, failure analysis showed that the identification of named entities and relationships between these entities and the query is important in improving the quality of the system. In this paper, we propose to use Wikipedia and GeoNames as resources for extracting knowledge about named entities. We also modify our system to use such information.",
"title": ""
},
{
"docid": "93bc875cf2145dfdcd8a2ce44049aa0d",
"text": "We construct a counterfactual statement when we reason conjecturally about an event which did or did not occur in the past: If an event had occurred, what would have happened? Would it be relevant? Real world examples, as studied by Byrne, Rescher and many others, show that these conditionals involve a complex reasoning process. An intuitive and elegant approach to evaluate counterfactuals, without deep revision mechanisms, is proposed by Pearl. His Do-Calculus identifies causal relations in a Bayesian network resorting to counterfactuals. Though leaving out probabilities, we adopt Pearl’s stance, and its prior epistemological justification to counterfactuals in causal Bayesian networks, but for programs. Logic programming seems a suitable environment for several reasons. First, its inferential arrow is adept at expressing causal direction and conditional reasoning. Secondly, together with its other functionalities such as abduction, integrity constraints, revision, updating and debugging (a form of counterfactual reasoning), it proffers a wide range of expressibility itself. We show here how programs under the weak completion semantics in an abductive framework, comprising the integrity constraints, can smoothly and uniformly capture well-known and off-the-shelf counterfactual problems and conundrums, taken from the psychological and philosophical literature. Our approach is adroitly reconstructable in other three-valued LP semantics, or restricted to two-valued ones.",
"title": ""
},
{
"docid": "5cd726f49dd0cb94fe7d2d724da9f215",
"text": "We implement pedestrian dead reckoning (PDR) for indoor localization. With a waist-mounted PDR based system on a smart-phone, we estimate the user's step length that utilizes the height change of the waist based on the Pythagorean Theorem. We propose a zero velocity update (ZUPT) method to address sensor drift error: Simple harmonic motion and a low-pass filtering mechanism combined with the analysis of gait characteristics. This method does not require training to develop the step length model. Exploiting the geometric similarity between the user trajectory and the floor map, our map matching algorithm includes three different filters to calibrate the direction errors from the gyro using building floor plans. A sliding-window-based algorithm detects corners. The system achieved 98% accuracy in estimating user walking distance with a waist-mounted phone and 97% accuracy when the phone is in the user's pocket. ZUPT improves sensor drift error (the accuracy drops from 98% to 84% without ZUPT) using 8 Hz as the cut-off frequency to filter out sensor noise. Corner length impacted the corner detection algorithm. In our experiments, the overall location error is about 0.48 meter.",
"title": ""
},
{
"docid": "fd4753610f46c566d263dd7a7837cf05",
"text": "EcoLexiCAT is a web-based tool for the terminology-enhanced translation of specialized environmental texts for the language combination English-Spanish-English. It uses the open source version of the web-based CAT tool MateCat and enriches a source text with information from: (1) EcoLexicon, a multimodal and multilingual terminological knowledge base on the environment (Faber et al., 2014; Faber et al., 2016); (2) BabelNet, an automatically constructed multilingual encyclopedic dictionary and semantic network (Navigli & Ponzetto, 2012); (3) Sketch Engine, the well-known corpus query system (Kilgarriff et al., 2004); (4) IATE, the multilingual glossary of the European Commission; and (4) other external resources (i.e. Wikipedia, Collins, Wordreference, Linguee, etc.) that can also be customized by the user. The tool was built with the aim of integrating terminology management – often considered complex and time-consuming – in the translation workflow of a CAT tool. In this paper, EcoLexiCAT is described along the procedure with which it was evaluated and the results of the evaluation.",
"title": ""
},
{
"docid": "51b766b0a7f1e3bc1f49d16df04a69f7",
"text": "This study reports the results of a biometrical genetical analysis of scores on a personality inventory (The Eysenck Personality Questionnaire, or EPQ), which purports to measure psychoticism, neuroticism, extraversion and dissimulation (Lie Scale). The subjects were 544 pairs of twins, from the Maudsley Twin Register. The purpose of the study was to test the applicability of various genotypeenvironmental models concerning the causation of P scores. Transformation of the raw scores is required to secure a scale on which the effects of genes and environment are additive. On such a scale 51% of the variation in P is due to environmental differences within families, but the greater part (77%) of this environmental variation is due to random effects which are unlikely to be controllable. . The genetical consequences ot'assortative mating were too slight to be detectable in this study, and the genetical variation is consistent with the hypothesis that gene effects are additive. This is a general finding for traits which have been subjected to stabilizing selection. Our model for P is consistent with these advanced elsewhere to explain the origin of certain kinds of psychopathology. The data provide little support for the view that the \"family environment\" (including the environmental influence of parents) plays a major part in the determination of individual differences in P, though we cite evidence suggesting that sibling competition effects are producing genotypeenvironmental covariation for the determinants of P in males. The genetical and environmental determinants of the covariation of P with other personality dimensions are considered. Assumptions are discussed and tested where possible.",
"title": ""
},
{
"docid": "566144a980fe85005f7434f7762bfeb9",
"text": "This article describes the rationale, development, and validation of the Scale for Suicide Ideation (SSI), a 19-item clinical research instrument designed to quantify and assess suicidal intention. The scale was found to have high internal consistency and moderately high correlations with clinical ratings of suicidal risk and self-administered measures of self-harm. Furthermore, it was sensitive to changes in levels of depression and hopelessness over time. Its construct validity was supported by two studies by different investigators testing the relationship between hopelessness, depression, and suicidal ideation and by a study demonstrating a significant relationship between high level of suicidal ideation and \"dichotomous\" attitudes about life and related concepts on a semantic differential test. Factor analysis yielded three meaningful factors: active suicidal desire, specific plans for suicide, and passive suicidal desire.",
"title": ""
},
{
"docid": "eaf7b6b0cc18453538087cc90254dbd8",
"text": "We present a real-time system that renders antialiased hard shadows using irregular z-buffers (IZBs). For subpixel accuracy, we use 32 samples per pixel at roughly twice the cost of a single sample. Our system remains interactive on a variety of game assets and CAD models while running at 1080p and 2160p and imposes no constraints on light, camera or geometry, allowing fully dynamic scenes without precomputation. Unlike shadow maps we introduce no spatial or temporal aliasing, smoothly animating even subpixel shadows from grass or wires.\n Prior irregular z-buffer work relies heavily on GPU compute. Instead we leverage the graphics pipeline, including hardware conservative raster and early-z culling. We observe a duality between irregular z-buffer performance and shadow map quality; this allows common shadow map algorithms to reduce our cost. Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 2 ms per frame.",
"title": ""
},
{
"docid": "06efa7ddc20dc499c9db3127217883ce",
"text": "The development of Space Shuttle software posed unique requirements above and beyond raw size (30 times larger than Saturn V software), complexity, and criticality.",
"title": ""
},
{
"docid": "c85bd1c2ffb6b53bfeec1ec69f871360",
"text": "In this paper, we present a new design of a compact power divider based on the modification of the conventional Wilkinson power divider. In this new configuration, length reduction of the high-impedance arms is achieved through capacitive loading using open stubs. Radial configuration was adopted for bandwidth enhancement. Additionally, by insertion of the complex isolation network between the high-impedance transmission lines at an arbitrary phase angle other than 90 degrees, both electrical and physical isolation were achieved. Design equations as well as the synthesis procedure of the isolation network are demonstrated using an example centred at 1 GHz. The measurement results revealed a reduction of 60% in electrical length compared to the conventional Wilkinson power divider with a total length of only 30 degrees at the centre frequency of operation.",
"title": ""
},
{
"docid": "9b5b10031ab67dfd664993f727f1bce8",
"text": "PURPOSE\nWe propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image.\n\n\nMETHODS\nWe simplify the segmentation algorithms of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of \"convolution\" and \"deconvolution\" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentations in CT cases of different sizes that cover arbitrary scan regions without any adjustment.\n\n\nRESULTS\nThe proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (lumen and content inside of stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth.\n\n\nCONCLUSIONS\nWe propose a single network based on pixel-to-label deep learning to address the challenging issue of anatomical structure segmentation in 3D CT cases. The novelty of this work is the policy of deep learning of the different 2D sectional appearances of 3D anatomical structures for CT cases and the majority voting of the 3D segmentation results from multiple crossed 2D sections to achieve availability and reliability with better efficiency, generality, and flexibility than conventional segmentation methods, which must be guided by human expertise.",
"title": ""
},
{
"docid": "c74d73c3d09d812099550c5a0ab18b36",
"text": "In this paper, we present a novel shadow removal system for single natural images as well as color aerial images using an illumination recovering optimization method. We first adaptively decompose the input image into overlapped patches according to the shadow distribution. Then, by building the correspondence between the shadow patch and the lit patch based on texture similarity, we construct an optimized illumination recovering operator, which effectively removes the shadows and recovers the texture detail under the shadow patches. Based on coherent optimization processing among the neighboring patches, we finally produce high-quality shadow-free results with consistent illumination. Our shadow removal system is simple and effective, and can process shadow images with rich texture types and nonuniform shadows. The illumination of shadow-free results is consistent with that of surrounding environment. We further present several shadow editing applications to illustrate the versatility of the proposed method.",
"title": ""
}
] |
scidocsrr
|
fc0dd612e5493af4741bdc4dead85fbe
|
Ensuring Security and Privacy Preservation for Cloud Data Services
|
[
{
"docid": "e2e71d3ba1a2cf1b4f0fa2c5d2bf9a10",
"text": "An important problem in public clouds is how to selectively share documents based on fine-grained attribute-based access control policies (acps). An approach is to encrypt documents satisfying different policies with different keys using a public key cryptosystem such as attribute-based encryption, and/or proxy re-encryption. However, such an approach has some weaknesses: it cannot efficiently handle adding/revoking users or identity attributes, and policy changes; it requires to keep multiple encrypted copies of the same documents; it incurs high computational costs. A direct application of a symmetric key cryptosystem, where users are grouped based on the policies they satisfy and unique keys are assigned to each group, also has similar weaknesses. We observe that, without utilizing public key cryptography and by allowing users to dynamically derive the symmetric keys at the time of decryption, one can address the above weaknesses. Based on this idea, we formalize a new key management scheme, called broadcast group key management (BGKM), and then give a secure construction of a BGKM scheme called ACV-BGKM. The idea is to give some secrets to users based on the identity attributes they have and later allow them to derive actual symmetric keys based on their secrets and some public information. A key advantage of the BGKM scheme is that adding users/revoking users or updating acps can be performed efficiently by updating only some public information. Using our BGKM construct, we propose an efficient approach for fine-grained encryption-based access control for documents stored in an untrusted cloud file storage.",
"title": ""
},
{
"docid": "01209a2ace1a4bc71ad4ff848bb8a3f4",
"text": "For data storage outsourcing services, it is important to allow data owners to efficiently and securely verify that the storage server stores their data correctly. To address this issue, several proof-of-retrievability (POR) schemes have been proposed wherein a storage server must prove to a verifier that all of a client's data are stored correctly. While existing POR schemes offer decent solutions addressing various practical issues, they either have a non-trivial (linear or quadratic) communication complexity, or only support private verification, i.e., only the data owner can verify the remotely stored data. It remains open to design a POR scheme that achieves both public verifiability and constant communication cost simultaneously.\n In this paper, we solve this open problem and propose the first POR scheme with public verifiability and constant communication cost: in our proposed scheme, the message exchanged between the prover and verifier is composed of a constant number of group elements; different from existing private POR constructions, our scheme allows public verification and releases the data owners from the burden of staying online. We achieved these by tailoring and uniquely combining techniques such as constant size polynomial commitment and homomorphic linear authenticators. Thorough analysis shows that our proposed scheme is efficient and practical. We prove the security of our scheme based on the Computational Diffie-Hellman Problem, the Strong Diffie-Hellman assumption and the Bilinear Strong Diffie-Hellman assumption.",
"title": ""
}
] |
[
{
"docid": "438d69760d828fe9f94a68dbd426778e",
"text": "Beginning with the assumption that implicit theories of personality are crucial tools for understanding social behavior, the authors tested the hypothesis that perceivers would process person information that violated their predominant theory in a biased manner. Using an attentional probe paradigm (Experiment 1) and a recognition memory paradigm (Experiment 2), the authors presented entity theorists (who believe that human attributes are fixed) and incremental theorists (who believe that human attributes are malleable) with stereotype-relevant information about a target person that supported or violated their respective theory. Both groups of participants showed evidence of motivated, selective processing only with respect to theory-violating information. In Experiment 3, the authors found that after exposure to theory-violating information, participants felt greater anxiety and worked harder to reestablish their sense of prediction and control mastery. The authors discuss the epistemic functions of implicit theories of personality and the impact of violated assumptions.",
"title": ""
},
{
"docid": "8399ff9241f59ce76937536cc8fc04a4",
"text": "NOTES: Basic EHR adoption requires the EHR system to have at least a basic set of EHR functions, including clinician notes, as defined in Table 2. A certified EHR is EHR technology that has been certified as meeting federal requirements for some or all of the hospital objectives of Meaningful Use. Possession means that the hospital has a legal agreement with the EHR vendor, but is not equivalent to adoption. *Significantly different from previous year (p < 0.05). SOURCE: ONC/American Hospital Association (AHA), AHA Annual Survey Information Technology Supplement",
"title": ""
},
{
"docid": "eca2bfe1b96489e155e19d02f65559d6",
"text": "• Oracle experiment: to understand how well these attributes, when used together, can explain persuasiveness, we train 3 linear SVM regressors, one for each component type, to score an arguments persuasiveness using gold attribute’s as features • Two human annotators who were both native speakers of English were first familiarized with the rubrics and definitions and then trained on five essays • 30 essays were doubly annotated for computing inter-annotator agreement • Each of the remaining essays was annotated by one of the annotators • Score/Class distributions by component type: Give me More Feedback: Annotating Argument Persusiveness and Related Attributes in Student Essays",
"title": ""
},
{
"docid": "9157378112fedfd9959683effe7a0a47",
"text": "Studies indicate that substance use among Ethiopian adolescents is considerably rising; in particular college and university students are the most at risk of substance use. The aim of the study was to assess substance use and associated factors among university students. A cross-sectional survey was carried out among 1040 Haramaya University students using self-administered structured questionnaire. Multistage sampling technique was used to select students. Descriptive statistics, bivariate, and multivariate analysis were done. About two-thirds (62.4%) of the participants used at least one substance. The most commonly used substance was alcohol (50.2%). Being male had strong association with substance use (AOR (95% CI), 3.11 (2.20, 4.40)). The odds of substance use behaviour is higher among third year students (AOR (95% CI), 1.48 (1.01, 2.16)). Being a follower of Muslim (AOR (95% CI), 0.62 (0.44, 0.87)) and Protestant (AOR (95% CI), 0.25 (0.17, 0.36)) religions was shown to be protective of substance use. Married (AOR (95% CI), 1.92 (1.12, 3.30)) and depressed (AOR (95% CI), 3.30 (2.31, 4.72)) students were more likely to use substances than others. The magnitude of substance use was high. This demands special attention, emergency preventive measures, and targeted information, education and communication activity.",
"title": ""
},
{
"docid": "b52a29cd426c5861dbb97aeb91efda4b",
"text": "In recent years, inexact computing has been increasingly regarded as one of the most promising approaches for slashing energy consumption in many applications that can tolerate a certain degree of inaccuracy. Driven by the principle of trading tolerable amounts of application accuracy in return for significant resource savings-the energy consumed, the (critical path) delay, and the (silicon) area-this approach has been limited to application-specified integrated circuits (ASICs) so far. These ASIC realizations have a narrow application scope and are often rigid in their tolerance to inaccuracy, as currently designed; the latter often determining the extent of resource savings we would achieve. In this paper, we propose to improve the application scope, error resilience and the energy savings of inexact computing by combining it with hardware neural networks. These neural networks are fast emerging as popular candidate accelerators for future heterogeneous multicore platforms and have flexible error resilience limits owing to their ability to be trained. Our results in 65-nm technology demonstrate that the proposed inexact neural network accelerator could achieve 1.78-2.67× savings in energy consumption (with corresponding delay and area savings being 1.23 and 1.46×, respectively) when compared to the existing baseline neural network implementation, at the cost of a small accuracy loss (mean squared error increases from 0.14 to 0.20 on average).",
"title": ""
},
{
"docid": "3f23f5452c53ae5fcc23d95acdcdafd8",
"text": "Metamorphism is a technique that mutates the binary code using different obfuscations and never keeps the same sequence of opcodes in the memory. This stealth technique provides the capability to a malware for evading detection by simple signature-based (such as instruction sequences, byte sequences and string signatures) anti-malware programs. In this paper, we present a new scheme named Annotated Control Flow Graph (ACFG) to efficiently detect such kinds of malware. ACFG is built by annotating CFG of a binary program and is used for graph and pattern matching to analyse and detect metamorphic malware. We also optimize the runtime of malware detection through parallelization and ACFG reduction, maintaining the same accuracy (without ACFG reduction) for malware detection. ACFG proposed in this paper: (i) captures the control flow semantics of a program; (ii) provides a faster matching of ACFGs and can handle malware with smaller CFGs, compared with other such techniques, without compromising the accuracy; (iii) contains more information and hence provides more accuracy than a CFG. Experimental evaluation of the proposed scheme using an existing dataset yields malware detection rate of 98.9% and false positive rate of 4.5%.",
"title": ""
},
{
"docid": "d8c367a18d7a8248b0600e3f295d14d3",
"text": "The digital world is growing day by day; many new risks have emerged during the exchange of information around the world; and many ways have evolved to protect the information. In this paper, this paper will conceal information into an image by using three methods that concentrate on the compression of the date before hiding it into the image and then compare the results using Peak Signal to Noise Ratio (PSNR). The three methods that will be used are Least Significant Bit (LSB), Huffman Code, and Arithmetic Coding and then the result will be compared.",
"title": ""
},
{
"docid": "45458f6e7160b32a2e82c76568bfe46a",
"text": "PURPOSE\nTo assess the effectiveness and clinical outcomes of catheter-directed thrombolysis in patients with atresia of the inferior vena cava (IVC) and acute iliofemoral deep vein thrombosis (DVT).\n\n\nMATERIALS AND METHODS\nFrom 2001 to 2009, 11 patients (median age, 32 y) with atresia of the IVC and acute iliofemoral DVT in 13 limbs were admitted for catheter-directed thrombolysis. Through a multiple-side hole catheter inserted in the popliteal vein, continuous pulse-spray infusion of tissue plasminogen activator and heparin was performed. Thrombolysis was terminated when all thrombus was resolved and venous outflow through the paravertebral collateral vessels was achieved. After thrombolysis, all patients received lifelong anticoagulation and compression stockings and were followed up at regular intervals.\n\n\nRESULTS\nUltrasound or computed tomography revealed absence of the suprarenal segment of the IVC in two patients, and nine were diagnosed with absence of the infrarenal segment of the IVC. Median treatment time was 58 hours (range, 42-95 h). No deaths or serious complications occurred. Overall, complications were observed in four patients, one of whom required blood transfusion. Three patients were diagnosed with thrombophilia. Median follow-up was 37 months (range, 51 d to 96 mo). All patients had patent deep veins and one developed reflux in the popliteal fossa after 4 years. No thromboembolic recurrences were observed during follow-up.\n\n\nCONCLUSIONS\nCatheter-directed thrombolysis of patients with acute iliofemoral DVT and atresia of the IVC is a viable treatment option, as reasonable clinical outcomes can be obtained.",
"title": ""
},
{
"docid": "39208755abbd92af643d0e30029f6cc0",
"text": "The biomedical community makes extensive use of text mining technology. In the past several years, enormous progress has been made in developing tools and methods, and the community has been witness to some exciting developments. Although the state of the community is regularly reviewed, the sheer volume of work related to biomedical text mining and the rapid pace in which progress continues to be made make this a worthwhile, if not necessary, endeavor. This chapter provides a brief overview of the current state of text mining in the biomedical domain. Emphasis is placed on the resources and tools available to biomedical researchers and practitioners, as well as the major text mining tasks of interest to the community. These tasks include the recognition of explicit facts from biomedical literature, the discovery of previously unknown or implicit facts, document summarization, and question answering. For each topic, its basic challenges and methods are outlined and recent and influential work is reviewed.",
"title": ""
},
{
"docid": "dbc3355eb2b88432a4bd21d42c090ef1",
"text": "With advancement of technology things are becoming simpler and easier for us. Automatic systems are being preferred over manual system. This unit talks about the basic definitions needed to understand the Project better and further defines the technical criteria to be implemented as a part of this project. Keywords-component; Automation, 8051 microcontroller, LDR, LED, ADC, Relays, LCD display, Sensors, Stepper motor",
"title": ""
},
{
"docid": "69058572e8baaef255a3be6ac9eef878",
"text": "Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior.\n The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine's inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior's implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.",
"title": ""
},
{
"docid": "2d6718172b83ef2a109f91791af6a0c3",
"text": "BACKGROUND & AIMS\nWe previously established long-term culture conditions under which single crypts or stem cells derived from mouse small intestine expand over long periods. The expanding crypts undergo multiple crypt fission events, simultaneously generating villus-like epithelial domains that contain all differentiated types of cells. We have adapted the culture conditions to grow similar epithelial organoids from mouse colon and human small intestine and colon.\n\n\nMETHODS\nBased on the mouse small intestinal culture system, we optimized the mouse and human colon culture systems.\n\n\nRESULTS\nAddition of Wnt3A to the combination of growth factors applied to mouse colon crypts allowed them to expand indefinitely. Addition of nicotinamide, along with a small molecule inhibitor of Alk and an inhibitor of p38, were required for long-term culture of human small intestine and colon tissues. The culture system also allowed growth of mouse Apc-deficient adenomas, human colorectal cancer cells, and human metaplastic epithelia from regions of Barrett's esophagus.\n\n\nCONCLUSIONS\nWe developed a technology that can be used to study infected, inflammatory, or neoplastic tissues from the human gastrointestinal tract. These tools might have applications in regenerative biology through ex vivo expansion of the intestinal epithelia. Studies of these cultures indicate that there is no inherent restriction in the replicative potential of adult stem cells (or a Hayflick limit) ex vivo.",
"title": ""
},
{
"docid": "e4069b8312b8a273743b31b12b1dfbae",
"text": "Automatic keyphrase extraction techniques play an important role for many tasks including indexing, categorizing, summarizing, and searching. In this paper, we develop and evaluate an automatic keyphrase extraction system for scientific documents. Compared with previous work, our system concentrates on two important issues: (1) more precise location for potential keyphrases: a new candidate phrase generation method is proposed based on the core word expansion algorithm, which can reduce the size of the candidate set by about 75% without increasing the computational complexity; (2) overlap elimination for the output list: when a phrase and its sub-phrases coexist as candidates, an inverse document frequency feature is introduced for selecting the proper granularity. Additional new features are added for phrase weighting. Experiments based on real-world datasets were carried out to evaluate the proposed system. The results show the efficiency and effectiveness of the refined candidate set and demonstrate that the new features improve the accuracy of the system. The overall performance of our system compares favorably with other state-of-the-art keyphrase extraction systems.",
"title": ""
},
{
"docid": "07db8f037ff720c8b8b242879c14531f",
"text": "PURPOSE\nMatriptase-2 (also known as TMPRSS6) is a critical regulator of the iron-regulatory hormone hepcidin in the liver; matriptase-2 cleaves membrane-bound hemojuvelin and consequently alters bone morphogenetic protein (BMP) signaling. Hemojuvelin and hepcidin are expressed in the retina and play a critical role in retinal iron homeostasis. However, no information on the expression and function of matriptase-2 in the retina is available. The purpose of the present study was to examine the retinal expression of matriptase-2 and its role in retinal iron homeostasis.\n\n\nMETHODS\nRT-PCR, quantitative PCR (qPCR), and immunofluorescence were used to analyze the expression of matriptase-2 and other iron-regulatory proteins in the mouse retina. Polarized localization of matriptase-2 in the RPE was evaluated using markers for the apical and basolateral membranes. Morphometric analysis of retinas from wild-type and matriptase-2 knockout (Tmprss6(msk/msk) ) mice was also performed. Retinal iron status in Tmprss6(msk/msk) mice was evaluated by comparing the expression levels of ferritin and transferrin receptor 1 between wild-type and knockout mice. BMP signaling was monitored by the phosphorylation status of Smads1/5/8 and expression levels of Id1 while interleukin-6 signaling was monitored by the phosphorylation status of STAT3.\n\n\nRESULTS\nMatriptase-2 is expressed in the mouse retina with expression detectable in all retinal cell types. Expression of matriptase-2 is restricted to the apical membrane in the RPE where hemojuvelin, the substrate for matriptase-2, is also present. There is no marked difference in retinal morphology between wild-type mice and Tmprss6(msk/msk) mice, except minor differences in specific retinal layers. The knockout mouse retina is iron-deficient, demonstrable by downregulation of the iron-storage protein ferritin and upregulation of transferrin receptor 1 involved in iron uptake. Hepcidin is upregulated in Tmprss6(msk/msk) mouse retinas, particularly in the neural retina. BMP signaling is downregulated while interleukin-6 signaling is upregulated in Tmprss6(msk/msk) mouse retinas, suggesting that the upregulaton of hepcidin in knockout mouse retinas occurs through interleukin-6 signaling and not through BMP signaling.\n\n\nCONCLUSIONS\nThe iron-regulatory serine protease matriptase-2 is expressed in the retina, and absence of this enzyme leads to iron deficiency and increased expression of hemojuvelin and hepcidin in the retina. The upregulation of hepcidin expression in Tmprss6(msk/msk) mouse retinas does not occur via BMP signaling but likely via the proinflammatory cytokine interleukin-6. We conclude that matriptase-2 is a critical participant in retinal iron homeostasis.",
"title": ""
},
{
"docid": "da8a41e844c519842de524d791527ace",
"text": "Advances in NLP techniques have led to a great demand for tagging and analysis of the sentiments from unstructured natural language data over the last few years. A typical approach to sentiment analysis is to start with a lexicon of positive and negative words and phrases. In these lexicons, entries are tagged with their prior out of context polarity. Unfortunately all efforts found in literature deal mostly with English texts. In this squib, we propose a computational technique of generating an equivalent SentiWordNet (Bengali) from publicly available English Sentiment lexicons and English-Bengali bilingual dictionary. The target language for the present task is Bengali, though the methodology could be replicated for any new language. There are two main lexical resources widely used in English for Sentiment analysis: SentiWordNet (Esuli et. al., 2006) and Subjectivity Word List (Wilson et. al., 2005). SentiWordNet is an automatically constructed lexical resource for English which assigns a positivity score and a negativity score to each WordNet synset. The subjectivity lexicon was compiled from manually developed resources augmented with entries learned from corpora. The entries in the Subjectivity lexicon have been labelled for part of speech (POS) as well as either strong or weak subjective tag depending on reliability of the subjective nature of the entry.",
"title": ""
},
{
"docid": "b2c05f820195154dbbb76ee68740b5d9",
"text": "DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks.",
"title": ""
},
{
"docid": "9897f5e64b4a5d6d80fadb96cb612515",
"text": "Deep convolutional neural networks (CNNs) are rapidly becoming the dominant approach to computer vision and a major component of many other pervasive machine learning tasks, such as speech recognition, natural language processing, and fraud detection. As a result, accelerators for efficiently evaluating CNNs are rapidly growing in popularity. The conventional approaches to designing such CNN accelerators is to focus on creating accelerators to iteratively process the CNN layers. However, by processing each layer to completion, the accelerator designs must use off-chip memory to store intermediate data between layers, because the intermediate data are too large to fit on chip. In this work, we observe that a previously unexplored dimension exists in the design space of CNN accelerators that focuses on the dataflow across convolutional layers. We find that we are able to fuse the processing of multiple CNN layers by modifying the order in which the input data are brought on chip, enabling caching of intermediate data between the evaluation of adjacent CNN layers. We demonstrate the effectiveness of our approach by constructing a fused-layer CNN accelerator for the first five convolutional layers of the VGGNet-E network and comparing it to the state-of-the-art accelerator implemented on a Xilinx Virtex-7 FPGA. We find that, by using 362KB of on-chip storage, our fused-layer accelerator minimizes off-chip feature map data transfer, reducing the total transfer by 95%, from 77MB down to 3.6MB per image.",
"title": ""
},
{
"docid": "33ae11cfc67a9afe34483444a03bfd5a",
"text": "In today’s interconnected digital world, targeted attacks have become a serious threat to conventional computer systems and critical infrastructure alike. Many researchers contribute to the fight against network intrusions or malicious software by proposing novel detection systems or analysis methods. However, few of these solutions have a particular focus on Advanced Persistent Threats or similarly sophisticated multi-stage attacks. This turns finding domain-appropriate methodologies or developing new approaches into a major research challenge. To overcome these obstacles, we present a structured review of semantics-aware works that have a high potential for contributing to the analysis or detection of targeted attacks. We introduce a detailed literature evaluation schema in addition to a highly granular model for article categorization. Out of 123 identified papers, 60 were found to be relevant in the context of this study. The selected articles are comprehensively reviewed and assessed in accordance to Kitchenham’s guidelines for systematic literature reviews. In conclusion, we combine new insights and the status quo of current research into the concept of an ideal systemic approach capable of semantically processing and evaluating information from different observation points.",
"title": ""
},
{
"docid": "60e3e47f0c12df306b6686ee358c4155",
"text": "Stroke affects 750,000 people annually, and 80% of stroke survivors are left with weakened limbs and hands. Repetitive hand movement is often used as a rehabilitation technique in order to regain hand movement and strength. In order to facilitate this rehabilitation, a robotic glove was designed to aid in the movement and coordination of gripping exercises. This glove utilizes a cable system to open and close a patients hand. The cables are actuated by servomotors, mounted in a backpack weighing 13.2lbs including battery power sources. The glove can be controlled in terms of finger position and grip force through switch interface, software program, or surface myoelectric (sEMG) signal. The primary control modes of the system provide: active assistance, active resistance and a preprogrammed mode. This project developed a working prototype of the rehabilitative robotic glove which actuates the fingers over a full range of motion across one degree-of-freedom, and is capable of generating a maximum 15N grip force.",
"title": ""
},
{
"docid": "fdd59ff419b9613a1370babe64ef1c98",
"text": "The disentangling problem is to discover multiple complex factors of variations hidden in data. One recent approach is to take a dataset with grouping structure and separately estimate a factor common within a group (content) and a factor specific to each group member (transformation). Notably, this approach can learn to represent a continuous space of contents, which allows for generalization to data with unseen contents. In this study, we aim at cultivating this approach within probabilistic deep generative models. Motivated by technical complication in existing groupbased methods, we propose a simpler probabilistic method, called group-contrastive variational autoencoders. Despite its simplicity, our approach achieves reasonable disentanglement with generalizability for three grouped datasets of 3D object images. In comparison with a previous model, although conventional qualitative evaluation shows little difference, our qualitative evaluation using few-shot classification exhibits superior performances for some datasets. We analyze the content representations from different methods and discuss their transformation-dependency and potential performance impacts.",
"title": ""
}
] |
scidocsrr
|
9ff35a936b93c47237705abcab48425f
|
Becoming syntactic.
|
[
{
"docid": "7b78b138539b876660c2a320aa10cd2e",
"text": "What are the psychological, computational and neural underpinnings of language? Are these neurocognitive correlates dedicated to language? Do different parts of language depend on distinct neurocognitive systems? Here I address these and other issues that are crucial for our understanding of two fundamental language capacities: the memorization of words in the mental lexicon, and the rule-governed combination of words by the mental grammar. According to the declarative/procedural model, the mental lexicon depends on declarative memory and is rooted in the temporal lobe, whereas the mental grammar involves procedural memory and is rooted in the frontal cortex and basal ganglia. I argue that the declarative/procedural model provides a new framework for the study of lexicon and grammar.",
"title": ""
}
] |
[
{
"docid": "f6574fbbdd53b2bc92af485d6c756df0",
"text": "A comparative analysis between Nigerian English (NE) and American English (AE) is presented in this article. The study is aimed at highlighting differences in the speech parameters, and how they influence speech processing and automatic speech recognition (ASR). The UILSpeech corpus of Nigerian-Accented English isolated word recordings, read speech utterances, and video recordings are used as a reference for Nigerian English. The corpus captures the linguistic diversity of Nigeria with data collected from native speakers of Hausa, Igbo, and Yoruba languages. The UILSpeech corpus is intended to provide a unique opportunity for application and expansion of speech processing techniques to a limited resource language dialect. The acoustic-phonetic differences between American English (AE) and Nigerian English (NE) are studied in terms of pronunciation variations, vowel locations in the formant space, mean fundamental frequency, and phone model distances in the acoustic space, as well as through visual speech analysis of the speakers’ articulators. A strong impact of the AE–NE acoustic mismatch on ASR is observed. A combination of model adaptation and extension of the AE lexicon for newly established NE pronunciation variants is shown to substantially improve performance of the AE-trained ASR system in the new NE task. This study is a part of the pioneering efforts towards incorporating speech technology in Nigerian English and is intended to provide a development basis for other low resource language dialects and languages.",
"title": ""
},
{
"docid": "1397a3996f2283ff718512af5b9a6294",
"text": "Two experiments showed that framing an athletic task as diagnostic of negative racial stereotypes about Black or White athletes can impede their performance in sports. In Experiment 1, Black participants performed significantly worse than did control participants when performance on a golf task was framed as diagnostic of \"sports intelligence.\" In comparison, White participants performed worse than did control participants when the golf task was framed as diagnostic of \"natural athletic ability.\" Experiment 2 observed the effect of stereotype threat on the athletic performance of White participants for whom performance in sports represented a significant measure of their self-worth. The implications of the findings for the theory of stereotype threat (C. M. Steele, 1997) and for participation in sports are discussed.",
"title": ""
},
{
"docid": "b3a39ce63bc78f0eef65fb40e06cf75e",
"text": "The Internet of Things (IoT) is not only about improving business processes, but has also the potential to profoundly impact the life of many citizens. Likewise the IoT can provide an useful tool for the longitudinal observation of human behavior and the understanding of behavioral patterns that can inform further IoT technology design. Today experimentation with IoT technologies is predominately carried out in lab based testbeds. There is however an emerging need for increased realism of the experimentation environment, as well as involvement of real end users into the experimentation lifecycle. In this paper we present SmartCampus, a user centric experimental research facility for IoT technologies. The current testbed deployment is focused on Smart Buildings, a key building block for cities of the future. Unlike current lab based testbeds, SmartCampus deeply embeds heterogeneous IoT devices as a programmable experimentation substrate in a real life office environment and makes flexible experimentation with real end users possible. We present the architecture realization of the current facility and underlying considerations that motivated its design. Using several recent experimental use cases, we demonstrate the usefulness of such experimental facilities for user-centric IoT research.",
"title": ""
},
{
"docid": "a56a95db6d9d0f0ccf26192b7e2322ff",
"text": "CRISPR-Cas9 is a versatile genome editing technology for studying the functions of genetic elements. To broadly enable the application of Cas9 in vivo, we established a Cre-dependent Cas9 knockin mouse. We demonstrated in vivo as well as ex vivo genome editing using adeno-associated virus (AAV)-, lentivirus-, or particle-mediated delivery of guide RNA in neurons, immune cells, and endothelial cells. Using these mice, we simultaneously modeled the dynamics of KRAS, p53, and LKB1, the top three significantly mutated genes in lung adenocarcinoma. Delivery of a single AAV vector in the lung generated loss-of-function mutations in p53 and Lkb1, as well as homology-directed repair-mediated Kras(G12D) mutations, leading to macroscopic tumors of adenocarcinoma pathology. Together, these results suggest that Cas9 mice empower a wide range of biological and disease modeling applications.",
"title": ""
},
{
"docid": "5632301086858f470e8aa9bd73bea5bc",
"text": "We present a coding system combined with an annotation tool for the analysis of gestural behavior. The NEUROGES coding system consists of three modules that progress from gesture kinetics to gesture function. Grounded on empirical neuropsychological and psychological studies, the theoretical assumption behind NEUROGES is that its main kinetic and functional movement categories are differentially associated with specific cognitive, emotional, and interactive functions. ELAN is a free, multimodal annotation tool for digital audio and video media. It supports multileveled transcription and complies with such standards as XML and Unicode. ELAN allows gesture categories to be stored with associated vocabularies that are reusable by means of template files. The combination of the NEUROGES coding system and the annotation tool ELAN creates an effective tool for empirical research on gestural behavior.",
"title": ""
},
{
"docid": "ebd72a597dba9a41dba5f3f0b4d1e6b9",
"text": "One may consider that drug-drug interactions (DDIs) associated with antacids is an obsolete topic because they are prescribed less frequently by medical professionals due to the advent of drugs that more effectively suppress gastric acidity (i.e. histamine H2-receptor antagonists [H2RAs] and proton pump inhibitors [PPIs]). Nevertheless, the use of antacids by ambulant patients may be ever increasing, because they are freely available as over-the-counter (OTC) drugs. Antacids consisting of weak basic substances coupled with polyvalent cations may alter the rate and/or the extent of absorption of concomitantly administered drugs via different mechanisms. Polyvalent cations in antacid formulations may form insoluble chelate complexes with drugs and substantially reduce their bioavailability. Clinical studies demonstrated that two classes of antibacterial s (tetracyclines and fluoroquinolones) are susceptible to clinically relevant DDIs with antacids through this mechanism. Countermeasures against this type of DDI include spacing out the dosing interval —taking antacid either 4 hours before or 2 hours after administration of these antibacterials. Bisphosphonates may be susceptible to DDIs with antacids by the same mechanism, as described in the prescription information of most bisphosphonates, but no quantitative data about the DDIs are available. For drugs with solubility critically dependent on pH, neutralization of gastric fluid by antacids may alter the dissolution of these drugs and the rate and/or extent of their absorption. However, the magnitude of DDIs elicited by antacids through this mechanism is less than that produced by H2RAs or PPIs; therefore, the clinical relevance of such DDIs is often obscure. Magnesium ions contained in some antacid formulas may increase gastric emptying, thereby accelerating the rate of absorption of some drugs. However, the clinical relevance of this is unclear in most cases because the difference in plasma drug concentration observed after dosing shortly disappears. Recent reports have indicated that some of the molecular-targeting agents such as the tyrosine kinase inhibitors dasatinib and imatinib, and the thrombopoietin receptor agonist eltrombopag may be susceptible to DDIs with antacids. Finally, the recent trend of developing OTC drugs as combination formulations of an antacid and an H2RA is a concern because these drugs will increase the risk of DDIs by dual mechanisms, i.e. a gastric pH-dependent mechanism by H2RAs and a cation-mediated chelation mechanism by antacids.",
"title": ""
},
{
"docid": "5a456d19b617b2a1d521424b8f98ad91",
"text": "Abstract: Dynamic programming (DP) is a very general optimization technique, which can be applied to numerous decision problems that typically require a sequence of decisions to be made. The solver software DP2PN2Solver presented in this paper is a general, flexible, and expandable software tool that solves DP problems. It consists of modules on two levels. A level one module takes the specification of a discrete DP problem instance as input and produces an intermediate Petri net (PN) representation called Bellman net (Lew, 2002; Lew, Mauch, 2003, 2004) as output — a middle layer, which concisely captures all the essential elements of a DP problem in a standardized and mathematically precise fashion. The optimal solution for the problem instance is computed by an “executable” code (e.g. Java, Spreadsheet, etc.) derived by a level two module from the Bellman net representation. DP2PN2Solver’s unique potential lies in its Bellman net representation. In theory, a PN’s intrinsic concurrency allows to distribute the computational load encountered when solving a single DP problem instance to several computational units.",
"title": ""
},
{
"docid": "c5f299c49fc72e247a94ab7cb4212038",
"text": "Recent years have witnessed promising results of face detection using deep learning. Despite making remarkable progresses, face detection in the wild remains an open research challenge especially when detecting faces at vastly different scales and characteristics. In this paper, we propose a novel simple yet effective framework of “Feature Agglomeration Networks” (FANet) to build a new single stage face detector, which not only achieves state-of-the-art performance but also runs efficiently. As inspired by Feature Pyramid Networks (FPN) [11], the key idea of our framework is to exploit inherent multi-scale features of a single convolutional neural network by aggregating higher-level semantic feature maps of different scales as contextual cues to augment lower-level feature maps via a hierarchical agglomeration manner at marginal extra computation cost. We further propose a Hierarchical Loss to effectively train the FANet model. We evaluate the proposed FANet detector on several public face detection benchmarks, including PASCAL face, FDDB and WIDER FACE datasets and achieved state-of-the-art results. Our detector can run in real time for VGA-resolution images on GPU.",
"title": ""
},
{
"docid": "991badc98ce19f9e607a7780927e2513",
"text": "The purpose of this study was to evaluate the frequency of oedema and fatty degeneration of the soleus and gastrocnemius muscles in patients with Achilles tendon abnormalities. Forty-five consecutive patients (mean 51 years; range 14–84 years) with achillodynia were examined with magnetic resonance (MR) images of the calf. The frequency of oedema and fatty degeneration in the soleus and gastrocnemius muscles was determined in patients with normal tendons, tendinopathy and in patients with a partial tear or a complete tear of the Achilles tendon. Oedema was encountered in 35% (7/20) of the patients with tendinopathy (n = 20; range 13–81 years), and in 47% (9/19) of the patients with partial tears or complete tears (n = 19; 28–78 years). Fatty degeneration was encountered in 10% (2/20) of the patients with tendinopathy, and in 32% (6/19) of the patients with tears. The prevalence of fatty degeneration was significantly more common in patients with a partial or complete tear compared with the patients with a normal Achilles tendon (p = 0.032 and p = 0.021, respectively). Oedema and fatty degeneration of the soleus and gastrocnemius muscles are common in patients with Achilles tendon abnormalities.",
"title": ""
},
{
"docid": "4244db44909f759b2acdb1bd9d23632e",
"text": "This paper implements of a three phase grid synchronization for doubly-fed induction generators (DFIG) in wind generation system. A stator flux oriented vector is used to control the variable speed DFIG for the utility grid synchronization, active power and reactive power. Before synchronization, the stator voltage is adjusted equal to the amplitude of the grid voltage by controlling the d-axis rotor current. The frequency of stator voltage is synchronized with the grid by controlling the rotor flux angle equal to the difference between the rotor angle (mechanical speed in electrical degree) and the grid angle. The phase shift between stator voltage and the grid voltage is compensated by comparing the d-axis stator voltage and the grid voltage to generate a compensation angle. After the synchronization is achieved, the active power and reactive power are controlled to extract the optimum energy capture and fulfilled with the standard of utility grid requirements for wind turbine. The q-axis and d-axis rotor current are used to control the active and reactive power respectively. The implementation was conducted on a 1 kW conventional induction wound rotor controlled the digital signal controller board. The experimentation results confirm that the DFIG can be synchronized to the utility grid and the active power and the reactive power can be independently controlled.",
"title": ""
},
{
"docid": "eddeeb5b00dc7f82291b3880956e2f01",
"text": "This study aims at building a robust method for semiautomated information extraction of pavement markings detected from mobile laser scanning (MLS) point clouds. The proposed workflow consists of three components: 1) preprocessing, 2) extraction, and 3) classification. In preprocessing, the three-dimensional (3-D) MLS point clouds are converted into radiometrically corrected and enhanced two-dimensional (2-D) intensity imagery of the road surface. Then, the pavement markings are automatically extracted with the intensity using a set of algorithms, including Otsu's thresholding, neighbor-counting filtering, and region growing. Finally, the extracted pavement markings are classified with the geometric parameters by using a manually defined decision tree. A study was conducted by using the MLS dataset acquired in Xiamen, Fujian, China. The results demonstrated that the proposed workflow and method can achieve 92% in completeness, 95% in correctness, and 94% in F-score.",
"title": ""
},
{
"docid": "8de98d013780ca995c95b2d882caf05d",
"text": "Self-localization is the process of identifying one’s current position on a map, and it is a crucial part of any wayfinding process. During self-localization the wayfinder matches visually perceptible features of the environment, such as landmarks, with map symbols to constrain potential locations on the map. The success of this visual matching process constitutes an important factor for the success of selflocalization. In this research we aim at observing the visual matching process between environment and map during self-localization with real-world mobile eye tracking. We report on one orientation and one self-localization experiment, both in an outdoor urban environment. The gaze data collected during the experiments show that successful participants put significantly more visual attention to those symbols on the map that were helpful in the given situation than unsuccessful participants. A sequence analysis revealed that they also had significantly more switches of visual attention between map symbols and their corresponding landmarks in the environment, which suggests they were following a more effective self-localization strategy.",
"title": ""
},
{
"docid": "303098fa8e5ccd7cf50a955da7e47f2e",
"text": "This paper describes the SALSA corpus, a large German corpus manually annotated with role-semantic information, based on the syntactically annotated TIGER newspaper corpus (Brants et al., 2002). The first release, comprising about 20,000 annotated predicate instances (about half the TIGER corpus), is scheduled for mid-2006. In this paper we discuss the frame-semantic annotation framework and its cross-lingual applicability, problems arising from exhaustive annotation, strategies for quality control, and possible applications.",
"title": ""
},
{
"docid": "7bc24d47df4e452845584871b3652c86",
"text": "Currently, there are many challenges in the transportation scope that researchers are attempting to resolve, and one of them is transportation planning. The main contribution of this paper is the design and implementation of an ITS (Intelligent Transportation Systems) smart sensor prototype that incorporates and combines the Internet of Things (IoT) approaches using the Serverless and Microservice Architecture, to help the transportation planning for Bus Rapid Transit (BRT) systems. The ITS smart sensor prototype can detect several Bluetooth signals of several devices (e.g., from mobile phones) that people use while travelling by the BRT system (e.g., in Bogota city). From that information, the ITS smart-sensor prototype can create an O/D (origin/destiny) matrix for several BRT routes, and this information can be used by the Administrator Authorities (AA) to produce a suitable transportation planning for the BRT systems. In addition, this information can be used by the center of traffic management and the AA from ITS cloud services using the Serverless and Microservice architecture.",
"title": ""
},
{
"docid": "57256bce5741b23fa4827fad2ad9e321",
"text": "This study assessed the depth of online learning, with a focus on the nature of online interaction in four distance education course designs. The Study Process Questionnaire was used to measure the shift in students’ approach to learning from the beginning to the end of the courses. Design had a significant impact on the nature of the interaction and whether students approached learning in a deep and meaningful manner. Structure and leadership were found to be crucial for online learners to take a deep and meaningful approach to learning.",
"title": ""
},
{
"docid": "d15804e98b58fa5ec0985c44f6bb6033",
"text": "Urrently, the most successful learning models in computer vision are based on learning successive representations followed by a decision layer. This is usually actualized through feedforward multilayer neural networks, e.g. ConvNets, where each layer forms one of such successive representations. However, an alternative that can achieve the same goal is a feedback based approach in which the representation is formed in an iterative manner based on a feedback received from previous iterations output. We establish that a feedback based approach has several core advantages over feedforward: it enables making early predictions at the query time, its output naturally conforms to a hierarchical structure in the label space (e.g. a taxonomy), and it provides a new basis for Curriculum Learning. We observe that feedback develops a considerably different representation compared to feedforward counterparts, in line with the aforementioned advantages. We provide a general feedback based learning architecture, instantiated using existing RNNs, with the endpoint results on par or better than existing feedforward networks and the addition of the above advantages.",
"title": ""
},
{
"docid": "6b7594aa4ace0f56884d970a9e254dc5",
"text": "Recent work has explored the use of hidden Markov models for unsupervised discourse and conversation modeling, where each segment or block of text such as a message in a conversation is associated with a hidden state in a sequence. We extend this approach to allow each block of text to be a mixture of multiple classes. Under our model, the probability of a class in a text block is a log-linear function of the classes in the previous block. We show that this model performs well at predictive tasks on two conversation data sets, improving thread reconstruction accuracy by up to 15 percentage points over a standard HMM. Additionally, we show quantitatively that the induced word clusters correspond to speech acts more closely than baseline models.",
"title": ""
},
{
"docid": "ddd236e7db2d0405658dca1e13704ad0",
"text": "We propose a generalization of convolutional neural networks (CNNs) to irregular domains, through the use of a translation operator on a graph structure. In regular settings such as images, convolutional layers are designed by translating a convolutional kernel over all pixels, thus enforcing translation equivariance. In the case of general graphs however, translation is not a well-defined operation, which makes shifting a convolutional kernel not straightforward. In this article, we introduce a methodology to allow the design of convolutional layers that are adapted to signals evolving on irregular topologies, even in the absence of a natural translation. Using the designed layers, we build a CNN that we train using the initial set of signals. Contrary to other approaches that aim at extending CNNs to irregular domains, we incorporate the classical settings of CNNs for 2D signals as a particular case of our approach. Designing convolutional layers in the vertex domain directly implies weight sharing, which in other approaches is generally estimated a posteriori using heuristics.",
"title": ""
},
{
"docid": "8cf3be3c3caa4eea4bb86515c7dab9b8",
"text": "In this article, we survey the most common attacks against web sessions, that is, attacks that target honest web browser users establishing an authenticated session with a trusted web application. We then review existing security solutions that prevent or mitigate the different attacks by evaluating them along four different axes: protection, usability, compatibility, and ease of deployment. We also assess several defensive solutions that aim at providing robust safeguards against multiple attacks. Based on this survey, we identify five guidelines that, to different extents, have been taken into account by the designers of the different proposals we reviewed. We believe that these guidelines can be helpful for the development of innovative solutions approaching web security in a more systematic and comprehensive way.",
"title": ""
},
{
"docid": "2fa356bb47bf482f8585c882ad5d9409",
"text": "As an important arithmetic module, the adder plays a key role in determining the speed and power consumption of a digital signal processing (DSP) system. The demands of high speed and power efficiency as well as the fault tolerance nature of some applications have promoted the development of approximate adders. This paper reviews current approximate adder designs and provides a comparative evaluation in terms of both error and circuit characteristics. Simulation results show that the equal segmentation adder (ESA) is the most hardware-efficient design, but it has the lowest accuracy in terms of error rate (ER) and mean relative error distance (MRED). The error-tolerant adder type II (ETAII), the speculative carry select adder (SCSA) and the accuracy-configurable approximate adder (ACAA) are equally accurate (provided that the same parameters are used), however ETATII incurs the lowest power-delay-product (PDP) among them. The almost correct adder (ACA) is the most power consuming scheme with a moderate accuracy. The lower-part-OR adder (LOA) is the slowest, but it is highly efficient in power dissipation.",
"title": ""
}
] |
scidocsrr
|
7f452369d45c64cece868ccc009e04e6
|
Real-Time Temporal Action Localization in Untrimmed Videos by Sub-Action Discovery
|
[
{
"docid": "ee9c0e79b29fbe647e3e0ccb168532b5",
"text": "We propose an effective approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame-level and scores them with a combination of static and motion CNN features. It then tracks high-scoring proposals throughout the video using a tracking-by-detection approach. Our tracker relies simultaneously on instance-level and class-level detectors. The tracks are scored using a spatio-temporal motion histogram, a descriptor at the track level, in combination with the CNN features. Finally, we perform temporal localization of the action using a sliding-window approach at the track level. We present experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB and UCF-101 action localization datasets, where our approach outperforms the state of the art with a margin of 15%, 7% and 12% respectively in mAP.",
"title": ""
},
{
"docid": "4829d8c0dd21f84c3afbe6e1249d6248",
"text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.",
"title": ""
}
] |
[
{
"docid": "4315cbfa13e9a32288c1857f231c6410",
"text": "The likelihood of soft errors increase with system complexity, reduction in operational voltages, exponential growth in transistors per chip, increases in clock frequencies and device shrinking. As the memory bit-cell area is condensed, single event upset that would have formerly despoiled only a single bit-cell are now proficient of upsetting multiple contiguous memory bit-cells per particle strike. While these error types are beyond the error handling capabilities of the frequently used error correction codes (ECCs) for single bit, the overhead associated with moving to more sophisticated codes for multi-bit errors is considered to be too costly. To address this issue, this paper presents a new approach to detect and correct multi-bit soft error by using Horizontal-Vertical-Double-Bit-Diagonal (HVDD) parity bits with a comparatively low overhead.",
"title": ""
},
{
"docid": "d8cd05b5a187e8bc3eacd8777fb36218",
"text": "In this article we review bony changes resulting from alterations in intracranial pressure (ICP) and the implications for ophthalmologists and the patients for whom we care. Before addressing ophthalmic implications, we will begin with a brief overview of bone remodeling. Bony changes seen with chronic intracranial hypotension and hypertension will be discussed. The primary objective of this review was to bring attention to bony changes seen with chronic intracranial hypotension. Intracranial hypotension skull remodeling can result in enophthalmos. In advanced disease enophthalmos develops to a degree that is truly disfiguring. The most common finding for which subjects are referred is ocular surface disease, related to loss of contact between the eyelids and the cornea. Other abnormalities seen include abnormal ocular motility and optic atrophy. Recognition of such changes is important to allow for diagnosis and treatment prior to advanced clinical deterioration. Routine radiographic assessment of bony changes may allow for the identification of patient with abnormal ICP prior to the development of clinically significant disease.",
"title": ""
},
{
"docid": "a24b4546eb2da7ce6ce70f45cd16e07d",
"text": "This paper examines the state of the art in mobile clinical and health-related apps. A 2012 estimate puts the number of health-related apps at no fewer than 40,000, as healthcare professionals and consumers continue to express concerns about the quality of many apps, calling for some form of app regulatory control or certification to be put in place. We describe the range of apps on offer as of 2013, and then present a brief survey of evaluation studies of medical and health-related apps that have been conducted to date, covering a range of clinical disciplines and topics. Our survey includes studies that highlighted risks, negative issues and worrying deficiencies in existing apps. We discuss the concept of 'apps as a medical device' and the relevant regulatory controls that apply in USA and Europe, offering examples of apps that have been formally approved using these mechanisms. We describe the online Health Apps Library run by the National Health Service in England and the calls for a vetted medical and health app store. We discuss the ingredients for successful apps beyond the rather narrow definition of 'apps as a medical device'. These ingredients cover app content quality, usability, the need to match apps to consumers' general and health literacy levels, device connectivity standards (for apps that connect to glucometers, blood pressure monitors, etc.), as well as app security and user privacy. 'Happtique Health App Certification Program' (HACP), a voluntary app certification scheme, successfully captures most of these desiderata, but is solely focused on apps targeting the US market. HACP, while very welcome, is in ways reminiscent of the early days of the Web, when many \"similar\" quality benchmarking tools and codes of conduct for information publishers were proposed to appraise and rate online medical and health information. It is probably impossible to rate and police every app on offer today, much like in those early days of the Web, when people quickly realised the same regarding informational Web pages. The best first line of defence was, is, and will always be to educate consumers regarding the potentially harmful content of (some) apps.",
"title": ""
},
{
"docid": "293e1834eef415f08e427a41e78d818f",
"text": "Autonomous robots are complex systems that require the interaction between numerous heterogeneous components (software and hardware). Because of the increase in complexity of robotic applications and the diverse range of hardware, robotic middleware is designed to manage the complexity and heterogeneity of the hardware and applications, promote the integration of new technologies, simplify software design, hide the complexity of low-level communication and the sensor heterogeneity of the sensors, improve software quality, reuse robotic software infrastructure across multiple research efforts, and to reduce production costs. This paper presents a literature survey and attribute-based bibliography of the current state of the art in robotic middleware design. The main aim of the survey is to assist robotic middleware researchers in evaluating the strengths and weaknesses of current approaches and their appropriateness for their applications. Furthermore, we provide a comprehensive set of appropriate bibliographic references that are classified based on middleware attributes.",
"title": ""
},
{
"docid": "84a2d26a0987a79baf597508543f39b6",
"text": "In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to `justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.",
"title": ""
},
{
"docid": "3a920687e57591c1abfaf10b691132a7",
"text": "BP3TKI Palembang is the government agencies that coordinate, execute and selection of prospective migrants registration and placement. To simplify the existing procedures and improve decision-making is necessary to build a decision support system (DSS) to determine eligibility for employment abroad by applying Fuzzy Multiple Attribute Decision Making (FMADM), using the linear sequential systems development methods. The system is built using Microsoft Visual Basic. Net 2010 and SQL Server 2008 database. The design of the system using use case diagrams and class diagrams to identify the needs of users and systems as well as systems implementation guidelines. Decision Support System which is capable of ranking the dihasialkan to prospective migrants, making it easier for parties to take keputusna BP3TKI the workers who will be flown out of the country.",
"title": ""
},
{
"docid": "359d76f0b4f758c3a58e886e840c5361",
"text": "Cover crops are important components of sustainable agricultural systems. They increase surface residue and aid in the reduction of soil erosion. They improve the structure and water-holding capacity of the soil and thus increase the effectiveness of applied N fertilizer. Legume cover crops such as hairy vetch and crimson clover fix nitrogen and contribute to the nitrogen requirements of subsequent crops. Cover crops can also suppress weeds, provide suitable habitat for beneficial predator insects, and act as non-host crops for nematodes and other pests in crop rotations. This paper reviews the agronomic and economic literature on using cover crops in sustainable food production and reports on past and present research on cover crops and sustainable agriculture at the Beltsville Agricultural Research Center, Maryland. Previous studies suggested that the profitability of cover crops is primarily the result of enhanced crop yields rather than reduced input costs. The experiments at the Beltsville Agricultural Research Center on fresh-market tomato production showed that tomatoes grown with hairy vetch mulch were higher yielding and more profitable than those grown with black polyethylene and no mulch system. Previous studies of cover crops in grain production indicated that legume cover crops such as hairy vetch and crimson clover are more profitable than grass cover crops such as rye or wheat because of the ability of legumes to contribute N to the following crop. A com-",
"title": ""
},
{
"docid": "e0ff61d4b5361c3e2b39265310d02b85",
"text": "This paper presents an adaptive technique for obtaining centers of the hidden layer neurons of radial basis function neural network (RBFNN) for face recognition. The proposed technique uses firefly algorithm to obtain natural sub-clusters of training face images formed due to variations in pose, illumination, expression and occlusion, etc. Movement of fireflies in a hyper-dimensional input space is controlled by tuning the parameter gamma (γ) of firefly algorithm which plays an important role in maintaining the trade-off between effective search space exploration, firefly convergence, overall computational time and the recognition accuracy. The proposed technique is novel as it combines the advantages of evolutionary firefly algorithm and RBFNN in adaptive evolution of number and centers of hidden neurons. The strength of the proposed technique lies in its fast convergence, improved face recognition performance, reduced feature selection overhead and algorithm stability. The proposed technique is validated using benchmark face databases, namely ORL, Yale, AR and LFW. The average face recognition accuracies achieved using proposed algorithm for the above face databases outperform some of the existing techniques in face recognition.",
"title": ""
},
{
"docid": "4f0e454b8274636c56a1617668f08eed",
"text": "Mobile devices are an important part of our everyday lives, and the Android platform has become a market leader. In recent years a number of approaches for Android malware detection have been proposed, using permissions, source code analysis, or dynamic analysis. In this paper, we propose to use a probabilistic discriminative model based on regularized logistic regression for Android malware detection. Through extensive experimental evaluation, we demonstrate that it can generate probabilistic outputs with highly accurate classification results. In particular, we propose to use Android API calls as features extracted from decompiled source code, and analyze and explore issues in feature granularity, feature representation, feature selection, and regularization. We show that the probabilistic discriminative model also works well with permissions, and substantially outperforms the state-of-the-art methods for Android malware detection with application permissions. Furthermore, the discriminative learning model achieves the best detection results by combining both decompiled source code and application permissions. To the best of our knowledge, this is the first research that proposes probabilistic discriminative model for Android malware detection with a thorough study of desired representation of decompiled source code and is the first research work for Android malware detection task that combines both analysis of decompiled source code and application permissions.",
"title": ""
},
{
"docid": "5b134fae94a5cc3a2e1b7cc19c5d29e5",
"text": "We explore making virtual desktops behave in a more physically realistic manner by adding physics simulation and using piling instead of filing as the fundamental organizational structure. Objects can be casually dragged and tossed around, influenced by physical characteristics such as friction and mass, much like we would manipulate lightweight objects in the real world. We present a prototype, called BumpTop, that coherently integrates a variety of interaction and visualization techniques optimized for pen input we have developed to support this new style of desktop organization.",
"title": ""
},
{
"docid": "34c3ba06f9bffddec7a08c8109c7f4b9",
"text": "The role of e-learning technologies entirely depends on the acceptance and execution of required-change in the thinking and behaviour of the users of institutions. The research are constantly reporting that many e-learning projects are falling short of their objectives due to many reasons but on the top is the user resistance to change according to the digital requirements of new era. It is argued that the suitable way for change management in e-learning environment is the training and persuading of users with a view to enhance their digital literacy and thus gradually changing the users’ attitude in positive direction. This paper discusses change management in transition to e-learning system considering pedagogical, cost and technical implications. It also discusses challenges and opportunities for integrating these technologies in higher learning institutions with examples from Turkey GATA (Gülhane Askeri Tıp Akademisi-Gülhane Military Medical Academy).",
"title": ""
},
{
"docid": "851a966bbfee843e5ae1eaf21482ef87",
"text": "The Pittsburgh Sleep Quality Index (PSQI) is a widely used measure of sleep quality in adolescents, but information regarding its psychometric strengths and weaknesses in this population is limited. In particular, questions remain regarding whether it measures one or two sleep quality domains. The aims of the present study were to (a) adapt the PSQI for use in adolescents and young adults, and (b) evaluate the psychometric properties of the adapted measure in this population. The PSQI was slightly modified to make it more appropriate for use in youth populations and was translated into Spanish for administration to the sample population available to the study investigators. It was then administered with validity criterion measures to a community-based sample of Spanish adolescents and young adults (AYA) between 14 and 24 years old (N = 216). The results indicated that the questionnaire (AYA-PSQI-S) assesses a single factor. The total score evidenced good convergent and divergent validity and moderate reliability (Cronbach's alpha = .72). The AYA-PSQI-S demonstrates adequate psychometric properties for use in clinical trials involving adolescents and young adults. Additional research to further evaluate the reliability and validity of the measure for use in clinical settings is warranted.",
"title": ""
},
{
"docid": "10da9f0fd1be99878e280d261ea81ba3",
"text": "The fuzzy vault scheme is a cryptographic primitive being considered for storing fingerprint minutiae protected. A well-known problem of the fuzzy vault scheme is its vulnerability against correlation attack -based cross-matching thereby conflicting with the unlinkability requirement and irreversibility requirement of effective biometric information protection. Yet, it has been demonstrated that in principle a minutiae-based fuzzy vault can be secured against the correlation attack by passing the to-beprotected minutiae through a quantization scheme. Unfortunately, single fingerprints seem not to be capable of providing an acceptable security level against offline attacks. To overcome the aforementioned security issues, this paper shows how an implementation for multiple fingerprints can be derived on base of the implementation for single finger thereby making use of a Guruswami-Sudan algorithm-based decoder for verification. The implementation, of which public C++ source code can be downloaded, is evaluated for single and various multi-finger settings using the MCYTFingerprint-100 database and provides security enhancing features such as the possibility of combination with password and a slow-down mechanism.",
"title": ""
},
{
"docid": "782c8958fa9107b8d1087fe0c79de6ee",
"text": "Credit evaluation is one of the most important and difficult tasks for credit card companies, mortgage companies, banks and other financial institutes. Incorrect credit judgement causes huge financial losses. This work describes the use of an evolutionary-fuzzy system capable of classifying suspicious and non-suspicious credit card transactions. The paper starts with the details of the system used in this work. A series of experiments are described, showing that the complete system is capable of attaining good accuracy and intelligibility levels for real data.",
"title": ""
},
{
"docid": "36776b1372e745f683ca66e7c4421a76",
"text": "This paper presents the analyzed results of rotational torque and suspension force in a bearingless motor with the short-pitch winding, which are based on the computation by finite element method (FEM). The bearingless drive technique is applied to a conventional brushless DC motor, in which the stator windings are arranged at the short-pitch, and encircle only a single stator tooth. At first, the winding arrangement in the stator core, the principle of suspension force generation and the magnetic suspension control method are shown in the bearingless motor with brushless DC structure. The torque and suspension force are computed by FEM using a machine model with the short-pitch winding arrangement, and the computed results are compared between the full-pitch and short-pitch winding arrangements. The advantages of short-pitch winding arrangement are found on the basis of computed results and discussion.",
"title": ""
},
{
"docid": "d18a636768e6aea2e84c7fc59593ec89",
"text": "Enterprise social networking (ESN) techniques have been widely adopted by firms to provide a platform for public communication among employees. This study investigates how the relationships between stressors (i.e., challenge and hindrance stressors) and employee innovation are moderated by task-oriented and relationship-oriented ESN use. Since challenge-hindrance stressors and employee innovation are individual-level variables and task-oriented ESN use and relationship-oriented ESN use are team-level variables, we thus use hierarchical linear model to test this cross-level model. The results of a survey of 191 employees in 50 groups indicate that two ESN use types differentially moderate the relationship between stressors and employee innovation. Specifically, task-oriented ESN use positively moderates the effects of the two stressors on employee innovation, while relationship-oriented ESN use negatively moderates the relationship between the two stressors and employee innovation. In addition, we find that challenge stressors significantly improve employee innovation. Theoretical and practical implications are discussed.",
"title": ""
},
{
"docid": "73ec43c5ed8e245d0a1ff012a6a67f76",
"text": "HERE IS MUCH signal processing devoted to detection and estimation. Detection is the task of detetmitdng if a specific signal set is pteaettt in an obs&tion, whflc estimation is the task of obtaining the va.iues of the parameters derriblng the signal. Often the s@tal is complicated or is corrupted by interfeting signals or noise To facilitate the detection and estimation of signal sets. the obsenation is decomposed by a basis set which spans the signal space [ 1) For many problems of engineering interest, the class of aigttlls being sought are periodic which leads quite natuallv to a decomposition by a basis consistittg of simple petiodic fun=tions, the sines and cosines. The classic Fourier tran.,fot,,, h the mechanism by which we M able to perform this decomposttmn. BY necessity, every observed signal we pmmust be of finite extent. The extent may be adjustable and Axtable. but it must be fire. Proces%ng a fiite-duration observation ~POSCS mteresting and interacting considentior,s on the hamomc analysic rhese consldentions include detectability of tones in the Presence of nearby strong tones, rcoohability of similarstrength nearby tones, tesolvability of Gxifting tona, and biases in estimating the parameten of my of the alonmenhoned signals. For practicality, the data we pare N unifomdy spaced samples of the obsetvcd signal. For convenience. N is highJy composite, and we will zwtme N is evett. The harmottic estm~afes we obtain UtmugJt the discrae Fowie~ tmnsfotm (DFT) arc N mifcwmly spaced samples of the asaciated periodic spectra. This approach in elegant and attnctive when the proce~ scheme is cast as a spectral decomposition in an N-dimensional orthogonal vector space 121. Unfottunately, in mmY practical situations, to obtain meaningful results this elegance must be compmmised. One such t=O,l;..,Nl.N.N+l.",
"title": ""
},
{
"docid": "295212e614cc361b1a5fdd320d39f68b",
"text": "Aiming to meet the explosive growth of mobile data traffic and reduce the network congestion, we study Time Dependent Adaptive Pricing (TDAP) with threshold policies to motivate users to shift their Internet access from peak hours to off-peak hours. With the proposed TDAP scheme, Internet Service Providers (ISPs) will be able to use less network capacity to provide users Internet access service with the same QoS. Simulation and analysis are carried out to investigate the performance of the proposed TDAP scheme based on the real Internet traffic pattern.",
"title": ""
},
{
"docid": "d6a6ee23cd1d863164c79088f75ece30",
"text": "In our work, 3D objects classification has been dealt with convolutional neural networks which is a common paradigm recently in image recognition. In the first phase of experiments, 3D models in ModelNet10 and ModelNet40 data sets were voxelized and scaled with certain parameters. Classical CNN and 3D Dense CNN architectures were designed for training the pre-processed data. In addition, the two trained CNNs were ensembled and the results of them were observed. A success rate of 95.37% achieved on ModelNet10 by using 3D dense CNN, a success rate of 91.24% achieved with ensemble of two CNNs on ModelNet40.",
"title": ""
},
{
"docid": "7279065640e6f2b7aab7a6e91118e0d5",
"text": "Erythrocyte injury such as osmotic shock, oxidative stress or energy depletion stimulates the formation of prostaglandin E2 through activation of cyclooxygenase which in turn activates a Ca2+ permeable cation channel. Increasing cytosolic Ca2+ concentrations activate Ca2+ sensitive K+ channels leading to hyperpolarization, subsequent loss of KCl and (further) cell shrinkage. Ca2+ further stimulates a scramblase shifting phosphatidylserine from the inner to the outer cell membrane. The scramblase is sensitized for the effects of Ca2+ by ceramide which is formed by a sphingomyelinase following several stressors including osmotic shock. The sphingomyelinase is activated by platelet activating factor PAF which is released by activation of phospholipase A2. Phosphatidylserine at the erythrocyte surface is recognised by macrophages which engulf and degrade the affected cells. Moreover, phosphatidylserine exposing erythrocytes may adhere to the vascular wall and thus interfere with microcirculation. Erythrocyte shrinkage and phosphatidylserine exposure ('eryptosis') mimic features of apoptosis in nucleated cells which however, involves several mechanisms lacking in erythrocytes. In kidney medulla, exposure time is usually too short to induce eryptosis despite high osmolarity. Beyond that high Cl- concentrations inhibit the cation channel and high urea concentrations the sphingomyelinase. Eryptosis is inhibited by erythropoietin which thus extends the life span of circulating erythrocytes. Several conditions trigger premature eryptosis thus favouring the development of anemia. On the other hand, eryptosis may be a mechanism of defective erythrocytes to escape hemolysis. Beyond their significance for erythrocyte survival and death the mechanisms involved in 'eryptosis' may similarly contribute to apoptosis of nucleated cells.",
"title": ""
}
] |
scidocsrr
|
ed6afeb80b8b3da85c6d8fa09b6871a3
|
Using Pivots to Speed-Up k-Medoids Clustering
|
[
{
"docid": "1c5f53fe8d663047a3a8240742ba47e4",
"text": "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLAHANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLAHANS. Our analysis and experiments show that with the assistance of CLAHANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLAHANS with that of existing clustering methods show that CLAHANS is the most efficient.",
"title": ""
}
] |
[
{
"docid": "674339928a16b372fb13395f920561e5",
"text": "High-speed, high-efficiency photodetectors play an important role in optical communication links that are increasingly being used in data centres to handle higher volumes of data traffic and higher bandwidths, as big data and cloud computing continue to grow exponentially. Monolithic integration of optical components with signal-processing electronics on a single silicon chip is of paramount importance in the drive to reduce cost and improve performance. We report the first demonstration of microand nanoscale holes enabling light trapping in a silicon photodiode, which exhibits an ultrafast impulse response (full-width at half-maximum) of 30 ps and a high efficiency of more than 50%, for use in data-centre optical communications. The photodiode uses microand nanostructured holes to enhance, by an order of magnitude, the absorption efficiency of a thin intrinsic layer of less than 2 μm thickness and is designed for a data rate of 20 gigabits per second or higher at a wavelength of 850 nm. Further optimization can improve the efficiency to more than 70%.",
"title": ""
},
{
"docid": "590a44ab149b88e536e67622515fdd08",
"text": "Chitosan is considered to be one of the most promising and applicable materials in adsorption applications. The existence of amino and hydroxyl groups in its molecules contributes to many possible adsorption interactions between chitosan and pollutants (dyes, metals, ions, phenols, pharmaceuticals/drugs, pesticides, herbicides, etc.). These functional groups can help in establishing positions for modification. Based on the learning from previously published works in literature, researchers have achieved a modification of chitosan with a number of different functional groups. This work summarizes the published works of the last three years (2012-2014) regarding the modification reactions of chitosans (grafting, cross-linking, etc.) and their application to adsorption of different environmental pollutants (in liquid-phase).",
"title": ""
},
{
"docid": "7eebeb133a9881e69bf3c367b9e20751",
"text": "Advanced driver assistance systems or highly automated driving systems for lane change maneuvers are expected to enhance highway traffic safety, transport efficiency, and driver comfort. To extend the capability of current advanced driver assistance systems, and eventually progress to highly automated highway driving, the task of automatically determine if, when, and how to perform a lane change maneuver, is essential. This paper thereby presents a low-complexity lane change maneuver algorithm which determines whether a lane change maneuver is desirable, and if so, selects an appropriate inter-vehicle traffic gap and time instance to perform the maneuver, and calculates the corresponding longitudinal and lateral control trajectory. The ability of the proposed lane change maneuver algorithm to make appropriate maneuver decisions and generate smooth and safe lane change trajectories in various traffic situations is demonstrated by simulation and experimental results.",
"title": ""
},
{
"docid": "b56cd1e9392976f48dddf7d3a60c5aef",
"text": "This paper presents a novel single-switch converter with high voltage gain and low voltage stress for photovoltaic applications. The proposed converter is composed of coupled-inductor and switched-capacitor techniques to achieve high step-up conversion ratio without adopting extremely high duty ratio or high turns ratio. The capacitors are charged in parallel and discharged in series by the coupled inductor to achieve high step-up voltage gain with an appropriate duty ratio. Besides, the voltage stress on the main switch is reduced with a passive clamp circuit, and the conduction losses are reduced. In addition, the reverse-recovery problem of the diode is alleviated by a coupled inductor. Thus, the efficiency can be further improved. The operating principle, steady state analysis and design of the proposed single switch converter with high step-up gain is carried out. A 24 V input voltage, 400 V output, and 300W maximum output power integrated converter is designed and analysed using MATLAB simulink. Simulation result proves the performance and functionality of the proposed single switch DC-DC converter for validation.",
"title": ""
},
{
"docid": "7db555e42bff7728edb8fb199f063cba",
"text": "The need for more post-secondary students to major and graduate in STEM fields is widely recognized. Students' motivation and strategic self-regulation have been identified as playing crucial roles in their success in STEM classes. But, how students' strategy use, self-regulation, knowledge building, and engagement impact different learning outcomes is not well understood. Our goal in this study was to investigate how motivation, strategic self-regulation, and creative competency were associated with course achievement and long-term learning of computational thinking knowledge and skills in introductory computer science courses. Student grades and long-term retention were positively associated with self-regulated strategy use and knowledge building, and negatively associated with lack of regulation. Grades were associated with higher study effort and knowledge retention was associated with higher study time. For motivation, higher learning- and task-approach goal orientations, endogenous instrumentality, and positive affect and lower learning-, task-, and performance-avoid goal orientations, exogenous instrumentality and negative affect were associated with higher grades and knowledge retention and also with strategic self-regulation and engagement. Implicit intelligence beliefs were associated with strategic self-regulation, but not grades or knowledge retention. Creative competency was associated with knowledge retention, but not grades, and with higher strategic self-regulation. Implications for STEM education are discussed.",
"title": ""
},
{
"docid": "2a7bd6fbce4fef6e319664090755858d",
"text": "AIM\nThis paper is a report of a study conducted to determine which occupational stressors are present in nurses' working environment; to describe and compare occupational stress between two educational groups of nurses; to estimate which stressors and to what extent predict nurses' work ability; and to determine if educational level predicts nurses' work ability.\n\n\nBACKGROUND\nNurses' occupational stress adversely affects their health and nursing quality. Higher educational level has been shown to have positive effects on the preservation of good work ability.\n\n\nMETHOD\nA cross-sectional study was conducted in 2006-2007. Questionnaires were distributed to a convenience sample of 1392 (59%) nurses employed at four university hospitals in Croatia (n = 2364). The response rate was 78% (n = 1086). Data were collected using the Occupational Stress Assessment Questionnaire and Work Ability Index Questionnaire.\n\n\nFINDINGS\nWe identified six major groups of occupational stressors: 'Organization of work and financial issues', 'public criticism', 'hazards at workplace', 'interpersonal conflicts at workplace', 'shift work' and 'professional and intellectual demands'. Nurses with secondary school qualifications perceived Hazards at workplace and Shift work as statistically significantly more stressful than nurses a with college degree. Predictors statistically significantly related with low work ability were: Organization of work and financial issues (odds ratio = 1.69, 95% confidence interval 122-236), lower educational level (odds ratio = 1.69, 95% confidence interval 122-236) and older age (odds ratio = 1.07, 95% confidence interval 1.05-1.09).\n\n\nCONCLUSION\nHospital managers should develop strategies to address and improve the quality of working conditions for nurses in Croatian hospitals. Providing educational and career prospects can contribute to decreasing nurses' occupational stress levels, thus maintaining their work ability.",
"title": ""
},
{
"docid": "e31901738e78728a7376457f7d1acd26",
"text": "Feature selection plays a critical role in biomedical data mining, driven by increasing feature dimensionality in target problems and growing interest in advanced but computationally expensive methodologies able to model complex associations. Specifically, there is a need for feature selection methods that are computationally efficient, yet sensitive to complex patterns of association, e.g. interactions, so that informative features are not mistakenly eliminated prior to downstream modeling. This paper focuses on Relief-based algorithms (RBAs), a unique family of filter-style feature selection algorithms that have gained appeal by striking an effective balance between these objectives while flexibly adapting to various data characteristics, e.g. classification vs. regression. First, this work broadly examines types of feature selection and defines RBAs within that context. Next, we introduce the original Relief algorithm and associated concepts, emphasizing the intuition behind how it works, how feature weights generated by the algorithm can be interpreted, and why it is sensitive to feature interactions without evaluating combinations of features. Lastly, we include an expansive review of RBA methodological research beyond Relief and its popular descendant, ReliefF. In particular, we characterize branches of RBA research, and provide comparative summaries of RBA algorithms including contributions, strategies, functionality, time complexity, adaptation to key data characteristics, and software availability.",
"title": ""
},
{
"docid": "0a5ae1eb45404d6a42678e955c23116c",
"text": "This study assessed the validity of the Balance Scale by examining: how Scale scores related to clinical judgements and self-perceptions of balance, laboratory measures of postural sway and external criteria reflecting balancing ability; if scores could predict falls in the elderly; and how they related to motor and functional performance in stroke patients. Elderly residents (N = 113) were assessed for functional performance and balance regularly over a nine-month period. Occurrence of falls was monitored for a year. Acute stroke patients (N = 70) were periodically rated for functional independence, motor performance and balance for over three months. Thirty-one elderly subjects were assessed by clinical and laboratory indicators reflecting balancing ability. The Scale correlated moderately with caregiver ratings, self-ratings and laboratory measures of sway. Differences in mean Scale scores were consistent with the use of mobility aids by elderly residents and differentiated stroke patients by location of follow-up. Balance scores predicted the occurrence of multiple falls among elderly residents and were strongly correlated with functional and motor performance in stroke patients.",
"title": ""
},
{
"docid": "fcfe75abfde3edbf051ccb78387c3904",
"text": "In this paper a Fuzzy Logic Controller (FLC) for path following of a four-wheel differentially skid steer mobile robot is presented. Fuzzy velocity and fuzzy torque control of the mobile robot is compared with classical controllers. To assess controllers robot kinematics and dynamics are simulated with parameters of P2-AT mobile robot. Results demonstrate the better performance of fuzzy logic controllers in following a predefined path.",
"title": ""
},
{
"docid": "54001ce62d0b571be9fbaf0980aa1b70",
"text": "Due to the large increase of malware samples in the last 10 years, the demand of the antimalware industry for an automated classifier has increased. However, this classifier has to satisfy two restrictions in order to be used in real life situations: high detection rate and very low number of false positives. By modifying the perceptron algorithm and combining existing features, we were able to provide a good solution to the problem, called the one side perceptron. Since the power of the perceptron lies in its features, we will focus our study on improving the feature creation algorithm. This paper presents different methods, including simple mathematical operations and the usage of a restricted Boltzmann machine, for creating features designed for an increased detection rate of the one side perceptron. The analysis is carried out using a large dataset of approximately 3 million files.",
"title": ""
},
{
"docid": "d32887dfac583ed851f607807c2f624e",
"text": "For a through-wall ultrawideband (UWB) random noise radar using array antennas, subtraction of successive frames of the cross-correlation signals between each received element signal and the transmitted signal is able to isolate moving targets in heavy clutter. Images of moving targets are subsequently obtained using the back projection (BP) algorithm. This technique is not constrained to noise radar, but can also be applied to other kinds of radar systems. Different models based on the finite-difference time-domain (FDTD) algorithm are set up to simulate different through-wall scenarios of moving targets. Simulation results show that the heavy clutter is suppressed, and the signal-to-clutter ratio (SCR) is greatly enhanced using this approach. Multiple moving targets can be detected, localized, and tracked for any random movement.",
"title": ""
},
{
"docid": "44402fdc3c9f2c6efaf77a00035f38ad",
"text": "A multi-objective optimization strategy to find optimal designs of composite multi-rim flywheel rotors is presented. Flywheel energy storage systems have been expanding into applications such as rail and automotive transportation, where the construction volume is limited. Common flywheel rotor optimization approaches for these applications are single-objective, aiming to increase the stored energy or stored energy density. The proposed multi-objective optimization offers more information for decision-makers optimizing three objectives separately: stored energy, cost and productivity. A novel approach to model the manufacturing of multi-rim composite rotors facilitates the consideration of manufacturing cost and time within the optimization. An analytical stress calculation for multi-rim rotors is used, which also takes interference fits and residual stresses into account. Constrained by a failure prediction based on the Maximum Strength, Maximum Strain and Tsai-Wu criterion, the discrete and nonlinear optimization was solved. A hybrid optimization strategy is presented that combines a genetic algorithm with a local improvement executed by a sequential quadratic program. The problem was solved for two rotor geometries used for light rail transit applications showing similar design results as in industry.",
"title": ""
},
{
"docid": "9f9268761bd2335303cfe2797d7e9eaa",
"text": "CYBER attacks have risen in recent times. The attack on Sony Pictures by hackers, allegedly from North Korea, has caught worldwide attention. The President of the United States of America issued a statement and “vowed a US response after North Korea’s alleged cyber-attack”.This dangerous malware termed “wiper” could overwrite data and stop important execution processes. An analysis by the FBI showed distinct similarities between this attack and the code used to attack South Korea in 2013, thus confirming that hackers re-use code from already existing malware to create new variants. This attack along with other recently discovered attacks such as Regin, Opcleaver give one clear message: current cyber security defense mechanisms are not sufficient enough to thwart these sophisticated attacks. Today’s defense mechanisms are based on scanning systems for suspicious or malicious activity. If such an activity is found, the files under suspect are either quarantined or the vulnerable system is patched with an update. These scanning methods are based on a variety of techniques such as static analysis, dynamic analysis and other heuristics based techniques, which are often slow to react to new attacks and threats. Static analysis is based on analyzing an executable without executing it, while dynamic analysis executes the binary and studies its behavioral characteristics. Hackers are familiar with these standard methods and come up with ways to evade the current defense mechanisms. They produce new malware variants that easily evade the detection methods. These variants are created from existing malware using inexpensive easily available “factory toolkits” in a “virtual factory” like setting, which then spread over and infect more systems. Once a system is compromised, it either quickly looses control and/or the infection spreads to other networked systems. While security techniques constantly evolve to keep up with new attacks, hackers too change their ways and continue to evade defense mechanisms. As this never-ending billion dollar “cat and mouse game” continues, it may be useful to look at avenues that can bring in novel alternative and/or orthogonal defense approaches to counter the ongoing threats. The hope is to catch these new attacks using orthogonal and complementary methods which may not be well known to hackers, thus making it more difficult and/or expensive for them to evade all detection schemes. This paper focuses on such orthogonal approaches from Signal and Image Processing that complement standard approaches.",
"title": ""
},
{
"docid": "7f5af3806f0baa040a26f258944ad3f9",
"text": "Linear Discriminant Analysis (LDA) is a widely-used supervised dimensionality reduction method in computer vision and pattern recognition. In null space based LDA (NLDA), a well-known LDA extension, between-class distance is maximized in the null space of the within-class scatter matrix. However, there are some limitations in NLDA. Firstly, for many data sets, null space of within-class scatter matrix does not exist, thus NLDA is not applicable to those datasets. Secondly, NLDA uses arithmetic mean of between-class distances and gives equal consideration to all between-class distances, which makes larger between-class distances can dominate the result and thus limits the performance of NLDA. In this paper, we propose a harmonic mean based Linear Discriminant Analysis, Multi-Class Discriminant Analysis (MCDA), for image classification, which minimizes the reciprocal of weighted harmonic mean of pairwise between-class distance. More importantly, MCDA gives higher priority to maximize small between-class distances. MCDA can be extended to multi-label dimension reduction. Results on 7 single-label data sets and 4 multi-label data sets show that MCDA has consistently better performance than 10 other single-label approaches and 4 other multi-label approaches in terms of classification accuracy, macro and micro average F1 score.",
"title": ""
},
{
"docid": "97691304930a85066a15086877473857",
"text": "In the context of modern cryptosystems, a common theme is the creation of distributed trust networks. In most of these designs, permanent storage of a contract is required. However, permanent storage can become a major performance and cost bottleneck. As a result, good code compression schemes are a key factor in scaling these contract based cryptosystems. For this project, we formalize and implement a data structure called the Merkelized Abstract Syntax Tree (MAST) to address both data integrity and compression. MASTs can be used to compactly represent contractual programs that will be executed remotely, and by using some of the properties of Merkle trees, they can also be used to verify the integrity of the code being executed. A concept by the same name has been discussed in the Bitcoin community for a while, the terminology originates from the work of Russel O’Connor and Pieter Wuille, however this discussion was limited to private correspondences. We present a formalization of it and provide an implementation.The project idea was developed with Bitcoin applications in mind, and the experiment we set up uses MASTs in a crypto currency network simulator. Using MASTs in the Bitcoin protocol [2] would increase the complexity (length) of contracts permitted on the network, while simultaneously maintaining the security of broadcasted data. Additionally, contracts may contain privileged, secret branches of execution.",
"title": ""
},
{
"docid": "097879c593aa68602564c176b806a74b",
"text": "We study the recognition of surfaces made from different materials such as concrete, rug, marble, or leather on the basis of their textural appearance. Such natural textures arise from spatial variation of two surface attributes: (1) reflectance and (2) surface normal. In this paper, we provide a unified model to address both these aspects of natural texture. The main idea is to construct a vocabulary of prototype tiny surface patches with associated local geometric and photometric properties. We call these 3D textons. Examples might be ridges, grooves, spots or stripes or combinations thereof. Associated with each texton is an appearance vector, which characterizes the local irradiance distribution, represented as a set of linear Gaussian derivative filter outputs, under different lighting and viewing conditions. Given a large collection of images of different materials, a clustering approach is used to acquire a small (on the order of 100) 3D texton vocabulary. Given a few (1 to 4) images of any material, it can be characterized using these textons. We demonstrate the application of this representation for recognition of the material viewed under novel lighting and viewing conditions. We also illustrate how the 3D texton model can be used to predict the appearance of materials under novel conditions.",
"title": ""
},
{
"docid": "0ccf20f28baf8a11c78d593efb9f6a52",
"text": "From a traction application point of view, proper operation of the synchronous reluctance motor over a wide speed range and mechanical robustness is desired. This paper presents new methods to improve the rotor mechanical integrity and the flux weakening capability at high speed using geometrical and variable ampere-turns concepts. The results from computer-aided analysis and experiment are compared to evaluate the methods. It is shown that, to achieve a proper design at high speed, the magnetic and mechanical performances need to be simultaneously analyzed due to their mutual effect.",
"title": ""
},
{
"docid": "f1cfb30b328725121ed232381d43ac3a",
"text": "High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g. those that require detecting objects from video streams in real time. The key to this problem is to trade accuracy for efficiency in an effective way, i.e. reducing the computing cost while maintaining competitive performance. To seek a good balance, previous efforts usually focus on optimizing the model architectures. This paper explores an alternative approach, that is, to reallocate the computation over a scale-time space. The basic idea is to perform expensive detection sparsely and propagate the results across both scales and time with substantially cheaper networks, by exploiting the strong correlations among them. Specifically, we present a unified framework that integrates detection, temporal propagation, and across-scale refinement on a Scale-Time Lattice. On this framework, one can explore various strategies to balance performance and cost. Taking advantage of this flexibility, we further develop an adaptive scheme with the detector invoked on demand and thus obtain improved tradeoff. On ImageNet VID dataset, the proposed method can achieve a competitive mAP 79.6% at 20 fps, or 79.0% at 62 fps as a performance/speed tradeoff.1",
"title": ""
},
{
"docid": "a41d40d8349c1071c6f532b6b8e11be3",
"text": "A novel wideband slotline antenna is proposed using the multimode resonance concept. By symmetrically introducing two slot stubs along the slotline radiator near the nulls of electric-field distribution of the second odd-order mode, two radiation modes are excited in a single slotline resonator. With the help of the two stubs, the second odd-order mode gradually merges with its first counterpart and results into a wideband radiation with two resonances. Prototype antennas are then fabricated to experimentally validate the principle and design approach of the proposed slotline antenna. It is shown that the proposed slotline antenna's impedance bandwidth could be effectively increased to 32.7% while keeping an inherent narrow slot structure.",
"title": ""
},
{
"docid": "ebe91d4e3559439af5dd729e7321883d",
"text": "Performance of data analytics in Internet of Things (IoTs) depends on effective transport services offered by the underlying network. Fog computing enables independent data-plane computational features at the edge-switches, which serves as a platform for performing certain critical analytics required at the IoT source. To this end, in this paper, we implement a working prototype of Fog computing node based on Software-Defined Networking (SDN). Message Queuing Telemetry Transport (MQTT) is chosen as the candidate IoT protocol that transports data generated from IoT devices (a:k:a: MQTT publishers) to a remote host (called MQTT broker). We implement the MQTT broker functionalities integrated at the edge-switches, that serves as a platform to perform simple message-based analytics at the switches, and also deliver messages in a reliable manner to the end-host for post-delivery analytics. We mathematically validate the improved delivery performance as offered by the proposed switch-embedded brokers.",
"title": ""
}
] |
scidocsrr
|
0a81286afb381a9f6e2825a03f13265d
|
Prediction of long-term clinical outcomes using simple functional exercise performance tests in patients with COPD: a 5-year prospective cohort study
|
[
{
"docid": "0dc0815505f065472b3929792de638b4",
"text": "Our aim was to comprehensively validate the 1-min sit-to-stand (STS) test in chronic obstructive pulmonary disease (COPD) patients and explore the physiological response to the test.We used data from two longitudinal studies of COPD patients who completed inpatient pulmonary rehabilitation programmes. We collected 1-min STS test, 6-min walk test (6MWT), health-related quality of life, dyspnoea and exercise cardiorespiratory data at admission and discharge. We assessed the learning effect, test-retest reliability, construct validity, responsiveness and minimal important difference of the 1-min STS test.In both studies (n=52 and n=203) the 1-min STS test was strongly correlated with the 6MWT at admission (r=0.59 and 0.64, respectively) and discharge (r=0.67 and 0.68, respectively). Intraclass correlation coefficients (95% CI) between 1-min STS tests were 0.93 (0.83-0.97) for learning effect and 0.99 (0.97-1.00) for reliability. Standardised response means (95% CI) were 0.87 (0.58-1.16) and 0.91 (0.78-1.07). The estimated minimal important difference was three repetitions. End-exercise oxygen consumption, carbon dioxide output, ventilation, breathing frequency and heart rate were similar in the 1-min STS test and 6MWT.The 1-min STS test is a reliable, valid and responsive test for measuring functional exercise capacity in COPD patients and elicited a physiological response comparable to that of the 6MWT.",
"title": ""
}
] |
[
{
"docid": "b25379a7a48ef2b6bcc2df8d84d7680b",
"text": "Microblogging (Twitter or Facebook) has become a very popular communication tool among Internet users in recent years. Information is generated and managed through either computer or mobile devices by one person and is consumed by many other persons, with most of this user-generated content being textual information. As there are a lot of raw data of people posting real time messages about their opinions on a variety of topics in daily life, it is a worthwhile research endeavor to collect and analyze these data, which may be useful for users or managers to make informed decisions, for example. However this problem is challenging because a micro-blog post is usually very short and colloquial, and traditional opinion mining algorithms do not work well in such type of text. Therefore, in this paper, we propose a new system architecture that can automatically analyze the sentiments of these messages. We combine this system with manually annotated data from Twitter, one of the most popular microblogging platforms, for the task of sentiment analysis. In this system, machines can learn how to automatically extract the set of messages which contain opinions, filter out nonopinion messages and determine their sentiment directions (i.e. positive, negative). Experimental results verify the effectiveness of our system on sentiment analysis in real microblogging applications.",
"title": ""
},
{
"docid": "2bba03660a752f7033e8ecd95eb6bdbd",
"text": "Crowdsensing has the potential to support human-driven sensing and data collection at an unprecedented scale. While many organizers of data collection campaigns may have extensive domain knowledge, they do not necessarily have the skills required to develop robust software for crowdsensing. In this paper, we present Mobile Campaign Designer, a tool that simplifies the creation of mobile crowdsensing applications. Using Mobile Campaign Designer, an organizer is able to define parameters about their crowdsensing campaign, and the tool generates the source code and an executable for a tailored mobile application that embodies the current best practices in crowdsensing. An evaluation of the tool shows that users at all levels of technical expertise are capable of creating a crowdsensing application in an average of five minutes, and the generated applications are comparable in quality to existing crowdsensing applications.",
"title": ""
},
{
"docid": "125259c4471d4250214fec50b5e97522",
"text": "The switched reluctance motor (SRM) is a promising drive solution for electric vehicle propulsion thanks to its simple, rugged structure, satisfying performance and low price. Among other SRMs, the axial flux SRM (AFSRM) is a strong candidate for in-wheel drive applications because of its high torque/power density and compact disc shape. In this paper, a four-phase 8-stator-pole 6-rotor-pole double-rotor AFSRM is investigated for an e-bike application. A series of analyses are conducted to reduce the torque ripple by shaping the rotor poles, and a multi-level air gap geometry is designed with specific air gap dimensions at different positions. Both static and dynamic analyses show significant torque ripple reduction while maintaining the average electromagnetic output torque at the demanded level.",
"title": ""
},
{
"docid": "78f4ac2d266d64646a7d9bc735257f9d",
"text": "To achieve dynamic inference in pixel labeling tasks, we propose Pixel-wise Attentional Gating (PAG), which learns to selectively process a subset of spatial locations at each layer of a deep convolutional network. PAG is a generic, architecture-independent, problem-agnostic mechanism that can be readily “plugged in” to an existing model with fine-tuning. We utilize PAG in two ways: 1) learning spatially varying pooling fields that improve model performance without the extra computation cost associated with multi-scale pooling, and 2) learning a dynamic computation policy for each pixel to decrease total computation (FLOPs) while maintaining accuracy. We extensively evaluate PAG on a variety of per-pixel labeling tasks, including semantic segmentation, boundary detection, monocular depth and surface normal estimation. We demonstrate that PAG allows competitive or state-ofthe-art performance on these tasks. Our experiments show that PAG learns dynamic spatial allocation of computation over the input image which provides better performance trade-offs compared to related approaches (e.g., truncating deep models or dynamically skipping whole layers). Generally, we observe PAG can reduce computation by 10% without noticeable loss in accuracy and performance degrades gracefully when imposing stronger computational constraints.",
"title": ""
},
{
"docid": "f53f739dd526e3f954aabded123f0710",
"text": "Successful Free/Libre Open Source Software (FLOSS) projects must attract and retain high-quality talent. Researchers have invested considerable effort in the study of core and peripheral FLOSS developers. To this point, one critical subset of developers that have not been studied are One-Time code Contributors (OTC) – those that have had exactly one patch accepted. To understand why OTCs have not contributed another patch and provide guidance to FLOSS projects on retaining OTCs, this study seeks to understand the impressions, motivations, and barriers experienced by OTCs. We conducted an online survey of OTCs from 23 popular FLOSS projects. Based on the 184 responses received, we observed that OTCs generally have positive impressions of their FLOSS project and are driven by a variety of motivations. Most OTCs primarily made contributions to fix bugs that impeded their work and did not plan on becoming long term contributors. Furthermore, OTCs encounter a number of barriers that prevent them from continuing to contribute to the project. Based on our findings, there are some concrete actions FLOSS projects can take to increase the chances of converting OTCs into long-term contributors.",
"title": ""
},
{
"docid": "21916d34fb470601fb6376c4bcd0839a",
"text": "BACKGROUND\nCutibacterium (Propionibacterium) acnes is assumed to play an important role in the pathogenesis of acne.\n\n\nOBJECTIVES\nTo examine if clones with distinct virulence properties are associated with acne.\n\n\nMETHODS\nMultiple C. acnes isolates from follicles and surface skin of patients with moderate to severe acne and healthy controls were characterized by multilocus sequence typing. To determine if CC18 isolates from acne patients differ from those of controls in the possession of virulence genes or lack of genes conducive to a harmonious coexistence the full genomes of dominating CC18 follicular clones from six patients and five controls were sequenced.\n\n\nRESULTS\nIndividuals carried one to ten clones simultaneously. The dominating C. acnes clones in follicles from acne patients were exclusively from the phylogenetic clade I-1a and all belonged to clonal complex CC18 with the exception of one patient dominated by the worldwide-disseminated and often antibiotic resistant clone ST3. The clonal composition of healthy follicles showed a more heterogeneous pattern with follicles dominated by clones representing the phylogenetic clades I-1a, I-1b, I-2 and II. Comparison of follicular CC18 gene contents, allelic versions of putative virulence genes and their promoter regions, and 54 variable-length intragenic and inter-genic homopolymeric tracts showed extensive conservation and no difference associated with the clinical origin of isolates.\n\n\nCONCLUSIONS\nThe study supports that C. acnes strains from clonal complex CC18 and the often antibiotic resistant clone ST3 are associated with acne and suggests that susceptibility of the host rather than differences within these clones may determine the clinical outcome of colonization.",
"title": ""
},
{
"docid": "c157b149d334b2cc1f718d70ef85e75e",
"text": "The large inter-individual variability within the normal population, the limited reproducibility due to habituation or fatigue, and the impact of instruction and the subject's motivation, all constitute a major problem in posturography. These aspects hinder reliable evaluation of the changes in balance control in the case of disease and complicate objectivation of the impact of therapy and sensory input on balance control. In this study, we examine whether measurement of balance control near individualized limits of stability and under very challenging sensory conditions might reduce inter- and intra-individual variability compared to the well-known Sensory Organization Test (SOT). To do so, subjects balance on a platform on which instability increases automatically until body orientation or body sway velocity surpasses a safety limit. The maximum tolerated platform instability is then used as a measure for balance control under 10 different sensory conditions. Ninety-seven healthy subjects and 107 patients suffering from chronic dizziness (whiplash syndrome (n = 25), Meniere's disease (n = 28), acute (n = 28) or gradual (n = 26) peripheral function loss) were tested. In both healthy subjects and patients this approach resulted in a low intra-individual variability (< 14.5(%). In healthy subjects and patients, balance control was maximally affected by closure of the eyes and by vibration of the Achilles' tendons. The other perturbation techniques applied (sway referenced vision or platform, cooling of the foot soles) were less effective. Combining perturbation techniques reduced balance control even more, but the effect was less than the linear summation of the effect induced by the techniques applied separately. The group averages of healthy subjects show that vision contributed maximum 37%, propriocepsis minimum 26%, and labyrinths maximum 44% to balance control in healthy subjects. However, a large inter-individual variability was observed. Balance control of each patient group was less than in healthy subjects in all sensory conditions. Similar to healthy subjects, patients also show a large inter-individual variability, which results in a low sensitivity of the test. With the exception of some minor differences between Whiplash and Meniere patients, balance control did not differ between the four patient groups. This points to a low specificity of the test. Balance control was not correlated with the outcome of the standard vestibular examination. This study strengthens our notion that the contribution of the sensory inputs to balance control differs considerably per individual and may simply be due to differences in the vestibular function related to the specific pathology, but also to differences in motor learning strategies in relation to daily life requirements. It is difficult to provide clinically relevant normative data. We conclude that, like the SOT, the current test is merely a functional test of balance with limited diagnostic value.",
"title": ""
},
{
"docid": "f562bd72463945bd35d42894e4815543",
"text": "Sound levels in animal shelters regularly exceed 100 dB. Noise is a physical stressor on animals that can lead to behavioral, physiological, and anatomical responses. There are currently no policies regulating noise levels in dog kennels. The objective of this study was to evaluate the noise levels dogs are exposed to in an animal shelter on a continuous basis and to determine the need, if any, for noise regulations. Noise levels at a newly constructed animal shelter were measured using a noise dosimeter in all indoor dog-holding areas. These holding areas included large dog adoptable, large dog stray, small dog adoptable, small dog stray, and front intake. The noise level was highest in the large adoptable area. Sound from the large adoptable area affected some of the noise measurements for the other rooms. Peak noise levels regularly exceeded the measuring capability of the dosimeter (118.9 dBA). Often, in new facility design, there is little attention paid to noise abatement, despite the evidence that noise causes physical and psychological stress on dogs. To meet their behavioral and physical needs, kennel design should also address optimal sound range.",
"title": ""
},
{
"docid": "c89b903e497ebe8e8d89e8d1d931fae1",
"text": "Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all advantages cited for artificial neural networks, their performance for some real time series is not satisfactory. Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting task, especially when higher forecasting accuracy is needed. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "eb6ee2fd1f7f1d0d767e4dde2d811bed",
"text": "This paper presents a mathematical framework for performance analysis of Behavior Trees (BTs). BTs are a recent alternative to Finite State Machines (FSMs), for doing modular task switching in robot control architectures. By encoding the switching logic in a tree structure, instead of distributing it in the states of a FSM, modularity and reusability are improved. In this paper, we compute performance measures, such as success/failure probabilities and execution times, for plans encoded and executed by BTs. To do this, we first introduce Stochastic Behavior Trees (SBT), where we assume that the probabilistic performance measures of the basic action controllers are given. We then show how Discrete Time Markov Chains (DTMC) can be used to aggregate these measures from one level of the tree to the next. The recursive structure of the tree then enables us to step by step propagate such estimates from the leaves (basic action controllers) to the root (complete task execution). Finally, we verify our analytical results using massive Monte Carlo simulations, and provide an illustrative example of the results for a complex robotic task.",
"title": ""
},
{
"docid": "eb3f72e91f13a3c6faee53c6d4cd4174",
"text": "Recent studies indicate that nearly 75% of queries issued to Web search engines aim at finding information about entities, which are material objects or concepts that exist in the real world or fiction (e.g. people, organizations, products, etc.). Most common information needs underlying this type of queries include finding a certain entity (e.g. “Einstein relativity theory”), a particular attribute or property of an entity (e.g. “Who founded Intel?”) or a list of entities satisfying a certain criteria (e.g. “Formula 1 drivers that won the Monaco Grand Prix”). These information needs can be efficiently addressed by presenting structured information about a target entity or a list of entities retrieved from a knowledge graph either directly as search results or in addition to the ranked list of documents. This tutorial provides a summary of the recent research in knowledge graph entity representation methods and retrieval models. The first part of this tutorial introduces state-of-the-art methods for entity representation, from multi-fielded documents with flat and hierarchical structure to latent dimensional representations based on tensor factorization, while the second part presents recent developments in entity retrieval models, including Fielded Sequential Dependence Model (FSDM) and its parametric extension (PFSDM), as well as entity set expansion and ranking methods.",
"title": ""
},
{
"docid": "e98e902e22d9b8acb6e9e9dcd241471c",
"text": "We introduce a novel iterative approach for event coreference resolution that gradually builds event clusters by exploiting inter-dependencies among event mentions within the same chain as well as across event chains. Among event mentions in the same chain, we distinguish withinand cross-document event coreference links by using two distinct pairwise classifiers, trained separately to capture differences in feature distributions of withinand crossdocument event clusters. Our event coreference approach alternates between WD and CD clustering and combines arguments from both event clusters after every merge, continuing till no more merge can be made. And then it performs further merging between event chains that are both closely related to a set of other chains of events. Experiments on the ECB+ corpus show that our model outperforms state-of-the-art methods in joint task of WD and CD event coreference resolution.",
"title": ""
},
{
"docid": "2d0d42a6c712d93ace0bf37ffe786a75",
"text": "Personalized search systems tailor search results to the current user intent using historic search interactions. This relies on being able to find pertinent information in that user's search history, which can be challenging for unseen queries and for new search scenarios. Building richer models of users' current and historic search tasks can help improve the likelihood of finding relevant content and enhance the relevance and coverage of personalization methods. The task-based approach can be applied to the current user's search history, or as we focus on here, all users' search histories as so-called \"groupization\" (a variant of personalization whereby other users' profiles can be used to personalize the search experience). We describe a method whereby we mine historic search-engine logs to find other users performing similar tasks to the current user and leverage their on-task behavior to identify Web pages to promote in the current ranking. We investigate the effectiveness of this approach versus query-based matching and finding related historic activity from the current user (i.e., group versus individual). As part of our studies we also explore the use of the on-task behavior of particular user cohorts, such as people who are expert in the topic currently being searched, rather than all other users. Our approach yields promising gains in retrieval performance, and has direct implications for improving personalization in search systems.",
"title": ""
},
{
"docid": "190bf6cd8a2e9a5764b42d01b7aec7c8",
"text": "We propose a method for compiling a class of Σ-protocols (3-move public-coin protocols) into non-interactive zero-knowledge arguments. The method is based on homomorphic encryption and does not use random oracles. It only requires that a private/public key pair is set up for the verifier. The method applies to all known discrete-log based Σ-protocols. As applications, we obtain non-interactive threshold RSA without random oracles, and non-interactive zero-knowledge for NP more efficiently than by previous methods.",
"title": ""
},
{
"docid": "2a0577aa61ca1cbde207306fdb5beb08",
"text": "In recent years, researchers have shown that unwanted web tracking is on the rise, as advertisers are trying to capitalize on users' online activity, using increasingly intrusive and sophisticated techniques. Among these, browser fingerprinting has received the most attention since it allows trackers to uniquely identify users despite the clearing of cookies and the use of a browser's private mode. In this paper, we investigate and quantify the fingerprintability of browser extensions, such as, AdBlock and Ghostery. We show that an extension's organic activity in a page's DOM can be used to infer its presence, and develop XHound, the first fully automated system for fingerprinting browser extensions. By applying XHound to the 10,000 most popular Google Chrome extensions, we find that a significant fraction of popular browser extensions are fingerprintable and could thus be used to supplement existing fingerprinting methods. Moreover, by surveying the installed extensions of 854 users, we discover that many users tend to install different sets of fingerprintable browser extensions and could thus be uniquely, or near-uniquely identifiable by extension-based fingerprinting. We use XHound's results to build a proof-of-concept extension-fingerprinting script and show that trackers can fingerprint tens of extensions in just a few seconds. Finally, we describe why the fingerprinting of extensions is more intrusive than the fingerprinting of other browser and system properties, and sketch two different approaches towards defending against extension-based fingerprinting.",
"title": ""
},
{
"docid": "f794b6914cc99fcd2a13b81e6fbe12d2",
"text": "An unprecedented rise in the number of asylum seekers and refugees was seen in Europe in 2015, and it seems that numbers are not going to be reduced considerably in 2016. Several studies have tried to estimate risk of infectious diseases associated with migration but only very rarely these studies make a distinction on reason for migration. In these studies, workers, students, and refugees who have moved to a foreign country are all taken to have the same disease epidemiology. A common disease epidemiology across very different migrant groups is unlikely, so in this review of infectious diseases in asylum seekers and refugees, we describe infectious disease prevalence in various types of migrants. We identified 51 studies eligible for inclusion. The highest infectious disease prevalence in refugee and asylum seeker populations have been reported for latent tuberculosis (9-45%), active tuberculosis (up to 11%), and hepatitis B (up to 12%). The same population had low prevalence of malaria (7%) and hepatitis C (up to 5%). There have been recent case reports from European countries of cutaneous diphtheria, louse-born relapsing fever, and shigella in the asylum-seeking and refugee population. The increased risk that refugees and asylum seekers have for infection with specific diseases can largely be attributed to poor living conditions during and after migration. Even though we see high transmission in the refugee populations, there is very little risk of spread to the autochthonous population. These findings support the efforts towards creating a common European standard for the health reception and reporting of asylum seekers and refugees.",
"title": ""
},
{
"docid": "be3bf1e95312cc0ce115e3aaac2ecc96",
"text": "This paper contributes a first study into how different human users deliver simultaneous control and feedback signals during human-robot interaction. As part of this work, we formalize and present a general interactive learning framework for online cooperation between humans and reinforcement learning agents. In many humanmachine interaction settings, there is a growing gap between the degrees-of-freedom of complex semi-autonomous systems and the number of human control channels. Simple human control and feedback mechanisms are required to close this gap and allow for better collaboration between humans and machines on complex tasks. To better inform the design of concurrent control and feedback interfaces, we present experimental results from a human-robot collaborative domain wherein the human must simultaneously deliver both control and feedback signals to interactively train an actor-critic reinforcement learning robot. We compare three experimental conditions: 1) human delivered control signals, 2) reward-shaping feedback signals, and 3) simultaneous control and feedback. Our results suggest that subjects provide less feedback when simultaneously delivering feedback and control signals and that control signal quality is not significantly diminished. Our data suggest that subjects may also modify when and how they provide feedback. Through algorithmic development and tuning informed by this study, we expect semi-autonomous actions of robotic agents can be better shaped by human feedback, allowing for seamless collaboration and improved performance in difficult interactive domains. University of Alberta, Dep. of Computing Science, Edmonton, Canada University of Alberta, Deps. of Medicine and Computing Science, Edmonton, Alberta, Canada. Correspondence to: Kory Mathewson <korym@ualberta.ca>. Under review for the 34 th International Conference on Machine Learning, Sydney, Australia, 2017. JMLR: W&CP. Copyright 2017 by the authors. Figure 1. Experimental configuration. One of the study participants with the Myo band on their right arm providing a control signal, while simultaneously providing feedback signals with their left hand. The Aldebaran Nao robot simulation is visible on the screen alongside experimental logging.",
"title": ""
},
{
"docid": "e4fb31ebacb093932517719884264b46",
"text": "Monitoring and control the environmental parameters in agricultural constructions are essential to improve energy efficiency and productivity. Real-time monitoring allows the detection and early correction of unfavourable situations, optimizing consumption and protecting crops against diseases. This work describes an automatic system for monitoring farm environments with the aim of increasing efficiency and quality of the agricultural environment. Based on the Internet of Things, the system uses a low-cost wireless sensor network, called Sun Spot, programmed in Java, with the Java VM running on the device itself and the Arduino platform for Internet connection. The data collected is shared through the social network of Facebook. The temperature and brightness parameters are monitored in real time. Other sensors can be added to monitor the issue for specific purposes. The results show that conditions within greenhouses may in some cases be very different from those expected. Therefore, the proposed system can provide an effective tool to improve the quality of agricultural production and energy efficiency.",
"title": ""
},
{
"docid": "370ec5c556b70ead92bc45d1f419acaf",
"text": "Despite the identification of circulating tumor cells (CTCs) and cell-free DNA (cfDNA) as potential blood-based biomarkers capable of providing prognostic and predictive information in cancer, they have not been incorporated into routine clinical practice. This resistance is due in part to technological limitations hampering CTC and cfDNA analysis, as well as a limited understanding of precisely how to interpret emergent biomarkers across various disease stages and tumor types. In recognition of these challenges, a group of researchers and clinicians focused on blood-based biomarker development met at the Canadian Cancer Trials Group (CCTG) Spring Meeting in Toronto, Canada on 29 April 2016 for a workshop discussing novel CTC/cfDNA technologies, interpretation of data obtained from CTCs versus cfDNA, challenges regarding disease evolution and heterogeneity, and logistical considerations for incorporation of CTCs/cfDNA into clinical trials, and ultimately into routine clinical use. The objectives of this workshop included discussion of the current barriers to clinical implementation and recent progress made in the field, as well as fueling meaningful collaborations and partnerships between researchers and clinicians. We anticipate that the considerations highlighted at this workshop will lead to advances in both basic and translational research and will ultimately impact patient management strategies and patient outcomes.",
"title": ""
},
{
"docid": "86fca69ae48592e06109f7b05180db28",
"text": "Background: The software development industry has been adopting agile methods instead of traditional software development methods because they are more flexible and can bring benefits such as handling requirements changes, productivity gains and business alignment. Objective: This study seeks to evaluate, synthesize, and present aspects of research on agile methods tailoring including the method tailoring approaches adopted and the criteria used for agile practice selection. Method: The method adopted was a Systematic Literature Review (SLR) on studies published from 2002 to 2014. Results: 56 out of 783 papers have been identified as describing agile method tailoring approaches. These studies have been identified as case studies regarding the empirical research, as solution proposals regarding the research type, and as evaluation studies regarding the research validation type. Most of the papers used method engineering to implement tailoring and were not specific to any agile method on their scope. Conclusion: Most of agile methods tailoring research papers proposed or improved a technique, were implemented as case studies analyzing one case in details and validated their findings using evaluation. Method engineering was the base for tailoring, the approaches are independent of agile method and the main criteria used are internal environment and objectives variables. © 2015 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
e1c46ed63c3fd803da72841ee770c658
|
Robust License Plate Detection Using Covariance Descriptor in a Neural Network Framework
|
[
{
"docid": "1d61e1eb5275444c6a2a3f8ad5c2865a",
"text": "We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore,we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance fetures is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix. European Conference on Computer Vision (ECCV) This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2006 201 Broadway, Cambridge, Massachusetts 02139 Region Covariance: A Fast Descriptor for Detection and Classification Oncel Tuzel, Fatih Porikli, and Peter Meer 1 Computer Science Department, 2 Electrical and Computer Engineering Department, Rutgers University, Piscataway, NJ 08854 {otuzel, meer}@caip.rutgers.edu 3 Mitsubishi Electric Research Laboratories, Cambridge, MA 02139 {fatih}@merl.com Abstract. We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix. We describe a new region descriptor and apply it to two problems, object detection and texture classification. 
The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix.",
"title": ""
}
] |
[
{
"docid": "3bb4d0f44ed5a2c14682026090053834",
"text": "A Meander Line Antenna (MLA) for 2.45 GHz is proposed. This research focuses on the optimum value of gain and reflection coefficient. Therefore, the MLA's parametric studies is discussed which involved the number of turn, width of feed (W1), length of feed (LI) and vertical length partial ground (L3). As a result, the studies have significantly achieved MLA's gain and reflection coefficient of 3.248dB and -45dB respectively. The MLA also resembles the monopole antenna behavior of Omni-directional radiation pattern. Measured and simulated results are presented. The proposed antenna has big potential to be implemented for WLAN device such as optical mouse application.",
"title": ""
},
{
"docid": "8ef80a3ae74ab4d53bad33aa79d469fd",
"text": "One of the most prolific topics of research in the field of computer vision is pattern detection in images. A large number of practical applications for face detection exist. Contemporary work even suggests that any of the results from specialized detectors can be approximated by using fast detection classifiers. In this project, we developed an algorithm which detected faces from the input image with a lower false detection rate and lower computation cost using the ensemble effects of computer vision concepts. This algorithm utilized the concepts of recognizing skin color, filtering the binary image, detecting blobs and extracting different features from the face. The result is supported by the statistics obtained from calculating the parameters defining the parts of the face. The project also implements the highly powerful concept of Support Vector Machine that is used for the classification of images into face and non-face class. This classification is based on the training data set and indicators of luminance value, chrominance value, saturation value, elliptical value and eye and mouth map values.",
"title": ""
},
{
"docid": "17f8affa7807932f58950303c3b62296",
"text": "The Internet of Things (IoT) has grown in recent years to a huge branch of research: RFID, sensors and actuators as typical IoT devices are increasingly used as resources integrated into new value added applications of the Future Internet and are intelligently combined using standardised software services. While most of the current work on IoT integration focuses on areas of the actual technical implementation, little attention has been given to the integration of the IoT paradigm and its devices coming with native software components as resources in business processes of traditional enterprise resource planning systems. In this paper, we identify and integrate IoT resources as a novel automatic resource type on the business process layer beyond the classical human resource task-centric view of the business process model in order to face expanding resource planning challenges of future enterprise environments.",
"title": ""
},
{
"docid": "df8885ad4dbf2a8c1cfa4dc2ddd33975",
"text": "Many recent state-of-the-art recommender systems such as D-ATT, TransNet and DeepCoNN exploit reviews for representation learning. This paper proposes a new neural architecture for recommendation with reviews. Our model operates on a multi-hierarchical paradigm and is based on the intuition that not all reviews are created equal, i.e., only a selected few are important. The importance, however, should be dynamically inferred depending on the current target. To this end, we propose a review-by-review pointer-based learning scheme that extracts important reviews from user and item reviews and subsequently matches them in a word-by-word fashion. This enables not only the most informative reviews to be utilized for prediction but also a deeper word-level interaction. Our pointer-based method operates with a gumbel-softmax based pointer mechanism that enables the incorporation of discrete vectors within differentiable neural architectures. Our pointer mechanism is co-attentive in nature, learning pointers which are co-dependent on user-item relationships. Finally, we propose a multi-pointer learning scheme that learns to combine multiple views of user-item interactions. We demonstrate the effectiveness of our proposed model via extensive experiments on 24 benchmark datasets from Amazon and Yelp. Empirical results show that our approach significantly outperforms existing state-of-the-art models, with up to 19% and 71% relative improvement when compared to TransNet and DeepCoNN respectively. We study the behavior of our multi-pointer learning mechanism, shedding light on 'evidence aggregation' patterns in review-based recommender systems.",
"title": ""
},
{
"docid": "b3a6bc6036376d33ef78896f21778a21",
"text": "Document clustering has many important applications in the area of data mining and information retrieval. Many existing document clustering techniques use the “bag-of-words” model to represent the content of a document. However, this representation is only effective for grouping related documents when these documents share a large proportion of lexically equivalent terms. In other words, instances of synonymy between related documents are ignored, which can reduce the effectiveness of applications using a standard full-text document representation. To address this problem, we present a new approach for clustering scientific documents, based on the utilization of citation contexts. A citation context is essentially the text surrounding the reference markers used to refer to other scientific works. We hypothesize that citation contexts will provide relevant synonymous and related vocabulary which will help increase the effectiveness of the bag-of-words representation. In this paper, we investigate the power of these citation-specific word features, and compare them with the original document’s textual representation in a document clustering task on two collections of labeled scientific journal papers from two distinct domains: High Energy Physics and Genomics. We also compare these text-based clustering techniques with a link-based clustering algorithm which determines the similarity between documents based on the number of co-citations, that is in-links represented by citing documents and out-links represented by cited documents. Our experimental results indicate that the use of citation contexts, when combined with the vocabulary in the full-text of the document, is a promising alternative means of capturing critical topics covered by journal articles. More specifically, this document representation strategy when used by the clustering algorithm investigated in this paper, outperforms both the full-text clustering approach and the link-based clustering technique on both scientific journal datasets.",
"title": ""
},
{
"docid": "14f235fa9a30d8686ea5f4bfe7823fcc",
"text": "Due to limited bandwidth, storage, and computational resources, and to the dynamic nature of the Web, search engines cannot index every Web page, and even the covered portion of the Web cannot be monitored continuously for changes. Therefore it is essential to develop effective crawling strategies to prioritize the pages to be indexed. The issue is even more important for topic-specific search engines, where crawlers must make additional decisions based on the relevance of visited pages. However, it is difficult to evaluate alternative crawling strategies because relevant sets are unknown and the search space is changing. We propose three different methods to evaluate crawling strategies. We apply the proposed metrics to compare three topic-driven crawling algorithms based on similarity ranking, link analysis, and adaptive agents.",
"title": ""
},
{
"docid": "3c135cae8654812b2a4f805cec78132e",
"text": "Binarized Neural Network (BNN) removes bitwidth redundancy in classical CNN by using a single bit (-1/+1) for network parameters and intermediate representations, which has greatly reduced the off-chip data transfer and storage overhead. However, a large amount of computation redundancy still exists in BNN inference. By analyzing local properties of images and the learned BNN kernel weights, we observe an average of ~78% input similarity and ~59% weight similarity among weight kernels, measured by our proposed metric in common network architectures. Thus there does exist redundancy that can be exploited to further reduce the amount of on-chip computations.\n Motivated by the observation, in this paper, we proposed two types of fast and energy-efficient architectures for BNN inference. We also provide analysis and insights to pick the better strategy of these two for different datasets and network models. By reusing the results from previous computation, much cycles for data buffer access and computations can be skipped. By experiments, we demonstrate that 80% of the computation and 40% of the buffer access can be skipped by exploiting BNN similarity. Thus, our design can achieve 17% reduction in total power consumption, 54% reduction in on-chip power consumption and 2.4× maximum speedup, compared to the baseline without applying our reuse technique. Our design also shows 1.9× more area-efficiency compared to state-of-the-art BNN inference design. We believe our deployment of BNN on FPGA leads to a promising future of running deep learning models on mobile devices.",
"title": ""
},
{
"docid": "047db26003f71d5d4a8f21e976e6fe9e",
"text": "Nanoscale field-programmable gate array (FPGA) circuits are more prone to radiation-induced effects in harsh environments because of their memory-based reconfigurable logic fabric. Consequently, for mission- or safety-critical applications, appropriate fault-tolerance techniques are widely employed. The most commonly applied technique for hardening FPGAs against radiation-induced upsets is triple modular redundancy (TMR). Voting circuits used in TMR implementations are decentralized and consensus is calculated from the redundant outputs off-chip. However, if there are an insufficient number of pins available on the chip carrier, the TMR system must be reduced to an on-chip unprotected simplex system, meaning voters used at those locations become single point of failure. In this paper, we propose a self-checking voting circuit for increased reliability consensus voting on FPGAs. Through fault injection and reliability analyses, we demonstrate that the proposed voter, which utilizes redundant voting copies, is approximately 26% more reliable than an unprotected simplex voter when reliability values of voters over normalized time are averaged.",
"title": ""
},
{
"docid": "62eaac4d22c2bc278f411761fc3d493f",
"text": "Smartphone users have their own unique behavioral patterns when tapping on the touch screens. These personal patterns are reflected on the different rhythm, strength, and angle preferences of the applied force. Since smart phones are equipped with various sensors like accelerometer, gyroscope, and touch screen sensors, capturing a user's tapping behaviors can be done seamlessly. Exploiting the combination of four features (acceleration, pressure, size, and time) extracted from smart phone sensors, we propose a non-intrusive user verification mechanism to substantiate whether an authenticating user is the true owner of the smart phone or an impostor who happens to know the pass code. Based on the tapping data collected from over 80 users, we conduct a series of experiments to validate the efficacy of our proposed system. Our experimental results show that our verification system achieves high accuracy with averaged equal error rates of down to 3.65%. As our verification system can be seamlessly integrated with the existing user authentication mechanisms on smart phones, its deployment and usage are transparent to users and do not require any extra hardware support.",
"title": ""
},
{
"docid": "4924441de38f1b28e66330a1cb219f4b",
"text": "Online marketing is one of the best practices used to establish a brand and to increase its popularity. Advertisements are used in a better way to showcase the company’s product/service and give rise to a worthy online marketing strategy. Posting an advertisement on utilitarian web pages helps to maximize brand reach and get a better feedback. Now-a-days companies are cautious of their brand image on the Internet due to the growing number of Internet users. Since there are billions of Web sites on the Internet, it becomes difficult for companies to really decide where to advertise on the Internet for brand popularity. What if, the company advertise on a page which is visited by less number of the interested (for a particular type of product) users instead of a web page which is visited by more number of the interested users?—this doubt and uncertainty—is a core issue faced by many companies. This research paper presents a Brand analysis framework and suggests some experimental practices to ensure efficiency of the proposed framework. This framework is divided into three components—(1) Web site network formation framework—a framework that forms a Web site network of a specific search query obtained from resultant web pages of three search engines-Google, Yahoo & Bing and their associated web pages; (2) content scraping framework—it crawls the content of web pages existing in the framework-formed Web site network; (3) rank assignment of networked web pages—text edge processing algorithm has been used to find out terms of interest and their occurrence associated with search query. We have further applied sentiment analysis to validate positive or negative impact of the sentences, having the search term and its associated terms (with reference to the search query) to identify impact of web page. Later, on the basis of both—text edge analysis and sentiment analysis results, we assigned a rank to networked web pages and online social network pages. In this research work, we present experiments for ‘Motorola smart phone,’ ‘LG smart phone’ and ‘Samsung smart phone’ as search query and sampled the Web site network of top 20 search results of all three search engines and examined up to 60 search results for each search engine. This work is useful to target the right online location for specific brand marketing. Once the brand knows the web pages/social media pages containing high brand affinity and ensures that the content of high affinity web page/social media page has a positive impact, we advertise at that respective online location. Thus, targeted brand analysis framework for online marketing not only has benefits for the advertisement agencies but also for the customers.",
"title": ""
},
{
"docid": "518090ef17c65c643287c65660eed699",
"text": "AbstructThis paper presents solutions to the entropyconstrained scalar quantizer (ECSQ) design problem for two sources commonly encountered in image and speech compression applications: sources having the exponential and Laplacian probability density functions. We use the memoryless property of the exponential distribution to develop a new noniterative algorithm for obtaining the optimal quantizer design. We show how to obtain the optimal ECSQ either with or without an additional constraint on the number of levels in the quantizer. In contrast to prior methods, which require multidimensional iterative solution of a large number of nonlinear equations, the new method needs only a single sequence of solutions to one-dimensional nonlinear equations (in some Laplacian cases, one additional two-dimensional solution is needed). As a result, the new method is orders of magnitude faster than prior ones. We show that as the constraint on the number of levels in the quantizer is relaxed, the optimal ECSQ becomes a uniform threshold quantizer (UTQ) for exponential, but not for Laplacian sources. We then further examine the performance of the UTQ and optimal ECSQ, and also investigate some interesting alternatives to the UTQ, including a uniform-reconstruction quantizer (URQ) and a constant dead-zone ratio quantizer (CDZRQ).",
"title": ""
},
{
"docid": "a6f4c2e8a754e31b1518c5fa776460e3",
"text": "The effects of 'natural' disasters in cities can be worse than in other environments, with poor and marginalised urban communities in the developing world being most at risk. To avoid post-disaster destruction and the forced eviction of these communities, proactive and preventive urban planning, including housing, is required. This paper examines current perceptions and practices within international aid organisations regarding the existing and potential roles of urban planning as a tool for reducing disaster risk. It reveals that urban planning confronts many of the generic challenges to mainstreaming risk reduction in development planning. However, it faces additional barriers. The main reasons for the identified lack of integration of urban planning and risk reduction are, first, the marginal position of both fields within international aid organisations, and second, an incompatibility between the respective professional disciplines. To achieve better integration, a conceptual shift from conventional to non-traditional urban planning is proposed. This paper suggests related operative measures and initiatives to achieve this change.",
"title": ""
},
{
"docid": "218c5fdd541a839094e8010ed6a56d22",
"text": "In this paper, we propose a consistent-aware deep learning (CADL) framework for person re-identification in a camera network. Unlike most existing person re-identification methods which identify whether two body images are from the same person, our approach aims to obtain the maximal correct matches for the whole camera network. Different from recently proposed camera network based re-identification methods which only consider the consistent information in the matching stage to obtain a global optimal association, we exploit such consistent-aware information under a deep learning framework where both feature representation and image matching are automatically learned with certain consistent constraints. Specifically, we reach the global optimal solution and balance the performance between different cameras by optimizing the similarity and association iteratively. Experimental results show that our method obtains significant performance improvement and outperforms the state-of-the-art methods by large margins.",
"title": ""
},
{
"docid": "b96c48948572854e9dd1424707358e64",
"text": "Due to the dipolar nature of the geomagnetic field, magnetic anomalies observed anywhere rather than magnetic poles are asymmetric even when the causative body distribution is symmetric. This property complicates the interpretation of magnetic data. Reduction to the pole (RTP) is a technique that converts magnetic anomaly to symmetrical pattern which would have been observed with vertical magnetization. This technique usually is applied in frequency domain which has some disadvantages such as noise induction, necessity of using fixed inclination and declination throughout survey area and also unknown remanent magnetization that in many cases restrict its applicability. Analytic signal is a suitable quantity that can be calculated either in space or frequency domain and its amplitude is independent to magnetization direction. In this paper, analytic signal has been used as RTP operator and applied on the synthetic magnetic data and on the real magnetic data from an area in Shahrood region of Iran and results compared to conventional RTP operation. Results show that least difference is relevant to the causative body location and then analytic signal can be used as substituent method for conventional RTP.",
"title": ""
},
{
"docid": "1c1042473f724da2ba2400110c2d4c48",
"text": "Recent work has shown good recognition results in 3D object recognition using 3D convolutional networks. In this paper, we show that the object orientation plays an important role in 3D recognition. More specifically, we argue that objects induce different features in the network under rotation. Thus, we approach the category-level classification task as a multi-task problem, in which the network is trained to predict the pose of the object in addition to the class label as a parallel task. We show that this yields significant improvements in the classification results. We test our suggested architecture on several datasets representing various 3D data sources: LiDAR data, CAD models, and RGB-D images. We report state-of-the-art results on classification as well as significant improvements in precision and speed over the baseline on 3D detection.",
"title": ""
},
{
"docid": "94db708f9166bb335f1430f279cd9db9",
"text": "Human emotion is a temporally dynamic event which can be inferred from both audio and video feature sequences. In this paper we investigate the long short term memory recurrent neural network (LSTM-RNN) based encoding method for category emotion recognition in the video. LSTM-RNN is able to incorporate knowledge about how emotion evolves over long range successive frames and emotion clues from isolated frame. After encoding, each video clip can be represented by a vector for each input feature sequence. The vectors contain both frame level and sequence level emotion information. These vectors are then concatenated and fed into support vector machine (SVM) to get the final prediction result. Extensive evaluations on Emotion Challenge in the Wild (EmotiW2015) dataset show the efficiency of the proposed encoding method and competitive results are obtained. The final recognition accuracy achieves 46.38% for audio-video emotion recognition sub-challenge, where the challenge baseline is 39.33%.",
"title": ""
},
{
"docid": "9f13ba2860e70e0368584bb4c36d01df",
"text": "Network log messages (e.g., syslog) are expected to be valuable and useful information to detect unexpected or anomalous behavior in large scale networks. However, because of the huge amount of system log data collected in daily operation, it is not easy to extract pinpoint system failures or to identify their causes. In this paper, we propose a method for extracting the pinpoint failures and identifying their causes from network syslog data. The methodology proposed in this paper relies on causal inference that reconstructs causality of network events from a set of time series of events. Causal inference can filter out accidentally correlated events, thus it outputs more plausible causal events than traditional cross-correlation-based approaches can. We apply our method to 15 months’ worth of network syslog data obtained from a nationwide academic network in Japan. The proposed method significantly reduces the number of pseudo correlated events compared with the traditional methods. Also, through three case studies and comparison with trouble ticket data, we demonstrate the effectiveness of the proposed method for practical network operation.",
"title": ""
},
{
"docid": "9859df7dbe200d09af3b598608905314",
"text": "Split-merge moves are a standard component of MCMC algorithms for tasks such as multitarget tracking and fitting mixture models with unknown numbers of components. Achieving rapid mixing for split-merge MCMC has been notoriously difficult, and state-of-the-art methods do not scale well. We explore the reasons for this and propose a new split-merge kernel consisting of two sub-kernels: one combines a “smart” split move that proposes plausible splits of heterogeneous clusters with a “dumb” merge move that proposes merging random pairs of clusters; the other combines a dumb split move with a smart merge move. We show that the resulting smart-dumb/dumb-smart (SDDS) algorithm outperforms previous methods. Experiments with entity-mention models and Dirichlet process mixture models demonstrate much faster convergence and better scaling to large data sets.",
"title": ""
},
{
"docid": "0fca883dcef4ef1f23d2d0006818d009",
"text": "In this paper, we design a new edge-aware structure, named segment graph, to represent the image and we further develop a novel double weighted average image filter (SGF) based on the segment graph. In our SGF, we use the tree distance on the segment graph to define the internal weight function of the filtering kernel, which enables the filter to smooth out high-contrast details and textures while preserving major image structures very well. While for the external weight function, we introduce a user specified smoothing window to balance the smoothing effects from each node of the segment graph. Moreover, we also set a threshold to adjust the edge-preserving performance. These advantages make the SGF more flexible in various applications and overcome the \"halo\" and \"leak\" problems appearing in most of the state-of-the-art approaches. Finally and importantly, we develop a linear algorithm for the implementation of our SGF, which has an O(N) time complexity for both gray-scale and high dimensional images, regardless of the kernel size and the intensity range. Typically, as one of the fastest edge-preserving filters, our CPU implementation achieves 0.15s per megapixel when performing filtering for 3-channel color images. The strength of the proposed filter is demonstrated by various applications, including stereo matching, optical flow, joint depth map upsampling, edge-preserving smoothing, edges detection, image abstraction and texture editing.",
"title": ""
}
] |
scidocsrr
|
899f929c66da02c41251f26f7584f854
|
Factorization Meets the Item Embedding: Regularizing Matrix Factorization with Item Co-occurrence
|
[
{
"docid": "abf3e75c6f714e4c2e2a02f9dd00117b",
"text": "Recent work has shown that collaborative filter-based recommender systems can be improved by incorporating side information, such as natural language reviews, as a way of regularizing the derived product representations. Motivated by the success of this approach, we introduce two different models of reviews and study their effect on collaborative filtering performance. While the previous state-of-the-art approach is based on a latent Dirichlet allocation (LDA) model of reviews, the models we explore are neural network based: a bag-of-words product-of-experts model and a recurrent neural network. We demonstrate that the increased flexibility offered by the product-of-experts model allowed it to achieve state-of-the-art performance on the Amazon review dataset, outperforming the LDA-based approach. However, interestingly, the greater modeling power offered by the recurrent neural network appears to undermine the model's ability to act as a regularizer of the product representations.",
"title": ""
}
] |
[
{
"docid": "cc5ede31b7dd9faa2cce9d2aa8819a3c",
"text": "Despite considerable research on systems, algorithms and hardware to speed up deep learning workloads, there is no standard means of evaluating end-to-end deep learning performance. Existing benchmarks measure proxy metrics, such as time to process one minibatch of data, that do not indicate whether the system as a whole will produce a high-quality result. In this work, we introduce DAWNBench, a benchmark and competition focused on end-to-end training time to achieve a state-of-the-art accuracy level, as well as inference time with that accuracy. Using time to accuracy as a target metric, we explore how different optimizations, including choice of optimizer, stochastic depth, and multi-GPU training, affect end-to-end training performance. Our results demonstrate that optimizations can interact in non-trivial ways when used in conjunction, producing lower speed-ups and less accurate models. We believe DAWNBench will provide a useful, reproducible means of evaluating the many trade-offs in deep learning systems.",
"title": ""
},
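The DAWNBench abstract above is built around time-to-accuracy rather than per-minibatch proxy metrics. A minimal, framework-agnostic Python sketch of that measurement loop (the training and evaluation callables, the target accuracy, and the epoch cap are placeholders supplied by the user):

```python
import time

def time_to_accuracy(train_one_epoch, evaluate, target_acc, max_epochs=100):
    """Measure DAWNBench-style time-to-accuracy: wall-clock time until the
    validation accuracy first reaches `target_acc` (callables are user-supplied)."""
    start = time.perf_counter()
    for epoch in range(1, max_epochs + 1):
        train_one_epoch()
        acc = evaluate()
        if acc >= target_acc:
            return epoch, time.perf_counter() - start
    return None, time.perf_counter() - start  # target never reached

# toy usage with stand-in callables (replace with real training/eval code)
accs = iter([0.52, 0.68, 0.81, 0.94])
epoch, seconds = time_to_accuracy(lambda: time.sleep(0.01), lambda: next(accs), 0.9)
print(epoch, round(seconds, 2))   # reaches the 0.9 target at epoch 4
```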
{
"docid": "4d8573fa52e325e2a058f6c49698dd26",
"text": "Running applications in the cloud efficiently requires much more than deploying software in virtual machines. Cloud applications have to be continuously managed: (1) to adjust their resources to the incoming load and (2) to face transient failures replicating and restarting components to provide resiliency on unreliable infrastructure. Continuous managementmonitors application and infrastructural metrics to provide automated and responsive reactions to failures (healthmanagement) and changing environmental conditions (auto-scaling) minimizing human intervention. In the current practice, management functionalities are provided as infrastructural or third party services. In both cases they are external to the application deployment. We claim that this approach has intrinsic limits, namely that separating management functionalities from the application prevents them from naturally scaling with the application and requires additional management code and human intervention. Moreover, using infrastructure provider services for management functionalities results in vendor lock-in effectively preventing cloud applications to adapt and run on the most effective cloud for the job. In this paper we discuss the main characteristics of cloud native applications, propose a novel architecture that enables scalable and resilient self-managing applications in the cloud, and relate on our experience in porting a legacy application to the cloud applying cloud-native principles. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "66fb14019184326107647df9771046f6",
"text": "Word embeddings are well known to capture linguistic regularities of the language on which they are trained. Researchers also observe that these regularities can transfer across languages. However, previous endeavors to connect separate monolingual word embeddings typically require cross-lingual signals as supervision, either in the form of parallel corpus or seed lexicon. In this work, we show that such cross-lingual connection can actually be established without any form of supervision. We achieve this end by formulating the problem as a natural adversarial game, and investigating techniques that are crucial to successful training. We carry out evaluation on the unsupervised bilingual lexicon induction task. Even though this task appears intrinsically cross-lingual, we are able to demonstrate encouraging performance without any cross-lingual clues.",
"title": ""
},
{
"docid": "5350d593f22c7de3661d15392434b24b",
"text": "Computer games can involve narrative and story elements integrating different forms of interactivity and using different strategies for combining interaction with non-interactive story and narrative elements. While some forms of interactive narrative involve simple selection between fixed narrative sequences, computer games more typically involve the integration of narrative with game play based upon a simulation substrate. These three forms, simulation, game play and narrative, involve pre-authored time structures at different levels of time scale. Simulation involves the lowest levels of time structure, with authored principles specifying how time develops from frame to frame based upon physics, the representation of game objects and their behaviour, and discrete event simulation. Games involve pre-designed game moves, types of actions that may be realized as abstractions over patterns of low level changes at the frame level. Linear and interactive narratives form the highest level of predesigned time structure, framing low-level simulation processes and intermediate level game moves within a high level structure typically based upon classic models of narrative form. Computer games may emphasise one or more of these primary forms as the focus of meaning in the play experience. Story construction within computer games is a function of how these different levels of time structure interact in the play experience, being the result of pre-designed narrative content, story potential and the actual unfolding story created by the actions of the player. There are many strategies for integrating these forms. However, a crucial issue in the design of story content is the relationship between how the resulting game experience relates to user play preferences. In particular, categories of play style can be extended to include preferences for how story content is experienced, based upon audience, performance and immersionist orientations to story. Perceived tensions within computer game form, such as the tension between game play and narrative, are explained, not as fundamental formal issues, but issues of player preferences and how these are satisfied or not by different strategies for story content within a game system.",
"title": ""
},
{
"docid": "540a6dd82c7764eedf99608359776e66",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aea.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "356a2c0b4837cf3d001068d43cb2b633",
"text": "A design is described of a broadband circularly-polarized (CP) slot antenna. A conventional annular-ring slot antenna is first analyzed, and it is found that two adjacent CP modes can be simultaneously excited through the proximity coupling of an L-shaped feed line. By tuning the dimensions of this L-shaped feed line, the two CP modes can be coupled together and a broad CP bandwidth is thus formed. The design method is also valid when the inner circular patch of the annular-ring slot antenna is vertically raised from the ground plane. In this case, the original band-limited ring slot antenna is converted into a wide-band structure that is composed of a circular wide slot and a parasitic patch, and consequently the CP bandwidth is further enhanced. For the patch-loaded wide slot antenna, its key parameters are investigated to show how to couple the two CP modes and achieve impedance matching. The effects of the distance between the parasitic patch and wide slot on the CP bandwidth and antenna gain are also presented and discussed in details.",
"title": ""
},
{
"docid": "25d63ac8bdd3bc3c6348566a63aef76c",
"text": "The mammalian intestine is home to a complex community of trillions of bacteria that are engaged in a dynamic interaction with the host immune system. Determining the principles that govern host–microbiota relationships is the focus of intense research. Here, we describe how the intestinal microbiota is able to influence the balance between pro-inflammatory and regulatory responses and shape the host's immune system. We suggest that improving our understanding of the intestinal microbiota has therapeutic implications, not only for intestinal immunopathologies but also for systemic immune diseases.",
"title": ""
},
{
"docid": "62f8eb0e7eafe1c0d857dadc72008684",
"text": "In the current Web 2.0 era, the popularity of Web resources fluctuates ephemerally, based on trends and social interest. As a result, content-based relevance signals are insufficient to meet users' constantly evolving information needs in searching for Web 2.0 items. Incorporating future popularity into ranking is one way to counter this. However, predicting popularity as a third party (as in the case of general search engines) is difficult in practice, due to their limited access to item view histories. To enable popularity prediction externally without excessive crawling, we propose an alternative solution by leveraging user comments, which are more accessible than view counts. Due to the sparsity of comments, traditional solutions that are solely based on view histories do not perform well. To deal with this sparsity, we mine comments to recover additional signal, such as social influence. By modeling comments as a time-aware bipartite graph, we propose a regularization-based ranking algorithm that accounts for temporal, social influence and current popularity factors to predict the future popularity of items. Experimental results on three real-world datasets --- crawled from YouTube, Flickr and Last.fm --- show that our method consistently outperforms competitive baselines in several evaluation tasks.",
"title": ""
},
{
"docid": "15dba7f87943a6d106f819d86a1a56c3",
"text": "The Gesture Recognition Toolkit is a cross-platform open-source C++ library designed to make real-time machine learning and gesture recognition more accessible for non-specialists. Emphasis is placed on ease of use, with a consistent, minimalist design that promotes accessibility while supporting flexibility and customization for advanced users. The toolkit features a broad range of classification and regression algorithms and has extensive support for building real-time systems. This includes algorithms for signal processing, feature extraction and automatic gesture spotting.",
"title": ""
},
{
"docid": "0b2f0b36bb458221b340b5e4a069fe2b",
"text": "The Dendritic Cell Algorithm (DCA) is inspired by the function of the dendritic cells of the human immune system. In nature, dendritic cells are the intrusion detection agents of the human body, policing the tissue and organs for potential invaders in the form of pathogens. In this research, and abstract model of DC behaviour is developed and subsequently used to form an algorithm, the DCA. The abstraction process was facilitated through close collaboration with laboratorybased immunologists, who performed bespoke experiments, the results of which are used as an integral part of this algorithm. The DCA is a population based algorithm, with each agent in the system represented as an ‘artificial DC’. Each DC has the ability to combine multiple data streams and can add context to data suspected as anomalous. In this chapter the abstraction process and details of the resultant algorithm are given. The algorithm is applied to numerous intrusion detection problems in computer security including the detection of port scans and botnets, where it has produced impressive results with relatively low rates of false positives.",
"title": ""
},
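The DCA abstract above describes a population of artificial dendritic cells, each fusing several signal streams and attaching context to suspect data. The following is a heavily simplified Python sketch of that idea using the common PAMP/danger/safe signal categories and a costimulation-based migration threshold; the signal names, weights and thresholds are illustrative assumptions rather than the published parameterisation.

```python
import random

class DendriticCell:
    """Simplified artificial DC: fuses three signal categories and collects antigens."""
    def __init__(self, migration_threshold):
        self.threshold = migration_threshold
        self.csm = 0.0          # costimulation: decides when the cell migrates
        self.mature = 0.0       # context driven by PAMP/danger signals
        self.semimature = 0.0   # context driven by safe signals
        self.antigens = []

    def sample(self, antigen, signals):
        """signals: dict with 'pamp', 'danger', 'safe' values (illustrative weights)."""
        self.antigens.append(antigen)
        self.csm += signals["pamp"] + signals["danger"] + signals["safe"]
        self.mature += 2.0 * signals["pamp"] + signals["danger"]
        self.semimature += 2.0 * signals["safe"]
        return self.csm >= self.threshold   # True -> the cell migrates

def dca(stream, population=10, seed=0):
    """Label antigens anomalous when mature-context cells presented them more often."""
    rng = random.Random(seed)
    cells = [DendriticCell(rng.uniform(5, 15)) for _ in range(population)]
    votes = {}                                # antigen -> [semimature, mature] counts
    for antigen, signals in stream:
        i = rng.randrange(len(cells))
        if cells[i].sample(antigen, signals):
            context = 1 if cells[i].mature > cells[i].semimature else 0
            for a in cells[i].antigens:
                votes.setdefault(a, [0, 0])[context] += 1
            cells[i] = DendriticCell(rng.uniform(5, 15))   # replace the migrated cell
    return {a: "anomalous" if m > s else "normal" for a, (s, m) in votes.items()}

# toy usage: a quiet process followed by a noisy one (labels depend on sampling)
normal = [("proc-7",  {"pamp": 0, "danger": 1, "safe": 4})] * 20
attack = [("proc-13", {"pamp": 4, "danger": 3, "safe": 0})] * 20
print(dca(normal + attack, population=3))   # proc-13 is usually labelled anomalous
```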
{
"docid": "f5648e3bd38e876b53ee748021e165f2",
"text": "The existing image captioning approaches typically train a one-stage sentence decoder, which is difficult to generate rich fine-grained descriptions. On the other hand, multi-stage image caption model is hard to train due to the vanishing gradient problem. In this paper, we propose a coarse-to-fine multi-stage prediction framework for image captioning, composed of multiple decoders each of which operates on the output of the previous stage, producing increasingly refined image descriptions. Our proposed learning approach addresses the difficulty of vanishing gradients during training by providing a learning objective function that enforces intermediate supervisions. Particularly, we optimize our model with a reinforcement learning approach which utilizes the output of each intermediate decoder’s test-time inference algorithm as well as the output of its preceding decoder to normalize the rewards, which simultaneously solves the well-known exposure bias problem and the loss-evaluation mismatch problem. We extensively evaluate the proposed approach on MSCOCO and show that our approach can achieve the state-of-the-art performance.",
"title": ""
},
{
"docid": "321049dbe0d9bae5545de3d8d7048e01",
"text": "ShopTalk, a proof-of-concept system designed to assist individuals with visual impairments with finding shelved products in grocery stores, is built on the assumption that simple verbal route directions and layout descriptions can be used to leverage the O&M skills of independent visually impaired travelers to enable them to navigate the store and retrieve shelved products. This paper introduces ShopTalk and summarizes experiments performed in a real-world supermarket.",
"title": ""
},
{
"docid": "f9468884fd24ff36b81fc2016a519634",
"text": "We study a new variant of Arikan's successive cancellation decoder (SCD) for polar codes. We first propose a new decoding algorithm on a new decoder graph, where the various stages of the graph are permuted. We then observe that, even though the usage of the permuted graph doesn't affect the encoder, it can significantly affect the decoding performance of a given polar code. The new permuted successive cancellation decoder (PSCD) typically exhibits a performance degradation, since the polar code is optimized for the standard SCD. We then present a new polar code construction rule matched to the PSCD and show their performance in simulations. For all rates we observe that the polar code matched to a given PSCD performs the same as the original polar code with the standard SCD. We also see that a PSCD with a reversal permutation can lead to a natural decoding order, avoiding the standard bit-reversal decoding order in SCD without any loss in performance.",
"title": ""
},
{
"docid": "c0b30475f78acefae1c15f9f5d6dc57b",
"text": "Traditionally, autonomous cars make predictions about other drivers’ future trajectories, and plan to stay out of their way. This tends to result in defensive and opaque behaviors. Our key insight is that an autonomous car’s actions will actually affect what other cars will do in response, whether the car is aware of it or not. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We model the interaction between an autonomous car and a human driver as a dynamical system, in which the robot’s actions have immediate consequences on the state of the car, but also on human actions. We model these consequences by approximating the human as an optimal planner, with a reward function that we acquire through Inverse Reinforcement Learning. When the robot plans with this reward function in this dynamical system, it comes up with actions that purposefully change human state: it merges in front of a human to get them to slow down or to reach its own goal faster; it blocks two lanes to get them to switch to a third lane; or it backs up slightly at an intersection to get them to proceed first. Such behaviors arise from the optimization, without relying on hand-coded signaling strategies and without ever explicitly modeling communication. Our user study results suggest that the robot is indeed capable of eliciting desired changes in human state by planning using this dynamical system.",
"title": ""
},
{
"docid": "a15c6d2f8905f66b23468c5c00009bf3",
"text": "This paper proposes a biomechatronic approach to the design of an anthropomorphic artificial hand able to mimic the natural motion of the human fingers. The hand is conceived to be applied to prosthetics as well as to humanoid and personal robotics; hence, anthropomorphism is a fundamental requirement to be addressed both in the physical aspect and in the functional behavior. In this paper, a biomechatronic approach is addressed to harmonize the mechanical design of the anthropomorphic artificial hand with the design of the hand control system. More in detail, this paper focuses on the control system of the hand and on the optimization of the hand design in order to obtain a human-like kinematics and dynamics. By evaluating the simulated hand performance, the mechanical design is iteratively refined. The mechanical structure and the ratio between number of actuators and number of degrees of freedom (DOFs) have been optimized in order to cope with the strict size and weight constraints that are typical of application of artificial hands to prosthetics and humanoid robotics. The proposed hand has a kinematic structure similar to the natural hand featuring three articulated fingers (thumb, index, and middle finger with 3 DOF for each finger and 1 DOF for the abduction/adduction of the thumb) driven by four dc motors. A special underactuated transmission has been designed that allows keeping the number of motors as low as possible while achieving a self-adaptive grasp, as a result of the passive compliance of the distal DOF of the fingers. A proper hand control scheme has been designed and implemented for the study and optimization of hand motor performance in order to achieve a human-like motor behavior. To this aim, available data on motion of the human fingers are collected from the neuroscience literature in order to derive a reference input for the control. Simulation trials and computer-aided design (CAD) mechanical tools are used to obtain a finger model including its dynamics. Also the closed-loop control system is simulated in order to study the effect of iterative mechanical redesign and to define the final set of mechanical parameters for the hand optimization. Results of the experimental tests carried out for validating the model of the robotic finger, and details on the process of integrated refinement and optimization of the mechanical structure and of the hand motor control scheme are extensively reported in the paper.",
"title": ""
},
{
"docid": "f8ea80edbb4f31d5c0d1a2da5e8aae13",
"text": "BACKGROUND\nPremenstrual syndrome (PMS) is a common condition, and for 5% of women, the influence is so severe as to interfere with their mental health, interpersonal relationships, or studies. Severe PMS may result in decreased occupational productivity. The aim of this study was to investigate the influence of perception of PMS on evaluation of work performance.\n\n\nMETHODS\nA total of 1971 incoming female university students were recruited in September 2009. A simulated clinical scenario was used, with a test battery including measurement of psychological symptoms and the Chinese Premenstrual Symptom Questionnaire.\n\n\nRESULTS\nWhen evaluating employee performance in the simulated scenario, 1565 (79.4%) students neglected the impact of PMS, while 136 (6.9%) students considered it. Multivariate logistic regression showed that perception of daily function impairment due to PMS and frequency of measuring body weight were significantly associated with consideration of the influence of PMS on evaluation of work performance.\n\n\nCONCLUSION\nIt is important to increase the awareness of functional impairments related to severe PMS.",
"title": ""
},
{
"docid": "b37064e74a2c88507eacb9062996a911",
"text": "This article builds a theoretical framework to help explain governance patterns in global value chains. It draws on three streams of literature – transaction costs economics, production networks, and technological capability and firm-level learning – to identify three variables that play a large role in determining how global value chains are governed and change. These are: (1) the complexity of transactions, (2) the ability to codify transactions, and (3) the capabilities in the supply-base. The theory generates five types of global value chain governance – hierarchy, captive, relational, modular, and market – which range from high to low levels of explicit coordination and power asymmetry. The article highlights the dynamic and overlapping nature of global value chain governance through four brief industry case studies: bicycles, apparel, horticulture and electronics.",
"title": ""
},
{
"docid": "693a544933a35862e5954d3e70b9e56a",
"text": "Shared decision making (SDM) is an effective health communication model designed to facilitate patient engagement in treatment decision making. In mental health, SDM has been applied and evaluated for medications decision making but less for its contribution to personal recovery and rehabilitation in psychiatric settings. The purpose of this pilot study was to assess the effect of SDM in choosing community psychiatric rehabilitation services before discharge from psychiatric hospitalization. A pre-post non-randomized design with two consecutive inpatient cohorts, SDM intervention (N = 51) and standard care (N = 50), was applied in two psychiatric hospitals in Israel. Participants in the intervention cohort reported greater engagement and knowledge after choosing rehabilitation services and greater services use at 6-to-12-month follow-up than those receiving standard care. No difference was found for rehospitalization rate. Two significant interaction effects indicated greater improvement in personal recovery over time for the SDM cohort. SDM can be applied to psychiatric rehabilitation decision making and can help promote personal recovery as part of the discharge process.",
"title": ""
},
{
"docid": "add776482f494f80f2fdbea05377490e",
"text": "We demonstrate the capability to conform a substrate integrated waveguide leaky-wave antenna (SIW LWA) along an arbitrarily curved line by suitably tapering the leaky mode along the antenna length. In particular, it is shown that by means of locally adjusting the pointing angle of the radiated wave, a coherent plane-wave front in the far-field region can be obtained. Combined with the capability to taper the leakage rate along the antenna, this allows designing antennas with high illumination efficiency, which provides higher directivity when compared with previous conformal LWAs such as the half-mode microstrip LWAs (HMLWAs). These concepts have been validated with both measured and simulated results involving the proposed conformal SIW LWA as well as three different configurations of HMLWA: conformal tapered HMLWA, conformal nontapered HMLWA and conventional rectilinear HMLWA. The antennas have been designed to operate at 15 GHz, and in the case of the SIW LWA an analysis of its frequency scanning response has been performed.",
"title": ""
},
{
"docid": "75bca61c2ca38e73ba43cca6244c357e",
"text": "This paper presents our latest investigation on Densely Connected Convolutional Networks (DenseNets) for acoustic modelling (AM) in automatic speech recognition. DenseNets are very deep, compact convolutional neural networks, which have demonstrated incredible improvements over the state-of-the-art results on several data sets in computer vision. Our experimental results show that DenseNet can be used for AM significantly outperforming other neuralbased models such as DNNs, CNNs, VGGs. Furthermore, results on Wall Street Journal revealed that with only a half of the training data DenseNet was able to outperform other models trained with the full data set by a large margin.",
"title": ""
}
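The abstract above applies DenseNets to acoustic modelling. As a minimal illustration of the densely connected pattern (not the paper's actual AM topology), the PyTorch sketch below builds one dense block in which every layer receives the concatenation of all preceding feature maps; the input shape mimics a small log-mel spectrogram patch and is an assumption.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal DenseNet-style block: each layer sees all earlier feature maps."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            ch += growth_rate   # concatenation grows the channel count

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# example: a batch of 8 log-mel patches (1 channel, 40 mel bins, 11 frames)
block = DenseBlock(in_channels=1, growth_rate=12, num_layers=4)
print(block(torch.randn(8, 1, 40, 11)).shape)  # torch.Size([8, 49, 40, 11])
```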
] |
scidocsrr
|
fc8a78054559d20ed142ef44bd6c5caa
|
Artificial Intelligence and Asymmetric Information Theory
|
[
{
"docid": "a33f962c4a6ea61d3400ca9feea50bd7",
"text": "Now, we come to offer you the right catalogues of book to open. artificial intelligence techniques for rational decision making is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.",
"title": ""
}
] |
[
{
"docid": "8f9d5cd416ac038a4cbdf64737039053",
"text": "This paper proposes a method to extract the feature points from faces automatically. It provides a feasible way to locate the positions of two eyeballs, near and far corners of eyes, midpoint of nostrils and mouth corners from face image. This approach would help to extract useful features on human face automatically and improve the accuracy of face recognition. The experiments show that the method presented in this paper could locate feature points from faces exactly and quickly.",
"title": ""
},
{
"docid": "6faaafda15285b2ee8bc5337c9a599cf",
"text": "The three-phase wye connected permanent magnet brushless dc motor is conventionally driven by 120 degree commutation. Two phases are conducting current and the other one is always floating without any torque produced in each conduction interval. Rather than the conventional 120 degree drive, all three phases of 180 degree commutation are expected to conduct current in all sectors, which results in more power delivered from inverter side to the motor side for the same power supply voltage. In this paper, a recently proposed sensorless algorithm is highlighted with well performance in low speed operation. Based on dSPACE, comparison of different dynamic conditions between 120 and 180 degree commutation is presented and analyzed comprehensively. Extensive experiment tests show excellent results on dynamic performance of 180 degree commutation, which matches the simulation results from Simulink/Matlab. 180 degree commutation is verified to work properly with the ability to deliver more power when compared with conventional 120 degree commutation.",
"title": ""
},
{
"docid": "229c701c28a0398045756170aff7788e",
"text": "This paper presents Point Convolutional Neural Networks (PCNN): a novel framework for applying convolutional neural networks to point clouds. The framework consists of two operators: extension and restriction, mapping point cloud functions to volumetric functions and vise-versa. A point cloud convolution is defined by pull-back of the Euclidean volumetric convolution via an extension-restriction mechanism.\n The point cloud convolution is computationally efficient, invariant to the order of points in the point cloud, robust to different samplings and varying densities, and translation invariant, that is the same convolution kernel is used at all points. PCNN generalizes image CNNs and allows readily adapting their architectures to the point cloud setting.\n Evaluation of PCNN on three central point cloud learning benchmarks convincingly outperform competing point cloud learning methods, and the vast majority of methods working with more informative shape representations such as surfaces and/or normals.",
"title": ""
},
{
"docid": "360f2eb720f51c29b5561215d709139e",
"text": "A statistical hypothesis test determines whether a hypothesis should be rejected based on samples from populations. In particular, randomized controlled experiments (or A/B testing) that compare population means using, e.g., t-tests, have been widely deployed in technology companies to aid in making data-driven decisions. Samples used in these tests are collected from users and may contain sensitive information. Both the data collection and the testing process may compromise individuals’ privacy. In this paper, we study how to conduct hypothesis tests to compare population means while preserving privacy. We use the notation of local differential privacy (LDP), which has recently emerged as the main tool to ensure each individual’s privacy without the need of a trusted data collector. We propose LDP tests that inject noise into every user’s data in the samples before collecting them (so users do not need to trust the data collector), and draw conclusions with bounded type-I (significance level) and type-II errors (1− power). Our approaches can be extended to the scenario where some users require LDP while some are willing to provide exact data. We report experimental results on real-world datasets to verify the effectiveness of our approaches.",
"title": ""
},
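The abstract above compares population means from locally privatised samples. As a rough illustration only, the sketch below clips each user's value, perturbs it with the Laplace mechanism (a standard way to satisfy ε-LDP for bounded numeric data, assuming NumPy and SciPy are available), and runs an ordinary Welch t-test on the noisy values; the paper's actual tests calibrate the type-I/type-II error guarantees to the injected noise, which this toy version does not do.

```python
import numpy as np
from scipy import stats

def ldp_perturb(values, epsilon, lo=0.0, hi=1.0):
    """Laplace mechanism on clipped values: each user adds noise locally,
    so the collector only ever sees the perturbed numbers."""
    clipped = np.clip(values, lo, hi)
    scale = (hi - lo) / epsilon            # sensitivity of one bounded value
    return clipped + np.random.laplace(0.0, scale, size=len(clipped))

def ldp_two_sample_test(group_a, group_b, epsilon):
    """Naive comparison of means on locally privatised samples (Welch t-test)."""
    noisy_a = ldp_perturb(np.asarray(group_a, dtype=float), epsilon)
    noisy_b = ldp_perturb(np.asarray(group_b, dtype=float), epsilon)
    return stats.ttest_ind(noisy_a, noisy_b, equal_var=False)

rng = np.random.default_rng(0)
a = rng.uniform(0.40, 0.60, 5000)          # control group
b = rng.uniform(0.45, 0.65, 5000)          # treatment group with a shifted mean
print(ldp_two_sample_test(a, b, epsilon=1.0))
```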
{
"docid": "6b5e5bb3c1567be115dfd5060370b16f",
"text": "In this paper a system for processing documents that can be grouped into classes is illustrated. We have considered invoices as a case-study. The system is divided into three phases: document analysis, classification, and understanding. We illustrate the analysis and understanding phases. The system is based on knowledge constructed by means of a learning procedure. The experimental results demonstrate the reliability of our document analysis and understanding procedures. They also present evidence that it is possible to use a small learning set of invoices to obtain reliable knowledge for the understanding phase.",
"title": ""
},
{
"docid": "b94e096ea1bc990bd7c72aab988dd5ff",
"text": "The paper describes the design and implementation of an independent, third party contract monitoring service called Contract Compliance Checker (CCC). The CCC is provided with the specification of the contract in force, and is capable of observing and logging the relevant business-to-business (B2B) interaction events, in order to determine whether the actions of the business partners are consistent with the contract. A contract specification language called EROP (for Events, Rights, Obligations and Prohibitions) for the CCC has been developed based on business rules, that provides constructs to specify what rights, obligation and prohibitions become active and inactive after the occurrence of events related to the execution of business operations. The system has been designed to work with B2B industry standards such as ebXML and RosettaNet.",
"title": ""
},
{
"docid": "f71034627014c47b5751ff11455d5df8",
"text": "A biometrical-genetical analysis of twin data to elucidate the determinants of variation in extraversion and its components, sociability and impulsiveness, revealed that both genetical and environmental factors contributed to variation in extraversion, to the variation and covariation of its component scales, and to the interaction between subjects and scales. A large environmental correlation between the scales suggested that environmental factors may predominate in determining the unitary nature of extraversion. The interaction between subjects and scales depended more on genetical factors, which suggests that the dual nature of extraversion has a strong genetical basis. A model assuming random mating, additive gene action, and specific environmental effects adequately describes the observed variation and covariation of sociability and impulsiveness. Possible evolutionary implications are discussed.",
"title": ""
},
{
"docid": "3491539015ba902e38d2e8ef40bd8a90",
"text": "The fundamental task of general density estimation p(x) has been of keen interest to machine learning. In this work, we attempt to systematically characterize methods for density estimation. Broadly speaking, most of the existing methods can be categorized into either using: a) autoregressive models to estimate the conditional factors of the chain rule, p(xi ∣xi−1, . . .); or b) non-linear transformations of variables of a simple base distribution. Based on the study of the characteristics of these categories, we propose multiple novel methods for each category. For example we propose RNN based transformations to model non-Markovian dependencies. Further, through a comprehensive study over both real world and synthetic data, we show that jointly leveraging transformations of variables and autoregressive conditional models, results in a considerable improvement in performance. We illustrate the use of our models in outlier detection and image modeling. Finally we introduce a novel data driven framework for learning a family of distributions.",
"title": ""
},
{
"docid": "e225ea7571b9386107a78a91d16c1316",
"text": "Primary effusion lymphoma (PEL) is a rare type of extranodal lymphoma, typically of a B-cell origin, which presents as lymphomatous effusion with no nodal enlargement or tumor masses. The development PEL is universally associated with human herpes virus-8 (HHV-8) infection. Cases of HHV-8-negative primary lymphomatous effusion have recently been reported and referred to as HHV-8-unrelated PEL-like lymphoma. Some cases of this disease have been reported in iatrogenic immunocompromised patients. The mechanisms responsible for the inhibitory effects of the discontinuation of immunosuppressants other than methotrexate (MTX) against the disease, which have been demonstrated for MTX-associated lymphoproliferative disorders, have not yet been elucidated. We describe a case of PEL-like lymphoma that developed in the course of antisynthetase syndrome and was treated with tacrolimus. A single dose of systemic chemotherapy did not improve lymphomatous effusion, whereas the discontinuation of tacrolimus resulted in the long-term remission of this disease.",
"title": ""
},
{
"docid": "cd3bbec4c7f83c9fb553056b1b593bec",
"text": "We present results from experiments in using several pitch representations for jazz-oriented musical tasks performed by a recurrent neural network. We have run experiments with several kinds of recurrent networks for this purpose, and have found that Long Short-term Memory networks provide the best results. We show that a new pitch representation called Circles of Thirds works as well as two other published representations for these tasks, yet it is more succinct and enables faster learning. Recurrent Neural Networks and Music Many researchers are familiar with feedforward neural networks consisting of 2 or more layers of processing units, each with weighted connections to the next layer. Each unit passes the sum of its weighted inputs through a nonlinear sigmoid function. Each layer’s outputs are fed forward through the network to the next layer, until the output layer is reached. Weights are initialized to small initial random values. Via the back-propagation algorithm (Rumelhart et al. 1986), outputs are compared to targets, and the errors are propagated back through the connection weights. Weights are updated by gradient descent. Through an iterative training procedure, examples (inputs) and targets are presented repeatedly; the network learns a nonlinear function of the inputs. It can then generalize and produce outputs for new examples. These networks have been explored by the computer music community for classifying chords (Laden and Keefe 1991) and other musical tasks (Todd and Loy 1991, Griffith and Todd 1999). A recurrent network uses feedback from one or more of its units as input in choosing the next output. This means that values generated by units at time step t-1, say y(t-1), are part of the inputs x(t) used in selecting the next set of outputs y(t). A network may be fully recurrent; that is all units are connected back to each other and to themselves. Or part of the network may be fed back in recurrent links. Todd (Todd 1991) uses a Jordan recurrent network (Jordan 1986) to reproduce classical songs and then to produce new songs. The outputs are recurrently fed back as inputs as shown in Figure 1. In addition, self-recurrence on the inputs provides a decaying history of these inputs. The weight update algorithm is back-propagation, using teacher forcing (Williams and Zipser 1988). With teacher forcing, the target outputs are presented to the recurrent inputs from the output units (instead of the actual outputs, which are not correct yet during training). Pitches (on output or input) are represented in a localized binary representation, with one bit for each of the 12 chromatic notes. More bits can be added for more octaves. C is represented as 100000000000. C# is 010000000000, D is 001000000000. Time is divided into 16th note increments. Note durations are determined by how many increments a pitch’s output unit is on (one). E.g. an eighth note lasts for two time increments. Rests occur when all outputs are off (zero). Figure 1. Jordan network, with outputs fed back to inputs. (Mozer 1994)’s CONCERT uses a backpropagationthrough-time (BPTT) recurrent network to learn various musical tasks and to learn melodies with harmonic accompaniment. Then, CONCERT can run in generation mode to compose new music. The BPTT algorithm (Williams and Zipser 1992, Werbos 1988, Campolucci 1998) can be used with a fully recurrent network where the outputs of all units are connected to the inputs of all units, including themselves. 
The network can include external inputs and optionally, may include a regular feedforward output network (see Figure 2). The BPTT weight updates are proportional to the gradient of the sum of errors over every time step in the interval between start time t0 and end time t1, assuming the error at time step t is affected by the outputs at all previous time steps, starting with t0. BPTT requires saving all inputs, states, and errors for all time steps, and updating the weights in a batch operation at the end, time t1. One sequence (each example) requires one batch weight update. Figure 2. A fully self-recurrent network with external inputs, and optional feedforward output attachment. If there is no output attachment, one or more recurrent units are designated as output units. CONCERT is a combination of BPTT with a layer of output units that are probabilistically interpreted, and a maximum likelihood training criterion (rather than a squared error criterion). There are two sets of outputs (and two sets of inputs), one set for pitch and the other for duration. One pass through the network corresponds to a note, rather than a slice of time. We present only the pitch representation here since that is our focus. Mozer uses a psychologically based representation of musical notes. Figure 3 shows the chromatic circle (CC) and the circle of fifths (CF), used with a linear octave value for CONCERT’s pitch representation. Ignoring octaves, we refer to the rest of the representation as CCCF. Six digits represent the position of a pitch on CC and six more its position on CF. C is represented as 000000 000000, C# as 000001 111110, D as 000011 111111, and so on. Mozer uses -1,1 rather than 0,1 because of implementation details. Figure 3. Chromatic Circle on Left, Circle of Fifths on Right. Pitch position on each circle determines its representation. For chords, CONCERT uses the overlapping subharmonics representation of (Laden and Keefe, 1991). Each chord tone starts in Todd’s binary representation, but 5 harmonics (integer multiples of its frequency) are added. C3 is now C3, C4, G4, C5, E5 requiring a 3 octave representation. Because the 7th of the chord does not overlap with the triad harmonics, Laden and Keefe use triads only. C major triad C3, E3, G3, with harmonics, is C3, C4, G4, C5, E5, E3, E4, B4, E5, G#5, G3, G4, D4, G5, B5. The triad pitches and harmonics give an overlapping representation. Each overlapping pitch adds 1 to its corresponding input. CONCERT excludes octaves, leaving 12 highly overlapping chord inputs, plus an input that is positive when certain key-dependent chords appear, and learns waltzes over a harmonic chord structure. Eck and Schmidhuber (2002) use Long Short-term Memory (LSTM) recurrent networks to learn and compose blues music (Hochreiter and Schmidhuber 1997, and see Gers et al., 2000 for succinct pseudo-code for the algorithm). An LSTM network consists of input units, output units, and a set of memory blocks, each of which includes one or more memory cells. Blocks are connected to each other recurrently. Figure 4 shows an LSTM network on the left, and the contents of one memory block (this one with one cell) on the right. There may also be a direct connection from external inputs to the output units. This is the configuration found in Gers et al., and the one we use in our experiments. Eck and Schmidhuber also add recurrent connections from output units to memory blocks. Each block contains one or more memory cells that are self-recurrent. 
All other units in the block gate the inputs, outputs, and the memory cell itself. A memory cell can “cache” errors and release them for weight updates much later in time. The gates can learn to delay a block’s outputs, to reset the memory cells, and to inhibit inputs from reaching the cell or to allow inputs in. Figure 4. An LSTM network on the left and a one-cell memory block on the right, with input, forget, and output gates. Black squares on gate connections show that the gates can control whether information is passed to the cell, from the cell, or even within the cell. Weight updates are based on gradient descent, with multiplicative gradient calculations for gates, and approximations from the truncated BPTT (Williams and Peng 1990) and Real-Time Recurrent Learning (RTRL) (Robinson and Fallside 1987) algorithms. LSTM networks are able to perform counting tasks in time-series. Eck and Schmidhuber’s model of blues music is a 12-bar chord sequence over which music is composed/improvised. They successfully trained an LSTM network to learn a sequence of blues chords, with varying durations. Splitting time into 8th note increments, each chord’s duration is either 8 or 4 time steps (whole or half durations). Chords are sets of 3 or 4 tones (triads or triads plus sevenths), represented in a 12-bit localized binary representation with values of 1 for a chord pitch, and 0 for a non-chord pitch. Chords are inverted to fit in 1 octave. For example, C7 is represented as 100010010010 (C,E,G,B-flat), and F7 is 100101000100 (F,A,C,E-flat inverted to C,E-flat,F,A). The network has 4 memory blocks, each containing 2 cells. The outputs are considered probabilities of whether the corresponding note is on or off. The goal is to obtain an output of more that .5 for each note that should be on in a particular chord, with all other outputs below .5. Eck and Schmidhuber’s work includes learning melody and chords with two LSTM networks containing 4 blocks each. Connections are made from the chord network to the melody network, but not vice versa. The authors composed short 1-bar melodies over each of the 12 possible bars. The network is trained on concatenations of the short melodies over the 12-bar blues chord sequence. The melody network is trained until the chords network has learned according to the criterion. In music generation mode, the network can generate new melodies using this training. In a system called CHIME (Franklin 2000, 2001), we first train a Jordan recurrent network (Figure 1) to produce 3 Sonny Rollins jazz/blues melodies. The current chord and index number of the song are non-recurrent inputs to the network. Chords are represented as sets of 4 note values of 1 in a 12-note input layer, with non-chord note inputs set to 0 just as in Eck and Schmidhuber’s chord representation. Chords are also inverted to fit within one octave. 24 (2 octaves) of the outputs are notes, and the 25th is a rest. Of these 25, the unit with the largest value ",
"title": ""
},
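The long passage above spells out the localized binary pitch and chord encodings used in the cited systems (one bit per chromatic note, chords folded into a single octave). A small Python sketch of exactly that representation, checked against the examples given in the text:

```python
NOTE_INDEX = {"C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3, "E": 4,
              "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8, "Ab": 8,
              "A": 9, "A#": 10, "Bb": 10, "B": 11}

def pitch_vector(note):
    """Localized binary pitch representation: one bit per chromatic note."""
    vec = [0] * 12
    vec[NOTE_INDEX[note]] = 1
    return vec

def chord_vector(notes):
    """Chord as the union of its tones, inverted (folded) into one octave."""
    vec = [0] * 12
    for n in notes:
        vec[NOTE_INDEX[n]] = 1
    return vec

assert pitch_vector("C") == [1,0,0,0,0,0,0,0,0,0,0,0]                 # "100000000000"
assert chord_vector(["C","E","G","Bb"]) == [1,0,0,0,1,0,0,1,0,0,1,0]  # C7 -> "100010010010"
print("".join(map(str, chord_vector(["F","A","C","Eb"]))))            # F7 folded: "100101000100"
```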
{
"docid": "3b72f2d158aad8b21746f59212698c4f",
"text": "22 23 24 25 26",
"title": ""
},
{
"docid": "348f9c689c579cf07085b6e263c53ff5",
"text": "Over recent years, interest has been growing in Bitcoin, an innovation which has the potential to play an important role in e-commerce and beyond. The aim of our paper is to provide a comprehensive empirical study of the payment and investment features of Bitcoin and their implications for the conduct of ecommerce. Since network externality theory suggests that the value of a network and its take-up are interlinked, we investigate both adoption and price formation. We discover that Bitcoin returns are driven primarily by its popularity, the sentiment expressed in newspaper reports on the cryptocurrency, and total number of transactions. The paper also reports on the first global survey of merchants who have adopted this technology and model the share of sales paid for with this alternative currency, using both ordinary and Tobit regressions. Our analysis examines how country, customer and company-specific characteristics interact with the proportion of sales attributed to Bitcoin. We find that company features, use of other payment methods, customers’ knowledge about Bitcoin, as well as the size of both the official and unofficial economy are significant determinants. The results presented allow a better understanding of the practical and theoretical ramifications of this innovation.",
"title": ""
},
{
"docid": "368e72277a5937cb8ee94cea3fa11758",
"text": "Monoclinic Gd2O3:Eu(3+) nanoparticles (NPs) possess favorable magnetic and optical properties for biomedical application. However, how to obtain small enough NPs still remains a challenge. Here we combined the standard solid-state reaction with the laser ablation in liquids (LAL) technique to fabricate sub-10 nm monoclinic Gd2O3:Eu(3+) NPs and explained their formation mechanism. The obtained Gd2O3:Eu(3+) NPs exhibit bright red fluorescence emission and can be successfully used as fluorescence probe for cells imaging. In vitro and in vivo magnetic resonance imaging (MRI) studies show that the product can also serve as MRI good contrast agent. Then, we systematically investigated the nanotoxicity including cell viability, apoptosis in vitro, as well as the immunotoxicity and pharmacokinetics assays in vivo. This investigation provides a platform for the fabrication of ultrafine monoclinic Gd2O3:Eu(3+) NPs and evaluation of their efficiency and safety in preclinical application.",
"title": ""
},
{
"docid": "3fba9cdab9141fd38779937f765741c0",
"text": "The growth of data, the need for scalability and the complexity of models used in modern machine learning calls for distributed implementations. Yet, as of today, distributed machine learning frameworks have largely ignored the possibility of arbitrary (i.e., Byzantine) failures. In this paper, we study the robustness to Byzantine failures at the fundamental level of stochastic gradient descent (SGD), the heart of most machine learning algorithms. Assuming a set of n workers, up to f of them being Byzantine, we ask how robust can SGD be, without limiting the dimension, nor the size of the parameter space. We first show that no gradient descent update rule based on a linear combination of the vectors proposed by the workers (i.e, current approaches) tolerates a single Byzantine failure. We then formulate a resilience property of the update rule capturing the basic requirements to guarantee convergence despite f Byzantine workers. We finally propose Krum, an update rule that satisfies the resilience property aforementioned. For a d-dimensional learning problem, the time complexity of Krum is O(n · (d+ logn)).",
"title": ""
},
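The Krum abstract above selects one proposed gradient per round by scoring each proposal against its closest peers. A minimal NumPy sketch of that selection rule, using the usual n − f − 2 nearest-neighbour score from the Krum statement; treat it as an illustration rather than a reference implementation:

```python
import numpy as np

def krum(gradients, f):
    """Select one gradient by the Krum rule.

    gradients : list of 1-D numpy arrays, one proposal per worker (n total)
    f         : assumed maximum number of Byzantine workers
    """
    n = len(gradients)
    k = n - f - 2                       # number of closest neighbours in the score
    if k < 1:
        raise ValueError("Krum needs n > f + 2")
    g = np.stack(gradients)
    # pairwise squared Euclidean distances between all proposals
    d2 = ((g[:, None, :] - g[None, :, :]) ** 2).sum(axis=2)
    scores = []
    for i in range(n):
        others = np.delete(d2[i], i)    # distances to every other proposal
        scores.append(np.sort(others)[:k].sum())
    return gradients[int(np.argmin(scores))]

# toy check: 6 honest workers near the true gradient, 2 Byzantine outliers
rng = np.random.default_rng(1)
honest = [np.array([1.0, -2.0]) + 0.01 * rng.standard_normal(2) for _ in range(6)]
byzantine = [np.array([100.0, 100.0]), np.array([-80.0, 50.0])]
print(krum(honest + byzantine, f=2))    # close to [1, -2]
```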
{
"docid": "9c6fdf4adb17803bbc7bb31d7d26f501",
"text": "Influences of social support and self-esteem on adjustment in early adolescence were investigated in a 2-year longitudinal study (N = 350). Multi-informant data (youth and parent) were used to assess both overall levels and balance in peer- versus adult-oriented sources for social support and self-esteem. Findings obtained using latent growth-curve modeling were consistent with self-esteem mediating effects of social support on both emotional and behavioral adjustment. Lack of balance in social support and self-esteem in the direction of stronger support and esteem from peer-oriented sources predicted greater levels and rates of growth in behavioral problems. Results indicate a need for process-oriented models of social support and self-esteem and sensitivity to patterning of sources for each resource relative to adaptive demands of early adolescence.",
"title": ""
},
{
"docid": "4b97ee592753138c916b4c5621bee6fe",
"text": "We propose the very first non-intrusive measurement methodology for quantifying the performance of commodity Virtual Reality (VR) systems. Our methodology considers the VR system under test as a black-box and works with any VR applications. Multiple performance metrics on timing and positioning accuracy are considered, and detailed testbed setup and measurement steps are presented. We also apply our methodology to several VR systems in the market, and carefully analyze the experiment results. We make several observations: (i) 3D scene complexity affects the timing accuracy the most, (ii) most VR systems implement the dead reckoning algorithm, which incurs a non-trivial correction latency after incorrect predictions, and (iii) there exists an inherent trade-off between two positioning accuracy metrics: precision and sensitivity.",
"title": ""
},
{
"docid": "10f5ad322eeee68e57b66dd9f2bfe25b",
"text": "Irmin is an OCaml library to design purely functional data structures that can be persisted on disk and be merged and synchronized efficiently. In this paper, we focus on the merge aspect of the library and present two data structures built on top of Irmin: (i) queues and (ii) ropes that extend the corresponding purely functional data structures with a 3-way merge operation. We provide early theoretical and practical complexity results for these new data structures. Irmin is available as open-source code as part of the MirageOS project.",
"title": ""
},
{
"docid": "6936462dee2424b92c7476faed5b5a23",
"text": "A significant challenge in scene text detection is the large variation in text sizes. In particular, small text are usually hard to detect. This paper presents an accurate oriented text detector based on Faster R-CNN. We observe that Faster R-CNN is suitable for general object detection but inadequate for scene text detection due to the large variation in text size. We apply feature fusion both in RPN and Fast R-CNN to alleviate this problem and furthermore, enhance model's ability to detect relatively small text. Our text detector achieves comparable results to those state of the art methods on ICDAR 2015 and MSRA-TD500, showing its advantage and applicability.",
"title": ""
},
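The abstract above attributes its gains on small text to feature fusion inside the RPN and Fast R-CNN heads. The PyTorch sketch below shows one generic form such fusion often takes, upsampling a coarse feature map and concatenating it with a finer one before a 1x1 mixing convolution; the channel sizes and fusion point are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseLevels(nn.Module):
    """Illustrative feature fusion: upsample a deep (coarse) map, concatenate it
    with a shallow (fine) map, and mix with a 1x1 convolution."""
    def __init__(self, fine_ch, coarse_ch, out_ch):
        super().__init__()
        self.mix = nn.Conv2d(fine_ch + coarse_ch, out_ch, kernel_size=1)

    def forward(self, fine, coarse):
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:], mode="nearest")
        return self.mix(torch.cat([fine, coarse_up], dim=1))

fuse = FuseLevels(fine_ch=256, coarse_ch=512, out_ch=256)
fine = torch.randn(1, 256, 64, 64)     # e.g. a conv4-level feature map
coarse = torch.randn(1, 512, 32, 32)   # e.g. a conv5-level feature map
print(fuse(fine, coarse).shape)        # torch.Size([1, 256, 64, 64])
```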
{
"docid": "f60048d9803f2d3ae0178a14d7b03536",
"text": "Forking is the creation of a new software repository by copying another repository. Though forking is controversial in traditional open source software (OSS) community, it is encouraged and is a built-in feature in GitHub. Developers freely fork repositories, use codes as their own and make changes. A deep understanding of repository forking can provide important insights for OSS community and GitHub. In this paper, we explore why and how developers fork what from whom in GitHub. We collect a dataset containing 236,344 developers and 1,841,324 forks. We make surveys, and analyze programming languages and owners of forked repositories. Our main observations are: (1) Developers fork repositories to submit pull requests, fix bugs, add new features and keep copies etc. Developers find repositories to fork from various sources: search engines, external sites (e.g., Twitter, Reddit), social relationships, etc. More than 42 % of developers that we have surveyed agree that an automated recommendation tool is useful to help them pick repositories to fork, while more than 44.4 % of developers do not value a recommendation tool. Developers care about repository owners when they fork repositories. (2) A repository written in a developer’s preferred programming language is more likely to be forked. (3) Developers mostly fork repositories from creators. In comparison with unattractive repository owners, attractive repository owners have higher percentage of organizations, more followers and earlier registration in GitHub. Our results show that forking is mainly used for making contributions of original repositories, and it is beneficial for OSS community. Moreover, our results show the value of recommendation and provide important insights for GitHub to recommend repositories.",
"title": ""
},
{
"docid": "c4be29a7818c094f3c171e9153c56382",
"text": "The paper presents a novel segmentation approach applied to a two-dimensional point-cloud extracted by a LIDAR device. The most common approaches perform well in outdoor environments where usually furniture and other objects are rather big and are composed of smooth surfaces. However, these methods fail to segment uneven, rough surfaces. In this paper we propose a novel range data segmentation algorithm that is based on the popular one-pass version of the Connected Components algorithm. Our algorithm outperforms most commonly used approaches, while keeping the low computational complexity. The algorithm is used as a part of control and perception system in our unmanned ground vehicle where real-time response time is required. We presented experimental results obtained indoors and outdoors. The latter experiment was conducted in the real test field while the vehicle was autonomously driven.",
"title": ""
}
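The abstract above builds on a one-pass Connected Components scheme for 2-D LIDAR scans. The Python sketch below shows only the underlying principle, grouping consecutive scan points whose Euclidean gap stays under a threshold; the fixed threshold and the ordered single-pass scan are simplifications, and the paper's criterion for rough, uneven surfaces is not reproduced.

```python
import math

def segment_scan(points, max_gap=0.3):
    """Split an ordered 2-D LIDAR scan into segments (connected components).

    points  : list of (x, y) tuples in scan order
    max_gap : assumed Euclidean distance threshold between neighbouring points
    returns : list of segments, each a list of consecutive points
    """
    segments = []
    current = []
    for p in points:
        if current and math.dist(current[-1], p) > max_gap:
            segments.append(current)   # gap too large: close the current segment
            current = []
        current.append(p)
    if current:
        segments.append(current)
    return segments

scan = [(1.0, 0.00), (1.0, 0.05), (1.0, 0.10),      # wall
        (3.0, 0.20), (3.0, 0.26), (3.1, 0.31)]      # separate object
print([len(s) for s in segment_scan(scan)])          # [3, 3]
```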
] |
scidocsrr
|
b7f36548ae68587100dc399c052d299d
|
Agent Based Framework for Scalability in Cloud Computing
|
[
{
"docid": "84cb130679353dbdeff24100409f57fe",
"text": "Cloud computing has become another buzzword after Web 2.0. However, there are dozens of different definitions for cloud computing and there seems to be no consensus on what a cloud is. On the other hand, cloud computing is not a completely new concept; it has intricate connection to the relatively new but thirteen-year established grid computing paradigm, and other relevant technologies such as utility computing, cluster computing, and distributed systems in general. This paper strives to compare and contrast cloud computing with grid computing from various angles and give insights into the essential characteristics of both.",
"title": ""
}
] |
[
{
"docid": "e9fbe2fe7b4f5617a37163e6d17b26ba",
"text": "Several digital forensic frameworks have been proposed, yet no conclusions have been reached about which are more appropriate. This is partly because each framework may work well for different types of investigations, but it hasn’t been shown if any are sufficient for all types of investigations. To address this problem, this work uses amodel based on the history of a computer to define categories and classes of analysis techniques. The model is more lower-level than existing frameworks and the categories and classes of analysis techniques that are defined support the existing higher-level frameworks. Therefore, they can be used to more clearly compare the frameworks. Proofs can be given to show the completeness of the analysis techniques and therefore the completeness of the frameworks can also be addressed. a 2006 DFRWS. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c45447fd682f730f350bae77c835b63a",
"text": "In this paper, we demonstrate a high heat resistant bonding method by Cu/Sn transient liquid phase sintering (TLPS) method can be applied to die-attachment of silicon carbide (SiC)-MOSFET in high temperature operation power module. The die-attachment is made of nano-composite Cu/Sn TLPS paste. The die shear strength was 40 MPa for 3 × 3 mm2 SiC chip after 1,000 cycles of thermal cycle testing between −40 °C and 250 °C. This indicated a high reliability of Cu/Sn die-attachment. The thermal resistance of the Cu/Sn die-attachment was evaluated by transient thermal analysis using a sample in which the SiC-MOSFET (die size: 4.04 × 6.44 mm2) was bonded with Cu/Sn die-attachment. The thermal resistance of Cu/Sn die-attachment was 0.13 K/W, which was comparable to the one of Au/Ge die-attachment (0.12 K/W). The validity of nano-composite Cu/Sn TLPS paste as a die-attachment for high-temperature operation SiC power module is confirmed.",
"title": ""
},
{
"docid": "a936b6d3b0f4a99042260abea0f39032",
"text": "In this paper, a new type of 3D bin packing problem (BPP) is proposed, in which a number of cuboidshaped items must be put into a bin one by one orthogonally. The objective is to find a way to place these items that can minimize the surface area of the bin. This problem is based on the fact that there is no fixed-sized bin in many real business scenarios and the cost of a bin is proportional to its surface area. Our research shows that this problem is NP-hard. Based on previous research on 3D BPP, the surface area is determined by the sequence, spatial locations and orientations of items. Among these factors, the sequence of items plays a key role in minimizing the surface area. Inspired by recent achievements of deep reinforcement learning (DRL) techniques, especially Pointer Network, on combinatorial optimization problems such as TSP, a DRL-based method is applied to optimize the sequence of items to be packed into the bin. Numerical results show that the method proposed in this paper achieve about 5% improvement than heuristic method.",
"title": ""
},
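The bin-packing abstract above minimizes the surface area of the enclosing bin instead of filling fixed-size bins. The short Python sketch below illustrates just that objective: given axis-aligned placements measured from the bin corner at the origin, it computes the surface area of the tightest enclosing cuboid; the DRL sequencing policy itself is not shown.

```python
def surface_area(placements):
    """Surface area of the smallest axis-aligned cuboid enclosing all items.

    placements : list of ((x, y, z), (l, w, h)) — corner position and oriented size,
                 with the bin corner assumed to sit at the origin
    """
    max_x = max(x + l for (x, _, _), (l, _, _) in placements)
    max_y = max(y + w for (_, y, _), (_, w, _) in placements)
    max_z = max(z + h for (_, _, z), (_, _, h) in placements)
    return 2 * (max_x * max_y + max_y * max_z + max_x * max_z)

# two boxes stacked on top of each other at the origin
items = [((0, 0, 0), (2, 3, 1)),
         ((0, 0, 1), (2, 3, 1))]
print(surface_area(items))   # 2*(2*3 + 3*2 + 2*2) = 32
```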
{
"docid": "d75f9c632d197040c7f6d2939b19c215",
"text": "OBJECTIVE\nTo understand belief in a specific scientific claim by studying the pattern of citations among papers stating it.\n\n\nDESIGN\nA complete citation network was constructed from all PubMed indexed English literature papers addressing the belief that beta amyloid, a protein accumulated in the brain in Alzheimer's disease, is produced by and injures skeletal muscle of patients with inclusion body myositis. Social network theory and graph theory were used to analyse this network.\n\n\nMAIN OUTCOME MEASURES\nCitation bias, amplification, and invention, and their effects on determining authority.\n\n\nRESULTS\nThe network contained 242 papers and 675 citations addressing the belief, with 220,553 citation paths supporting it. Unfounded authority was established by citation bias against papers that refuted or weakened the belief; amplification, the marked expansion of the belief system by papers presenting no data addressing it; and forms of invention such as the conversion of hypothesis into fact through citation alone. Extension of this network into text within grants funded by the National Institutes of Health and obtained through the Freedom of Information Act showed the same phenomena present and sometimes used to justify requests for funding.\n\n\nCONCLUSION\nCitation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims. Construction and analysis of a claim specific citation network may clarify the nature of a published belief system and expose distorted methods of social citation.",
"title": ""
},
{
"docid": "2dee5823e4faf7f1cc99460d87439012",
"text": "This letter presents a novel metamaterial-inspired planar monopole antenna. The proposed structure consists of a monopole loaded with a composite right/left-handed (CRLH) unit cell. It operates at two narrow bands, 0.925 and 1.227 GHz, and one wide band, 1.56-2.7 GHz, i.e., it covers several communication standards. The CRLH-loaded monopole occupies the same Chu's sphere as a conventional monopole that operates at 2.4 GHz. The radiation patterns at the different operating frequencies are still quasi-omnidirectional. Measurements and EM simulations are in a good agreement with the theoretical predictions.",
"title": ""
},
{
"docid": "0f56b99bc1d2c9452786c05242c89150",
"text": "Individuals with below-knee amputation have more difficulty balancing during walking, yet few studies have explored balance enhancement through active prosthesis control. We previously used a dynamical model to show that prosthetic ankle push-off work affects both sagittal and frontal plane dynamics, and that appropriate step-by-step control of push-off work can improve stability. We hypothesized that this approach could be applied to a robotic prosthesis to partially fulfill the active balance requirements of human walking, thereby reducing balance-related activity and associated effort for the person using the device. We conducted experiments on human participants (N = 10) with simulated amputation. Prosthetic ankle push-off work was varied on each step in ways expected to either stabilize, destabilize or have no effect on balance. Average ankle push-off work, known to affect effort, was kept constant across conditions. Stabilizing controllers commanded more push-off work on steps when the mediolateral velocity of the center of mass was lower than usual at the moment of contralateral heel strike. Destabilizing controllers enforced the opposite relationship, while a neutral controller maintained constant push-off work regardless of body state. A random disturbance to landing foot angle and a cognitive distraction task were applied, further challenging participants’ balance. We measured metabolic rate, foot placement kinematics, center of pressure kinematics, distraction task performance, and user preference in each condition. We expected the stabilizing controller to reduce active control of balance and balance-related effort for the user, improving user preference. The best stabilizing controller lowered metabolic rate by 5.5% (p = 0.003) and 8.5% (p = 0.02), and step width variability by 10.0% (p = 0.009) and 10.7% (p = 0.03) compared to conditions with no control and destabilizing control, respectively. Participants tended to prefer stabilizing controllers. These effects were not due to differences in average push-off work, which was unchanged across conditions, or to average gait mechanics, which were also unchanged. Instead, benefits were derived from step-by-step adjustments to prosthesis behavior in response to variations in mediolateral velocity at heel strike. Once-per-step control of prosthetic ankle push-off work can reduce both active control of foot placement and balance-related metabolic energy use during walking.",
"title": ""
},
{
"docid": "346e160403ff9eb55c665f6cb8cca481",
"text": "Tasks in visual analytics differ from typical information retrieval tasks in fundamental ways. A critical part of a visual analytics is to ask the right questions when dealing with a diverse collection of information. In this article, we introduce the design and application of an integrated exploratory visualization system called Storylines. Storylines provides a framework to enable analysts visually and systematically explore and study a body of unstructured text without prior knowledge of its thematic structure. The system innovatively integrates latent semantic indexing, natural language processing, and social network analysis. The contributions of the work include providing an intuitive and directly accessible representation of a latent semantic space derived from the text corpus, an integrated process for identifying salient lines of stories, and coordinated visualizations across a spectrum of perspectives in terms of people, locations, and events involved in each story line. The system is tested with the 2006 VAST contest data, in particular, the portion of news articles.",
"title": ""
},
{
"docid": "f114e788557e8d734bd2a04a5b789208",
"text": "Adaptive content delivery is the state of the art in real-time multimedia streaming. Leading streaming approaches, e.g., MPEG-DASH and Apple HTTP Live Streaming (HLS), have been developed for classical IP-based networks, providing effective streaming by means of pure client-based control and adaptation. However, the research activities of the Future Internet community adopt a new course that is different from today's host-based communication model. So-called information-centric networks are of considerable interest and are advertised as enablers for intelligent networks, where effective content delivery is to be provided as an inherent network feature. This paper investigates the performance gap between pure client-driven adaptation and the theoretical optimum in the promising Future Internet architecture named data networking (NDN). The theoretical optimum is derived by modeling multimedia streaming in NDN as a fractional multi-commodity flow problem and by extending it taking caching into account. We investigate the multimedia streaming performance under different forwarding strategies, exposing the interplay of forwarding strategies and adaptation mechanisms. Furthermore, we examine the influence of network inherent caching on the streaming performance by varying the caching polices and the cache sizes.",
"title": ""
},
{
"docid": "5c0f2bcde310b7b76ed2ca282fde9276",
"text": "With the increasing prevalence of Alzheimer's disease, research focuses on the early computer-aided diagnosis of dementia with the goal to understand the disease process, determine risk and preserving factors, and explore preventive therapies. By now, large amounts of data from multi-site studies have been made available for developing, training, and evaluating automated classifiers. Yet, their translation to the clinic remains challenging, in part due to their limited generalizability across different datasets. In this work, we describe a compact classification approach that mitigates overfitting by regularizing the multinomial regression with the mixed ℓ1/ℓ2 norm. We combine volume, thickness, and anatomical shape features from MRI scans to characterize neuroanatomy for the three-class classification of Alzheimer's disease, mild cognitive impairment and healthy controls. We demonstrate high classification accuracy via independent evaluation within the scope of the CADDementia challenge. We, furthermore, demonstrate that variations between source and target datasets can substantially influence classification accuracy. The main contribution of this work addresses this problem by proposing an approach for supervised domain adaptation based on instance weighting. Integration of this method into our classifier allows us to assess different strategies for domain adaptation. Our results demonstrate (i) that training on only the target training set yields better results than the naïve combination (union) of source and target training sets, and (ii) that domain adaptation with instance weighting yields the best classification results, especially if only a small training component of the target dataset is available. These insights imply that successful deployment of systems for computer-aided diagnostics to the clinic depends not only on accurate classifiers that avoid overfitting, but also on a dedicated domain adaptation strategy.",
"title": ""
},
{
"docid": "21ac2d8221879933b5fff31df5931cba",
"text": "Sketch-photo synthesis plays an important role in sketch-based face photo retrieval and photo-based face sketch retrieval systems. In this paper, we propose an automatic sketch-photo synthesis and retrieval algorithm based on sparse representation. The proposed sketch-photo synthesis method works at patch level and is composed of two steps: sparse neighbor selection (SNS) for an initial estimate of the pseudoimage (pseudosketch or pseudophoto) and sparse-representation-based enhancement (SRE) for further improving the quality of the synthesized image. SNS can find closely related neighbors adaptively and then generate an initial estimate for the pseudoimage. In SRE, a coupled sparse representation model is first constructed to learn the mapping between sketch patches and photo patches, and a patch-derivative-based sparse representation method is subsequently applied to enhance the quality of the synthesized photos and sketches. Finally, four retrieval modes, namely, sketch-based, photo-based, pseudosketch-based, and pseudophoto-based retrieval are proposed, and a retrieval algorithm is developed by using sparse representation. Extensive experimental results illustrate the effectiveness of the proposed face sketch-photo synthesis and retrieval algorithms.",
"title": ""
},
{
"docid": "764c38722f53229344184248ac94a096",
"text": "Verbal fluency tasks have long been used to assess and estimate group and individual differences in executive functioning in both cognitive and neuropsychological research domains. Despite their ubiquity, however, the specific component processes important for success in these tasks have remained elusive. The current work sought to reveal these various components and their respective roles in determining performance in fluency tasks using latent variable analysis. Two types of verbal fluency (semantic and letter) were compared along with several cognitive constructs of interest (working memory capacity, inhibition, vocabulary size, and processing speed) in order to determine which constructs are necessary for performance in these tasks. The results are discussed within the context of a two-stage cyclical search process in which participants first search for higher order categories and then search for specific items within these categories.",
"title": ""
},
{
"docid": "e0b7efd5d3bba071ada037fc5b05a622",
"text": "Social exclusion can thwart people's powerful need for social belonging. Whereas prior studies have focused primarily on how social exclusion influences complex and cognitively downstream social outcomes (e.g., memory, overt social judgments and behavior), the current research examined basic, early-in-the-cognitive-stream consequences of exclusion. Across 4 experiments, the threat of exclusion increased selective attention to smiling faces, reflecting an attunement to signs of social acceptance. Compared with nonexcluded participants, participants who experienced the threat of exclusion were faster to identify smiling faces within a \"crowd\" of discrepant faces (Experiment 1), fixated more of their attention on smiling faces in eye-tracking tasks (Experiments 2 and 3), and were slower to disengage their attention from smiling faces in a visual cueing experiment (Experiment 4). These attentional attunements were specific to positive, social targets. Excluded participants did not show heightened attention to faces conveying social disapproval or to positive nonsocial images. The threat of social exclusion motivates people to connect with sources of acceptance, which is manifested not only in \"downstream\" choices and behaviors but also at the level of basic, early-stage perceptual processing.",
"title": ""
},
{
"docid": "3688e796f22f57c1735bbb6caa2c2d06",
"text": "In this paper, we propose a solution to transforming photos of real-world scenes into cartoon style images, which is valuable and challenging in computer vision and computer graphics. Our solution belongs to learning based methods, which have recently become popular to stylize images in artistic forms such as painting. However, existing methods do not produce satisfactory results for cartoonization, due to the fact that (1) cartoon styles have unique characteristics with high level simplification and abstraction, and (2) cartoon images tend to have clear edges, smooth color shading and relatively simple textures, which exhibit significant challenges for texture-descriptor-based loss functions used in existing methods. In this paper, we propose CartoonGAN, a generative adversarial network (GAN) framework for cartoon stylization. Our method takes unpaired photos and cartoon images for training, which is easy to use. Two novel losses suitable for cartoonization are proposed: (1) a semantic content loss, which is formulated as a sparse regularization in the high-level feature maps of the VGG network to cope with substantial style variation between photos and cartoons, and (2) an edge-promoting adversarial loss for preserving clear edges. We further introduce an initialization phase, to improve the convergence of the network to the target manifold. Our method is also much more efficient to train than existing methods. Experimental results show that our method is able to generate high-quality cartoon images from real-world photos (i.e., following specific artists' styles and with clear edges and smooth shading) and outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "0477f74fce5684f3fa4630a14a3b8bae",
"text": "Central fatigue during exercise is the decrease in muscle force attributable to a decline in motoneuronal output. Several methods have been used to assess central fatigue; however, some are limited or not sensitive enough to detect failure in central drive. Central fatigue develops during many forms of exercise. A number of mechanisms may contribute to its development including an increased inhibition mediated by group III and IV muscle afferents along with a decrease in muscle spindle facilitation. In some situations, motor cortical output is shown to be suboptimal. A specific terminology for central fatigue is included.",
"title": ""
},
{
"docid": "d597fc35ccf1e3f6312c21a5a50dafcd",
"text": "Purpose – In view of the promising growth of e-payment in Malaysia, this study aims to discover the factors influencing perception towards electronic payment (e-payment) from the Malaysian consumers’ perspective. Design/methodology/approach – Literature indicates that factors such as benefits, trust, selfefficacy, ease of use, and security influence consumers’ perception towards e-payment. A self-reporting questionnaire was developed and disseminated to 200 respondents, out of which 183 valid responses were considered for further statistical analysis. Findings – The multiple linear regression results reveal that benefits, self-efficacy, and ease of use exert significant influences on consumers’ perception towards e-payment. However, the insignificant results obtained for trust and security warrant further investigation. Research limitations/implications – This study proposes five factors for measuring consumers’ perception towards e-payment which is replicable across different economies. However, the small sample size raises the issue of generalizability which future studies should seek to address. Practical implications – The use of e-payment by the majority of respondents confirms that there is a great potential for future expansion of such payment devices. The challenge is to ensure that it continues to meet consumers’ expectations which will subsequently lead to its increased adoption and use. Originality/value – This study has advanced knowledge for it has provided information on the current state of e-payment acceptance and use, particularly among Malaysians. The significant factors identified are beneficial to the policy maker, banking institutions, online transaction facilities providers, and software developers as they develop strategies directed at increasing e-payment acceptance and use.",
"title": ""
},
{
"docid": "bc77c4bcc60c3746a791e61951d42c78",
"text": "In this paper, a hybrid of indoor ambient light and thermal energy harvesting scheme that uses only one power management circuit to condition the combined output power harvested from both energy sources is proposed to extend the lifetime of the wireless sensor node. By avoiding the use of individual power management circuits for multiple energy sources, the number of components used in the hybrid energy harvesting (HEH) system is reduced and the system form factor, cost and power losses are thus reduced. An efficient microcontroller-based ultra low power management circuit with fixed voltage reference based maximum power point tracking is implemented with closed-loop voltage feedback control to ensure near maximum power transfer from the two energy sources to its connected electronic load over a wide range of operating conditions. From the experimental test results obtained, an average electrical power of 621 μW is harvested by the optimized HEH system at an average indoor solar irradiance of 1010 lux and a thermal gradient of 10 K, which is almost triple of that can be obtained with conventional single-source thermal energy harvesting method.",
"title": ""
},
{
"docid": "7f067f869481f06e865880e1d529adc8",
"text": "Distributed Denial of Service (DDoS) is defined as an attack in which mutiple compromised systems are made to attack a single target to make the services unavailable foe legitimate users.It is an attack designed to render a computer or network incapable of providing normal services. DDoS attack uses many compromised intermediate systems, known as botnets which are remotely controlled by an attacker to launch these attacks. DDOS attack basically results in the situation where an entity cannot perform an action for which it is authenticated. This usually means that a legitimate node on the network is unable to reach another node or their performance is degraded. The high interruption and severance caused by DDoS is really posing an immense threat to entire internet world today. Any compromiseto computing, communication and server resources such as sockets, CPU, memory, disk/database bandwidth, I/O bandwidth, router processing etc. for collaborative environment would surely endanger the entire application. It becomes necessary for researchers and developers to understand behaviour of DDoSattack because it affects the target network with little or no advance warning. Hence developing advanced intrusion detection and prevention systems for preventing, detecting, and responding to DDOS attack is a critical need for cyber space. Our rigorous survey study presented in this paper describes a platform for the study of evolution of DDoS attacks and their defense mechanisms.",
"title": ""
},
{
"docid": "4922c751dded99ca83e19d51eb5d647e",
"text": "The viewpoint consistency constraint requires that the locations of all object features in an image must be consistent with projection from a single viewpoint. The application of this constraint is central to the problem of achieving robust recognition, since it allows the spatial information in an image to be compared with prior knowledge of an object's shape to the full degree of available image resolution. In addition, the constraint greatly reduces the size of the search space during model-based matching by allowing a few initial matches to provide tight constraints for the locations of other model features. Unfortunately, while simple to state, this constraint has seldom been effectively applied in model-based computer vision systems. This paper reviews the history of attempts to make use of the viewpoint consistency constraint and then describes a number of new techniques for applying it to the process of model-based recognition. A method is presented for probabilistically evaluating new potential matches to extend and refine an initial viewpoint estimate. This evaluation allows the model-based verification process to proceed without the expense of backtracking or search. It will be shown that the effective application of the viewpoint consistency constraint, in conjunction with bottom-up image description based upon principles of perceptual organization, can lead to robust three-dimensional object recognition from single gray-scale images.",
"title": ""
},
{
"docid": "56b71ef654cb9ddc856cb877641e5c8a",
"text": "In the Web 2.0 era, a huge number of media data, such as text, image/video, and social interaction information, have been generated on the social media sites (e.g., Facebook, Google, Flickr, and YouTube). These media data can be effectively adopted for many applications (e.g., image/video annotation, image/video retrieval, and event classification) in multimedia. However, it is difficult to design an effective feature representation to describe these data because they have multi-modal property (e.g., text, image, video, and audio) and multi-domain property (e.g., Flickr, Google, and YouTube). To deal with these issues, we propose a novel cross-domain feature learning (CDFL) algorithm based on stacked denoising auto-encoders. By introducing the modal correlation constraint and the cross-domain constraint in conventional auto-encoder, our CDFL can maximize the correlations among different modalities and extract domain invariant semantic features simultaneously. To evaluate our CDFL algorithm , we apply it to three important applications: sentiment classification, spam filtering, and event classification. Comprehensive evaluations demonstrate the encouraging performance of the proposed approach.",
"title": ""
},
{
"docid": "fadf344ec31ea1705dbd88a4fe8862f1",
"text": "The quality of a statistical machine translation (SMT) system is heavily dependent upon the amount of parallel sentences used in training. In recent years, there have been several approaches developed for obtaining parallel sentences from non-parallel, or comparable data, such as news articles published within the same time period (Munteanu and Marcu, 2005), or web pages with a similar structure (Resnik and Smith, 2003). One resource not yet thoroughly explored is Wikipedia, an online encyclopedia containing linked articles in many languages. We advance the state of the art in parallel sentence extraction by modeling the document level alignment, motivated by the observation that parallel sentence pairs are often found in close proximity. We also include features which make use of the additional annotation given by Wikipedia, and features using an automatically induced lexicon model. Results for both accuracy in sentence extraction and downstream improvement in an SMT system are presented.",
"title": ""
}
] |
scidocsrr
|
65d3119be08434129f24596f5b03613b
|
5G mm-Wave front-end-module design with advanced SOI process
|
[
{
"docid": "707a9773d79e04e8ee517845faa8e79f",
"text": "In this paper, we discuss a DC-20GHz single-pole double-throw (SPDT) transmit/receive switch (T/R switch) design in 45nm SOI process. This circuit is dedicated to fully integrated CMOS RF front end modules for X/Ku band satellite communication applications. The switch exhibits a measured insertion loss of 0.59dB, return loss of 23dB, and isolation of 17dB at 14GHz. The input 1dB compression point is 31.5dBm, and one-tone IIP3 is 63.8dBm. This state of the art performance is comparable or even better than existing commercial GaAs SPDT in this frequency range. The core area is only 90um × 100um, which is very helpful for low cost large element phase array designs.",
"title": ""
}
] |
[
{
"docid": "4a779f5e15cc60f131a77c69e09e54bc",
"text": "We introduce a new iterative regularization procedure for inverse problems based on the use of Bregman distances, with particular focus on problems arising in image processing. We are motivated by the problem of restoring noisy and blurry images via variational methods by using total variation regularization. We obtain rigorous convergence results and effective stopping criteria for the general procedure. The numerical results for denoising appear to give significant improvement over standard models, and preliminary results for deblurring/denoising are very encouraging.",
"title": ""
},
{
"docid": "bd1c93dfc02d90ad2a0c7343236342a7",
"text": "Osteochondritis dissecans (OCD) of the capitellum is an uncommon disorder seen primarily in the adolescent overhead athlete. Unlike Panner disease, a self-limiting condition of the immature capitellum, OCD is multifactorial and likely results from microtrauma in the setting of cartilage mismatch and vascular susceptibility. The natural history of OCD is poorly understood, and degenerative joint disease may develop over time. Multiple modalities aid in diagnosis, including radiography, MRI, and magnetic resonance arthrography. Lesion size, location, and grade determine management, which should attempt to address subchondral bone loss and articular cartilage damage. Early, stable lesions are managed with rest. Surgery should be considered for unstable lesions. Most investigators advocate arthroscopic débridement with marrow stimulation. Fragment fixation and bone grafting also have provided good short-term results, but concerns persist regarding the healing potential of advanced lesions. Osteochondral autograft transplantation appears to be promising and should be reserved for larger, higher grade lesions. Clinical outcomes and return to sport are variable. Longer-term follow-up studies are necessary to fully assess surgical management, and patients must be counseled appropriately.",
"title": ""
},
{
"docid": "fe407f4983ef6cc2e257d63a173c8487",
"text": "We present a semantically rich graph representation for indoor robotic navigation. Our graph representation encodes: semantic locations such as offices or corridors as nodes, and navigational behaviors such as enter office or cross a corridor as edges. In particular, our navigational behaviors operate directly from visual inputs to produce motor controls and are implemented with deep learning architectures. This enables the robot to avoid explicit computation of its precise location or the geometry of the environment, and enables navigation at a higher level of semantic abstraction. We evaluate the effectiveness of our representation by simulating navigation tasks in a large number of virtual environments. Our results show that using a simple sets of perceptual and navigational behaviors, the proposed approach can successfully guide the way of the robot as it completes navigational missions such as going to a specific office. Furthermore, our implementation shows to be effective to control the selection and switching of behaviors.",
"title": ""
},
{
"docid": "ea982e20cc739fc88ed6724feba3d896",
"text": "We report new evidence on the emotional, demographic, and situational correlates of boredom from a rich experience sample capturing 1.1 million emotional and time-use reports from 3,867 U.S. adults. Subjects report boredom in 2.8% of the 30-min sampling periods, and 63% of participants report experiencing boredom at least once across the 10-day sampling period. We find that boredom is more likely to co-occur with negative, rather than positive, emotions, and is particularly predictive of loneliness, anger, sadness, and worry. Boredom is more prevalent among men, youths, the unmarried, and those of lower income. We find that differences in how such demographic groups spend their time account for up to one third of the observed differences in overall boredom. The importance of situations in predicting boredom is additionally underscored by the high prevalence of boredom in specific situations involving monotonous or difficult tasks (e.g., working, studying) or contexts where one's autonomy might be constrained (e.g., time with coworkers, afternoons, at school). Overall, our findings are consistent with cognitive accounts that cast boredom as emerging from situations in which engagement is difficult, and are less consistent with accounts that exclusively associate boredom with low arousal or with situations lacking in meaning. (PsycINFO Database Record",
"title": ""
},
{
"docid": "65b933f72f74a17777baa966658f4c42",
"text": "We describe the epidemic of obesity in the United States: escalating rates of obesity in both adults and children, and why these qualify as an epidemic; disparities in overweight and obesity by race/ethnicity and sex, and the staggering health and economic consequences of obesity. Physical activity contributes to the epidemic as explained by new patterns of physical activity in adults and children. Changing patterns of food consumption, such as rising carbohydrate intake--particularly in the form of soda and other foods containing high fructose corn syrup--also contribute to obesity. We present as a central concept, the food environment--the contexts within which food choices are made--and its contribution to food consumption: the abundance and ubiquity of certain types of foods over others; limited food choices available in certain settings, such as schools; the market economy of the United States that exposes individuals to many marketing/advertising strategies. Advertising tailored to children plays an important role.",
"title": ""
},
{
"docid": "8a4772e698355c463692ebcb27e68ea7",
"text": "Abstracr-Test data generation in program testing is the process of identifying a set of test data which satisfies given testing criterion. Most of the existing test data generators 161, [It], [lo], [16], [30] use symbolic evaluation to derive test data. However, in practical programs this technique frequently requires complex algebraic manipulations, especially in the presence of arrays. In this paper we present an alternative approach of test data generation which is based on actual execution of the program under test, function minimization methods, and dynamic data flow analysis. Test data are developed for the program using actual values of input variables. When the program is executed, the program execution flow is monitored. If during program execution an undesirable execution flow is observed (e.g., the “actual” path does not correspond to the selected control path) then function minimization search algorithms are used to automatically locate the values of input variables for which the selected path is traversed. In addition, dynamic data Bow analysis is used to determine those input variables responsible for the undesirable program behavior, leading to significant speedup of the search process. The approach of generating test data is then extended to programs with dynamic data structures, and a search method based on dynamic data flow analysis and backtracking is presented. In the approach described in this paper, values of array indexes and pointers are known at each step of program execution, and this approach exploits this information to overcome difficulties of array and pointer handling; as a result, the effectiveness of test data generation can be significantly improved.",
"title": ""
},
{
"docid": "f55ac9e319ad8b9782a34251007a5d06",
"text": "The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to describing the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.",
"title": ""
},
{
"docid": "114e6cde6a38bcbb809f19b80110c16f",
"text": "This paper proposes a neural semantic parsing approach – Sequence-to-Action, which models semantic parsing as an endto-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.",
"title": ""
},
{
"docid": "7100fea85ba7c88f0281f11e7ddc04a9",
"text": "This paper reports the spoof surface plasmons polaritons (SSPPs) based multi-band bandpass filter. An efficient back to back transition from Quasi TEM mode of microstrip line to SSPP mode has been designed by etching a gradient corrugated structure on the metal strip; while keeping ground plane unaltered. SSPP wave is found to be highly confined within the teeth part of corrugation. Complementary split ring resonator has been etched in the ground plane to obtained multiband bandpass filter response. Excellent conversion from QTEM mode to SSPP mode has been observed.",
"title": ""
},
{
"docid": "25ca6416d95398eb0e79c1357dcf6554",
"text": "Bayesian Learning with Dependency Structures via Latent Factors, Mixtures, and Copulas by Shaobo Han Department of Electrical and Computer Engineering Duke University Date: Approved: Lawrence Carin, Supervisor",
"title": ""
},
{
"docid": "18fcdcadc3290f9c8dd09f0aa1a27e8f",
"text": "The Industry 4.0 is a vision that includes connecting more intensively physical systems with their virtual counterparts in computers. This computerization of manufacturing will bring many advantages, including allowing data gathering, integration and analysis in the scale not seen earlier. In this paper we describe our Semantic Big Data Historian that is intended to handle large volumes of heterogeneous data gathered from distributed data sources. We describe the approach and implementation with a special focus on using Semantic Web technologies for integrating the data.",
"title": ""
},
{
"docid": "4e91d37de7701e4a03c506c602ef3455",
"text": "This paper presents the design of Glow, a machine learning compiler for heterogeneous hardware. It is a pragmatic approach to compilation that enables the generation of highly optimized code for multiple targets. Glow lowers the traditional neural network dataflow graph into a two-phase strongly-typed intermediate representation. The high-level intermediate representation allows the optimizer to perform domain-specific optimizations. The lower-level instruction-based address-only intermediate representation allows the compiler to perform memory-related optimizations, such as instruction scheduling, static memory allocation and copy elimination. At the lowest level, the optimizer performs machine-specific code generation to take advantage of specialized hardware features. Glow features a lowering phase which enables the compiler to support a high number of input operators as well as a large number of hardware targets by eliminating the need to implement all operators on all targets. The lowering phase is designed to reduce the input space and allow new hardware backends to focus on a small number of linear algebra primitives.",
"title": ""
},
{
"docid": "1a9d276c4571419e0d1b297f248d874d",
"text": "Organizational culture plays a critical role in the acceptance and adoption of agile principles by a traditional software development organization (Chan & Thong, 2008). Organizations must understand the differences that exist between traditional software development principles and agile principles. Based on an analysis of the literature published between 2003 and 2010, this study examines nine distinct organizational cultural factors that require change, including management style, communication, development team practices, knowledge management, and customer interactions.",
"title": ""
},
{
"docid": "abc709735ff3566b9d3efa3bb9babd6e",
"text": "Disaster scenarios involve a multitude of obstacles that are difficult to traverse for humans and robots alike. Most robotic search and rescue solutions to this problem involve large, tank-like robots that use brute force to cross difficult terrain; however, these large robots may cause secondary damage. H.E.R.A.L.D, the Hybrid Exploration Robot for Air and Land Deployment, is a novel integrated system of three nimble, lightweight robots which can travel over difficult obstacles by air, but also travel through rubble. We present the design methodology and optimization of each robot, as well as design and testing of the physical integration of the system as a whole, and compare the performance of the robots to the state of the art.",
"title": ""
},
{
"docid": "29479201c12e99eb9802dd05cff60c36",
"text": "Exposures to air pollution in the form of particulate matter (PM) can result in excess production of reactive oxygen species (ROS) in the respiratory system, potentially causing both localized cellular injury and triggering a systemic inflammatory response. PM-induced inflammation in the lung is modulated in large part by alveolar macrophages and their biochemical signaling, including production of inflammatory cytokines, the primary mechanism via which inflammation is initiated and sustained. We developed a robust, relevant, and flexible method employing a rat alveolar macrophage cell line (NR8383) which can be applied to routine samples of PM from air quality monitoring sites to gain insight into the drivers of PM toxicity that lead to oxidative stress and inflammation. Method performance was characterized using extracts of ambient and vehicular engine exhaust PM samples. Our results indicate that the reproducibility and the sensitivity of the method are satisfactory and comparisons between PM samples can be made with good precision. The average relative percent difference for all genes detected during 10 different exposures was 17.1%. Our analysis demonstrated that 71% of genes had an average signal to noise ratio (SNR) ≥ 3. Our time course study suggests that 4 h may be an optimal in vitro exposure time for observing short-term effects of PM and capturing the initial steps of inflammatory signaling. The 4 h exposure resulted in the detection of 57 genes (out of 84 total), of which 86% had altered expression. Similarities and conserved gene signaling regulation among the PM samples were demonstrated through hierarchical clustering and other analyses. Overlying the core congruent patterns were differentially regulated genes that resulted in distinct sample-specific gene expression \"fingerprints.\" Consistent upregulation of Il1f5 and downregulation of Ccr7 was observed across all samples, while TNFα was upregulated in half of the samples and downregulated in the other half. Overall, this PM-induced cytokine expression assay could be effectively integrated into health studies and air quality monitoring programs to better understand relationships between specific PM components, oxidative stress activity and inflammatory signaling potential.",
"title": ""
},
{
"docid": "b132b6aedba7415f2ccaa3783fafd271",
"text": "Recent technologies enable electronic and RF circuits in communication devices and radar to be miniaturized and become physically smaller in size. Antenna design has been one of the key limiting constraints to the development of small communication terminals and also in meeting next generation and radar requirements. Multiple antenna technologies (MATs) have gained much attention in the last few years because of the huge gain. MATs can enhance the reliability and the channel capacity levels. Furthermore, multiple antenna systems can have a big contribution to reduce the interference both in the uplink and the downlink. To increase the communication systems reliability, multiple antennas can be installed at the transmitter or/and at the receiver. The idea behind multiple antenna diversity is to supply the receiver by multiple versions of the same signal transmitted via independent channels. In modern communication transceiver and radar systems, primary aims are to direct high power RF signal from transmitter to antenna while preventing leakage of that large signal into more sensitive frontend of receiver. So, a Single-Pole Double-Throw (SPDT) Transmitter/Receiver (T/R) Switch plays an important role. In this paper, design of smart distributed subarray MIMO (DS-MIMO) microstrip antenna system with controller unit and frequency agile has been introduced and investigated. All the entire proposed antenna system has been evaluated using a commercial software. The final proposed design has been fabricated and the radiation characteristics have been illustrated using network analyzer to meet the requirements for communication and radar applications.",
"title": ""
},
{
"docid": "074d4a552c82511d942a58b93d51c38a",
"text": "This is a survey of neural network applications in the real-world scenario. It provides a taxonomy of artificial neural networks (ANNs) and furnish the reader with knowledge of current and emerging trends in ANN applications research and area of focus for researchers. Additionally, the study presents ANN application challenges, contributions, compare performances and critiques methods. The study covers many applications of ANN techniques in various disciplines which include computing, science, engineering, medicine, environmental, agriculture, mining, technology, climate, business, arts, and nanotechnology, etc. The study assesses ANN contributions, compare performances and critiques methods. The study found that neural-network models such as feedforward and feedback propagation artificial neural networks are performing better in its application to human problems. Therefore, we proposed feedforward and feedback propagation ANN models for research focus based on data analysis factors like accuracy, processing speed, latency, fault tolerance, volume, scalability, convergence, and performance. Moreover, we recommend that instead of applying a single method, future research can focus on combining ANN models into one network-wide application.",
"title": ""
},
{
"docid": "55a6c14a7445b1903223f59ad4ad9b77",
"text": "Energy and environmental issues are among the major concerns facing the global community today. Transportation fuel represents a large proportion of energy consumption, not only in the US, but also worldwide. As fossil fuel is being depleted, new substitutes are needed to provide energy. Ethanol, which has been produced mainly from the fermentation of corn starch in the US, has been regarded as one of the main liquid transportation fuels that can take the place of fossil fuel. However, limitations in the supply of starch are creating a need for different substrates. Forest biomass is believed to be one of the most abundant sources of sugars, although much research has been reported on herbaceous grass, agricultural residue, and municipal waste. The use of biomass sugars entails pretreatment to disrupt the lignin-carbohydrate complex and expose carbohydrates to enzymes. This paper reviews pretreatment technologies from the perspective of their potential use with wood, bark, and forest residues. Acetic acid catalysis is suggested for the first time to be used in steam explosion pretreatment. Its pretreatment economics, as well as that for ammonia fiber explosion pretreatment, is estimated. This analysis suggests that both are promising techniques worthy of further exploration or optimization for commercialization.",
"title": ""
},
{
"docid": "a576a6bf249616d186657a48c2aec071",
"text": "Penumbras, or soft shadows, are an important means to enhance the realistic ap pearance of computer generated images. We present a fast method based on Minkowski operators to reduce t he run ime for penumbra calculation with stochastic ray tracing. Detailed run time analysis on some examples shows that the new method is significantly faster than the conventional approach. Moreover, it adapts to the environment so that small penumbras are calculated faster than larger ones. The algorithm needs at most twice as much memory as the underlying ray tracing algorithm.",
"title": ""
},
{
"docid": "16dae5a68647c9a8aa93b900eb470eb4",
"text": "Saving power in datacenter networks has become a pressing issue. ElasticTree and CARPO fat-tree networks have recently been proposed to reduce power consumption by using sleep mode during the operation stage of the network. In this paper, we address the design stage where the right switch size is evaluated to maximize power saving during the expected operation of the network. Our findings reveal that deploying a large number of small switches is more power-efficient than a small number of large switches when the traffic demand is relatively moderate or when servers exchanging traffic are in close proximity. We also discuss the impact of sleep mode on performance such as packet delay and loss.",
"title": ""
}
] |
scidocsrr
|
fc572685aa55c813ea4803ee813b4801
|
Proposal : Scalable , Active and Flexible Learning on Distributions
|
[
{
"docid": "9e3057c25630bfdf5e7ebcc53b6995b0",
"text": "We present a new solution to the ``ecological inference'' problem, of learning individual-level associations from aggregate data. This problem has a long history and has attracted much attention, debate, claims that it is unsolvable, and purported solutions. Unlike other ecological inference techniques, our method makes use of unlabeled individual-level data by embedding the distribution over these predictors into a vector in Hilbert space. Our approach relies on recent learning theory results for distribution regression, using kernel embeddings of distributions. Our novel approach to distribution regression exploits the connection between Gaussian process regression and kernel ridge regression, giving us a coherent, Bayesian approach to learning and inference and a convenient way to include prior information in the form of a spatial covariance function. Our approach is highly scalable as it relies on FastFood, a randomized explicit feature representation for kernel embeddings. We apply our approach to the challenging political science problem of modeling the voting behavior of demographic groups based on aggregate voting data. We consider the 2012 US Presidential election, and ask: what was the probability that members of various demographic groups supported Barack Obama, and how did this vary spatially across the country? Our results match standard survey-based exit polling data for the small number of states for which it is available, and serve to fill in the large gaps in this data, at a much higher degree of granularity.",
"title": ""
},
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
},
{
"docid": "2bdaaeb18db927e2140c53fcc8d4fa30",
"text": "Many information gathering problems require determining the set of points, for which an unknown function takes value above or below some given threshold level. As a concrete example, in the context of environmental monitoring of Lake Zurich we would like to estimate the regions of the lake where the concentration of chlorophyll or algae is greater than some critical value, which would serve as an indicator of algal bloom phenomena. A critical factor in such applications is the high cost in terms of time, baery power, etc. that is associated with each measurement, therefore it is important to be careful about selecting “informative” locations to sample, in order to reduce the total sampling effort required. We formalize the task of level set estimation as a classification problem with sequential measurements, where the unknown function is modeled as a sample from a Gaussian process (GP). We propose LSE, an active learning algorithm that guides both sampling and classification based on GP-derived confidence bounds, and provide theoretical guarantees about its sample complexity. Furthermore, we extend LSE and its theory to two more natural seings: (1) where the threshold level is implicitly defined as a percentage of the (unknown) maximum of the target function and (2) where samples are selected in batches. Based on the laer extension we also propose a simple path planning algorithm. We evaluate the effectiveness of our proposed methods on two problems of practical interest, namely the aforementioned autonomous monitoring of algal populations in Lake Zurich and geolocating network latency.",
"title": ""
}
] |
[
{
"docid": "5b0e33ede34f6532a48782e423128f49",
"text": "The literature on globalisation reveals wide agreement concerning the relevance of international sourcing strategies as key competitive factors for companies seeking globalisation, considering such strategies to be a purchasing management approach focusing on supplies from vendors in the world market, rather than relying exclusively on domestic offerings (Petersen, Frayer, & Scannel, 2000; Stevens, 1995; Trent & Monczka, 1998). Thus, the notion of “international sourcing” mentioned by these authors describes the level of supply globalisation in companies’ purchasing strategy, as related to supplier source (Giunipero & Pearcy, 2000; Levy, 1995; Trent & Monczka, 2003b).",
"title": ""
},
{
"docid": "0a3d4b02d2273087c50b8b0d77fb8c36",
"text": "Circulation. 2017;135:e867–e884. DOI: 10.1161/CIR.0000000000000482 April 11, 2017 e867 ABSTRACT: Multiple randomized controlled trials (RCTs) have assessed the effects of supplementation with eicosapentaenoic acid plus docosahexaenoic acid (omega-3 polyunsaturated fatty acids, commonly called fish oils) on the occurrence of clinical cardiovascular diseases. Although the effects of supplementation for the primary prevention of clinical cardiovascular events in the general population have not been examined, RCTs have assessed the role of supplementation in secondary prevention among patients with diabetes mellitus and prediabetes, patients at high risk of cardiovascular disease, and those with prevalent coronary heart disease. In this scientific advisory, we take a clinical approach and focus on common indications for omega-3 polyunsaturated fatty acid supplements related to the prevention of clinical cardiovascular events. We limited the scope of our review to large RCTs of supplementation with major clinical cardiovascular disease end points; meta-analyses were considered secondarily. We discuss the features of available RCTs and provide the rationale for our recommendations. We then use existing American Heart Association criteria to assess the strength of the recommendation and the level of evidence. On the basis of our review of the cumulative evidence from RCTs designed to assess the effect of omega-3 polyunsaturated fatty acid supplementation on clinical cardiovascular events, we update prior recommendations for patients with prevalent coronary heart disease, and we offer recommendations, when data are available, for patients with other clinical indications, including patients with diabetes mellitus and prediabetes and those with high risk of cardiovascular disease, stroke, heart failure, and atrial fibrillation. David S. Siscovick, MD, MPH, FAHA, Chair Thomas A. Barringer, MD, FAHA Amanda M. Fretts, PhD, MPH Jason H.Y. Wu, PhD, MSc, FAHA Alice H. Lichtenstein, DSc, FAHA Rebecca B. Costello, PhD, FAHA Penny M. Kris-Etherton, PhD, RD, FAHA Terry A. Jacobson, MD, FAHA Mary B. Engler, PhD, RN, MS, FAHA Heather M. Alger, PhD Lawrence J. Appel, MD, MPH, FAHA Dariush Mozaffarian, MD, DrPH, FAHA On behalf of the American Heart Association Nutrition Committee of the Council on Lifestyle and Cardiometabolic Health; Council on Epidemiology and Prevention; Council on Cardiovascular Disease in the Young; Council on Cardiovascular and Stroke Nursing; and Council on Clinical Cardiology Omega-3 Polyunsaturated Fatty Acid (Fish Oil) Supplementation and the Prevention of Clinical Cardiovascular Disease",
"title": ""
},
{
"docid": "da61794b9ffa1f6f4bc39cef9655bf77",
"text": "This manuscript analyzes the effects of design parameters, such as aspect ratio, doping concentration and bias, on the performance of a general CMOS Hall sensor, with insight on current-related sensitivity, power consumption, and bandwidth. The article focuses on rectangular-shaped Hall probes since this is the most general geometry leading to shape-independent results. The devices are analyzed by means of 3D-TCAD simulations embedding galvanomagnetic transport model, which takes into account the Lorentz force acting on carriers due to a magnetic field. Simulation results define a set of trade-offs and design rules that can be used by electronic designers to conceive their own Hall probes.",
"title": ""
},
{
"docid": "fe4954b2b96a0ab95f5eedfca9b12066",
"text": "Marketing historically has undergone various shifts in emphasis from production through sales to marketing orientation. However, the various orientations have failed to engage customers in meaningful relationship mutually beneficial to organisations and customers, with all forms of the shift still exhibiting the transactional approach inherit in traditional marketing (Kubil & Doku, 2010). However, Coltman (2006) indicates that in strategy and marketing literature, scholars have long suggested that a customer centred strategy is fundamental to competitive advantage and that customer relationship management (CRM) programmes are increasingly being used by organisations to support the type of customer understanding and interdepartmental connectedness required to effectively execute a customer strategy.",
"title": ""
},
{
"docid": "f3fdc63904e2bf79df8b6ca30a864fd3",
"text": "Although the potential benefits of a powered ankle-foot prosthesis have been well documented, no one has successfully developed and verified that such a prosthesis can improve amputee gait compared to a conventional passive-elastic prosthesis. One of the main hurdles that hinder such a development is the challenge of building an ankle-foot prosthesis that matches the size and weight of the intact ankle, but still provides a sufficiently large instantaneous power output and torque to propel an amputee. In this paper, we present a novel, powered ankle-foot prosthesis that overcomes these design challenges. The prosthesis comprises an unidirectional spring, configured in parallel with a force-controllable actuator with series elasticity. With this architecture, the ankle-foot prosthesis matches the size and weight of the human ankle, and is shown to be satisfying the restrictive design specifications dictated by normal human ankle walking biomechanics.",
"title": ""
},
{
"docid": "e8fee9f93106ce292c89c26be373030f",
"text": "As a non-invasive imaging modality, optical coherence tomography (OCT) can provide micrometer-resolution 3D images of retinal structures. Therefore it is commonly used in the diagnosis of retinal diseases associated with edema in and under the retinal layers. In this paper, a new framework is proposed for the task of fluid segmentation and detection in retinal OCT images. Based on the raw images and layers segmented by a graph-cut algorithm, a fully convolutional neural network was trained to recognize and label the fluid pixels. Random forest classification was performed on the segmented fluid regions to detect and reject the falsely labeled fluid regions. The leave-one-out cross validation experiments on the RETOUCH database show that our method performs well in both segmentation (mean Dice: 0.7317) and detection (mean AUC: 0.985) tasks.",
"title": ""
},
{
"docid": "181356b104a26d1d300d10619fb78f45",
"text": "Recent advances in combining deep neural network architectures with reinforcement learning techniques have shown promising potential results in solving complex control problems with high dimensional state and action spaces. Inspired by these successes, in this paper, we build two kinds of reinforcement learning algorithms: deep policy-gradient and value-function based agents which can predict the best possible traffic signal for a traffic intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The policy-gradient based agent maps its observation directly to the control signal, however the value-function based agent first estimates values for all legal control signals. The agent then selects the optimal control action with the highest value. Our methods show promising results in a traffic network simulated in the SUMO traffic simulator, without suffering from instability issues during the training process.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "3f225efbccb63d0c5170fce44fadb3c6",
"text": "Pelvic pain is a common gynaecological complaint, sometimes without any obvious etiology. We report a case of pelvic congestion syndrome, an often overlooked cause of pelvic pain, diagnosed by helical computed tomography. This seems to be an effective and noninvasive imaging modality. RID=\"\"ID=\"\"<e5>Correspondence to:</e5> J. H. Desimpelaere",
"title": ""
},
{
"docid": "3ca2d95885303f1ab395bd31d32df0c2",
"text": "Curiosity to predict personality, behavior and need for this is not as new as invent of social media. Personality prediction to better accuracy could be very useful for society. There are many papers and researches conducted on usefulness of the data for various purposes like in marketing, dating suggestions, organization development, personalized recommendations and health care to name a few. With the introduction and extreme popularity of Online Social Networking Sites like Facebook, Twitter and LinkedIn numerous researches were conducted based on public data available, online social networking applications and social behavior towards friends and followers to predict the personality. Structured mining of the social media content can provide us the ability to predict some personality traits. This survey aims at providing researchers with an overview of various strategies used for studies and research concentrating on predicting user personality and behavior using online social networking site content. There positives, limitations are well summarized as reported in the literature. Finally, a brief discussion including open issues for further research in the area of social networking site based personality prediction preceding conclusion.",
"title": ""
},
{
"docid": "5fa0ae0baaa954fb2ab356719f8ca629",
"text": "Estimating the pose of a camera (virtual or real) in which some augmentation takes place is one of the most important parts of an augmented reality (AR) system. Availability of powerful processors and fast frame grabbers have made vision-based trackers commonly used due to their accuracy as well as flexibility and ease of use. Current vision-based trackers are based on tracking of markers. The use of markers increases robustness and reduces computational requirements. However, their use can be very complicated, as they require certain maintenance. Direct use of scene features for tracking, therefore, is desirable. To this end, we describe a general system that tracks the position and orientation of a camera observing a scene without any visual markers. Our method is based on a two-stage process. In the first stage, a set of features is learned with the help of an external tracking system while in action. The second stage uses these learned features for camera tracking when the system in the first stage decides that it is possible to do so. The system is very general so that it can employ any available feature tracking and pose estimation system for learning and tracking. We experimentally demonstrate the viability of the method in real-life examples.",
"title": ""
},
{
"docid": "5ca29a94ac01f9ad20249021802b1746",
"text": "Big Data has become a very popular term. It refers to the enormous amount of structured, semi-structured and unstructured data that are exponentially generated by high-performance applications in many domains: biochemistry, genetics, molecular biology, physics, astronomy, business, to mention a few. Since the literature of Big Data has increased significantly in recent years, it becomes necessary to develop an overview of the state-of-the-art in Big Data. This paper aims to provide a comprehensive review of Big Data literature of the last 4 years, to identify the main challenges, areas of application, tools and emergent trends of Big Data. To meet this objective, we have analyzed and classified 457 papers concerning Big Data. This review gives relevant information to practitioners and researchers about the main trends in research and application of Big Data in different technical domains, as well as a reference overview of Big Data tools.",
"title": ""
},
{
"docid": "f8ea80edbb4f31d5c0d1a2da5e8aae13",
"text": "BACKGROUND\nPremenstrual syndrome (PMS) is a common condition, and for 5% of women, the influence is so severe as to interfere with their mental health, interpersonal relationships, or studies. Severe PMS may result in decreased occupational productivity. The aim of this study was to investigate the influence of perception of PMS on evaluation of work performance.\n\n\nMETHODS\nA total of 1971 incoming female university students were recruited in September 2009. A simulated clinical scenario was used, with a test battery including measurement of psychological symptoms and the Chinese Premenstrual Symptom Questionnaire.\n\n\nRESULTS\nWhen evaluating employee performance in the simulated scenario, 1565 (79.4%) students neglected the impact of PMS, while 136 (6.9%) students considered it. Multivariate logistic regression showed that perception of daily function impairment due to PMS and frequency of measuring body weight were significantly associated with consideration of the influence of PMS on evaluation of work performance.\n\n\nCONCLUSION\nIt is important to increase the awareness of functional impairments related to severe PMS.",
"title": ""
},
{
"docid": "f7c2ebd19c41b697d52850a225bfe8a0",
"text": "There is currently a misconception among designers and users of free space laser communication (lasercom) equipment that 1550 nm light suffers from less atmospheric attenuation than 785 or 850 nm light in all weather conditions. This misconception is based upon a published equation for atmospheric attenuation as a function of wavelength, which is used frequently in the free-space lasercom literature. In hazy weather (visibility > 2 km), the prediction of less atmospheric attenuation at 1550 nm is most likely true. However, in foggy weather (visibility < 500 m), it appears that the attenuation of laser light is independent of wavelength, ie. 785 nm, 850 nm, and 1550 nm are all attenuated equally by fog. This same wavelength independence is also observed in snow and rain. This observation is based on an extensive literature search, and from full Mie scattering calculations. A modification to the published equation describing the atmospheric attenuation of laser power, which more accurately describes the effects of fog, is offered. This observation of wavelength-independent attenuation in fog is important, because fog, heavy snow, and extreme rain are the only types of weather that are likely to disrupt short (<500 m) lasercom links. Short lasercom links will be necessary to meet the high availability requirements of the telecommunications industry.",
"title": ""
},
{
"docid": "485270200008a292cefdb1e952441113",
"text": "This paper describes the prototype design, specimen design, experimental setup, and experimental results of three steel plate shear wall concepts. Prototype light-gauge steel plate shear walls are designed as seismic retrofits for a hospital st area of high seismicity, and emphasis is placed on minimizing their impact on the existing framing. Three single-story test spe designed using these prototypes as a basis, two specimens with flat infill plates (thicknesses of 0.9 mm ) and a third using a corrugat infill plate (thickness of 0.7 mm). Connection of the infill plates to the boundary frames is achieved through the use of b combination with industrial strength epoxy or welds, allowing for mobility of the infills if desired. Testing of the systems is don quasi-static conditions. It is shown that one of the flat infill plate specimens, as well as the specimen utilizing a corrugated in achieve significant ductility and energy dissipation while minimizing the demands placed on the surrounding framing. Exp results are compared to monotonic pushover predictions from computer analysis using a simple model and good agreement DOI: 10.1061/ (ASCE)0733-9445(2005)131:2(259) CE Database subject headings: Shear walls; Experimentation; Retrofitting; Seismic design; Cyclic design; Steel plates . d the field g of be a ; 993; rot are have ds ts istexfrom ctive is to y seis eintrofit reatn to the ular are r light-",
"title": ""
},
{
"docid": "3564cf609cf1b9666eaff7edcd12a540",
"text": "Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.",
"title": ""
},
{
"docid": "176d0bf9525d6dd9bd4837b174e4f769",
"text": "Prader-Willi syndrome (PWS) is a genetic disorder frequently characterized by obesity, growth hormone deficiency, genital abnormalities, and hypogonadotropic hypogonadism. Incomplete or delayed pubertal development as well as premature adrenarche are usually found in PWS, whereas central precocious puberty (CPP) is very rare. This study aimed to report the clinical and biochemical follow-up of a PWS boy with CPP and to discuss the management of pubertal growth. By the age of 6, he had obesity, short stature, and many clinical criteria of PWS diagnosis, which was confirmed by DNA methylation test. Therapy with recombinant human growth hormone (rhGH) replacement (0.15 IU/kg/day) was started. Later, he presented psychomotor agitation, aggressive behavior, and increased testicular volume. Laboratory analyses were consistent with the diagnosis of CPP (gonadorelin-stimulated LH peak 15.8 IU/L, testosterone 54.7 ng/dL). The patient was then treated with gonadotropin-releasing hormone analog (GnRHa). Hypothalamic dysfunctions have been implicated in hormonal disturbances related to pubertal development, but no morphologic abnormalities were detected in the present case. Additional methylation analysis (MS-MLPA) of the chromosome 15q11 locus confirmed PWS diagnosis. We presented the fifth case of CPP in a genetically-confirmed PWS male. Combined therapy with GnRHa and rhGH may be beneficial in this rare condition of precocious pubertal development in PWS.",
"title": ""
},
{
"docid": "3a3f3e1c0eac36d53a40d7639c3d65cc",
"text": "The aim of this paper is to present a hybrid approach to accurate quantification of vascular structures from magnetic resonance angiography (MRA) images using level set methods and deformable geometric models constructed with 3-D Delaunay triangulation. Multiple scale filtering based on the analysis of local intensity structure using the Hessian matrix is used to effectively enhance vessel structures with various diameters. The level set method is then applied to automatically segment vessels enhanced by the filtering with a speed function derived from enhanced MRA images. Since the goal of this paper is to obtain highly accurate vessel borders, suitable for use in fluid flow simulations, in a subsequent step, the vessel surface determined by the level set method is triangulated using 3-D Delaunay triangulation and the resulting surface is used as a parametric deformable model. Energy minimization is then performed within a variational setting with a first-order internal energy; the external energy is derived from 3-D image gradients. Using the proposed method, vessels are accurately segmented from MRA data.",
"title": ""
},
{
"docid": "9f469cdc1864aad2026630a29c210c1f",
"text": "This paper proposes an asymptotically optimal hybrid beamforming solution for large antenna arrays by exploiting the properties of the singular vectors of the channel matrix. It is shown that the elements of the channel matrix with Rayleigh fading follow a normal distribution when large antenna arrays are employed. The proposed beamforming algorithm is effective in both sparse and rich propagation environments, and is applicable for both point-to-point and multiuser scenarios. In addition, a closed-form expression and a lower bound for the achievable rates are derived when analog and digital phase shifters are employed. It is shown that the performance of the hybrid beamformers using phase shifters with more than 2-bit resolution is comparable with analog phase shifting. A novel phase shifter selection scheme that reduces the power consumption at the phase shifter network is proposed when the wireless channel is modeled by Rayleigh fading. Using this selection scheme, the spectral efficiency can be increased as the power consumption in the phase shifter network reduces. Compared with the scenario that all of the phase shifters are in operation, the simulation results indicate that the spectral efficiency increases when up to 50% of phase shifters are turned OFF.",
"title": ""
},
{
"docid": "1e493440a61578c8c6ca8fbe63f475d6",
"text": "3D object detection is an essential task in autonomous driving. Recent techniques excel with highly accurate detection rates, provided the 3D input data is obtained from precise but expensive LiDAR technology. Approaches based on cheaper monocular or stereo imagery data have, until now, resulted in drastically lower accuracies — a gap that is commonly attributed to poor image-based depth estimation. However, in this paper we argue that data representation (rather than its quality) accounts for the majority of the difference. Taking the inner workings of convolutional neural networks into consideration, we propose to convert imagebased depth maps to pseudo-LiDAR representations — essentially mimicking LiDAR signal. With this representation we can apply different existing LiDAR-based detection algorithms. On the popular KITTI benchmark, our approach achieves impressive improvements over the existing stateof-the-art in image-based performance — raising the detection accuracy of objects within 30m range from the previous state-of-the-art of 22% to an unprecedented 74%. At the time of submission our algorithm holds the highest entry on the KITTI 3D object detection leaderboard for stereo image based approaches.",
"title": ""
}
] |
scidocsrr
|
b50725324e44b8548ecc10451e59ec09
|
Logical Physical Clocks and Consistent Snapshots in Globally Distributed Databases
|
[
{
"docid": "f481f0ba70ce16587f7c5639360bc2f9",
"text": "We describe the design and implementation of Walter, a key-value store that supports transactions and replicates data across distant sites. A key feature behind Walter is a new property called Parallel Snapshot Isolation (PSI). PSI allows Walter to replicate data asynchronously, while providing strong guarantees within each site. PSI precludes write-write conflicts, so that developers need not worry about conflict-resolution logic. To prevent write-write conflicts and implement PSI, Walter uses two new and simple techniques: preferred sites and counting sets. We use Walter to build a social networking application and port a Twitter-like application.",
"title": ""
},
{
"docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0",
"text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.",
"title": ""
},
{
"docid": "457684e85d51869692aab90231a711a1",
"text": "Cassandra is a distributed storage system for managing structured data that is designed to scale to a very large size across many commodity servers, with no single point of failure. Reliability at massive scale is a very big challenge. Outages in the service can have significant negative impact. Hence Cassandra aims to run on top of an infrastructure of hundreds of nodes (possibly spread across different datacenters). At this scale, small and large components fail continuously; the way Cassandra manages the persistent state in the face of these failures drives the reliability and scalability of the software systems relying on this service. Cassandra has achieved several goals--scalability, high performance, high availability and applicability. In many ways Cassandra resembles a database and shares many design and implementation strategies with databases. Cassandra does not support a full relational data model; instead, it provides clients with a simple data model that supports dynamic control over data layout and format.",
"title": ""
}
] |
[
{
"docid": "6147c993e4c7f5b9daf18f99c374b129",
"text": "We propose an efficient text summarization technique that involves two basic operations. The first operation involves finding coherent chunks in the document and the second operation involves ranking the text in the individual coherent chunks and picking the sentences that rank above a given threshold. The coherent chunks are formed by exploiting the lexical relationship between adjacent sentences in the document. Occurrence of words through repetition or relatedness by sense relation plays a major role in forming a cohesive tie. The proposed text ranking approach is based on a graph theoretic ranking model applied to text summarization task.",
"title": ""
},
{
"docid": "c2305233c8ec74913196a6d8a832d582",
"text": "Almost a decade has passed since the objectives and benefits of autonomic computing were stated, yet even the latest system designs and deployments exhibit only limited and isolated elements of autonomic functionality. In previous work, we identified several of the key challenges behind this delay in the adoption of autonomic solutions, and proposed a generic framework for the development of autonomic computing systems that overcomes these challenges. In this article, we describe how existing technologies and standards can be used to realise our autonomic computing framework, and present its implementation as a service-oriented architecture. We show how this implementation employs a combination of automated code generation, model-based and object-oriented development techniques to ensure that the framework can be used to add autonomic capabilities to systems whose characteristics are unknown until runtime. We then use our framework to develop two autonomic solutions for the allocation of server capacity to services of different priorities and variable workloads, thus illustrating its application in the context of a typical data-centre resource management problem.",
"title": ""
},
{
"docid": "6a3bb84e7b8486692611aaa790609099",
"text": "As ubiquitous commerce using IT convergence technologies is coming, it is important for the strategy of cosmetic sales to investigate the sensibility and the degree of preference in the environment for which the makeup style has changed focusing on being consumer centric. The users caused the diversification of the facial makeup styles, because they seek makeup and individuality to satisfy their needs. In this paper, we proposed the effect of the facial makeup style recommendation on visual sensibility. Development of the facial makeup style recommendation system used a user interface, sensibility analysis, weather forecast, and collaborative filtering for the facial makeup styles to satisfy the user’s needs in the cosmetic industry. Collaborative filtering was adopted to recommend facial makeup style of interest for users based on the predictive relationship discovered between the current user and other previous users. We used makeup styles in the survey questionnaire. The pictures of makeup style details, such as foundation, color lens, eye shadow, blusher, eyelash, lipstick, hairstyle, hairpin, necklace, earring, and hair length were evaluated in terms of sensibility. The data were analyzed by SPSS using ANOVA and factor analysis to discover the most effective types of details from the consumer’s sensibility viewpoint. Sensibility was composed of three concepts: contemporary, mature, and individual. The details of facial makeup styles were positioned in 3D-concept space to relate each type of detail to the makeup concept regarding a woman’s cosmetics. Ultimately, this paper suggests empirical applications to verify the adequacy and the validity of this system.",
"title": ""
},
{
"docid": "df2c576e7cc3259ae1e0c29b3e3b4d35",
"text": "The use of previous direct interactions is probably the best way to calculate a reputation but, unfortunately this information is not always available. This is especially true in large multi-agent systems where interaction is scarce. In this paper we present a reputation system that takes advantage, among other things, of social relations between agents to overcome this problem.",
"title": ""
},
{
"docid": "d5d2b61493ed11ee74d566b7713b57ba",
"text": "BACKGROUND\nSymptomatic breakthrough in proton pump inhibitor (PPI)-treated gastro-oesophageal reflux disease (GERD) patients is a common problem with a range of underlying causes. The nonsystemic, raft-forming action of alginates may help resolve symptoms.\n\n\nAIM\nTo assess alginate-antacid (Gaviscon Double Action, RB, Slough, UK) as add-on therapy to once-daily PPI for suppression of breakthrough reflux symptoms.\n\n\nMETHODS\nIn two randomised, double-blind studies (exploratory, n=52; confirmatory, n=262), patients taking standard-dose PPI who had breakthrough symptoms, assessed by Heartburn Reflux Dyspepsia Questionnaire (HRDQ), were randomised to add-on Gaviscon or placebo (20 mL after meals and bedtime). The exploratory study endpoint was change in HRDQ score during treatment vs run-in. The confirmatory study endpoint was \"response\" defined as ≥3 days reduction in the number of \"bad\" days (HRDQ [heartburn/regurgitation] >0.70) during treatment vs run-in.\n\n\nRESULTS\nIn the exploratory study, significantly greater reductions in HRDQ scores (heartburn/regurgitation) were observed in the Gaviscon vs placebo (least squares mean difference [95% CI] -2.10 [-3.71 to -0.48]; P=.012). Post hoc \"responder\" analysis of the exploratory study also revealed significantly more Gaviscon patients (75%) achieved ≥3 days reduction in \"bad\" days vs placebo patients (36%), P=.005. In the confirmatory study, symptomatic improvement was observed with add-on Gaviscon (51%) but there was no significant difference in response vs placebo (48%) (OR (95% CI) 1.15 (0.69-1.91), P=.5939).\n\n\nCONCLUSIONS\nAdding Gaviscon to PPI reduced breakthrough GERD symptoms but a nearly equal response was observed for placebo. Response to intervention may vary according to whether symptoms are functional in origin.",
"title": ""
},
{
"docid": "f87b87af157de5bd5229f3e20a0d12a2",
"text": "The paper describes an improvement of the chopper method for elimination of parasitic voltages in a low resistance comparison and measurement procedure. The basic circuit diagram along with a short description of the working principle are presented and the appropriate low resistance comparator prototype was designed and realized. Preliminary examinations confirm the possibility of measuring extremely low voltages. Very high accuracy in resistance comparison and measurement is achieved (0.08 ppm for 1,000 attempts). Some special critical features in the design are discussed and solutions for overcoming the problems are described.",
"title": ""
},
{
"docid": "8fcc1b7e4602649f66817c4c50e10b3d",
"text": "Conventional wisdom suggests that praising a child as a whole or praising his or her traits is beneficial. Two studies tested the hypothesis that both criticism and praise that conveyed person or trait judgments could send a message of contingent worth and undermine subsequent coping. In Study 1, 67 children (ages 5-6 years) role-played tasks involving a setback and received 1 of 3 forms of criticism after each task: person, outcome, or process criticism. In Study 2, 64 children role-played successful tasks and received either person, outcome, or process praise. In both studies, self-assessments, affect, and persistence were measured on a subsequent task involving a setback. Results indicated that children displayed significantly more \"helpless\" responses (including self-blame) on all dependent measures after person criticism or praise than after process criticism or praise. Thus person feedback, even when positive, can create vulnerability and a sense of contingent self-worth.",
"title": ""
},
{
"docid": "93133be6094bba6e939cef14a72fa610",
"text": "We systematically searched available databases. We reviewed 6,143 studies published from 1833 to 2017. Reports in English, French, German, Italian, and Spanish were considered, as were publications in other languages if definitive treatment and recurrence at specific follow-up times were described in an English abstract. We assessed data in the manner of a meta-analysis of RCTs; further we assessed non-RCTs in the manner of a merged data analysis. In the RCT analysis including 11,730 patients, Limberg & Dufourmentel operations were associated with low recurrence of 0.6% (95%CI 0.3–0.9%) 12 months and 1.8% (95%CI 1.1–2.4%) respectively 24 months postoperatively. Analysing 89,583 patients from RCTs and non-RCTs, the Karydakis & Bascom approaches were associated with recurrence of only 0.2% (95%CI 0.1–0.3%) 12 months and 0.6% (95%CI 0.5–0.8%) 24 months postoperatively. Primary midline closure exhibited long-term recurrence up to 67.9% (95%CI 53.3–82.4%) 240 months post-surgery. For most procedures, only a few RCTs without long term follow up data exist, but substitute data from numerous non-RCTs are available. Recurrence in PSD is highly dependent on surgical procedure and by follow-up time; both must be considered when drawing conclusions regarding the efficacy of a procedure.",
"title": ""
},
{
"docid": "3b125237578f4505a0ca6c9477e2b766",
"text": "With today’s technology, elderly users could be supported in living independently in their own homes for a prolonged period of time. Commercially available products enable remote monitoring of the state of the user, enhance social networks, and even support elderly citizens in their everyday routines. Whereas technology seems to be in place to support elderly users, one might question the value of present solutions in terms of solving real user problems such as loneliness and self-efficacy. Furthermore, products tend to be complex in use and do not relate to the reference framework of elderly users. Consequently, acceptability of many present solutions tends to be low. This paper presents a design vision of assisted living solutions that elderly love to use. Based on earlier work, five concrete design goals have been identified that are specific to assisted living services for elderly users. The vision is illustrated by three examples of ongoing work; these cases present the design process of prototypes that are being tested in the field with elderly users. Even though the example cases are limited in terms of number of participants and quantitative data, the qualitative feedback and design experiences can serve as inspiration for designers of assisted living services.",
"title": ""
},
{
"docid": "0ede49c216f911cd01b3bfcf0c539d6e",
"text": "Distribution patterns along a slope and vertical root distribution were compared among seven major woody species in a secondary forest of the warm-temperate zone in central Japan in relation to differences in soil moisture profiles through a growing season among different positions along the slope. Pinus densiflora, Juniperus rigida, Ilex pedunculosa and Lyonia ovalifolia, growing mostly on the upper part of the slope with shallow soil depth had shallower roots. Quercus serrata and Quercus glauca, occurring mostly on the lower slope with deep soil showed deeper rooting. Styrax japonica, mainly restricted to the foot slope, had shallower roots in spite of growing on the deepest soil. These relations can be explained by the soil moisture profile under drought at each position on the slope. On the upper part of the slope and the foot slope, deep rooting brings little advantage in water uptake from the soil due to the total drying of the soil and no period of drying even in the shallow soil, respectively. However, deep rooting is useful on the lower slope where only the deep soil layer keeps moist. This was supported by better diameter growth of a deep-rooting species on deeper soil sites than on shallower soil sites, although a shallow-rooting species showed little difference between them.",
"title": ""
},
{
"docid": "9f60376e3371ac489b4af90026041fa7",
"text": "There is a substantive body of research focusing on women's experiences of intimate partner violence (IPV), but a lack of qualitative studies focusing on men's experiences as victims of IPV. This article addresses this gap in the literature by paying particular attention to hegemonic masculinities and men's perceptions of IPV. Men ( N = 9) participated in in-depth interviews. Interview data were rigorously subjected to thematic analysis, which revealed five key themes in the men's narratives: fear of IPV, maintaining power and control, victimization as a forbidden narrative, critical understanding of IPV, and breaking the silence. Although the men share similar stories of victimization as women, the way this is influenced by their gendered histories is different. While some men reveal a willingness to disclose their victimization and share similar fear to women victims, others reframe their victim status in a way that sustains their own power and control. The men also draw attention to the contextual realities that frame abuse, including histories of violence against the women who used violence and the realities of communities suffering intergenerational affects of colonized histories. The findings reinforce the importance of in-depth qualitative work toward revealing the context of violence, understanding the impact of fear, victimization, and power/control on men's mental health as well as the outcome of legal and support services and lack thereof. A critical discussion regarding the gendered context of violence, power within relationships, and addressing men's need for support without redefining victimization or taking away from policies and support for women's ongoing victimization concludes the work.",
"title": ""
},
{
"docid": "283c6f04a5409a56fa366832c8a93c9c",
"text": "A substantial body of work has examined how exploitative and exploratory learning processes need to be balanced within an organization in order to increase innovation, productivity, and firm performance. Since exploration and exploitation require different resources, structures, and processes, several approaches to balancing these activities have been suggested; one of which is simultaneous implementation which is termed ambidexterity. In this paper, we adjust the lens and suggest that equally crucial issues to resolve are (a) defining ‘balance’ and (b) determining criteria for assessing ‘appropriate.’ We argue that balance does not necessarily require identical proportions of exploration and exploitation and propose different mixes of these two processes leading to different ambidexterity configurations. Three specific ambidexterity configurations are examined in terms of their distinct contributions to strategic objectives. In addition we argue that several contingency factors (organizational and environmental) influence the relation between particular ambidexterity configurations and performance. Therefore an ambidexterity configurations need to change and evolve to achieve optimum performance over time. We contribute to emerging research in contingency theory, organizational learning, and strategic management.",
"title": ""
},
{
"docid": "e42a1faf3d983bac59c0bfdd79212093",
"text": "L eadership matters, according to prominent leadership scholars (see also Bennis, 2007). But what is leadership? That turns out to be a challenging question to answer. Leadership is a complex and diverse topic, and trying to make sense of leadership research can be an intimidating endeavor. One comprehensive handbook of leadership (Bass, 2008), covering more than a century of scientific study, comprises more than 1,200 pages of text and more than 200 additional pages of references! There is clearly a substantial scholarly body of leadership theory and research that continues to grow each year. Given the sheer volume of leadership scholarship that is available, our purpose is not to try to review it all. That is why our focus is on the nature or essence of leadership as we and our chapter authors see it. But to fully understand and appreciate the nature of leadership, it is essential that readers have some background knowledge of the history of leadership research, the various theoretical streams that have evolved over the years, and emerging issues that are pushing the boundaries of the leadership frontier. Further complicating our task is that more than one hundred years of leadership research have led to several paradigm shifts and a voluminous body of knowledge. On several occasions, scholars of leadership became quite frustrated by the large amount of false starts, incremental theoretical advances, and contradictory findings. As stated more than five decades ago by Warren Bennis (1959, pp. 259–260), “Of all the hazy and confounding areas in social psychology, leadership theory undoubtedly contends for Leadership: Past, Present, and Future",
"title": ""
},
{
"docid": "5b79a4fcedaebf0e64b7627b2d944e22",
"text": "Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems. Here we describe how to build and train self-replicating neural networks. The network replicates itself by learning to output its own weights. The network is designed using a loss function that can be optimized with either gradient-based or nongradient-based methods. We also describe a method we call regeneration to train the network without explicit optimization, by injecting the network with predictions of its own parameters. The best solution for a self-replicating network was found by alternating between regeneration and optimization steps. Finally, we describe a design for a self-replicating neural network that can solve an auxiliary task such as MNIST image classification. We observe that there is a trade-off between the network’s ability to classify images and its ability to replicate, but training is biased towards increasing its specialization at image classification at the expense of replication. This is analogous to the trade-off between reproduction and other tasks observed in nature. We suggest that a selfreplication mechanism for artificial intelligence is useful because it introduces the possibility of continual improvement through natural selection.",
"title": ""
},
{
"docid": "a691642e6d27c0df3508a2ab953e4392",
"text": "Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect for this progress are novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize them according to three dimensions: search space, search strategy, and performance estima-",
"title": ""
},
{
"docid": "d96237fca40ac097e52146549672fbdf",
"text": "Cannabidiol (CBD) is a phytocannabinoid with therapeutic properties for numerous disorders exerted through molecular mechanisms that are yet to be completely identified. CBD acts in some experimental models as an anti-inflammatory, anticonvulsant, anti-oxidant, anti-emetic, anxiolytic and antipsychotic agent, and is therefore a potential medicine for the treatment of neuroinflammation, epilepsy, oxidative injury, vomiting and nausea, anxiety and schizophrenia, respectively. The neuroprotective potential of CBD, based on the combination of its anti-inflammatory and anti-oxidant properties, is of particular interest and is presently under intense preclinical research in numerous neurodegenerative disorders. In fact, CBD combined with Δ(9)-tetrahydrocannabinol is already under clinical evaluation in patients with Huntington's disease to determine its potential as a disease-modifying therapy. The neuroprotective properties of CBD do not appear to be exerted by the activation of key targets within the endocannabinoid system for plant-derived cannabinoids like Δ(9)-tetrahydrocannabinol, i.e. CB(1) and CB(2) receptors, as CBD has negligible activity at these cannabinoid receptors, although certain activity at the CB(2) receptor has been documented in specific pathological conditions (i.e. damage of immature brain). Within the endocannabinoid system, CBD has been shown to have an inhibitory effect on the inactivation of endocannabinoids (i.e. inhibition of FAAH enzyme), thereby enhancing the action of these endogenous molecules on cannabinoid receptors, which is also noted in certain pathological conditions. CBD acts not only through the endocannabinoid system, but also causes direct or indirect activation of metabotropic receptors for serotonin or adenosine, and can target nuclear receptors of the PPAR family and also ion channels.",
"title": ""
},
{
"docid": "609806e76f3f919da03900165c2727b8",
"text": "Modern and powerful mobile devices comprise an attractive target for any potential intruder or malicious code. The usual goal of an attack is to acquire users’ sensitive data or compromise the device so as to use it as a stepping stone (or bot) to unleash a number of attacks to other targets. In this paper, we focus on the popular iPhone device. We create a new stealth and airborne malware namely iSAM able to wirelessly infect and self-propagate to iPhone devices. iSAM incorporates six different malware mechanisms, and is able to connect back to the iSAM bot master server to update its programming logic or to obey commands and unleash a synchronized attack. Our analysis unveils the internal mechanics of iSAM and discusses the way all iSAM components contribute towards achieving its goals. Although iSAM has been specifically designed for iPhone it can be easily modified to attack any iOS-based device.",
"title": ""
},
{
"docid": "6f049f55c1b6f65284c390bd9a2d7511",
"text": "Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve these results, they use millions of parameters to be trained. However, when targetting embedded applications the size of these models becomes problematic. As a consequence, their usage on smartphones or other resource limited devices is prohibited. In this paper we introduce a novel compression method for deep neural networks that is performed during the learning phase. It consists in adding an extra regularization term to the cost function of fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for higher savings in storage consumption. We evaluate our method on two data sets (MNIST and CIFAR10), on which we achieve significantly larger compression rates than state-of-the-art methods.",
"title": ""
},
{
"docid": "ef0e2fb10fe5a3a5b2676f7630989d14",
"text": "This paper presents a novel method of characterizing optically transparent diamond-grid unit cells at millimeter-wave (mmWave) spectrum. The unit cell consists of Ag-alloy grids featuring 2000-Å thickness and $3 \\mu \\mathrm{m}$ grid-width, resulting in 88 % optical transmittance and sheet resistance of $\\pmb{3.22 \\Omega/\\mathrm{sq}}$. The devised characterization method enables accurate and efficient modeling of transparent circuits at mmWave. The validity of this approach is studied by devising an optically transparent patch antenna operating at 30.34 GHz with a measured gain of 3.2 dBi. The featured analysis and demonstration paves way to a novel concept of integrating optically transparent antennas within the active region of display panels in the future.",
"title": ""
},
{
"docid": "f05718832e9e8611b4cd45b68d0f80e3",
"text": "Conflict occurs frequently in any workplace; health care is not an exception. The negative consequences include dysfunctional team work, decreased patient satisfaction, and increased employee turnover. Research demonstrates that training in conflict resolution skills can result in improved teamwork, productivity, and patient and employee satisfaction. Strategies to address a disruptive physician, a particularly difficult conflict situation in healthcare, are addressed.",
"title": ""
}
] |
scidocsrr
|
b036bd83e2c74c99d99e3ee697ecd8e5
|
Graph Classification with 2 D Convolutional Neural Networks
|
[
{
"docid": "d5adbe2a074711bdfcc5f1840f27bac3",
"text": "Graph kernels have emerged as a powerful tool for graph comparison. Most existing graph kernels focus on local properties of graphs and ignore global structure. In this paper, we compare graphs based on their global properties as these are captured by the eigenvectors of their adjacency matrices. We present two algorithms for both labeled and unlabeled graph comparison. These algorithms represent each graph as a set of vectors corresponding to the embeddings of its vertices. The similarity between two graphs is then determined using the Earth Mover’s Distance metric. These similarities do not yield a positive semidefinite matrix. To address for this, we employ an algorithm for SVM classification using indefinite kernels. We also present a graph kernel based on the Pyramid Match kernel that finds an approximate correspondence between the sets of vectors of the two graphs. We further improve the proposed kernel using the Weisfeiler-Lehman framework. We evaluate the proposed methods on several benchmark datasets for graph classification and compare their performance to state-of-the-art graph kernels. In most cases, the proposed algorithms outperform the competing methods, while their time complexity remains very attractive.",
"title": ""
},
{
"docid": "2bf9e347e163d97c023007f4cc88ab02",
"text": "State-of-the-art graph kernels do not scale to large graphs with hundreds of nodes and thousands of edges. In this article we propose to compare graphs by counting graphlets, i.e., subgraphs with k nodes where k ∈ {3, 4, 5}. Exhaustive enumeration of all graphlets being prohibitively expensive, we introduce two theoretically grounded speedup schemes, one based on sampling and the second one specifically designed for bounded degree graphs. In our experimental evaluation, our novel kernels allow us to efficiently compare large graphs that cannot be tackled by existing graph kernels.",
"title": ""
}
] |
[
{
"docid": "a172c51270d6e334b50dcc6233c54877",
"text": "m U biquitous computing enhances computer use by making many computers available throughout the physical environment, while making them effectively invisible to the user. This article explains what is new and different about the computer science involved in ubiquitous computing. First, it provides a brief overview of ubiquitous computing, then elaborates through a series of examples drawn from various subdisciplines of computer science: hardware components (e.g., chips), network protocols, interaction substrates (e.g., software for screens and pens), applications, privacy, and computational methods. Ubiquitous computing offers a framework for new and exciting research across the spectrum of computer science. Since we started this work at Xerox Palo Alto Research Center (PARC) in 1988 a few places have begun work on this possible next-generation computing environment in which each person is continually interacting with hundreds of nearby wirelessly interconnected computers. The goal is to achieve the most effective kind of technology, that which is essentially invisible to the user. To bring computers to this point while retaining their power will require radically new kinds of computers of all sizes and shapes to be available to each person. I call this future world \"Ubiquitous Comput ing\" (Ubicomp) [27]. The research method for ubiquitous computing is standard experimental computer science: the construction of working prototypes of the necessai-y infrastructure in sufficient quantity to debug the viability of the systems in everyday use; ourselves and a few colleagues serving as guinea pigs. This is",
"title": ""
},
{
"docid": "7be1f8be2c74c438b1ed1761e157d3a3",
"text": "The feeding behavior and digestive physiology of the sea cucumber, Apostichopus japonicus are not well understood. A better understanding may provide useful information for the development of the aquaculture of this species. In this article the tentacle locomotion, feeding rhythms, ingestion rate (IR), feces production rate (FPR) and digestive enzyme activities were studied in three size groups (small, medium and large) of sea cucumber under a 12h light/12h dark cycle. Frame-by-frame video analysis revealed that all size groups had similar feeding strategies using a grasping motion to pick up sediment particles. The tentacle insertion rates of the large size group were significantly faster than those of the small and medium-sized groups (P<0.05). Feeding activities investigated by charge coupled device cameras with infrared systems indicated that all size groups of sea cucumber were nocturnal and their feeding peaks occurred at 02:00-04:00. The medium and large-sized groups also had a second feeding peak during the day. Both IR and FPR in all groups were significantly higher at night than those during the daytime (P<0.05). Additionally, the peak activities of digestive enzymes were 2-4h earlier than the peak of feeding. Taken together, these results demonstrated that the light/dark cycle was a powerful environment factor that influenced biological rhythms of A. japonicus, which had the ability to optimize the digestive processes for a forthcoming ingestion.",
"title": ""
},
{
"docid": "85007f98272a3fd355015f9f9931bed1",
"text": "Fully convolutional neural networks (FCNs) have shown outstanding performance in many computer vision tasks including salient object detection. However, there still remains two issues needed to be addressed in deep learning based saliency detection. One is the lack of tremendous amount of annotated data to train a network. The other is the lack of robustness for extracting salient objects in images containing complex scenes. In this paper, we present a new architecture−PDNet, a robust prior-model guided depth-enhanced network for RGB-D salient object detection. In contrast to existing works, in which RGBD values of image pixels are fed directly to a network, the proposed architecture is composed of a master network for processing RGB values, and a sub-network making full use of depth cues and incorporate depth-based features into the master network. To overcome the limited size of the labeled RGB-D dataset for training, we employ a large conventional RGB dataset to pre-train the master network, which proves to contribute largely to the final accuracy. Extensive evaluations over five benchmark datasets demonstrate that our proposed method performs favorably against the state-of-the-art approaches.",
"title": ""
},
{
"docid": "c1d95246f5d1b8c67f4ff4769bb6b9ce",
"text": "BACKGROUND\nA previous open-label study of melatonin, a key substance in the circadian system, has shown effects on migraine that warrant a placebo-controlled study.\n\n\nMETHOD\nA randomized, double-blind, placebo-controlled crossover study was carried out in 2 centers. Men and women, aged 18-65 years, with migraine but otherwise healthy, experiencing 2-7 attacks per month, were recruited from the general population. After a 4-week run-in phase, 48 subjects were randomized to receive either placebo or extended-release melatonin (Circadin®, Neurim Pharmaceuticals Ltd., Tel Aviv, Israel) at a dose of 2 mg 1 hour before bedtime for 8 weeks. After a 6-week washout treatment was switched. The primary outcome was migraine attack frequency (AF). A secondary endpoint was sleep quality assessed by the Pittsburgh Sleep Quality Index (PSQI).\n\n\nRESULTS\nForty-six subjects completed the study (96%). During the run-in phase, the average AF was 4.2 (±1.2) per month and during melatonin treatment the AF was 2.8 (±1.6). However, the reduction in AF during placebo was almost equal (p = 0.497). Absolute risk reduction was 3% (95% confidence interval -15 to 21, number needed to treat = 33). A highly significant time effect was found. The mean global PSQI score did not improve during treatment (p = 0.09).\n\n\nCONCLUSION\nThis study provides Class I evidence that prolonged-release melatonin (2 mg 1 hour before bedtime) does not provide any significant effect over placebo as migraine prophylaxis.\n\n\nCLASSIFICATION OF EVIDENCE\nThis study provides Class I evidence that 2 mg of prolonged release melatonin given 1 hour before bedtime for a duration of 8 weeks did not result in a reduction in migraine frequency compared with placebo (p = 0.497).",
"title": ""
},
{
"docid": "0000bd646e28d5012d7d77e43f75d2f5",
"text": "Classification of temporal textual data sequences is a common task in various domains such as social media and the Web. In this paper we propose to use Hawkes Processes for classifying sequences of temporal textual data, which exploit both temporal and textual information. Our experiments on rumour stance classification on four Twitter datasets show the importance of using the temporal information of tweets along with the textual content.",
"title": ""
},
{
"docid": "982dae78e301aec02012d9834f000d6d",
"text": "This paper investigates a universal approach of synthesizing arbitrary ternary logic circuits in quantum computation based on the truth table technology. It takes into account of the relationship of classical logic and quantum logic circuits. By adding inputs with constant value and garbage outputs, the classical non-reversible logic can be transformed into reversible logic. Combined with group theory, it provides an algorithm using the ternary Swap gate, ternary NOT gate and ternary Toffoli gate library. Simultaneously, the main result shows that the numbers of qutrits we use are minimal compared to other methods. We also illustrate with two examples to test our approach.",
"title": ""
},
{
"docid": "5039733d1fd5361820489549bfd2669f",
"text": "Reporting the economic burden of oral diseases is important to evaluate the societal relevance of preventing and addressing oral diseases. In addition to treatment costs, there are indirect costs to consider, mainly in terms of productivity losses due to absenteeism from work. The purpose of the present study was to estimate the direct and indirect costs of dental diseases worldwide to approximate the global economic impact. Estimation of direct treatment costs was based on a systematic approach. For estimation of indirect costs, an approach suggested by the World Health Organization's Commission on Macroeconomics and Health was employed, which factored in 2010 values of gross domestic product per capita as provided by the International Monetary Fund and oral burden of disease estimates from the 2010 Global Burden of Disease Study. Direct treatment costs due to dental diseases worldwide were estimated at US$298 billion yearly, corresponding to an average of 4.6% of global health expenditure. Indirect costs due to dental diseases worldwide amounted to US$144 billion yearly, corresponding to economic losses within the range of the 10 most frequent global causes of death. Within the limitations of currently available data sources and methodologies, these findings suggest that the global economic impact of dental diseases amounted to US$442 billion in 2010. Improvements in population oral health may imply substantial economic benefits not only in terms of reduced treatment costs but also because of fewer productivity losses in the labor market.",
"title": ""
},
{
"docid": "5fd63f9800b5df10d0c370c0db252b0d",
"text": "This article describes an algorithm for the automated generation of any Euler diagram starting with an abstract description of the diagram. An automated generation mechanism for Euler diagrams forms the foundations of a generation algorithm for notations such as Harel’s higraphs, constraint diagrams and some of the UML notation. An algorithm to generate diagrams is an essential component of a diagram tool for users to generate, edit and reason with diagrams. The work makes use of properties of the dual graph of an abstract diagram to identify which abstract diagrams are “drawable” within given wellformedness rules on concrete diagrams. A Java program has been written to implement the algorithm and sample output is included.",
"title": ""
},
{
"docid": "b27fc98c7e962b29819aa46429a18a9c",
"text": "Large scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that allows for scalable execution on distributed systems naturally. However, there are limitations to this approach which cause vertex centric algorithms to under-perform due to poor compute to communication overhead ratio and slow convergence of iterative superstep. In this paper we introduce GoFFish a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.",
"title": ""
},
{
"docid": "ff3c4893cfb9c3830750e65ec5ddf9ef",
"text": "One of the most successful semi-supervised learning approaches is co-training for multiview data. In co-training, one trains two classifiers, one for each view, and uses the most confident predictions of the unlabeled data for the two classifiers to “teach each other”. In this paper, we extend co-training to learning scenarios without an explicit multi-view representation. Inspired by a theoretical analysis of Balcan et al. (2004), we introduce a novel algorithm that splits the feature space during learning, explicitly to encourage co-training to be successful. We demonstrate the efficacy of our proposed method in a weakly-supervised setting on the challenging Caltech-256 object recognition task, where we improve significantly over previous results by (Bergamo & Torresani, 2010) in almost all training-set size settings.",
"title": ""
},
{
"docid": "2361e70109a3595241b2cdbbf431659d",
"text": "There is a trend in the scientific community to model and solve complex optimization problems by employing natural metaphors. This is mainly due to inefficiency of classical optimization algorithms in solving larger scale combinatorial and/or highly non-linear problems. The situation is not much different if integer and/or discrete decision variables are required in most of the linear optimization models as well. One of the main characteristics of the classical optimization algorithms is their inflexibility to adapt the solution algorithm to a given problem. Generally a given problem is modelled in such a way that a classical algorithm like simplex algorithm can handle it. This generally requires making several assumptions which might not be easy to validate in many situations. In order to overcome these limitations more flexible and adaptable general purpose algorithms are needed. It should be easy to tailor these algorithms to model a given problem as close as to reality. Based on this motivation many nature inspired algorithms were developed in the literature like genetic algorithms, simulated annealing and tabu search. It has also been shown that these algorithms can provide far better solutions in comparison to classical algorithms. A branch of nature inspired algorithms which are known as swarm intelligence is focused on insect behaviour in order to develop some meta-heuristics which can mimic insect's problem solution abilities. Ant colony optimization, particle swarm optimization, wasp nets etc. are some of the well known algorithms that mimic insect behaviour in problem modelling and solution. Artificial Bee Colony (ABC) is a relatively new member of swarm intelligence. ABC tries to model natural behaviour of real honey bees in food foraging. Honey bees use several mechanisms like waggle dance to optimally locate food sources and to search new ones. This makes them a good candidate for developing new intelligent search algorithms. In this chapter an extensive review of work on artificial bee algorithms is given. Afterwards, development of an ABC algorithm for solving generalized assignment problem which is known as NP-hard problem is presented in detail along with some comparisons. It is a well known fact that classical optimization techniques impose several limitations on solving mathematical programming and operational research models. This is mainly due to inherent solution mechanisms of these techniques. Solution strategies of classical optimization algorithms are generally depended on the type of objective and constraint",
"title": ""
},
{
"docid": "cf7af6838ae725794653bfce39c609b8",
"text": "This paper strives to find the sentence best describing the content of an image or video. Different from existing works, which rely on a joint subspace for image / video to sentence matching, we propose to do so in a visual space only. We contribute Word2VisualVec, a deep neural network architecture that learns to predict a deep visual encoding of textual input based on sentence vectorization and a multi-layer perceptron. We thoroughly analyze its architectural design, by varying the sentence vectorization strategy, network depth and the deep feature to predict for image to sentence matching. We also generalize Word2VisualVec for matching a video to a sentence, by extending the predictive abilities to 3-D ConvNet features as well as a visual-audio representation. Experiments on four challenging image and video benchmarks detail Word2VisualVec’s properties, capabilities for image and video to sentence matching, and on all datasets its state-of-the-art results.",
"title": ""
},
{
"docid": "c304ab8c4b08d2d0019bec1bdc437672",
"text": "Highly efficient ammonia synthesis at a low temperature is desirable for future energy and material sources. We accomplished efficient electrocatalytic low-temperature ammonia synthesis with the highest yield ever reported. The maximum ammonia synthesis rate was 30 099 μmol gcat-1 h-1 over a 9.9 wt% Cs/5.0 wt% Ru/SrZrO3 catalyst, which is a very high rate. Proton hopping on the surface of the heterogeneous catalyst played an important role in the reaction, revealed by in situ IR measurements. Hopping protons activate N2 even at low temperatures, and they moderate the harsh reaction condition requirements. Application of an electric field to the catalyst resulted in a drastic decrease in the apparent activation energy from 121 kJ mol-1 to 37 kJ mol-1. N2 dissociative adsorption is markedly promoted by the application of the electric field, as evidenced by DFT calculations. The process described herein opens the door for small-scale, on-demand ammonia synthesis.",
"title": ""
},
{
"docid": "ee4c8c4d9bbd39562ecd644cbc9cde90",
"text": "We consider generic optimization problems that can be formu lated as minimizing the cost of a feasible solution w T x over a combinatorial feasible set F ⊂ {0, 1}. For these problems we describe a framework of risk-averse stochastic problems where the cost vector W has independent random components, unknown at the time of so lution. A natural and important objective that incorporates risk in this stochastic setting is to look for a feasible solution whose stochastic cost has a small tail or a small convex combi nation of mean and standard deviation. Our models can be equivalently reformulated as nonconvex programs for whi ch no efficient algorithms are known. In this paper, we make progress on these hard problems. Our results are several efficient general-purpose approxim ation schemes. They use as a black-box (exact or approximate) the solution to the underlying deterministic pr oblem and thus immediately apply to arbitrary combinatoria l problems. For example, from an available δ-approximation algorithm to the linear problem, we constru ct aδ(1 + ǫ)approximation algorithm for the stochastic problem, which invokes the linear algorithm only a logarithmic number of times in the problem input (and polynomial in 1 ǫ ), for any desired accuracy level ǫ > 0. The algorithms are based on a geometric analysis of the curvature and approximabilit y of he nonlinear level sets of the objective functions.",
"title": ""
},
{
"docid": "d681c9c5a3f1f2069025d605a98bd764",
"text": "The Smart Home concept integrates smart applications in the daily human life. In recent years, Smart Homes have increased security and management challenges due to the low capacity of small sensors, multiple connectivity to the Internet for efficient applications (use of big data and cloud computing), and heterogeneity of home systems, which require inexpert users to configure devices and micro-systems. This article presents current security and management approaches in Smart Homes and shows the good practices imposed on the market for developing secure systems in houses. At last, we propose future solutions for efficiently and securely managing the Smart Homes.",
"title": ""
},
{
"docid": "128ea037369e69aefa90ec37ae1f9625",
"text": "The deep two-stream architecture [23] exhibited excellent performance on video based action recognition. The most computationally expensive step in this approach comes from the calculation of optical flow which prevents it to be real-time. This paper accelerates this architecture by replacing optical flow with motion vector which can be obtained directly from compressed videos without extra calculation. However, motion vector lacks fine structures, and contains noisy and inaccurate motion patterns, leading to the evident degradation of recognition performance. Our key insight for relieving this problem is that optical flow and motion vector are inherent correlated. Transferring the knowledge learned with optical flow CNN to motion vector CNN can significantly boost the performance of the latter. Specifically, we introduce three strategies for this, initialization transfer, supervision transfer and their combination. Experimental results show that our method achieves comparable recognition performance to the state-of-the-art, while our method can process 390.7 frames per second, which is 27 times faster than the original two-stream method.",
"title": ""
},
{
"docid": "9581483f301b3522b88f6690b2668217",
"text": "AI researchers employ not only the scientific method, but also methodology from mathematics and engineering. However, the use of the scientific method – specifically hypothesis testing – in AI is typically conducted in service of engineering objectives. Growing interest in topics such as fairness and algorithmic bias show that engineering-focused questions only comprise a subset of the important questions about AI systems. This results in the AI Knowledge Gap: the number of unique AI systems grows faster than the number of studies that characterize these systems’ behavior. To close this gap, we argue that the study of AI could benefit from the greater inclusion of researchers who are well positioned to formulate and test hypotheses about the behavior of AI systems. We examine the barriers preventing social and behavioral scientists from conducting such studies. Our diagnosis suggests that accelerating the scientific study of AI systems requires new incentives for academia and industry, mediated by new tools and institutions. To address these needs, we propose a two-sided marketplace called TuringBox. On one side, AI contributors upload existing and novel algorithms to be studied scientifically by others. On the other side, AI examiners develop and post machine intelligence tasks designed to evaluate and characterize algorithmic behavior. We discuss this market’s potential to democratize the scientific study of AI behavior, and thus narrow the AI Knowledge Gap. 1 The Many Facets of AI Research Although AI is a sub-discipline of computer science, AI researchers do not exclusively use the scientific method in their work. For example, the methods used by early AI researchers often drew from logic, a subfield of mathematics, and are distinct from the scientific method we think of today. Indeed AI has adopted many techniques and approaches over time. In this section, we distinguish and explore the history of these ∗Equal contribution. methodologies with a particular emphasis on characterizing the evolving science of AI.",
"title": ""
},
{
"docid": "8b9bf16bd915d795f62aae155c1ecf06",
"text": "Wearing a wet diaper for prolonged periods, cause diaper rash. This paper presents an automated alarm system for Diaper wet. The design system using an advanced RF transceiver and GSM system to sound an alarm on the detection of moisture in the diaper to alert the intended person to change the diaper. A wet diaper detector comprises an elongated pair of spaced fine conductors which form the wet sensor. The sensor is positioned between the layers of a diaper in a region subject to wetness. The detector and RF transmitter are adapted to be easily coupled to the protruding end of the elongated sensor. When the diaper is wet the resistance between the spaced conductors falls below a pre-established value. Consequently, the detector and RF transmitter sends a signal to the RF receiver and the GSM to produce the require alarm. When the diaper is changed, the detector unit is decoupled from the pressing studs for reuse and the conductor is discarded along with the soiled diaper. Our experimental tests show that the designed system perfectly produces the intended alarm and can be adjusted for different level of wet if needed.",
"title": ""
},
{
"docid": "cd449faa3508b96cd827647de9f9c0cb",
"text": "Living with unrelenting pain (chronic pain) is maladaptive and is thought to be associated with physiological and psychological modifications, yet there is a lack of knowledge regarding brain elements involved in such conditions. Here, we identify brain regions involved in spontaneous pain of chronic back pain (CBP) in two separate groups of patients (n = 13 and n = 11), and contrast brain activity between spontaneous pain and thermal pain (CBP and healthy subjects, n = 11 each). Continuous ratings of fluctuations of spontaneous pain during functional magnetic resonance imaging were separated into two components: high sustained pain and increasing pain. Sustained high pain of CBP resulted in increased activity in the medial prefrontal cortex (mPFC; including rostral anterior cingulate). This mPFC activity was strongly related to intensity of CBP, and the region is known to be involved in negative emotions, response conflict, and detection of unfavorable outcomes, especially in relation to the self. In contrast, the increasing phase of CBP transiently activated brain regions commonly observed for acute pain, best exemplified by the insula, which tightly reflected duration of CBP. When spontaneous pain of CBP was contrasted to thermal stimulation, we observe a double-dissociation between mPFC and insula with the former correlating only to intensity of spontaneous pain and the latter correlating only to pain intensity for thermal stimulation. These findings suggest that subjective spontaneous pain of CBP involves specific spatiotemporal neuronal mechanisms, distinct from those observed for acute experimental pain, implicating a salient role for emotional brain concerning the self.",
"title": ""
},
{
"docid": "60cc418b3b5a47e8f636b6c54a0a2d5e",
"text": "Continued use of petroleum sourced fuels is now widely recognized as unsustainable because of depleting supplies and the contribution of these fuels to the accumulation of carbon dioxide in the environment. Renewable, carbon neutral, transport fuels are necessary for environmental and economic sustainability. Biodiesel derived from oil crops is a potential renewable and carbon neutral alternative to petroleum fuels. Unfortunately, biodiesel from oil crops, waste cooking oil and animal fat cannot realistically satisfy even a small fraction of the existing demand for transport fuels. As demonstrated here, microalgae appear to be the only source of renewable biodiesel that is capable of meeting the global demand for transport fuels. Like plants, microalgae use sunlight to produce oils but they do so more efficiently than crop plants. Oil productivity of many microalgae greatly exceeds the oil productivity of the best producing oil crops. Approaches for making microalgal biodiesel economically competitive with petrodiesel are discussed.",
"title": ""
}
] |
scidocsrr
|
c4574041492b3cf56d65be37fee8bf90
|
Variational Policy Search via Trajectory Optimization
|
[
{
"docid": "72623aed95db9dff8a59797be1da8ffd",
"text": "Many policy search algorithms minimize the Kullback-Leibler (KL) divergence to a certain target distribution in order to fit their policy. The commonly used KL-divergence forces the resulting policy to be ’reward-attracted’. The policy tries to reproduce all positively rewarded experience while negative experience is neglected. However, the KL-divergence is not symmetric and we can also minimize the the reversed KL-divergence, which is typically used in variational inference. The policy now becomes ’cost-averse’. It tries to avoid reproducing any negatively-rewarded experience while maximizing exploration. Due to this ’cost-averseness’ of the policy, Variational Inference for Policy Search (VIP) has several interesting properties. It requires no kernelbandwith nor exploration rate, such settings are determined automatically by the inference. The algorithm meets the performance of state-of-theart methods while being applicable to simultaneously learning in multiple situations. We concentrate on using VIP for policy search in robotics. We apply our algorithm to learn dynamic counterbalancing of different kinds of pushes with human-like 2-link and 4-link robots.",
"title": ""
},
{
"docid": "c0dd6bb821323c3a429458d74cce1d95",
"text": "We address the problem of learning robot control by model-free reinforcement learning (RL). We adopt the probabilistic model of Vlassis and Toussaint (2009) for model-free RL, and we propose a Monte Carlo EM algorithm (MCEM) for control learning that searches directly in the space of controller parameters using information obtained from randomly generated robot trajectories. MCEM is related to, and generalizes, the PoWER algorithm of Kober and Peters (2009). In the finite-horizon case MCEM reduces precisely to PoWER, but MCEM can also handle the discounted infinite-horizon case. An interesting result is that the infinite-horizon case can be viewed as a ‘randomized’ version of the finite-horizon case, in the sense that the length of each sampled trajectory is a random draw from an appropriately constructed geometric distribution. We provide some preliminary experiments demonstrating the effects of fixed (PoWER) vs randomized (MCEM) horizon length in two simulated and one real robot control tasks.",
"title": ""
}
] |
[
{
"docid": "fac476744429cacfe1c07ec19ee295eb",
"text": "One effort to protect the network from the threats of hackers, crackers and security experts is to build the Intrusion Detection System (IDS) on the network. The problem arises when new attacks emerge in a relatively fast, so a network administrator must create their own signature and keep updated on new types of attacks that appear. In this paper, it will be made an Intelligence Intrusion Detection System (IIDS) where the Hierarchical Clustering algorithm as an artificial intelligence is used as pattern recognition and implemented on the Snort IDS. Hierarchical clustering applied to the training data to determine the number of desired clusters. Labeling cluster is then performed; there are three labels of cluster, namely Normal, High Risk and Critical. Centroid Linkage Method used for the test data of new attacks. Output system is used to update the Snort rule database. This research is expected to help the Network Administrator to monitor and learn some new types of attacks. From the result, this system is already quite good to recognize certain types of attacks like exploit, buffer overflow, DoS and IP Spoofing. Accuracy performance of this system for the mentioned above type of attacks above is 90%.",
"title": ""
},
{
"docid": "e0c52b0fdf2d67bca4687b8060565288",
"text": "Large graph databases are commonly collected and analyzed in numerous domains. For reasons related to either space efficiency or for privacy protection (e.g., in the case of social network graphs), it sometimes makes sense to replace the original graph with a summary, which removes certain details about the original graph topology. However, this summarization process leaves the database owner with the challenge of processing queries that are expressed in terms of the original graph, but are answered using the summary. In this paper, we propose a formal semantics for answering queries on summaries of graph structures. At its core, our formulation is based on a random worlds model. We show that important graph-structure queries (e.g., adjacency, degree, and eigenvector centrality) can be answered efficiently and in closed form using these semantics. Further, based on this approach to query answering, we formulate three novel graph partitioning/compression problems. We develop algorithms for finding a graph summary that least affects the accuracy of query results, and we evaluate our proposed algorithms using both real and synthetic data.",
"title": ""
},
{
"docid": "d415096fccd9b0f082b7202d0c9f32fe",
"text": "Penil duplikasyon(diğer adıyla Diphallia veya diphallasparatus) beş milyon da bir görülen nadir bir malformationdur. Sıklıkla anorektal, üriner ve vertebral anomalilerle birliktedir. Hastanemiz üroloji kliniğine penil şekil bozukluğu şikayetiyle başvuran 15 yaşında erkek hasta anomalinin ve eşlik edebilecek diğer patolojilerin görüntülenebilmesi amacıyla radyoloji ünitesine gönderilmişti. MR incelemede tam olmayan psödoduplikasyon ile uyumlu olan ve diğer kanal ile birleşen ikinci bir uretra distalde aksesuar glans düzeyinde künt olarak sonlanmaktaydı. Doppler USG inceleme ile korpus kavernozum ve korpus spongiozum düzeylerinde vasküler yapılar değerlendirildi. Nadir rastlanan penil duplikasyon olgumuzda yapılan MR , sonografi ve indirekt röntgen incelemelerinin sonuçlarını literatürde rastlanan az sayıda benzer olguları da gözden geçirerek sunmayı amaçladık. Abstract",
"title": ""
},
{
"docid": "f7d56588da8f5c5ac0f1481e5f2286b4",
"text": "Machine learning is an established method of selecting algorithms to solve hard search problems. Despite this, to date no systematic comparison and evaluation of the different techniques has been performed and the performance of existing systems has not been critically compared to other approaches. We compare machine learning techniques for algorithm selection on real-world data sets of hard search problems. In addition to well-established approaches, for the first time we also apply statistical relational learning to this problem. We demonstrate that most machine learning techniques and existing systems perform less well than one might expect. To guide practitioners, we close by giving clear recommendations as to which machine learning techniques are likely to perform well based on our experiments.",
"title": ""
},
{
"docid": "af952f9368761c201c5dfe4832686e87",
"text": "The field of service design is expanding rapidly in practice, and a body of formal research is beginning to appear to which the present article makes an important contribution. As innovations in services develop, there is an increasing need not only for research into emerging practices and developments but also into the methods that enable, support and promote such unfolding changes. This article tackles this need directly by referring to a large design research project, and performing a related practicebased inquiry into the co-design and development of methods for fostering service design in organizations wishing to improve their service offerings to customers. In particular, with reference to a funded four-year research project, one aspect is elaborated on that uses cards as a method to focus on the importance and potential of touch-points in service innovation. Touch-points are one of five aspects in the project that comprise a wider, integrated model and means for implementing innovations in service design. Touch-points are the points of contact between a service provider and customers. A customer might utilise many different touch-points as part of a use scenario (often called a customer journey). For example, a bank’s touch points include its physical buildings, web-site, physical print-outs, self-service machines, bank-cards, customer assistants, call-centres, telephone assistance etc. Each time a person relates to, or interacts with, a touch-point, they have a service-encounter. This gives an experience and adds something to the person’s relationship with the service and the service provider. The sum of all experiences from touch-point interactions colours their opinion of the service (and the service provider). Touch-points are one of the central aspects of service design. A commonly used definition of service design is “Design for experiences that happen over time and across different touchpoints” (ServiceDesign.org). As this definition shows, touchpoints are often cited as one of the major elements of service",
"title": ""
},
{
"docid": "a7089d7b076d2fb974e95985b20d5fa5",
"text": "In this paper, we use a simple concept based on k-reverse nearest neighbor digraphs, to develop a framework RECORD for clustering and outlier detection. We developed three algorithms - (i) RECORD algorithm (requires one parameter), (ii) Agglomerative RECORD algorithm (no parameters required) and (iii) Stability-based RECORD algorithm (no parameters required). Our experimental results with published datasets, synthetic and real-life datasets show that RECORD not only handles noisy data, but also identifies the relevant clusters. Our results are as good as (if not better than) the results got from other algorithms.",
"title": ""
},
{
"docid": "79b26ac97deb39c4de11a87604003f26",
"text": "This paper presents a novel wheel-track-Leg hybrid Locomotion Mechanism that has a compact structure. Compared to most robot wheels that have a rigid round rim, the transformable wheel with a flexible rim can switch to track mode for higher efficiency locomotion on swampy terrain or leg mode for better over-obstacle capability on rugged road. In detail, the wheel rim of this robot is cut into four end-to-end circles to make it capable of transforming between a round circle with a flat ring (just like “O” and “∞”) to change the contact type between transformable wheels with the ground. The transformation principle and constraint conditions between different locomotion modes are explained. The driving methods and locomotion strategies on various terrains of the robot are analyzed. Meanwhile, an initial experiment is conducted to verify the design.",
"title": ""
},
{
"docid": "40b5929886bc0b924ff2de9ad788f515",
"text": "Accurate measurement of the rotor angle and speed of synchronous generators is instrumental in developing powerful local or wide-area control and monitoring systems to enhance power grid stability and reliability. Exogenous input signals such as field voltage and mechanical torque are critical information in this context, but obtaining them raises significant logistical challenges, which in turn complicates the estimation of the generator dynamic states from easily available terminal phasor measurement unit (PMU) signals only. To overcome these issues, the authors of this paper employ the extended Kalman filter with unknown inputs, referred to as the EKF-UI technique, for decentralized dynamic state estimation of a synchronous machine states using terminal active and reactive powers, voltage phasor and frequency measurements. The formulation is fully decentralized without single-machine infinite bus (SMIB) or excitation model assumption so that only local information is required. It is demonstrated that using the decentralized EKF-UI scheme, synchronous machine states can be estimated accurately enough to enable wide-area power system stabilizers (WA-PSS) and system integrity protection schemes (SIPS). Simulation results on New-England test system, Hydro-Québec simplified system, and Kundur network highlight the efficiency of the proposed method under fault conditions with electromagnetic transients and full-order generator models in realistic multi-machine setups.",
"title": ""
},
{
"docid": "a40c8b124eccc4e3651d6ef5d6de547f",
"text": "This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered.",
"title": ""
},
{
"docid": "9c2c74da1e0f5ea601e50f257015c5b3",
"text": "We present a new lock-based algorithm for concurrent manipulation of a binary search tree in an asynchronous shared memory system that supports search, insert and delete operations. Some of the desirable characteristics of our algorithm are: (i) a search operation uses only read and write instructions, (ii) an insert operation does not acquire any locks, and (iii) a delete operation only needs to lock up to four edges in the absence of contention. Our algorithm is based on an internal representation of a search tree and it operates at edge-level (locks edges) rather than at node-level (locks nodes); this minimizes the contention window of a write operation and improves the system throughput. Our experiments indicate that our lock-based algorithm outperforms existing algorithms for a concurrent binary search tree for medium-sized and larger trees, achieving up to 59% higher throughput than the next best algorithm.",
"title": ""
},
{
"docid": "d4fa449a988fa2595284cce88ea5087e",
"text": "Global warming and price volatility are increasing uncertainty for the future of agriculture. Therefore, agricultural systems must be sustainable not only under average conditions, but also under extreme changes of productivity, economy, environment and social context. Here, we review four concepts: stability, robustness, vulnerability and resilience. Those concepts are commonly used but are sometimes difficult to distinguish due to the lack of clear boundaries. Here, we clarify the role of these concepts in addressing agronomic issues. Our main findings are as follows: (1) agricultural systems face different types of perturbations, from small and usual perturbations to extreme and unpredictable changes; (2) stability, robustness, vulnerability and resilience have been increasingly applied to analyze the agricultural context in order to predict the system response under changing conditions; (3) the four concepts are distinguished by the nature of the system components and by the type of perturbation studied; (4) assessment methods must be tested under contrasted situations; and (5) the major options allowing system adaptation under extreme and unpredictable changes are the increase of diversity and the increase of the adaptive capacity.",
"title": ""
},
{
"docid": "04c0a4613ab0ec7fd77ac5216a17bd1d",
"text": "Many contemporary biomedical applications such as physiological monitoring, imaging, and sequencing produce large amounts of data that require new data processing and visualization algorithms. Algorithms such as principal component analysis (PCA), singular value decomposition and random projections (RP) have been proposed for dimensionality reduction. In this paper we propose a new random projection version of the fuzzy c-means (FCM) clustering algorithm denoted as RPFCM that has a different ensemble aggregation strategy than the one previously proposed, denoted as ensemble FCM (EFCM). RPFCM is more suitable than EFCM for big data sets (large number of points, n). We evaluate our method and compare it to EFCM on synthetic and real datasets.",
"title": ""
},
{
"docid": "63cc929e358746526b157ded5ff4b2c8",
"text": "This paper asks how internet use, citizen satisfaction with e-government and citizen trust in government are interrelated. Prior research has found that agencies stress information and service provision on the Web (oneway e-government strategy), but have generally ignore applications that would enhance citizen-government interaction (two-way e-government strategy). Based on a review of the literature, we develop hypotheses about how two facets of e-democracy – transparency and interactivity – may affect citizen trust in government. Using data obtained from the Council on Excellence in Government, we apply a two stage multiple equation model. Findings indicate that internet use is positively associated with transparency satisfaction but negatively associated with interactivity satisfaction, and that both interactivity and transparency are positively associated with citizen trust in government. We conclude that the one-way e-transparency strategy may be insufficient, and that in the future agencies should make and effort to enhance e-interactivity.",
"title": ""
},
{
"docid": "69d826aa8309678cf04e2870c23a99dd",
"text": "Contemporary analyses of cell metabolism have called out three metabolites: ATP, NADH, and acetyl-CoA, as sentinel molecules whose accumulation represent much of the purpose of the catabolic arms of metabolism and then drive many anabolic pathways. Such analyses largely leave out how and why ATP, NADH, and acetyl-CoA (Figure 1 ) at the molecular level play such central roles. Yet, without those insights into why cells accumulate them and how the enabling properties of these key metabolites power much of cell metabolism, the underlying molecular logic remains mysterious. Four other metabolites, S-adenosylmethionine, carbamoyl phosphate, UDP-glucose, and Δ2-isopentenyl-PP play similar roles in using group transfer chemistry to drive otherwise unfavorable biosynthetic equilibria. This review provides the underlying chemical logic to remind how these seven key molecules function as mobile packets of cellular currencies for phosphoryl transfers (ATP), acyl transfers (acetyl-CoA, carbamoyl-P), methyl transfers (SAM), prenyl transfers (IPP), glucosyl transfers (UDP-glucose), and electron and ADP-ribosyl transfers (NAD(P)H/NAD(P)+) to drive metabolic transformations in and across most primary pathways. The eighth key metabolite is molecular oxygen (O2), thermodynamically activated for reduction by one electron path, leaving it kinetically stable to the vast majority of organic cellular metabolites.",
"title": ""
},
{
"docid": "40db41aa0289dbf45bef067f7d3e3748",
"text": "Maximum reach envelopes for the 5th, 50th and 95th percentile reach lengths of males and females in seated and standing work positions were determined. The use of a computerized potentiometric measurement system permitted functional reach measurement in 15 min for each subject. The measurement system captured reach endpoints in a dynamic mode while the subjects were describing their maximum reach envelopes. An unbiased estimate of the true reach distances was made through a systematic computerized data averaging process. The maximum reach envelope for the standing position was significantly (p<0.05) larger than the corresponding measure in the seated position for both the males and females. The average reach length of the female was 13.5% smaller than that for the corresponding male. Potential applications of this research include designs of industrial workstations, equipment, tools and products.",
"title": ""
},
{
"docid": "480c8d16f3e58742f0164f8c10a206dd",
"text": "Dyna is an architecture for reinforcement learning agents that interleaves planning, acting, and learning in an online setting. This architecture aims to make fuller use of limited experience to achieve better performance with fewer environmental interactions. Dyna has been well studied in problems with a tabular representation of states, and has also been extended to some settings with larger state spaces that require function approximation. However, little work has studied Dyna in environments with high-dimensional state spaces like images. In Dyna, the environment model is typically used to generate one-step transitions from selected start states. We applied one-step Dyna to several games from the Arcade Learning Environment and found that the model-based updates offered surprisingly little benefit, even with a perfect model. However, when the model was used to generate longer trajectories of simulated experience, performance improved dramatically. This observation also holds when using a model that is learned from experience; even though the learned model is flawed, it can still be used to accelerate learning.",
"title": ""
},
{
"docid": "8609f49cc78acc1ba25e83c8e68040a6",
"text": "Time series shapelets are small, local patterns in a time series that are highly predictive of a class and are thus very useful features for building classifiers and for certain visualization and summarization tasks. While shapelets were introduced only recently, they have already seen significant adoption and extension in the community. Despite their immense potential as a data mining primitive, there are two important limitations of shapelets. First, their expressiveness is limited to simple binary presence/absence questions. Second, even though shapelets are computed offline, the time taken to compute them is significant. In this work, we address the latter problem by introducing a novel algorithm that finds shapelets in less time than current methods by an order of magnitude. Our algorithm is based on intelligent caching and reuse of computations, and the admissible pruning of the search space. Because our algorithm is so fast, it creates an opportunity to consider more expressive shapelet queries. In particular, we show for the first time an augmented shapelet representation that distinguishes the data based on conjunctions or disjunctions of shapelets. We call our novel representation Logical-Shapelets. We demonstrate the efficiency of our approach on the classic benchmark datasets used for these problems, and show several case studies where logical shapelets significantly outperform the original shapelet representation and other time series classification techniques. We demonstrate the utility of our ideas in domains as diverse as gesture recognition, robotics, and biometrics.",
"title": ""
},
{
"docid": "1a095e16a26837e65a1c6692190b34c6",
"text": "Increasing documentation on the size and appearance of muscles in the lumbar spine of low back pain (LBP) patients is available in the literature. However, a comparative study between unoperated chronic low back pain (CLBP) patients and matched (age, gender, physical activity, height and weight) healthy controls with regard to muscle cross-sectional area (CSA) and the amount of fat deposits at different levels has never been undertaken. Moreover, since a recent focus in the physiotherapy management of patients with LBP has been the specific training of the stabilizing muscles, there is a need for quantifying and qualifying the multifidus. A comparative study between unoperated CLBP patients and matched control subjects was conducted. Twenty-three healthy volunteers and 32 patients were studied. The muscle and fat CSAs were derived from standard computed tomography (CT) images at three different levels, using computerized image analysis techniques. The muscles studied were: the total paraspinal muscle mass, the isolated multifidus and the psoas. The results showed that only the CSA of the multifidus and only at the lowest level (lower end-plate of L4) was found to be statistically smaller in LBP patients. As regards amount of fat, in none of the three studied muscles was a significant difference found between the two groups. An aetiological relationship between atrophy of the multifidus and the occurrence of LBP can not be ruled out as a possible explanation. Alternatively, atrophy may be the consequence of LBP: after the onset of pain and possible long-loop inhibition of the multifidus a combination of reflex inhibition and substitution patterns of the trunk muscles may work together and could cause a selective atrophy of the multifidus. Since this muscle is considered important for lumbar segmental stability, the phenomenon of atrophy may be a reason for the high recurrence rate of LBP.",
"title": ""
},
{
"docid": "8f77e4d133b6ab04438ddd19a037bbb6",
"text": "Radio Frequency Identification (RFID) technology provides new and exciting opportunities for increasing organizational, financial, and operational performance. With its focus on organizational efficiency and effectiveness, RFID technology is superior to barcodes in its ability to provide source automation features that increase the speed and volume of data collection for analysis. Today, applications that employ RFID are growing rapidly and this technology is in a continuous state of evolution and growth. As it continues to progress, RFID provides us with new opportunities to use business intelligence (BI) to monitor organizational operations and learn more about markets, as well as consumer attitudes, behaviors, and product preferences. This technology can even be used to prevent potentially faulty or spoiled products from ending up in the hands of consumers. However, RFID offers significant challenges to organizations that attempt to employ this technology. Most significantly, there exists the potential for RFID to overwhelm data collection and BI analytic efforts if organizations fail to effectively address RFID data integration issues. To this end, the purpose of this article is to explicate the dynamic technology of RFID and how it is being used today. Additionally, this article will provide insights into how RFID technology is evolving and how this technology relates to BI and issues related to data integration. This knowledge has never been more essential. While IT academic research into RFID development and issues has declined in recent years, RFID continues to be a vital area of exploration, especially as it relates to BI in the 21st century.",
"title": ""
},
{
"docid": "3f79f0eee8878fd43187e9d48531a221",
"text": "In this paper, the design and development of a portable classroom attendance system based on fingerprint biometric is presented. Among the salient aims of implementing a biometric feature into a portable attendance system is security and portability. The circuit of this device is strategically constructed to have an independent source of energy to be operated, as well as its miniature design which made it more efficient in term of its portable capability. Rather than recording the attendance in writing or queuing in front of class equipped with fixed fingerprint or smart card reader. This paper introduces a portable fingerprint based biometric attendance system which addresses the weaknesses of the existing paper based attendance method or long time queuing. In addition, our biometric fingerprint based system is encrypted which preserves data integrity.",
"title": ""
}
] |
scidocsrr
|
4090183485749d965733d1653a0b7ba8
|
Detecting anomalous behavior of PLC using semi-supervised machine learning
|
[
{
"docid": "ba7d80246069938fbb0e8bc0170f50be",
"text": "Supervisory Control and Data Acquisition (SCADA) system is an industrial control automated system. It is built with multiple Programmable Logic Controllers (PLCs). PLC is a special form of microprocessor-based controller with proprietary operating system. Due to the unique architecture of PLC, traditional digital forensic tools are difficult to be applied. In this paper, we propose a program called Control Program Logic Change Detector (CPLCD), which works with a set of Detection Rules (DRs) to detect and record undesired incidents on interfering normal operations of PLC. In order to prove the feasibility of our solution, we set up two experiments for detecting two common PLC attacks. Moreover, we illustrate how CPLCD and network analyzer Wireshark could work together for performing digital forensic investigation on PLC.",
"title": ""
}
] |
[
{
"docid": "827aa405d879448d2c5151406b180791",
"text": "Multiple natural and anthropogenic stressors impact coral reefs across the globe leading to declines of coral populations, but the relative importance of different stressors and the ways they interact remain poorly understood. Because coral reefs exist in environments commonly impacted by multiple stressors simultaneously, understanding their interactions is of particular importance. To evaluate the role of multiple stressors we experimentally manipulated three stressors (herbivore abundance, nutrient supply, and sediment loading) in plots on a natural reef in the Gulf of Panamá in the Eastern Tropical Pacific. Monitoring of the benthic community (coral, macroalgae, algal turf, and crustose coralline algae) showed complex responses with all three stressors impacting the community, but at different times, in different combinations, and with varying effects on different community members. Reduction of top–down control in combination with sediment addition had the strongest effect on the community, and led to approximately three times greater algal biomass. Coral cover was reduced in all experimental units with a negative effect of nutrients over time and a synergistic interaction between herbivore exclosures and sediment addition. In contrast, nutrient and sediment additions interacted antagonistically in their impacts on crustose coralline algae and turf algae so that in combination the treatments limited each other’s effects. Interactions between stressors and temporal variability indicated that, while each stressor had the potential to impact community structure, their combinations and the broader environmental conditions under which they acted strongly influenced their specific effects. Thus, it is critical to evaluate the effects of stressors on community dynamics not only independently but also under different combinations or environmental conditions to understand how those effects will be played out in more realistic scenarios.",
"title": ""
},
{
"docid": "461d42e45c0ebcfeeb074904957b943c",
"text": "Quadratic discriminant analysis is a common tool for classification, but estimation of the Gaussian parameters can be ill-posed. This paper contains theoretical and algorithmic contributions to Bayesian estimation for quadratic discriminant analysis. A distribution-based Bayesian classifier is derived using information geometry. Using a calculus of variations approach to define a functional Bregman divergence for distributions, it is shown that the Bayesian distribution-based classifier that minimizes the expected Bregman divergence of each class conditional distribution also minimizes the expected misclassification cost. A series approximation is used to relate regularized discriminant analysis to Bayesian discriminant analysis. A new Bayesian quadratic discriminant analysis classifier is proposed where the prior is defined using a coarse estimate of the covariance based on the training data; this classifier is termed BDA7. Results on benchmark data sets and simulations show that BDA7 performance is competitive with, and in some cases significantly better than, regularized quadratic discriminant analysis and the cross-validated Bayesian quadratic discriminant analysis classifier Quadratic Bayes.",
"title": ""
},
{
"docid": "db2160b80dd593c33661a16ed2e404d1",
"text": "Steganalysis tools play an important part in saving time and providing new angles of attack for forensic analysts. StegExpose is a solution designed for use in the real world, and is able to analyse images for LSB steganography in bulk using proven attacks in a time efficient manner. When steganalytic methods are combined intelligently, they are able generate even more accurate results. This is the prime focus of StegExpose.",
"title": ""
},
{
"docid": "faad414eebea949d944e045f9cec3cf4",
"text": "This note introduces practical set invariance notions for physically interconnected, discrete–time systems, subject to additive but bounded disturbances. The developed approach provides a decentralized, non–conservative and computationally tractable way to study desirable robust positive invariance and stability notions for the overall system as well as to guarantee safe and independent operation of the constituting subsystems. These desirable properties are inherited, under mild assumptions, from the classical stability and invariance properties of the associated vector–valued dynamics which capture in a simple but appropriate and non– conservative way the dynamical behavior induced by the underlying set–dynamics of interest.",
"title": ""
},
{
"docid": "1ae3eb81ae75f6abfad4963ee0056be5",
"text": "Due to the shared responsibility model of clouds, tenants have to manage the security of their workloads and data. Developing security solutions using VMs or containers creates further problems as these resources also need to be secured. In this paper, we advocate for taking a serverless approach by proposing six serverless design patterns to build security services in the cloud. For each design pattern, we describe the key advantages and present applications and services utilizing the pattern. Using the proposed patterns as building blocks, we introduce a threat-intelligence platform that collects logs from various sources, alerts malicious activities, and takes actions against such behaviors. We also discuss the limitations of serverless design and how future implementations can overcome those limitations.",
"title": ""
},
{
"docid": "ff35f4bd47572dd6e95b1e3b5b1fb129",
"text": "This survey presents an overview of verification techniques for autonomous systems, with a focus on safety-critical autonomous cyber-physical systems (CPS) and subcomponents thereof. Autonomy in CPS is enabling by recent advances in artificial intelligence (AI) and machine learning (ML) through approaches such as deep neural networks (DNNs), embedded in so-called learning enabled components (LECs) that accomplish tasks from classification to control. Recently, the formal methods and formal verification community has developed methods to characterize behaviors in these LECs with eventual goals of formally verifying specifications for LECs, and this article presents a survey of many of these recent",
"title": ""
},
{
"docid": "4eb1e28d62af4a47a2e8dc795b89cc09",
"text": "This paper describes a new computational finance approach. This approach combines pattern recognition techniques with an evolutionary computation kernel applied to financial markets time series in order to optimize trading strategies. Moreover, for pattern matching a template-based approach is used in order to describe the desired trading patterns. The parameters for the pattern templates, as well as, for the decision making rules are optimized using a genetic algorithm kernel. The approach was tested considering actual data series and presents a robust profitable trading strategy which clearly beats the market, S&P 500 index, reducing the investment risk significantly.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "84750fa3f3176d268ae85830a87f7a24",
"text": "Context: The pull-based model, widely used in distributed software development, offers an extremely low barrier to entry for potential contributors (anyone can submit of contributions to any project, through pull-requests). Meanwhile, the project’s core team must act as guardians of code quality, ensuring that pull-requests are carefully inspected before being merged into the main development line. However, with pull-requests becoming increasingly popular, the need for qualified reviewers also increases. GitHub facilitates this, by enabling the crowd-sourcing of pull-request reviews to a larger community of coders than just the project’s core team, as a part of their social coding philosophy. However, having access to more potential reviewers does not necessarily mean that it’s easier to find the right ones (the “needle in a haystack” problem). If left unsupervised, this process may result in communication overhead and delayed pull-request processing. Objective: This study aims to investigate whether and how previous approaches used in bug triaging and code review can be adapted to recommending reviewers for pull-requests, and how to improve the recommendation performance. Method: First, we extend three typical approaches used in bug triaging and code review for the new challenge of assigning reviewers to pull-requests. Second, we analyze social relations between contributors and reviewers, and propose a novel approach by mining each project’s comment networks (CNs). Finally, we combine the CNs with traditional approaches, and evaluate the effectiveness of all these methods on 84 GitHub projects through both quantitative and qualitative analysis. Results: We find that CN-based recommendation can achieve, by itself, similar performance as the traditional approaches. However, the mixed approaches can achieve significant improvements compared to using either of them independently. Conclusion: Our study confirms that traditional approaches to bug triaging and code review are feasible for pull-request reviewer recommendations on GitHub. Furthermore, their performance can be improved significantly by combining them with information extracted from prior social interactions between developers on GitHub. These results prompt for novel tools to support process automation in social coding platforms, that combine social (e.g., common interests among developers) and technical factors (e.g., developers’ expertise). © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e306933b27867c99585d7fc82cc380ff",
"text": "We introduce a new OS abstraction—light-weight contexts (lwCs)—that provides independent units of protection, privilege, and execution state within a process. A process may include several lwCs, each with possibly different views of memory, file descriptors, and access capabilities. lwCs can be used to efficiently implement roll-back (process can return to a prior recorded state), isolated address spaces (lwCs within the process may have different views of memory, e.g., isolating sensitive data from network-facing components or isolating different user sessions), and privilege separation (in-process reference monitors can arbitrate and control access). lwCs can be implemented efficiently: the overhead of a lwC is proportional to the amount of memory exclusive to the lwC; switching lwCs is quicker than switching kernel threads within the same process. We describe the lwC abstraction and API, and an implementation of lwCs within the FreeBSD 11.0 kernel. Finally, we present an evaluation of common usage patterns, including fast rollback, session isolation, sensitive data isolation, and inprocess reference monitoring, using Apache, nginx, PHP, and OpenSSL.",
"title": ""
},
{
"docid": "b8a19927899dafa930c48eacd340e95e",
"text": "A major challenge in EEG-based brain-computer interfaces (BCIs) is the intersession nonstationarity in the EEG data that often leads to deteriorated BCI performances. To address this issue, this letter proposes a novel data space adaptation technique, EEG data space adaptation (EEG-DSA), to linearly transform the EEG data from the target space (evaluation session), such that the distribution difference to the source space (training session) is minimized. Using the Kullback-Leibler (KL) divergence criterion, we propose two versions of the EEG-DSA algorithm: the supervised version, when labeled data are available in the evaluation session, and the unsupervised version, when labeled data are not available. The performance of the proposed EEG-DSA algorithm is evaluated on the publicly available BCI Competition IV data set IIa and a data set recorded from 16 subjects performing motor imagery tasks on different days. The results show that the proposed EEG-DSA algorithm in both the supervised and unsupervised versions significantly outperforms the results without adaptation in terms of classification accuracy. The results also show that for subjects with poor BCI performances when no adaptation is applied, the proposed EEG-DSA algorithm in both the supervised and unsupervised versions significantly outperforms the unsupervised bias adaptation algorithm (PMean).",
"title": ""
},
{
"docid": "39afefd938cd835dc385ce691302f533",
"text": "The recent development of fast depth map fusion technique enables the realtime, detailed scene reconstruction using commodity depth camera, making the indoor scene understanding more possible than ever. To address the specific challenges in object analysis at subscene level, this work proposes a data-driven approach to modeling contextual information covering both intra-object part relations and inter-object object layouts. Our method combines the detection of individual objects and object groups within the same framework, enabling contextual analysis without knowing the objects in the scene a priori. The key idea is that while contextual information could benefit the detection of either individual objects or object groups, both can contribute to object extraction when objects are unknown. Our method starts with a robust segmentation and partitions a subscene into segments, each of which represents either an independent object or a part of some object. A set of classifiers are trained for both individual objects and object groups, using a database of 3D scene models. We employ the multiple kernel learning (MKL) to learn per-category optimized classifiers for objects and object groups. Finally, we perform a graph matching to extract objects using the classifiers, thus grouping the segments into either an object or an object group. The output is an object-level labeled segmentation of the input subscene. Experiments demonstrate that the unified contextual analysis framework achieves robust object detection and recognition over cluttered subscenes.",
"title": ""
},
{
"docid": "0bd30308a11711f1dc71b8ff8ae8e80c",
"text": "Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of TPA eliminates the involvement of the client through the auditing of whether his data stored in the cloud are indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lacks the support of either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the existing proof of storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multiuser setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis show that the proposed schemes are highly efficient and provably secure.",
"title": ""
},
{
"docid": "5ea65d6e878d2d6853237a74dbc5a894",
"text": "We study indexing techniques for main memory, including hash indexes, binary search trees, T-trees, B+-trees, interpolation search, and binary search on arrays. In a decision-support context, our primary concerns are the lookup time, and the space occupied by the index structure. Our goal is to provide faster lookup times than binary search by paying attention to reference locality and cache behavior, without using substantial extra space. We propose a new indexing technique called “Cache-Sensitive Search Trees” (CSS-trees). Our technique stores a directory structure on top of a sorted array. Nodes in this directory have size matching the cache-line size of the machine. We store the directory in an array and do not store internal-node pointers; child nodes can be found by performing arithmetic on array offsets. We compare the algorithms based on their time and space requirements. We have implemented all of the techniques, and present a performance study on two popular modern machines. We demonstrate that with ∗This research was supported by a David and Lucile Packard Foundation Fellowship in Science and Engineering, by an NSF Young Investigator Award, by NSF grant number IIS-98-12014, and by NSF CISE award CDA-9625374. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. a small space overhead, we can reduce the cost of binary search on the array by more than a factor of two. We also show that our technique dominates B+-trees, T-trees, and binary search trees in terms of both space and time. A cache simulation verifies that the gap is due largely to cache misses.",
"title": ""
},
{
"docid": "c92afabbe921c3b75408d841189ff1df",
"text": "The contamination from heavy metals has risen during the last decade due to increase in Industrialization. This has led to a significant increase in health problems. Many of the known remediation techniques to remove heavy metal from soil are expensive, time consuming and environmentally destructive. Phytoremediation is an emerging technology for removal of heavy metals which is cost effective, and has aesthetic advantages and long term applicability. The present study aims at efficiently utilizing Brassica juncea L. to remove lead (Pb). The result of our study indicate that amount of lead in Indian mustard is increased with the amount of EDTA applied to the soil and maximum accumulation was achieved with 5mmol/kg of EDTA. On further increase in EDTA resulted in leaf necrosis and early shedding of leaves. Therefore EDTA at a concentration of 5mmol/kg was considered optimum for lead accumulation by Brassica juncea L.",
"title": ""
},
{
"docid": "90cd3aa6a70a89ee3d55a712767b7fbd",
"text": "End-to-end automatic speech recognition (ASR) has become a popular alternative to conventional DNN/HMM systems because it avoids the need for linguistic resources such as pronunciation dictionary, tokenization, and contextdependency trees, leading to a greatly simplified model-building process. There are two major types of end-to-end architectures for ASR: attention-based methods use an attention mechanism to perform alignment between acoustic frames and recognized symbols, and connectionist temporal classification (CTC), uses Markov assumptions to efficiently solve sequential problems by dynamic programming. This paper proposes a joint decoding algorithm for end-to-end ASR with a hybrid CTC/attention architecture, which effectively utilizes both advantages in decoding. We have applied the proposed method to two ASR benchmarks (spontaneous Japanese and Mandarin Chinese), and showing the comparable performance to conventional state-of-the-art DNN/HMM ASR systems without linguistic resources. Association for Computational Linguistics (ACL) This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2017 201 Broadway, Cambridge, Massachusetts 02139 Joint CTC/attention decoding for end-to-end speech recognition Takaaki Hori, Shinji Watanabe, John R. Hershey Mitsubishi Electric Research Laboratories (MERL) {thori,watanabe,hershey}@merl.com",
"title": ""
},
{
"docid": "2997fc35a86646d8a43c16217fc8079b",
"text": "During sudden onset crisis events, the presence of spam, rumors and fake content on Twitter reduces the value of information contained on its messages (or “tweets”). A possible solution to this problem is to use machine learning to automatically evaluate the credibility of a tweet, i.e. whether a person would deem the tweet believable or trustworthy. This has been often framed and studied as a supervised classification problem in an off-line (post-hoc) setting. In this paper, we present a semi-supervised ranking model for scoring tweets according to their credibility. This model is used in TweetCred , a real-time system that assigns a credibility score to tweets in a user’s timeline. TweetCred , available as a browser plug-in, was installed and used by 1,127 Twitter users within a span of three months. During this period, the credibility score for about 5.4 million tweets was computed, allowing us to evaluate TweetCred in terms of response time, effectiveness and usability. To the best of our knowledge, this is the first research work to develop a real-time system for credibility on Twitter, and to evaluate it on a user base of this size.",
"title": ""
},
{
"docid": "a1b387e3199aa1c70fa07196426af256",
"text": "Hyperbolic embeddings offer excellent quality with few dimensions when embedding hierarchical data structures. We give a combinatorial construction that embeds trees into hyperbolic space with arbitrarily low distortion without optimization. On WordNet, this algorithm obtains a meanaverage-precision of 0.989 with only two dimensions, outperforming existing work by 0.11 points. We provide bounds characterizing the precisiondimensionality tradeoff inherent in any hyperbolic embedding. To embed general metric spaces, we propose a hyperbolic generalization of multidimensional scaling (h-MDS). We show how to perform exact recovery of hyperbolic points from distances, provide a perturbation analysis, and give a recovery result that enables us to reduce dimensionality. Finally, we extract lessons from the algorithms and theory above to design a scalable PyTorch-based implementation that can handle incomplete information.",
"title": ""
},
{
"docid": "85007af502deac21cd6477945e0578d6",
"text": "State of the art movie restoration methods either estimate motion and filter out the trajectories, or compensate the motion by an optical flow estimate and then filter out the compensated movie. Now, the motion estimation problem is ill posed. This fact is known as the aperture problem: trajectories are ambiguous since they could coincide with any promenade in the space-time isophote surface. In this paper, we try to show that, for denoising, the aperture problem can be taken advantage of. Indeed, by the aperture problem, many pixels in the neighboring frames are similar to the current pixel one wishes to denoise. Thus, denoising by an averaging process can use many more pixels than just the ones on a single trajectory. This observation leads to use for movies a recently introduced image denoising method, the NL-means algorithm. This static 3D algorithm outperforms motion compensated algorithms, as it does not lose movie details. It involves the whole movie isophote and not just a trajectory.",
"title": ""
},
{
"docid": "cc5e5efde794b1b02033c490527732d3",
"text": "In this paper we present hand and foot based immersive multimodal interaction approach for handheld devices. A smart phone based immersive football game is designed as a proof of concept. Our proposed method combines input modalities (i.e. hand & foot) and provides a coordinated output to both modalities along with audio and video. In this work, human foot gesture is detected and tracked using template matching method and Tracking-Learning-Detection (TLD) framework. We evaluated our system's usability through a user study in which we asked participants to evaluate proposed interaction method. Our preliminary evaluation demonstrates the efficiency and ease of use of proposed multimodal interaction approach.",
"title": ""
}
] |
scidocsrr
|
2e9015433f83b79fb13724ffacc0bdad
|
Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability
|
[
{
"docid": "ad7f49832562d27534f11b162e28f51b",
"text": "Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is \"hard-wired\" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.",
"title": ""
}
] |
[
{
"docid": "45eb2d7b74f485e9eeef584555e38316",
"text": "With the increasing demand of massive multimodal data storage and organization, cross-modal retrieval based on hashing technique has drawn much attention nowadays. It takes the binary codes of one modality as the query to retrieve the relevant hashing codes of another modality. However, the existing binary constraint makes it difficult to find the optimal cross-modal hashing function. Most approaches choose to relax the constraint and perform thresholding strategy on the real-value representation instead of directly solving the original objective. In this paper, we first provide a concrete analysis about the effectiveness of multimodal networks in preserving the inter- and intra-modal consistency. Based on the analysis, we provide a so-called Deep Binary Reconstruction (DBRC) network that can directly learn the binary hashing codes in an unsupervised fashion. The superiority comes from a proposed simple but efficient activation function, named as Adaptive Tanh (ATanh). The ATanh function can adaptively learn the binary codes and be trained via back-propagation. Extensive experiments on three benchmark datasets demonstrate that DBRC outperforms several state-of-the-art methods in both image2text and text2image retrieval task.",
"title": ""
},
{
"docid": "23cc8b190e9de5177cccf2f918c1ad45",
"text": "NFC is a standardised technology providing short-range RFID communication channels for mobile devices. Peer-to-peer applications for mobile devices are receiving increased interest and in some cases these services are relying on NFC communication. It has been suggested that NFC systems are particularly vulnerable to relay attacks, and that the attacker’s proxy devices could even be implemented using off-the-shelf NFC-enabled devices. This paper describes how a relay attack can be implemented against systems using legitimate peer-to-peer NFC communication by developing and installing suitable MIDlets on the attacker’s own NFC-enabled mobile phones. The attack does not need to access secure program memory nor use any code signing, and can use publicly available APIs. We go on to discuss how relay attack countermeasures using device location could be used in the mobile environment. These countermeasures could also be applied to prevent relay attacks on contactless applications using ‘passive’ NFC on mobile phones.",
"title": ""
},
{
"docid": "e94afab2ce61d7426510a5bcc88f7ca8",
"text": "Community detection is an important task in network analysis, in which we aim to learn a network partition that groups together vertices with similar community-level connectivity patterns. By finding such groups of vertices with similar structural roles, we extract a compact representation of the network’s large-scale structure, which can facilitate its scientific interpretation and the prediction of unknown or future interactions. Popular approaches, including the stochastic block model, assume edges are unweighted, which limits their utility by discarding potentially useful information. We introduce the weighted stochastic block model (WSBM), which generalizes the stochastic block model to networks with edge weights drawn from any exponential family distribution. This model learns from both the presence and weight of edges, allowing it to discover structure that would otherwise be hidden when weights are discarded or thresholded. We describe a Bayesian variational algorithm for efficiently approximating this model’s posterior distribution over latent block structures. We then evaluate the WSBM’s performance on both edge-existence and edge-weight prediction tasks for a set of real-world weighted networks. In all cases, the WSBM performs as well or better than the best alternatives on these tasks. community detection, weighted relational data, block models, exponential family, variational Bayes.",
"title": ""
},
{
"docid": "de99a984795645bc2e9fb4b3e3173807",
"text": "Neural networks are a family of powerful machine learning models. is book focuses on the application of neural network models to natural language data. e first half of the book (Parts I and II) covers the basics of supervised machine learning and feed-forward neural networks, the basics of working with machine learning over language data, and the use of vector-based rather than symbolic representations for words. It also covers the computation-graph abstraction, which allows to easily define and train arbitrary neural networks, and is the basis behind the design of contemporary neural network software libraries. e second part of the book (Parts III and IV) introduces more specialized neural network architectures, including 1D convolutional neural networks, recurrent neural networks, conditioned-generation models, and attention-based models. ese architectures and techniques are the driving force behind state-of-the-art algorithms for machine translation, syntactic parsing, and many other applications. Finally, we also discuss tree-shaped networks, structured prediction, and the prospects of multi-task learning.",
"title": ""
},
{
"docid": "2be58a0a458115fb9ef00627cc0580e0",
"text": "OBJECTIVE\nTo determine the physical and psychosocial impact of macromastia on adolescents considering reduction mammaplasty in comparison with healthy adolescents.\n\n\nMETHODS\nThe following surveys were administered to adolescents with macromastia and control subjects, aged 12 to 21 years: Short-Form 36v2, Rosenberg Self-Esteem Scale, Breast-Related Symptoms Questionnaire, and Eating-Attitudes Test-26 (EAT-26). Demographic variables and self-reported breast symptoms were compared between the 2 groups. Linear regression models, unadjusted and adjusted for BMI category (normal weight, overweight, obese), were fit to determine the effect of case status on survey score. Odds ratios for the risk of disordered eating behaviors (EAT-26 score ≥ 20) in cases versus controls were also determined.\n\n\nRESULTS\nNinety-six subjects with macromastia and 103 control subjects participated in this study. Age was similar between groups, but subjects with macromastia had a higher BMI (P = .02). Adolescents with macromastia had lower Short-Form 36v2 domain, Rosenberg Self-Esteem Scale, and Breast-Related Symptoms Questionnaire scores and higher EAT-26 scores compared with controls. Macromastia was also associated with a higher risk of disordered eating behaviors. In almost all cases, the impact of macromastia was independent of BMI category.\n\n\nCONCLUSIONS\nMacromastia has a substantial negative impact on health-related quality of life, self-esteem, physical symptoms, and eating behaviors in adolescents with this condition. These observations were largely independent of BMI category. Health care providers should be aware of these important negative health outcomes that are associated with macromastia and consider early evaluation for adolescents with this condition.",
"title": ""
},
{
"docid": "d43dc521d3f0f17ccd4840d6081dcbfe",
"text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.",
"title": ""
},
{
"docid": "447c5b2db5b1d7555cba2430c6d73a35",
"text": "Recent years have seen a proliferation of complex Advanced Driver Assistance Systems (ADAS), in particular, for use in autonomous cars. These systems consist of sensors and cameras as well as image processing and decision support software components. They are meant to help drivers by providing proper warnings or by preventing dangerous situations. In this paper, we focus on the problem of design time testing of ADAS in a simulated environment. We provide a testing approach for ADAS by combining multi-objective search with surrogate models developed based on neural networks. We use multi-objective search to guide testing towards the most critical behaviors of ADAS. Surrogate modeling enables our testing approach to explore a larger part of the input search space within limited computational resources. We characterize the condition under which the multi-objective search algorithm behaves the same with and without surrogate modeling, thus showing the accuracy of our approach. We evaluate our approach by applying it to an industrial ADAS system. Our experiment shows that our approach automatically identifies test cases indicating critical ADAS behaviors. Further, we show that combining our search algorithm with surrogate modeling improves the quality of the generated test cases, especially under tight and realistic computational resources.",
"title": ""
},
{
"docid": "47bf54c0d51596f39929e8f3e572a051",
"text": "Parameterizations of triangulated surfaces are used in an increasing number of mesh processing applications for various purposes. Although demands vary, they are often required to preserve the surface metric and thus minimize angle, area and length deformation. However, most of the existing techniques primarily target at angle preservation while disregarding global area deformation. In this paper an energy functional is proposed, that quantifies angle and global area deformations simultaneously, while the relative importance between angle and area preservation can be controlled by the user through a parameter. We show how this parameter can be chosen to obtain parameterizations, that are optimized for an uniform sampling of the surface of a model. Maps obtained by minimizing this energy are well suited for applications that desire an uniform surface sampling, like re-meshing or mapping regularly patterned textures. Besides being invariant under rotation and translation of the domain, the energy is designed to prevent face flips during minimization and does not require a fixed boundary in the parameter domain. Although the energy is nonlinear, we show how it can be minimized efficiently using non-linear conjugate gradient methods in a hierarchical optimization framework and prove the convergence of the algorithm. The ability to control the tradeoff between the degree of angle and global area preservation is demonstrated for several models of varying complexity.",
"title": ""
},
{
"docid": "e1bee61b205d29db6b2ebbaf95e9c20b",
"text": "Despite the fact that there are thousands of programming languages existing there is a huge controversy about what language is better to solve a particular problem. In this paper we discuss requirements for programming language with respect to AGI research. In this article new language will be presented. Unconventional features (e.g. probabilistic programming and partial evaluation) are discussed as important parts of language design and implementation. Besides, we consider possible applications to particular problems related to AGI. Language interpreter for Lisp-like probabilistic mixed paradigm programming language is implemented in Haskell.",
"title": ""
},
{
"docid": "3a1019c31ff34f8a45c65703c1528fc4",
"text": "The increasing trend of studying the innate softness of robotic structures and amalgamating it with the benefits of the extensive developments in the field of embodied intelligence has led to sprouting of a relatively new yet extremely rewarding sphere of technology. The fusion of current deep reinforcement algorithms with physical advantages of a soft bio-inspired structure certainly directs us to a fruitful prospect of designing completely self-sufficient agents that are capable of learning from observations collected from their environment to achieve a task they have been assigned. For soft robotics structure possessing countless degrees of freedom, it is often not easy (something not even possible) to formulate mathematical constraints necessary for training a deep reinforcement learning (DRL) agent for the task in hand, hence, we resolve to imitation learning techniques due to ease of manually performing such tasks like manipulation that could be comfortably mimicked by our agent. Deploying current imitation learning algorithms on soft robotic systems have been observed to provide satisfactory results but there are still challenges in doing so. This review article thus posits an overview of various such algorithms along with instances of them being applied to real world scenarios and yielding state-of-the-art results followed by brief descriptions on various pristine branches of DRL research that may be centers of future research in this field of interest.",
"title": ""
},
{
"docid": "4d73c50244d16dab6d3773dbeebbae98",
"text": "We describe the latest version of Microsoft's conversational speech recognition system for the Switchboard and CallHome domains. The system adds a CNN-BLSTM acoustic model to the set of model architectures we combined previously, and includes character-based and dialog session aware LSTM language models in rescoring. For system combination we adopt a two-stage approach, whereby acoustic model posteriors are first combined at the senone/frame level, followed by a word-level voting via confusion networks. We also added another language model rescoring step following the confusion network combination. The resulting system yields a 5.1% word error rate on the NIST 2000 Switchboard test set, and 9.8% on the CallHome subset.",
"title": ""
},
{
"docid": "79ad27cffbbcbe3a49124abd82c6e477",
"text": "In this paper we address the following problem in web document and information retrieval (IR): How can we use long-term context information to gain better IR performance? Unlike common IR methods that use bag of words representation for queries and documents, we treat them as a sequence of words and use long short term memory (LSTM) to capture contextual dependencies. To the best of our knowledge, this is the first time that LSTM is applied to information retrieval tasks. Unlike training traditional LSTMs, the training strategy is different due to the special nature of information retrieval problem. Experimental evaluation on an IR task derived from the Bing web search demonstrates the ability of the proposed method in addressing both lexical mismatch and long-term context modelling issues, thereby, significantly outperforming existing state of the art methods for web document retrieval task.",
"title": ""
},
{
"docid": "b0766f310c4926b475bb646911a27f34",
"text": "Currently, two frameworks of causal reasoning compete: Whereas dependency theories focus on dependencies between causes and effects, dispositional theories model causation as an interaction between agents and patients endowed with intrinsic dispositions. One important finding providing a bridge between these two frameworks is that failures of causes to generate their effects tend to be differentially attributed to agents and patients regardless of their location on either the cause or the effect side. To model different types of error attribution, we augmented a causal Bayes net model with separate error sources for causes and effects. In several experiments, we tested this new model using the size of Markov violations as the empirical indicator of differential assumptions about the sources of error. As predicted by the model, the size of Markov violations was influenced by the location of the agents and was moderated by the causal structure and the type of causal variables.",
"title": ""
},
{
"docid": "569700bd1114b1b93a13af25b2051631",
"text": "Empathy and sympathy play crucial roles in much of human social interaction and are necessary components for healthy coexistence. Sympathy is thought to be a proxy for motivating prosocial behavior and providing the affective and motivational base for moral development. The purpose of the present study was to use functional MRI to characterize developmental changes in brain activation in the neural circuits underpinning empathy and sympathy. Fifty-seven individuals, whose age ranged from 7 to 40 years old, were presented with short animated visual stimuli depicting painful and non-painful situations. These situations involved either a person whose pain was accidentally caused or a person whose pain was intentionally inflicted by another individual to elicit empathic (feeling as the other) or sympathetic (feeling concern for the other) emotions, respectively. Results demonstrate monotonic age-related changes in the amygdala, supplementary motor area, and posterior insula when participants were exposed to painful situations that were accidentally caused. When participants observed painful situations intentionally inflicted by another individual, age-related changes were detected in the dorsolateral prefrontal and ventromedial prefrontal cortex, with a gradual shift in that latter region from its medial to its lateral portion. This pattern of activation reflects a change from a visceral emotional response critical for the analysis of the affective significance of stimuli to a more evaluative function. Further, these data provide evidence for partially distinct neural mechanisms subserving empathy and sympathy, and demonstrate the usefulness of a developmental neurobiological approach to the new emerging area of moral neuroscience.",
"title": ""
},
{
"docid": "023302562ddfe48ac81943fedcf881b7",
"text": "Knitty is an interactive design system for creating knitted animals. The user designs a 3D surface model using a sketching interface. The system automatically generates a knitting pattern and then visualizes the shape of the resulting 3D animal model by applying a simple physics simulation. The user can see the resulting shape before beginning the actual knitting. The system also provides a production assistant interface for novices. The user can easily understand how to knit each stitch and what to do in each step. In a workshop for novices, we observed that even children can design their own knitted animals using our system.",
"title": ""
},
{
"docid": "691032ab4d9bcc1f536b1b8a5d8e73ae",
"text": "Many decisions must be made under stress, and many decision situations elicit stress responses themselves. Thus, stress and decision making are intricately connected, not only on the behavioral level, but also on the neural level, i.e., the brain regions that underlie intact decision making are regions that are sensitive to stress-induced changes. The purpose of this review is to summarize the findings from studies that investigated the impact of stress on decision making. The review includes those studies that examined decision making under stress in humans and were published between 1985 and October 2011. The reviewed studies were found using PubMed and PsycInfo searches. The review focuses on studies that have examined the influence of acutely induced laboratory stress on decision making and that measured both decision-making performance and stress responses. Additionally, some studies that investigated decision making under naturally occurring stress levels and decision-making abilities in patients who suffer from stress-related disorders are described. The results from the studies that were included in the review support the assumption that stress affects decision making. If stress confers an advantage or disadvantage in terms of outcome depends on the specific task or situation. The results also emphasize the role of mediating and moderating variables. The results are discussed with respect to underlying psychological and neural mechanisms, implications for everyday decision making and future research directions.",
"title": ""
},
{
"docid": "ea765da47c4280f846fe144570a755dc",
"text": "A new nonlinear noise reduction method is presented that uses the discrete wavelet transform. Similar to Donoho (1995) and Donohoe and Johnstone (1994, 1995), the authors employ thresholding in the wavelet transform domain but, following a suggestion by Coifman, they use an undecimated, shift-invariant, nonorthogonal wavelet transform instead of the usual orthogonal one. This new approach can be interpreted as a repeated application of the original Donoho and Johnstone method for different shifts. The main feature of the new algorithm is a significantly improved noise reduction compared to the original wavelet based approach. This holds for a large class of signals, both visually and in the l/sub 2/ sense, and is shown theoretically as well as by experimental results.",
"title": ""
},
{
"docid": "4427f79777bfe5ea1617f06a5aa6f0cc",
"text": "Despite decades of sustained effort, memory corruption attacks continue to be one of the most serious security threats faced today. They are highly sought after by attackers, as they provide ultimate control --- the ability to execute arbitrary low-level code. Attackers have shown time and again their ability to overcome widely deployed countermeasures such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) by crafting Return Oriented Programming (ROP) attacks. Although Turing-complete ROP attacks have been demonstrated in research papers, real-world ROP payloads have had a more limited objective: that of disabling DEP so that injected native code attacks can be carried out. In this paper, we provide a systematic defense, called Control Flow and Code Integrity (CFCI), that makes injected native code attacks impossible. CFCI achieves this without sacrificing compatibility with existing software, the need to replace system programs such as the dynamic loader, and without significant performance penalty. We will release CFCI as open-source software by the time of this conference.",
"title": ""
},
{
"docid": "3969a0156c558020ca1de3b978c3ab4e",
"text": "Silver-Russell syndrome (SRS) and Beckwith-Wiedemann syndrome (BWS) are 2 clinically opposite growth-affecting disorders belonging to the group of congenital imprinting disorders. The expression of both syndromes usually depends on the parental origin of the chromosome in which the imprinted genes reside. SRS is characterized by severe intrauterine and postnatal growth retardation with various additional clinical features such as hemihypertrophy, relative macrocephaly, fifth finger clinodactyly, and triangular facies. BWS is an overgrowth syndrome with many additional clinical features such as macroglossia, organomegaly, and an increased risk of childhood tumors. Both SRS and BWS are clinically and genetically heterogeneous, and for clinical diagnosis, different diagnostic scoring systems have been developed. Six diagnostic scoring systems for SRS and 4 for BWS have been previously published. However, neither syndrome has common consensus diagnostic criteria yet. Most cases of SRS and BWS are associated with opposite epigenetic or genetic abnormalities in the 11p15 chromosomal region leading to opposite imbalances in the expression of imprinted genes. SRS is also caused by maternal uniparental disomy 7, which is usually identified in 5-10% of the cases, and is therefore the first imprinting disorder that affects 2 different chromosomes. In this review, we describe in detail the clinical diagnostic criteria and scoring systems as well as molecular causes in both SRS and BWS.",
"title": ""
},
{
"docid": "65aa27cc08fd1f3532f376b536c452ba",
"text": "Design work and design knowledge in Information Systems (IS) is important for both research and practice. Yet there has been comparatively little critical attention paid to the problem of specifying design theory so that it can be communicated, justified, and developed cumulatively. In this essay we focus on the structural components or anatomy of design theories in IS as a special class of theory. In doing so, we aim to extend the work of Walls, Widemeyer and El Sawy (1992) on the specification of information systems design theories (ISDT), drawing on other streams of thought on design research and theory to provide a basis for a more systematic and useable formulation of these theories. We identify eight separate components of design theories: (1) purpose and scope, (2) constructs, (3) principles of form and function, (4) artifact mutability, (5) testable propositions, (6) justificatory knowledge (kernel theories), (7) principles of implementation, and (8) an expository instantiation. This specification includes components missing in the Walls et al. adaptation of Dubin (1978) and Simon (1969) and also addresses explicitly problems associated with the role of instantiations and the specification of design theories for methodologies and interventions as well as for products and applications. The essay is significant as the unambiguous establishment of design knowledge as theory gives a sounder base for arguments for the rigor and legitimacy of IS as an applied discipline and for its continuing progress. A craft can proceed with the copying of one example of a design artifact by one artisan after another. A discipline cannot.",
"title": ""
}
] |
scidocsrr
|
566ebd04f64b10621289c3284fe245dd
|
SMART LIVING USING BLUETOOTH- BASED ANDROID SMARTPHONE
|
[
{
"docid": "05e4cfafcef5ad060c1f10b9c6ad2bc0",
"text": "Mobile devices have been integrated into our everyday life. Consequently, home automation and security are becoming increasingly prominent features on mobile devices. In this paper, we have developed a security system that interfaces with an Android mobile device. The mobile device and security system communicate via Bluetooth because a short-range-only communications system was desired. The mobile application can be loaded onto any compatible device, and once loaded, interface with the security system. Commands to lock, unlock, or check the status of the door to which the security system is installed can be sent quickly from the mobile device via a simple, easy to use GUI. The security system then acts on these commands, taking the appropriate action and sending a confirmation back to the mobile device. The security system can also tell the user if the door is open. The door also incorporates a traditional lock and key interface in case the user loses the mobile device.",
"title": ""
}
] |
[
{
"docid": "49a538fc40d611fceddd589b0c9cb433",
"text": "Both intuition and creativity are associated with knowledge creation, yet a clear link between them has not been adequately established. First, the available empirical evidence for an underlying relationship between intuition and creativity is sparse in nature. Further, this evidence is arguable as the concepts are diversely operationalized and the measures adopted are often not validated sufficiently. Combined, these issues make the findings from various studies examining the link between intuition and creativity difficult to replicate. Nevertheless, the role of intuition in creativity should not be neglected as it is often reported to be a core component of the idea generation process, which in conjunction with idea evaluation are crucial phases of creative cognition. We review the prior research findings in respect of idea generation and idea evaluation from the view that intuition can be construed as the gradual accumulation of cues to coherence. Thus, we summarize the literature on what role intuitive processes play in the main stages of the creative problem-solving process and outline a conceptual framework of the interaction between intuition and creativity. Finally, we discuss the main challenges of measuring intuition as well as possible directions for future research.",
"title": ""
},
{
"docid": "7e02da9e8587435716db99396c0fbbc7",
"text": "To examine thrombus formation in a living mouse, new technologies involving intravital videomicroscopy have been applied to the analysis of vascular windows to directly visualize arterioles and venules. After vessel wall injury in the microcirculation, thrombus development can be imaged in real time. These systems have been used to explore the role of platelets, blood coagulation proteins, endothelium, and the vessel wall during thrombus formation. The study of biochemistry and cell biology in a living animal offers new understanding of physiology and pathology in complex biologic systems.",
"title": ""
},
{
"docid": "700c5ed8bac3ee26051991639d2b7fe9",
"text": "A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.",
"title": ""
},
{
"docid": "84f150ecaf9fdb9a778c56a10a21de74",
"text": "Peritoneal metastasis is a primary metastatic route for gastric cancers, and the mechanisms underlying this process are still unclear. Peritoneal mesothelial cells (PMCs) undergo mesothelial-to-mesenchymal transition (MMT) to provide a favorable environment for metastatic cancer cells. In this study, we investigated how the exosomal miR-21-5p induces MMT and promotes peritoneal metastasis. Gastric cancer (GC)-derived exosomes were identified by transmission electron microscopy and western blot analysis, then the uptake of exosomes was confirmed by PKH-67 staining. The expression of miR-21-5p and SMAD7 were measured by quantitative real-time polymerase chain reaction (qRT-PCR) and western blot, and the interactions between miR-21-5p and its target genes SMAD7 were confirmed by Luciferase reporter assays. The MMT of PMCs was determined by invasion assays, adhesion assays, immunofluorescent assay, and western blot. Meanwhile, mouse model of tumor peritoneal dissemination model was performed to investigate the role of exosomal miR-21-5p in peritoneal metastasis in vivo. We found that PMCs could internalize GC-derived exosomal miR-21-5p and led to increased levels of miR-21-5p in PMCs. Through various types of in vitro and in vivo assays, we confirmed that exosomal miR-21-5p was able to induce MMT of PMCs and promote tumor peritoneal metastasis. Moreover, our study revealed that this process was promoted by exosomal miR-21-5p through activating TGF-β/Smad pathway via targeting SMAD7. Altogether, our data suggest that exosomal miR-21-5p induces MMT of PMCs and promote cancer peritoneal dissemination by targeting SMAD7. The exosomal miR-21-5p may be a novel therapeutic target for GC peritoneal metastasis.",
"title": ""
},
{
"docid": "755f2d11ad9653806f26e5ae7beaf49b",
"text": "Deep Neural Networks (DNNs) have shown remarkable success in pattern recognition tasks. However, parallelizing DNN training across computers has been difficult. We present the Deep Stacking Network (DSN), which overcomes the problem of parallelizing learning algorithms for deep architectures. The DSN provides a method of stacking simple processing modules in buiding deep architectures, with a convex learning problem in each module. Additional fine tuning further improves the DSN, while introducing minor non-convexity. Full learning in the DSN is batch-mode, making it amenable to parallel training over many machines and thus be scalable over the potentially huge size of the training data. Experimental results on both the MNIST (image) and TIMIT (speech) classification tasks demonstrate that the DSN learning algorithm developed in this work is not only parallelizable in implementation but it also attains higher classification accuracy than the DNN.",
"title": ""
},
{
"docid": "5ebb1c86aa1b915a844bf7b72a98f9dc",
"text": "Learning-based methods are believed to work well for unconstrained gaze estimation, i.e. gaze estimation from a monocular RGB camera without assumptions regarding user, environment, or camera. However, current gaze datasets were collected under laboratory conditions and methods were not evaluated across multiple datasets. Our work makes three contributions towards addressing these limitations. First, we present the MPIIGaze dataset, which contains 213,659 full face images and corresponding ground-truth gaze positions collected from 15 users during everyday laptop use over several months. An experience sampling approach ensured continuous gaze and head poses and realistic variation in eye appearance and illumination. To facilitate cross-dataset evaluations, 37,667 images were manually annotated with eye corners, mouth corners, and pupil centres. Second, we present an extensive evaluation of state-of-the-art gaze estimation methods on three current datasets, including MPIIGaze. We study key challenges including target gaze range, illumination conditions, and facial appearance variation. We show that image resolution and the use of both eyes affect gaze estimation performance, while head pose and pupil centre information are less informative. Finally, we propose GazeNet, the first deep appearance-based gaze estimation method. GazeNet improves on the state of the art by 22 percent (from a mean error of 13.9 degrees to 10.8 degrees) for the most challenging cross-dataset evaluation.",
"title": ""
},
{
"docid": "dfcb186f7da37916cbc54e154a70024a",
"text": "This article gives an overview of the, monitoring oriented programming framework (MOP). In MOP, runtime monitoring is supported and encouraged as a fundamental principle for building reliable systems. Monitors are automatically synthesized from specified properties and are used in conjunction with the original system to check its dynamic behaviors. When a specification is violated or validated at runtime, user-defined actions will be triggered, which can be any code, such as information logging or runtime recovery. Two instances of MOP are presented: JavaMOP (for Java programs) and BusMOP (for monitoring PCI bus traffic). The architecture of MOP is discussed, and an explanation of parametric trace monitoring and its implementation is given. A comprehensive evaluation of JavaMOP attests to its efficiency, especially in comparison with similar systems. The implementation of BusMOP is discussed in detail. In general, BusMOP imposes no runtime overhead on the system it is monitoring.",
"title": ""
},
{
"docid": "b700c177ab4ee014cea9a3a2fd870230",
"text": "Exploiting network data (i.e., graphs) is a rather particular case of data mining. The size and relevance of network domains justifies research on graph mining, but also brings forth severe complications. Computational aspects like scalability and parallelism have to be reevaluated, and well as certain aspects of the data mining process. One of those are the methodologies used to evaluate graph mining methods, particularly when processing large graphs. In this paper we focus on the evaluation of a graph mining task known as Link Prediction. First we explore the available solutions in traditional data mining for that purpose, discussing which methods are most appropriate. Once those are identified, we argue about their capabilities and limitations for producing a faithful and useful evaluation. Finally, we introduce a novel modification to a traditional evaluation methodology with the goal of adapting it to the problem of Link Prediction on large graphs.",
"title": ""
},
{
"docid": "dcbaaf0c098588d96ba95fb5c9b60972",
"text": "New resources to make these evaluations easier • New Advising dataset, plus 7 existing-text-to-SQL datasets cleaned, variablized, and put into a single, standard format, with tools for easy use. • Scan above or visit https://github.com/jkkummerfeld/text2sql-data Evaluations should measure how well systems generalize to realistic unseen data. Yet standard train/test splits, which ensure that no English question is in both train and test, permit the same SQL query to appear in both. Using a simple classifier with a slot-filler as a basline, we show how the standard question-based split fails to evaluate a system’s generalizability. In addition, by analyzing properties of human-generated and automatically generated text-to-SQL datasets, we show the need to evaluate on more than one dataset to ensure systems perform well on realistic data. And we release improved resources to facilitate such evaluations.",
"title": ""
},
{
"docid": "6545ea7d281be5528d9217f3b891a5da",
"text": "In this paper, a novel metamaterial absorber working in the C band frequency range has been proposed to reduce the in-band Radar Cross Section (RCS) of a typical planar antenna. The absorber is first designed in the shape of a hexagonal ring structure having dipoles at the corresponding arms of the rings. The various geometrical parameters of the proposed metamaterial structure have first been optimized using the numerical simulator, and the structure is fabricated and tested. In the second step, the metamaterial absorber is loaded on a microstrip patch antenna working in the same frequency band as that of the metamaterial absorber to reduce the in-band Radar Cross Section (RCS) of the antenna. The prototype is simulated, fabricated and tested. The simulated results show the 99% absorption of the absorber at 6.35 GHz which is in accordance with the measured data. A close agreement between the simulated and the measured results shows that the proposed absorber can be used for the RCS reduction of the planar antenna in order to improve its in-band stealth performance.",
"title": ""
},
{
"docid": "c69e002a71132641947d8e30bb2e74f7",
"text": "In this paper, we investigate a new stealthy attack simultaneously compromising actuators and sensors. This attack is referred to as coordinated attack. We show that the coordinated attack is capable of deriving the system states far away from the desired without being detected. Furthermore, designing such an attack practically does not require knowledge on target systems, which makes the attack much more dangerous compared to the other known attacks. Also, we present a method to detect the coordinated attack. To validate the effect of the proposed attack, we carry out experiments using a quadrotor.",
"title": ""
},
{
"docid": "09d1fa9a1f9af3e9560030502be1d976",
"text": "Academic Center for Computing and Media Studies, Kyoto University Graduate School of Informatics, Kyoto University Yoshidahonmachi, Sakyo-ku, Kyoto, Japan forest@i.kyoto-u.ac.jp, maeta@ar.media.kyoto-u.ac.jp, yamakata@dl.kuis.kyoto-u.ac.jp, sasada@ar.media.kyoto-u.ac.jp Abstract In this paper, we present our attempt at annotating procedural texts with a flow graph as a representation of understanding. The domain we focus on is cooking recipe. The flow graphs are directed acyclic graphs with a special root node corresponding to the final dish. The vertex labels are recipe named entities, such as foods, tools, cooking actions, etc. The arc labels denote relationships among them. We converted 266 Japanese recipe texts into flow graphs manually. 200 recipes are randomly selected from a web site and 66 are of the same dish. We detail the annotation framework and report some statistics on our corpus. The most typical usage of our corpus may be automatic conversion from texts to flow graphs which can be seen as an entire understanding of procedural texts. With our corpus, one can also try word segmentation, named entity recognition, predicate-argument structure analysis, and coreference resolution.",
"title": ""
},
{
"docid": "d9ad51299d4afb8075bd911b6655cf16",
"text": "To assess whether the passive leg raising test can help in predicting fluid responsiveness. Nonsystematic review of the literature. Passive leg raising has been used as an endogenous fluid challenge and tested for predicting the hemodynamic response to fluid in patients with acute circulatory failure. This is now easy to perform at the bedside using methods that allow a real time measurement of systolic blood flow. A passive leg raising induced increase in descending aortic blood flow of at least 10% or in echocardiographic subaortic flow of at least 12% has been shown to predict fluid responsiveness. Importantly, this prediction remains very valuable in patients with cardiac arrhythmias or spontaneous breathing activity. Passive leg raising allows reliable prediction of fluid responsiveness even in patients with spontaneous breathing activity or arrhythmias. This test may come to be used increasingly at the bedside since it is easy to perform and effective, provided that its effects are assessed by a real-time measurement of cardiac output.",
"title": ""
},
{
"docid": "8ca0edf4c51b0156c279fcbcb1941d2b",
"text": "The good fossil record of trilobite exoskeletal anatomy and ontogeny, coupled with information on their nonbiomineralized tissues, permits analysis of how the trilobite body was organized and developed, and the various evolutionary modifications of such patterning within the group. In several respects trilobite development and form appears comparable with that which may have characterized the ancestor of most or all euarthropods, giving studies of trilobite body organization special relevance in the light of recent advances in the understanding of arthropod evolution and development. The Cambrian diversification of trilobites displayed modifications in the patterning of the trunk region comparable with those seen among the closest relatives of Trilobita. In contrast, the Ordovician diversification of trilobites, although contributing greatly to the overall diversity within the clade, did so within a narrower range of trunk conditions. Trilobite evolution is consistent with an increased premium on effective enrollment and protective strategies, and with an evolutionary trade-off between the flexibility to vary the number of trunk segments and the ability to regionalize portions of the trunk. 401 A nn u. R ev . E ar th P la ne t. Sc i. 20 07 .3 5: 40 143 4. D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by U N IV E R SI T Y O F C A L IF O R N IA R IV E R SI D E L IB R A R Y o n 05 /0 2/ 07 . F or p er so na l u se o nl y. ANRV309-EA35-14 ARI 20 March 2007 15:54 Cephalon: the anteriormost or head division of the trilobite body composed of a set of conjoined segments whose identity is expressed axially Thorax: the central portion of the trilobite body containing freely articulating trunk segments Pygidium: the posterior tergite of the trilobite exoskeleton containing conjoined segments INTRODUCTION The rich record of the diversity and development of the trilobite exoskeleton (along with information on the geological occurrence, nonbiomineralized tissues, and associated trace fossils of trilobites) provides the best history of any Paleozoic arthropod group. The retention of features that may have characterized the most recent common ancestor of all living arthropods, which have been lost or obscured in most living forms, provides insights into the nature of the evolutionary radiation of the most diverse metazoan phylum alive today. Studies of phylogenetic stem-group taxa, of which Trilobita provide a prominent example, have special significance in the light of renewed interest in arthropod evolution prompted by comparative developmental genetics. Although we cannot hope to dissect the molecular controls operative within trilobites, the evolutionary developmental biology (evo-devo) approach permits a fresh perspective from which to examine the contributions that paleontology can make to evolutionary biology, which, in the context of the overall evolutionary history of Trilobita, is the subject of this review. TRILOBITES: BODY PLAN AND ONTOGENY Trilobites were a group of marine arthropods that appeared in the fossil record during the early Cambrian approximately 520 Ma and have not been reported from rocks younger than the close of the Permian, approximately 250 Ma. Roughly 15,000 species have been described to date, and although analysis of the occurrence of trilobite genera suggests that the known record is quite complete (Foote & Sepkoski 1999), many new species and genera continue to be established each year. 
The known diversity of trilobites results from their strongly biomineralized exoskeletons, made of two layers of low magnesium calcite, which was markedly more durable than the sclerites of most other arthropods. Because the exoskeleton was rich in morphological characters and was the only body structure preserved in the vast majority of specimens, skeletal form has figured prominently in the biological interpretation of trilobites.",
"title": ""
},
{
"docid": "9fd56a2261ade748404fcd0c6302771a",
"text": "Despite limited scientific knowledge, stretching of human skeletal muscle to improve flexibility is a widespread practice among athletes. This article reviews recent findings regarding passive properties of the hamstring muscle group during stretch based on a model that was developed which could synchronously and continuously measure passive hamstring resistance and electromyographic activity, while the velocity and angle of stretch was controlled. Resistance to stretch was defined as passive torque (Nm) offered by the hamstring muscle group during passive knee extension using an isokinetic dynamometer with a modified thigh pad. To simulate a clinical static stretch, the knee was passively extended to a pre-determined final position (0.0875 rad/s, dynamic phase) where it remained stationary for 90 s (static phase). Alternatively, the knee was extended to the point of discomfort (stretch tolerance). From the torque-angle curve of the dynamic phase of the static stretch, and in the stretch tolerance protocol, passive energy and stiffness were calculated. Torque decline in the static phase was considered to represent viscoelastic stress relaxation. Using the model, studies were conducted which demonstrated that a single static stretch resulted in a 30% viscoelastic stress relaxation. With repeated stretches muscle stiffness declined, but returned to baseline values within 1 h. Long-term stretching (3 weeks) increased joint range of motion as a result of a change in stretch tolerance rather than in the passive properties. Strength training resulted in increased muscle stiffness, which was unaffected by daily stretching. The effectiveness of different stretching techniques was attributed to a change in stretch tolerance rather than passive properties. Inflexible and older subjects have increased muscle stiffness, but a lower stretch tolerance compared to subjects with normal flexibility and younger subjects, respectively. Although far from all questions regarding the passive properties of humans skeletal muscle have been answered in these studies, the measurement technique permitted some initial important examinations of vicoelastic behavior of human skeletal muscle.",
"title": ""
},
{
"docid": "e8f7006c9235e04f16cfeeb9d3c4f264",
"text": "Widespread deployment of biometric systems supporting consumer transactions is starting to occur. Smart consumer devices, such as tablets and phones, have the potential to act as biometric readers authenticating user transactions. However, the use of these devices in uncontrolled environments is highly susceptible to replay attacks, where these biometric data are captured and replayed at a later time. Current approaches to counter replay attacks in this context are inadequate. In order to show this, we demonstrate a simple replay attack that is 100% effective against a recent state-of-the-art face recognition system; this system was specifically designed to robustly distinguish between live people and spoofing attempts, such as photographs. This paper proposes an approach to counter replay attacks for face recognition on smart consumer devices using a noninvasive challenge and response technique. The image on the screen creates the challenge, and the dynamic reflection from the person's face as they look at the screen forms the response. The sequence of screen images and their associated reflections digitally watermarks the video. By extracting the features from the reflection region, it is possible to determine if the reflection matches the sequence of images that were displayed on the screen. Experiments indicate that the face reflection sequences can be classified under ideal conditions with a high degree of confidence. These encouraging results may pave the way for further studies in the use of video analysis for defeating biometric replay attacks on consumer devices.",
"title": ""
},
{
"docid": "a56edeae4520c745003d5cd0baae7708",
"text": "A random access memory (RAM) uses n bits to randomly address N=2(n) distinct memory cells. A quantum random access memory (QRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(logN) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust QRAM algorithm, as it in general requires entanglement among exponentially less gates, and leads to an exponential decrease in the power needed for addressing. A quantum optical implementation is presented.",
"title": ""
},
{
"docid": "ffd0494007a1b82ed6b03aaefd7f8be9",
"text": "In this paper we consider the problem of robot navigation in simple maze-like environments where the robot has to rely on its onboard sensors to perform the navigation task. In particular, we are interested in solutions to this problem that do not require localization, mapping or planning. Additionally, we require that our solution can quickly adapt to new situations (e.g., changing navigation goals and environments). To meet these criteria we frame this problem as a sequence of related reinforcement learning tasks. We propose a successor-feature-based deep reinforcement learning algorithm that can learn to transfer knowledge from previously mastered navigation tasks to new problem instances. Our algorithm substantially decreases the required learning time after the first task instance has been solved, which makes it easily adaptable to changing environments. We validate our method in both simulated and real robot experiments with a Robotino and compare it to a set of baseline methods including classical planning-based navigation.",
"title": ""
},
{
"docid": "2d3b452d7a8cf8f29ac1896f14c43faa",
"text": "Since the amount of information on the internet is growing rapidly, it is not easy for a user to find relevant information for his/her query. To tackle this issue, much attention has been paid to Automatic Document Summarization. The key point in any successful document summarizer is a good document representation. The traditional approaches based on word overlapping mostly fail to produce that kind of representation. Word embedding, distributed representation of words, has shown an excellent performance that allows words to match on semantic level. Naively concatenating word embeddings makes the common word dominant which in turn diminish the representation quality. In this paper, we employ word embeddings to improve the weighting schemes for calculating the input matrix of Latent Semantic Analysis method. Two embedding-based weighting schemes are proposed and then combined to calculate the values of this matrix. The new weighting schemes are modified versions of the augment weight and the entropy frequency. The new schemes combine the strength of the traditional weighting schemes and word embedding. The proposed approach is experimentally evaluated on three well-known English datasets, DUC 2002, DUC 2004 and Multilingual 2015 Single-document Summarization for English. The proposed model performs comprehensively better compared to the state-of-the-art methods, by at least 1% ROUGE points, leading to a conclusion that it provides a better document representation and a better document summary as a result.",
"title": ""
},
{
"docid": "434ea2b009a1479925ce20e8171aea46",
"text": "Several high-voltage silicon carbide (SiC) devices have been demonstrated over the past few years, and the latest-generation devices are showing even faster switching, and greater current densities. However, there are no commercial gate drivers that are suitable for these high-voltage, high-speed devices. Consequently, there has been a great research effort into the development of gate drivers for high-voltage SiC transistors. This work presents the first detailed report on the design and testing of a high-power-density, high-speed, and high-noise-immunity gate drive for a high-current, 10 kV SiC MOSFET module.",
"title": ""
}
] |
scidocsrr
|
14306881432e7b8363e84157717369f4
|
Performance considerations of network functions virtualization using containers
|
[
{
"docid": "b6c62936aef87ab2cce565f6142424bf",
"text": "Concerns have been raised about the performance of PC-based virtual routers as they do packet processing in software. Furthermore, it becomes challenging to maintain isolation among virtual routers due to resource contention in a shared environment. Hardware vendors recognize this issue and PC hardware with virtualization support (SR-IOV and Intel-VTd) has been introduced in recent years. In this paper, we investigate how such hardware features can be integrated with two different virtualization technologies (LXC and KVM) to enhance performance and isolation of virtual routers on shared environments. We compare LXC and KVM and our results indicate that KVM in combination with hardware support can provide better trade-offs between performance and isolation. We notice that KVM has slightly lower throughput, but has superior isolation properties by providing more explicit control of CPU resources. We demonstrate that KVM allows defining a CPU share for a virtual router, something that is difficult to achieve in LXC, where packet forwarding is done in a kernel shared by all virtual routers.",
"title": ""
}
] |
[
{
"docid": "19fe8c6452dd827ffdd6b4c6e28bc875",
"text": "Motivation for the investigation of position and waypoint controllers is the demand for Unattended Aerial Systems (UAS) capable of fulfilling e.g. surveillance tasks in contaminated or in inaccessible areas. Hence, this paper deals with the development of a 2D GPS-based position control system for 4 Rotor Helicopters able to keep positions above given destinations as well as to navigate between waypoints while minimizing trajectory errors. Additionally, the novel control system enables permanent full speed flight with reliable altitude keeping considering that the resulting lift is decreasing while changing pitch or roll angles for position control. In the following chapters the control procedure for position control and waypoint navigation is described. The dynamic behavior was simulated by means of Matlab/Simulink and results are shown. Further, the control strategies were implemented on a flight demonstrator for validation, experimental results are provided and a comparison is discussed.",
"title": ""
},
{
"docid": "ca1729ffc67b37c39eca7d98115a55ec",
"text": "Causal inference is one of the fundamental problems in science. In recent years, several methods have been proposed for discovering causal structure from observational data. These methods, however, focus specifically on numeric data, and are not applicable on nominal or binary data. In this work, we focus on causal inference for binary data. Simply put, we propose causal inference by compression. To this end we propose an inference framework based on solid information theoretic foundations, i.e. Kolmogorov complexity. However, Kolmogorov complexity is not computable, and hence we propose a practical and computable instantiation based on the Minimum Description Length (MDL) principle. To apply the framework in practice, we propose ORIGO, an efficient method for inferring the causal direction from binary data. ORIGO employs the lossless PACK compressor, works directly on the data and does not require assumptions about neither distributions nor the type of causal relations. Extensive evaluation on synthetic, benchmark, and real-world data shows that ORIGO discovers meaningful causal relations, and outperforms state-of-the-art methods by a wide margin.",
"title": ""
},
{
"docid": "17cd4876c5189cf91fbe1ad4cfd1c962",
"text": "Ad click prediction is a task to estimate the click-through rate (CTR) in sponsored ads, the accuracy of which impacts user search experience and businesses' revenue. State-of-the-art sponsored search systems typically model it as a classification problem and employ machine learning approaches to predict the CTR per ad. In this paper, we propose a new approach to predict ad CTR in sequence which considers user browsing behavior and the impact of top ads quality to the current one. To the best of our knowledge, this is the first attempt in the literature to predict ad CTR by using Recurrent Neural Networks (RNN) with Long Short-Term Memory (LSTM) cells. The proposed model is evaluated on a real dataset and we show that LSTM-RNN outperforms DNN model on both AUC and RIG. Since the RNN inference is time consuming, a simplified version is also proposed, which can achieve more than half of the gain with the overall serving cost almost unchanged.",
"title": ""
},
{
"docid": "670b1d7cf683732c38d197126e094a74",
"text": "Deep learning software demands reliability and performance. However, many of the existing deep learning frameworks are software libraries that act as an unsafe DSL in Python and a computation graph interpreter. We present DLVM, a design and implementation of a compiler infrastructure with a linear algebra intermediate representation, algorithmic differentiation by adjoint code generation, domainspecific optimizations and a code generator targeting GPU via LLVM. Designed as a modern compiler infrastructure inspired by LLVM, DLVM is more modular and more generic than existing deep learning compiler frameworks, and supports tensor DSLs with high expressivity. With our prototypical staged DSL embedded in Swift, we argue that the DLVM system enables a form of modular, safe and performant frameworks for deep learning.",
"title": ""
},
{
"docid": "420a3d0059a91e78719955b4cc163086",
"text": "The superior skills of experts, such as accomplished musicians and chess masters, can be amazing to most spectators. For example, club-level chess players are often puzzled by the chess moves of grandmasters and world champions. Similarly, many recreational athletes find it inconceivable that most other adults – regardless of the amount or type of training – have the potential ever to reach the performance levels of international competitors. Especially puzzling to philosophers and scientists has been the question of the extent to which expertise requires innate gifts versus specialized acquired skills and abilities. One of the most widely used and simplest methods of gathering data on exceptional performance is to interview the experts themselves. But are experts always capable of describing their thoughts, their behaviors, and their strategies in a manner that would allow less-skilled individuals to understand how the experts do what they do, and perhaps also understand how they might reach expert level through appropriate training? To date, there has been considerable controversy over the extent to which experts are capable of explaining the nature and structure of their exceptional performance. Some pioneering scientists, such as Binet (1893 / 1966), questioned the validity of the experts’ descriptions when they found that some experts gave reports inconsistent with those of other experts. To make matters worse, in those rare cases that allowed verification of the strategy by observing the performance, discrepancies were found between the reported strategies and the observations (Watson, 1913). Some of these discrepancies were explained, in part, by the hypothesis that some processes were not normally mediated by awareness/attention and that the mere act of engaging in self-observation (introspection) during performance changed the content of ongoing thought processes. These problems led most psychologists in first half of the 20th century to reject all types of introspective verbal reports as valid scientific evidence, and they focused almost exclusively on observable behavior (Boring, 1950). In response to the problems with the careful introspective analysis of images and perceptions, investigators such as John B.",
"title": ""
},
{
"docid": "18762f4c3115ae53b2b88aafde77856c",
"text": "BACKGROUND\nReconstruction of the skin defects of malar region poses some challenging problems including obvious scar formation, dog-ear formation, trapdoor deformity and displacement of surrounding anatomic landmarks such as the lower eyelid, oral commissure, ala nasi, and sideburn.\n\n\nPURPOSE\nHere, a new local flap procedure, namely the reading man procedure, for reconstruction of large malar skin defects is described.\n\n\nMATERIALS AND METHODS\nIn this technique, 2 flaps designed in an unequal Z-plasty manner are used. The first flap is transposed to the defect area, whereas the second flap is used for closure of the first flap's donor site. In the last 5 years, this technique has been used for closure of the large malar defects in 18 patients (11 men and 7 women) aged 21 to 95 years. The defect size was ranging between 3 and 8.5 cm in diameter.\n\n\nRESULTS\nA tension-free defect closure was obtained in all patients. There was no patient with dog-ear formation, ectropion, or distortion of the surrounding anatomic structures. No tumor recurrence was observed. A mean follow-up of 26 months (range, 5 mo to 3.5 y) revealed a cosmetically acceptable scar formation in all patients.\n\n\nCONCLUSIONS\nThe reading man procedure was found to be a useful and easygoing technique for the closure of malar defects, which allows defect closure without any additional excision of surrounding healthy tissue. It provides a tension-free closure of considerably large malar defects without creating distortions of the mobile anatomic structures.",
"title": ""
},
{
"docid": "737dda9cc50e5cf42523e6cadabf524e",
"text": "Maintaining incisor alignment is an important goal of orthodontic retention and can only be guaranteed by placement of an intact, passive and permanent fixed retainer. Here we describe a reliable technique for bonding maxillary retainers and demonstrate all the steps necessary for both technician and clinician. The importance of increasing the surface roughness of the wire and teeth to be bonded, maintaining passivity of the retainer, especially during bonding, the use of a stiff wire and correct placement of the retainer are all discussed. Examples of adverse tooth movement from retainers with twisted and multistrand wires are shown.",
"title": ""
},
{
"docid": "a74b091706f4aeb384d2bf3d477da67d",
"text": "Amazon's Echo and its conversational agent Alexa open exciting opportunities for understanding how people perceive and interact with virtual agents. Drawing from user reviews of the Echo posted to Amazon.com, this case study explores the degree to which user reviews indicate personification of the device, sociability level of interactions, factors linked with personification, and influences on user satisfaction. Results indicate marked variance in how people refer to the device, with over half using the personified name Alexa but most referencing the device with object pronouns. Degree of device personification is linked with sociability of interactions: greater personification co-occurs with more social interactions with the Echo. Reviewers mentioning multiple member households are more likely to personify the device than reviewers mentioning living alone. Even after controlling for technical issues, personification predicts user satisfaction with the Echo.",
"title": ""
},
{
"docid": "7abe1fd1b0f2a89bf51447eaef7aa989",
"text": "End users increasingly expect ubiquitous connectivity while on the move. With a variety of wireless access technologies available, we expect to always be connected to the technology that best matches our performance goals and price points. Meanwhile, sophisticated onboard units (OBUs) enable geolocation and complex computation in support of handover. In this paper, we present an overview of vertical handover techniques and propose an algorithm empowered by the IEEE 802.21 standard, which considers the particularities of the vehicular networks (VNs), the surrounding context, the application requirements, the user preferences, and the different available wireless networks [i.e., Wireless Fidelity (Wi-Fi), Worldwide Interoperability for Microwave Access (WiMAX), and Universal Mobile Telecommunications System (UMTS)] to improve users' quality of experience (QoE). Our results demonstrate that our approach, under the considered scenario, is able to meet application requirements while ensuring user preferences are also met.",
"title": ""
},
{
"docid": "5e3d770390e03445c079c05a097fb891",
"text": "Electronic Commerce has increased the global reach of small and medium scale enterprises (SMEs); its acceptance as an IT infrastructure depends on the users’ conscious assessment of the influencing constructs as depicted in Technology Acceptance Model (TAM), Theory of Reasoned Action (TRA), Theory of Planned Behaviour (TPB), and Technology-Organization-Environment (T-O-E) model. The original TAM assumes the constructs of perceived usefulness (PU) and perceived ease of use (PEOU); TPB perceived behavioural control and subjective norms; and T-O-E firm’s size, consumer readiness, trading partners’ readiness, competitive pressure, and scope of business operation. This paper reviewed and synthesized the constructs of these models and proposed an improved TAM through T-O-E. The improved TAM and T-O-E integrated more constructs than the original TAM, T-O-E, TPB, and IDT, leading to eighteen propositions to promote and facilitate future research, and to guide explanation and prediction of IT adoption in an organized system. The integrated constructscompany mission, individual difference factors, perceived trust, and perceived service quality improve existing knowledge on EC acceptance and provide bases for informed decision(s).",
"title": ""
},
{
"docid": "654f50ccb20720fdb49a2326ae014ba9",
"text": "OBJECTIVE\nThis study was undertaken to describe the distribution of pelvic organ support stages in a population of women seen at outpatient gynecology clinics for routine gynecologic health care.\n\n\nSTUDY DESIGN\nThis was an observational study. Women seen for routine gynecologic health care at four outpatient gynecology clinics were recruited to participate. After informed consent was obtained general biographic data were collected regarding obstetric history, medical history, and surgical history. Women then underwent a pelvic examination. Pelvic organ support was measured and described according to the pelvic organ prolapse quantification system. Stages of support were evaluated by variable for trends with Pearson chi(2) statistics.\n\n\nRESULTS\nA total of 497 women were examined. The average age was 44 years, with a range of 18 to 82 years. The overall distribution of pelvic organ prolapse quantification system stages was as follows: stage 0, 6.4%; stage 1, 43.3%; stage 2, 47.7%; and stage 3, 2.6%. No subjects examined had pelvic organ prolapse quantification system stage 4 prolapse. Variables with a statistically significant trend toward increased pelvic organ prolapse quantification system stage were advancing age, increasing gravidity and parity, increasing number of vaginal births, delivery of a macrosomic infant, history of hysterectomy or pelvic organ prolapse operations, postmenopausal status, and hypertension.\n\n\nCONCLUSION\nThe distribution of the pelvic organ prolapse quantification system stages in the population revealed a bell-shaped curve, with most subjects having stage 1 or 2 support. Few subjects had either stage 0 (excellent support) or stage 3 (moderate to severe pelvic support defects) results. There was a statistically significant trend toward increased pelvic organ prolapse quantification system stage of support among women with many of the historically quoted etiologic factors for the development of pelvic organ prolapse.",
"title": ""
},
{
"docid": "5569fa921ab298e25a70d92489b273fc",
"text": "We present Centiman, a system for high performance, elastic transaction processing in the cloud. Centiman provides serializability on top of a key-value store with a lightweight protocol based on optimistic concurrency control (OCC).\n Centiman is designed for the cloud setting, with an architecture that is loosely coupled and avoids synchronization wherever possible. Centiman supports sharded transaction validation; validators can be added or removed on-the-fly in an elastic manner. Processors and validators scale independently of each other and recover from failure transparently to each other. Centiman's loosely coupled design creates some challenges: it can cause spurious aborts and it makes it difficult to implement common performance optimizations for read-only transactions. To deal with these issues, Centiman uses a watermark abstraction to asynchronously propagate information about transaction commits through the system.\n In an extensive evaluation we show that Centiman provides fast elastic scaling, low-overhead serializability for read-heavy workloads, and scales to millions of operations per second.",
"title": ""
},
{
"docid": "09a23ea8fc94178fdde98cc2774abc54",
"text": "Heating, Ventilation, and Air Conditioning (HVAC) accounts for about half of the energy consumption in buildings. HVAC energy consumption can be reduced by changing the indoor air temperature setpoint, but changing the setpoint too aggressively can overly reduce user comfort. We have therefore designed and implemented SPOT: a Smart Personalized Office Thermal control system that balances energy conservation with personal thermal comfort in an office environment. SPOT relies on a new model for personal thermal comfort that we call the Predicted Personal Vote model. This model quantitatively predicts human comfort based on a set of underlying measurable environmental and personal parameters. SPOT uses a set of sensors, including a Microsoft Kinect, to measure the parameters underlying the PPV model, then controls heating and cooling elements to dynamically adjust indoor temperature to maintain comfort. Based on a deployment of SPOT in a real office environment, we find that SPOT can accurately maintain personal comfort despite environmental fluctuations and allows a worker to balance personal comfort with energy use.",
"title": ""
},
{
"docid": "9950daef3ca18eeee0482717c5e5fe5e",
"text": "Rapidly growing rate of industry of earth moving machines is assured through the high performance construction machineries with complex mechanism and automation of construction activity. Design of backhoe link mechanism is critical task in context of digging force developed through actuators during the digging operation. The digging forces developed by actuators must be greater than that of the resistive forces offered by the terrain to be excavated. This paper focuses on the evaluation method of bucket capacity and digging forces required to dig the terrain for light duty construction work. This method provides the prediction of digging forces and can be applied for autonomous operation of excavation task. The evaluated digging forces can be used as boundary condition and loading conditions to carry out Finite Element Analysis of the backhoe mechanism for strength and stress analysis. A generalized breakout force and digging force model also developed using the fundamentals of kinematics of backhoe mechanism in context of robotics. An analytical approach provided for static force analysis of mini hydraulic backhoe excavator attachment.",
"title": ""
},
{
"docid": "9b5224b94b448d5dabbd545aedd293f8",
"text": "the topic (a) has been dedicated to extolling its use as a decisionmaking criterion; (b) has presented isolated numerical examples of its calculation/determination; and (c) has considered it as part of the general discussions of profitability and discussed its role in customer acquisition decisions and customer acquisition/retention trade-offs. There has been a dearth of general modeling of the topic. This paper presents a series of mathematical models for determination of customer lifetime value. The choice of the models is based on a systematic theoretical taxonomy and on assumptions grounded in customer behavior. In NADA I. NASR is a doctoral student in Marketing at the School addition, selected managerial applications of these general models of of Management, Boston University. customer lifetime value are offered. 1998 John Wiley & Sons, Inc. and Direct Marketing Educational Foundation, Inc. CCC 1094-9968/98/010017-14",
"title": ""
},
{
"docid": "2ad80de5642ab11f6aaf079bc09f4c42",
"text": "We examine the relationship between geography and ethnic homophily in Estonia, a linguistically divided country. Analyzing the physical locations and cellular communications of tens of thousands of individuals, we document a strong relationship between the ethnic concentration of an individual's geographic neighborhood and the ethnic composition of the people with whom he interacts. The empirical evidence is consistent with a theoretical model in which individuals prefer to form ties with others living close by and of the same ethnicity. Exploiting variation in the data caused by migrants and quasi-exogenous settlement patterns, we nd suggestive evidence that the ethnic composition of geographic neighborhoods has a causal in uence on the ethnic structure of social networks.",
"title": ""
},
{
"docid": "5ef325cffe20a0337eca258fa7ad8392",
"text": "DEAP (Distributed Evolutionary Algorithms in Python) is a novel volutionary computation framework for rapid prototyping and testing of ideas. Its design departs from most other existing frameworks in that it seeks to make algorithms explicit and data structures transparent, as opposed to the more common black box type of frameworks. It also incorporates easy parallelism where users need not concern themselves with gory implementation details like synchronization and load balancing, only functional decomposition. Several examples illustrate the multiple properties of DEAP.",
"title": ""
},
{
"docid": "2c2281551bc085a12e9b9bf15ff092c5",
"text": "Clustering aims at discovering groups and identifying interesting distributions and patterns in data sets. Researchers have extensively studied clustering since it arises in many application domains in engineering and social sciences. In the last years the availability of huge transactional and experimental data sets and the arising requirements for data mining created needs for clustering algorithms that scale and can be applied in diverse domains. This paper surveys clustering methods and approaches available in literature in a comparative way. It also presents the basic concepts, principles and assumptions upon which the clustering algorithms are based. Another important issue is the validity of the clustering schemes resulting from applying algorithms. This is also related to the inherent features of the data set under concern. We review and compare clustering validity measures available in the literature. Furthermore, we illustrate the issues that are underaddressed by the recent algorithms and we address new research directions.",
"title": ""
},
{
"docid": "f560dbe8f3ff47731061d67b596ec7b0",
"text": "This paper considers the problem of fixed priority scheduling of periodic tasks with arbitrary deadlines. A general criterion for the schedulability of such a task set is given. Worst case bounds are given which generalize the Liu and Layland bound. The results are shown to provide a basis for developing predictable distributed real-time systems.",
"title": ""
},
{
"docid": "9a43387bb85efe85e9395a90a7934b5f",
"text": "0. Introduction This is a manual for coding Centering Theory (Grosz et al., 1995) in Spanish. The manual is still under revision. The coding is being done on two sets of corpora: • ISL corpus. A set of task-oriented dialogues in which participants try to find a date where they can meet. Distributed by the Interactive Systems Lab at Carnegie Mellon University. Transcription conventions for this corpus can be found in Appendix A. • CallHome corpus. Spontaneous telephone conversations, distributed by the Linguistics Data Consortium at the University of Pennsylvania. Information about this corpus can be obtained from the LDC. This manual provides guidelines for how to segment discourse (Section 1), what to include in the list of forward-looking centers (Section 2), and how to rank the list (Section 3). In Section 4, we list some unresolved issues. 1. Utterance segmentation 1.1 Utterance In this section, we discuss how to segment discourse into utterances. Besides general segmentation of coordinated and subordinated clauses, we discuss how to treat some spoken language phenomena, such as false starts. In general, an utterance U is a tensed clause. Because we are analyzing telephone conversations, a turn may be a clause or it may be not. For those cases in which the turn is not a clause, a turn is considered an utterance if it contains entities. The first pass in segmentation is to break the speech into intonation units. For the ISL corpus, an utterance U is defined as an intonation unit marked by either {period}, {quest} or {seos} (see Appendix A for details on transcription). Note that {comma}, unless it is followed by {seos}, does not define an utterance. In the example below, (1c.) corresponds to the beginning of a turn by a different speaker. However, even though (1c.) is not a tensed clause, it is treated as an utterance because it contains entities, it is followed by {comma} {seos}, and it does not seem to belong to the following utterance.",
"title": ""
}
] |
scidocsrr
|
a5c3af37a329ee1bb360fa6c40b9fa29
|
On the security of the Winternitz one-time signature scheme
|
[
{
"docid": "2b3f8f7735a6713bbbb07cf690556d11",
"text": "Let F be some block cipher (eg., DES) with block length l. The Cipher Block Chaining Message Authentication Code (CBC MAC) speci es that an m-block message x = x1 xm be authenticated among parties who share a secret key a for the block cipher by tagging x with a pre x of ym, where y0 = 0 l and yi = Fa(mi yi 1) for i = 1; 2; : : : ;m. This method is a pervasively used international and U.S. standard. We provide its rst formal justi cation, showing the following general lemma: cipher block chaining a pseudorandom function yields a pseudorandom function. Underlying our results is a technical lemma of independent interest, bounding the success probability of a computationally unbounded adversary in distinguishing between a randomml-bit to l-bit function and the CBC MAC of a random l-bit to l-bit function. Department of Computer Science & Engineering, University of California at San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA. E-Mail: mihir@cs.ucsd.edu. URL: http://www-cse.ucsd.edu/users/mihir. Supported by NSF CAREER Award CCR-9624439 and a Packard Foundation Fellowship in Science and Engineering. y NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540, USA. Email: joe@research.nj.nec.com. z Department of Computer Science, University of California at Davis, Davis, CA 95616, USA. Email: rogaway@cs.ucdavis.edu. URL: http://wwwcsif.cs.ucdavis.edu/~rogaway. Supported by NSF CAREER Award CCR-9624560.",
"title": ""
}
] |
[
{
"docid": "7d5bdaead744ac93c5b108b4e258ebbf",
"text": "Due to the large-scale nature of complex product architectures, it is necessary to develop some form of abstraction in order to be able to describe and grasp the structure of the product, facilitating product modularization. In this paper we develop three methods for describing product architectures: (a) the Dependency Structure Matrix (DSM), (b) Molecular Diagrams (MD), and (c) Visibility-Dependency (VD) signature diagrams. Each method has its own language (and abstraction), which can be used to qualitatively or quantitatively characterize any given architecture spanning the modular-integrated continuum. A consequence of abstraction is the loss of some detail. So, it is important to choose the correct method (and resolution) to characterize the architecture in order to retain the salient details. The proposed methods are suited for describing architectures of varying levels of complexity and detail. The three methods are demonstrated using a sequence of illustrative simple examples and a case-study analysis of a complex product architecture for an industrial gas turbine. © 2003 Wiley Periodicals, Inc. Syst Eng 7: 35–60, 2004",
"title": ""
},
{
"docid": "1313fbdd0721b58936a05da5080239df",
"text": "Bug tracking systems are valuable assets for managing maintenance activities. They are widely used in open-source projects as well as in the software industry. They collect many different kinds of issues: requests for defect fixing, enhancements, refactoring/restructuring activities and organizational issues. These different kinds of issues are simply labeled as \"bug\" for lack of a better classification support or of knowledge about the possible kinds.\n This paper investigates whether the text of the issues posted in bug tracking systems is enough to classify them into corrective maintenance and other kinds of activities.\n We show that alternating decision trees, naive Bayes classifiers, and logistic regression can be used to accurately distinguish bugs from other kinds of issues. Results from empirical studies performed on issues for Mozilla, Eclipse, and JBoss indicate that issues can be classified with between 77% and 82% of correct decisions.",
"title": ""
},
{
"docid": "7aad80319743ac72d2c4e117e5f831fa",
"text": "In this letter, we propose a novel method for classifying ambulatory activities using eight plantar pressure sensors within smart shoes. Using these sensors, pressure data of participants can be collected regarding level walking, stair descent, and stair ascent. Analyzing patterns of the ambulatory activities, we present new features with which to describe the ambulatory activities. After selecting critical features, a multi-class support vector machine algorithm is applied to classify these activities. Applying the proposed method to the experimental database, we obtain recognition rates up to 95.2% after six steps.",
"title": ""
},
{
"docid": "376e5237101ec6f659085df1e9521e66",
"text": "Unmanned aerial vehicles are gaining a lot of popularity among an ever growing community of amateurs as well as service providers. Emerging technologies, such as LTE 4G/5G networks and mobile edge computing, will widen the use case scenarios of UAVs. In this article, we discuss the potential of UAVs, equipped with IoT devices, in delivering IoT services from great heights. A high-level view of a UAV-based integrative IoT platform for the delivery of IoT services from large height, along with the overall system orchestrator, is presented in this article. As an envisioned use case of the platform, the article demonstrates how UAVs can be used for crowd surveillance based on face recognition. To evaluate the use case, we study the offloading of video data processing to a MEC node compared to the local processing of video data onboard UAVs. For this, we developed a testbed consisting of a local processing node and one MEC node. To perform face recognition, the Local Binary Pattern Histogram method from the Open Source Computer Vision is used. The obtained results demonstrate the efficiency of the MEC-based offloading approach in saving the scarce energy of UAVs, reducing the processing time of recognition, and promptly detecting suspicious persons.",
"title": ""
},
{
"docid": "89281eed8f3faadcf0bc07bd151728a4",
"text": "The Internet of Things (IoT) continues to increase in popularity as more “smart” devices are released and sold every year. Three protocols in particular, Zigbee, Z-wave, and Bluetooth Low Energy (BLE) are used for network communication on a significant number of IoT devices. However, devices utilizing each of these three protocols have been compromised due to either implementation failures by the manufacturer or security shortcomings in the protocol itself. This paper identifies the security features and shortcomings of each protocol citing employed attacks for reference. Additionally, it will serve to help manufacturers make two decisions: First, should they invest in creating their own protocol, and second, if they decide against this, which protocol should they use and how should they implement it to ensure their product is as secure as it can be. These answers are made with respect to the specific factors manufacturers in the IoT space face such as the reversed CIA model with availability usually being the most important of the three and the ease of use versus security tradeoff that manufacturers have to consider. This paper finishes with a section aimed at future research for IoT communication protocols.",
"title": ""
},
{
"docid": "3b903b284e6a7bfb54113242b1143ddc",
"text": "Hypertension — the chronic elevation of blood pressure — is a major human health problem. In most cases, the root cause of the disease remains unknown, but there is mounting evidence that many forms of hypertension are initiated and maintained by an elevated sympathetic tone. This review examines how the sympathetic tone to cardiovascular organs is generated, and discusses how elevated sympathetic tone can contribute to hypertension.",
"title": ""
},
{
"docid": "f8622acd0d0c2811b6ae2d0b5d4c9a6b",
"text": "Squalene is a linear triterpene that is extensively utilized as a principal component of parenteral emulsions for drug and vaccine delivery. In this review, the chemical structure and sources of squalene are presented. Moreover, the physicochemical and biological properties of squalene-containing emulsions are evaluated in the context of parenteral formulations. Historical and current parenteral emulsion products containing squalene or squalane are discussed. The safety of squalene-based products is also addressed. Finally, analytical techniques for characterization of squalene emulsions are examined.",
"title": ""
},
{
"docid": "c7d69faeac74bcf85f28b2c61dab6af1",
"text": "STATEMENT OF THE PROBLEM Thoracic trauma is a notable cause of morbidity and mortality in American trauma centers, where 25% of traumatic deaths are related to injuries sustained within the thoracic cage.1 Chest injuries occur in 60% of polytrauma cases; therefore, a rough estimate of the occurrence of hemothorax related to trauma in the United States approaches 300,000 cases per year.2 The management of hemothorax and pneumothorax has been a complex problem since it was first described over 200 years ago. Although the majority of chest trauma can be managed nonoperatively, there are several questions surrounding the management of hemothorax and occult pneumothorax that are not as easily answered. The technologic advances have raised the question of what to do with incidentally found hemothorax and pneumothorax discovered during the trauma evaluation. Previously, we were limited by our ability to visualize quantities 500 mL of blood on chest radiograph. Now that smaller volumes of blood can be visualized via chest computed tomography (CT), the management of these findings presents interesting clinical questions. In addition to early identification of these processes, these patients often find themselves with late complications such as retained hemothorax and empyema. The approach to these complex problems continues to evolve. Finally, as minimally invasive surgery grows and finds new applications, there are reproducible benefits to the patients in pursuing these interventions as both a diagnostic and therapeutic interventions. Video-assisted thoracoscopic surgery (VATS) has a growing role in the management of trauma patients.",
"title": ""
},
{
"docid": "a0d2ea9b5653d6ca54983bb3d679326e",
"text": "A dynamic reasoning system (DRS) is an adaptation of a conventional formal logical system that explicitly portrays reasoning as a temporal activity, with each extralogical input to the system and each inference rule application being viewed as occurring at a distinct timestep. Every DRS incorporates some well-defined logic together with a controller that serves to guide the reasoning process in response to user inputs. Logics are generic, whereas controllers are application specific. Every controller does, nonetheless, provide an algorithm for nonmonotonic belief revision. The general notion of a DRS comprises a framework within which one can formulate the logic and algorithms for a given application and prove that the algorithms are correct, that is, that they serve to (1) derive all salient information and (2) preserve the consistency of the belief set. This article illustrates the idea with ordinary first-order predicate calculus, suitably modified for the present purpose, and two examples. The latter example revisits some classic nonmonotonic reasoning puzzles (Opus the Penguin, Nixon Diamond) and shows how these can be resolved in the context of a DRS, using an expanded version of first-order logic that incorporates typed predicate symbols. All concepts are rigorously defined and effectively computable, thereby providing the foundation for a future software implementation.",
"title": ""
},
{
"docid": "1416f250d8ec4e47a9b8590e82dc8881",
"text": "This paper presents expressions for the limiting value of the duty cycle and the minimum value of the slope compensation for marginally stable operation as well as for the normalized crossover frequency, the maximum duty cycle, and the value of the slope compensation at a required phase margin. These quantities describe the performance of the inner-current loop in peak current-mode controlled PWM dc-dc converters. The derivations are based on the Padé approximation of z = exp(sTs). The results of this paper can be used for the design of the inner-current loop with a specified phase margin.",
"title": ""
},
{
"docid": "2c58791fd0f477fadf6d376ac4aaf16e",
"text": "Networked digital media present new challenges for people to locate information that they can trust. At the same time, societal reliance on information that is available solely or primarily via the Internet is increasing. This article discusses how and why digitally networked communication environments alter traditional notions of trust, and presents research that examines how information consumers make judgments about the credibility and accuracy of information they encounter online. Based on this research, the article focuses on the use of cognitive heuristics in credibility evaluation. Findings from recent studies are used to illustrate the types of cognitive heuristics that information consumers employ when determining what sources and information to trust online. The article concludes with an agenda for future research that is needed to better understand the role and influence of cognitive heuristics in credibility evaluation in computer-mediated communication contexts. © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5759152f6e9a9cb1e6c72857e5b3ec54",
"text": "Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly. We present a gradient normalization (GradNorm) algorithm that automatically balances training in deep multitask models by dynamically tuning gradient magnitudes. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting across multiple tasks when compared to single-task networks, static baselines, and other adaptive multitask loss balancing techniques. GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter α. Thus, what was once a tedious search process that incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we will demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.",
"title": ""
},
{
"docid": "c61107e9c5213ddb8c5e3b1b14dca661",
"text": "In advanced driving assistance systems, it is important to be able to detect the region covered by the road in the images. This paper proposes a method for estimating the road region in images captured by a vehicle-mounted monocular camera. Our proposed method first estimates all of relevant parameters for the camera motion and the 3D road plane from correspondence points between successive images. By calculating a homography matrix from the estimated camera motion and the estimated road plane parameters, and then warping the image at the previous frame, the road region can be determined. To achieve robustness in various road scenes, our method selects the threshold for determining the road region adaptively and incorporates the assumption of a simple road region boundary. In our experiments, it has been shown that the proposed method is able to estimate the road region in real road environments.",
"title": ""
},
{
"docid": "5deae44a9c14600b1a2460836ed9572d",
"text": "Grasping an object in a cluttered, unorganized environment is challenging because of unavoidable contacts and interactions between the robot and multiple immovable (static) and movable (dynamic) obstacles in the environment. Planning an approach trajectory for grasping in such situations can benefit from physics-based simulations that describe the dynamics of the interaction between the robot manipulator and the environment. In this work, we present a physics-based trajectory optimization approach for planning grasp approach trajectories. We present novel cost objectives and identify failure modes relevant to grasping in cluttered environments. Our approach uses rollouts of physics-based simulations to compute the gradient of the objective and of the dynamics. Our approach naturally generates behaviors such as choosing to push objects that are less likely to topple over, recognizing and avoiding situations which might cause a cascade of objects to fall over, and adjusting the manipulator trajectory to push objects aside in a direction orthogonal to the grasping direction. We present results in simulation for grasping in a variety of cluttered environments with varying levels of density of obstacles in the environment. Our experiments in simulation indicate that our approach outperforms a baseline approach that considers multiple straight-line trajectories modified to account for static obstacles by an aggregate success rate of 14% with varying degrees of object clutter.",
"title": ""
},
{
"docid": "ad2c61c7ad3cd6086be81f54401fe0b1",
"text": "Algorithmic brands: A decade of brand experiments with mobile and social media Nicholas Carah, University of Queensland Published in New Media and Society here: http://nms.sagepub.com/content/early/2015/09/10/1461444815605463.abstract Citation: Carah, N. (2015). Algorithmic brands: a decade of brand experiments with mobile and social media, New Media and Society, 1-16. doi 10.1177/1461444815605463 Abstract This article examines how brands have iteratively experimented with mobile and social media. The activities of brands – including Coca-Cola, Virgin and Smirnoff – at music festivals in Australia since 2005 are used as an instructive case. The article demonstrates how these brands imagined social media, attempted to instruct consumers to use mobile devices, and used cultural events to stimulate image production tuned to the decision-making of social media algorithms. The article contributes to debate by articulating how brands are important actors in the development of algorithmic media infrastructure and devices. Accounts of algorithmic media need to examine how the analytic capacities of social and mobile media are interdependent with orchestrating the creative participation of users.",
"title": ""
},
{
"docid": "64de7935c22f74069721ff6e66a8fe8c",
"text": "In the setting of secure multiparty computation, a set of n parties with private inputs wish to jointly compute some functionality of their inputs. One of the most fundamental results of secure computation was presented by Ben-Or, Goldwasser, and Wigderson (BGW) in 1988. They demonstrated that any n-party functionality can be computed with perfect security, in the private channels model. When the adversary is semi-honest, this holds as long as $$t<n/2$$ t < n / 2 parties are corrupted, and when the adversary is malicious, this holds as long as $$t<n/3$$ t < n / 3 parties are corrupted. Unfortunately, a full proof of these results was never published. In this paper, we remedy this situation and provide a full proof of security of the BGW protocol. This includes a full description of the protocol for the malicious setting, including the construction of a new subprotocol for the perfect multiplication protocol that seems necessary for the case of $$n/4\\le t<n/3$$ n / 4 ≤ t < n / 3 .",
"title": ""
},
{
"docid": "3cb25b6438593a36c6867a2edbbd6136",
"text": "One of the most significant challenges of human-robot interaction research is designing systems which foster an appropriate level of trust in their users: in order to use a robot effectively and safely, a user must place neither too little nor too much trust in the system. In order to better understand the factors which influence trust in a robot, we present a survey of prior work on trust in automated systems. We also discuss issues specific to robotics which pose challenges not addressed in the automation literature, particularly related to reliability, capability, and adjustable autonomy. We conclude with the results of a preliminary web-based questionnaire which illustrate some of the biases which autonomous robots may need to overcome in order to promote trust in users.",
"title": ""
},
{
"docid": "cb7e4a454d363b9cb1eb6118a4b00855",
"text": "Stream processing applications reduce the latency of batch data pipelines and enable engineers to quickly identify production issues. Many times, a service can log data to distinct streams, even if they relate to the same real-world event (e.g., a search on Facebook’s search bar). Furthermore, the logging of related events can appear on the server side with different delay, causing one stream to be significantly behind the other in terms of logged event times for a given log entry. To be able to stitch this information together with low latency, we need to be able to join two different streams where each stream may have its own characteristics regarding the degree in which its data is out-of-order. Doing so in a streaming fashion is challenging as a join operator consumes lots of memory, especially with significant data volumes. This paper describes an end-to-end streaming join service that addresses the challenges above through a streaming join operator that uses an adaptive stream synchronization algorithm that is able to handle the different distributions we observe in real-world streams regarding their event times. This synchronization scheme paces the parsing of new data and reduces overall operator memory footprint while still providing high accuracy. We have integrated this into a streaming SQL system and have successfully reduced the latency of several batch pipelines using this approach. PVLDB Reference Format: G. Jacques-Silva, R. Lei, L. Cheng, G. J. Chen, K. Ching, T. Hu, Y. Mei, K. Wilfong, R. Shetty, S. Yilmaz, A. Banerjee, B. Heintz, S. Iyer, A. Jaiswal. Providing Streaming Joins as a Service at Facebook. PVLDB, 11 (12): 1809-1821, 2018. DOI: : https://doi.org/10.14778/3229863.3229869",
"title": ""
},
{
"docid": "cd48c6b722f8e88f0dc514fcb6a0d890",
"text": "Multi-tier data-intensive applications are widely deployed in virtualized data centers for high scalability and reliability. As the response time is vital for user satisfaction, this requires achieving good performance at each tier of the applications in order to minimize the overall latency. However, in such virtualized environments, each tier (e.g., application, database, web) is likely to be hosted by different virtual machines (VMs) on multiple physical servers, where a guest VM is unaware of changes outside its domain, and the hypervisor also does not know the configuration and runtime status of a guest VM. As a result, isolated virtualization domains lend themselves to performance unpredictability and variance. In this paper, we propose IOrchestra, a holistic collaborative virtualization framework, which bridges the semantic gaps of I/O stacks and system information across multiple VMs, improves virtual I/O performance through collaboration from guest domains, and increases resource utilization in data centers. We present several case studies to demonstrate that IOrchestra is able to address numerous drawbacks of the current practice and improve the I/O latency of various distributed cloud applications by up to 31%.",
"title": ""
},
{
"docid": "a7cd63638f13051155f00c5453b86e12",
"text": "Along with increasing investments in new technologies, user technology acceptance becomes a frequently studied topic in the information systems discipline. The last two decades have seen user acceptance models being proposed, tested, refined, extended and unified. These models have contributed to our understanding of user technology acceptance factors and their relationships. Yet they have also presented two limitations: the relatively low explanatory power and inconsistent influences of the factors across studies. Several researchers have recently started to examine the potential moderating effects that may overcome these limitations. However, studies in this direction are far from being conclusive. This study attempts to provide a systematic analysis of the explanatory and situational limitations of existing technology acceptance studies. Ten moderating factors are identified and categorized into three groups: organizational factors, technological factors and individual factors. An integrative model is subsequently established, followed by corresponding propositions pertaining to the moderating factors. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
4451640f38ccc1be651423208fb69898
|
Network pharmacology: the next paradigm in drug discovery.
|
[
{
"docid": "c98ed9aedea1a229efbee9ae9d6d4123",
"text": "The application of guidelines linked to the concept of drug-likeness, such as the 'rule of five', has gained wide acceptance as an approach to reduce attrition in drug discovery and development. However, despite this acceptance, analysis of recent trends reveals that the physical properties of molecules that are currently being synthesized in leading drug discovery companies differ significantly from those of recently discovered oral drugs and compounds in clinical development. The consequences of the marked increase in lipophilicity — the most important drug-like physical property — include a greater likelihood of lack of selectivity and attrition in drug development. Tackling the threat of compound-related toxicological attrition needs to move to the mainstream of medicinal chemistry decision-making.",
"title": ""
}
] |
[
{
"docid": "91e32e80a6a2f2a504776b9fd86425ca",
"text": "We propose a method for semi-supervised semantic segmentation using an adversarial network. While most existing discriminators are trained to classify input images as real or fake on the image level, we design a discriminator in a fully convolutional manner to differentiate the predicted probability maps from the ground truth segmentation distribution with the consideration of the spatial resolution. We show that the proposed discriminator can be used to improve semantic segmentation accuracy by coupling the adversarial loss with the standard cross entropy loss of the proposed model. In addition, the fully convolutional discriminator enables semi-supervised learning through discovering the trustworthy regions in predicted results of unlabeled images, thereby providing additional supervisory signals. In contrast to existing methods that utilize weakly-labeled images, our method leverages unlabeled images to enhance the segmentation model. Experimental results on the PASCAL VOC 2012 and Cityscapes datasets demonstrate the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "bb404c0e94cde80436d2c5bd331c7816",
"text": "Conventional video segmentation methods often rely on temporal continuity to propagate masks. Such an assumption suffers from issues like drifting and inability to handle large displacement. To overcome these issues, we formulate an effective mechanism to prevent the target from being lost via adaptive object re-identification. Specifically, our Video Object Segmentation with Re-identification (VSReID) model includes a mask propagation module and a ReID module. The former module produces an initial probability map by flow warping while the latter module retrieves missing instances by adaptive matching. With these two modules iteratively applied, our VS-ReID records a global mean (Region Jaccard and Boundary F measure) of 0.699, the best performance in 2017 DAVIS Challenge.",
"title": ""
},
{
"docid": "13974867d98411b6a999374afcc5b2cb",
"text": "Current best local descriptors are learned on a large dataset of matching and non-matching keypoint pairs. However, data of this kind is not always available since detailed keypoint correspondences can be hard to establish. On the other hand, we can often obtain labels for pairs of keypoint bags. For example, keypoint bags extracted from two images of the same object under different views form a matching pair, and keypoint bags extracted from images of different objects form a non-matching pair. On average, matching pairs should contain more corresponding keypoints than non-matching pairs. We describe an end-to-end differentiable architecture that enables the learning of local keypoint descriptors from such weakly-labeled data.",
"title": ""
},
{
"docid": "8ff6325fed2f8f3323833f6ac446eb3d",
"text": "Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this `1-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we extend MKL to arbitrary norms. We devise new insights on the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary norms, that is `p-norms with p ≥ 1. This interleaved optimization is much faster than the commonly used wrapper approaches, as demonstrated on several data sets. A theoretical analysis and an experiment on controlled artificial data shed light on the appropriateness of sparse, non-sparse and `∞-norm MKL in various scenarios. Importantly, empirical applications of `p-norm MKL to three real-world problems from computational biology show that non-sparse MKL achieves accuracies that surpass the state-of-the-art. Data sets, source code to reproduce the experiments, implementations of the algorithms, and further information are available at http://doc.ml.tu-berlin.de/nonsparse_mkl/.",
"title": ""
},
{
"docid": "43a0a3e8ebaa2422efa44bb9b34acd8f",
"text": "Among the open problems in P2P systems, support for nontrivial search predicates, standardized query languages, distributed query processing, query load balancing, and quality of query results have been identified as some of the most relevant issues. This paper describes how range queries as an important nontrivial search predicate can be supported in a structured overlay network that provides O(log n) search complexity on top of a trie abstraction. We provide analytical results that show that the proposed approach is efficient, supports arbitrary granularity of ranges, and demonstrate that its algorithmic complexity in terms of messages is independent of the size of the queried ranges and only depends on the size of the result set. In contrast to other systems which provide evaluation results only through simulations, we validate the theoretical analysis of the algorithms with large-scale experiments on the PlanetLab infrastructure using a fully-fledged implementation of our approach.",
"title": ""
},
{
"docid": "8e6ab2776e8e1ad7cb3d02b9dfbcb733",
"text": "We present a case report of a 4months old first born male child which was brought to our hospital with complaints of abdominal distension and mass in the upper abdomen causing feeding difficulty. Child was clinically found to have a firm non tender mass of about 10 x 8cms in the left upper quadrant of the abdomen which was clinically suspected to be Neuroblastoma. The child was subjected to ultrasound examination using 5-7Mhz Linear transducer in Philips HD11XE machine, which revealed a multicystic heterogeneous mass lesion of 10 x 8cms in the left hypochondrium, displacing the left kidney posteriorly and spleen inferiorly and crossing the midline showing significant peripheral colour uptake, possibility of Neuroblastoma. The child was then subject to CT scan of abdomen with contrast enhancement using 16slice Toshiba Activion scanner. The findings were a large, fairly well defined heterogeneous mass showing both solid and cystic areas showing significant internal and peripheral enhancement with areas of coarse amorphous calcifications. The mass was seen to erode the posterior wall of stomach and displacing the oral contrast within the stomach. The bowel loops were displaced inferiorly and towards the right, the left kidney posteriorly and the spleen inferiorly. No adjacent lymphadenopathy was seen. The child later underwent exploratory laparotomy and a large multicystic mass arising from postero-inferior wall of the stomach along its greater curvature was excised and stomach repaired. On histopathology it was proved to be an immature gastric teratoma containing mixed derivatives of all three germ cell layers.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "9218597308b80bdfa41511e977b42dd1",
"text": "The biophysical characterization of CPX-351, a liposomal formulation of cytarabine and daunorubicin encapsulated in a synergistic 5:1 molar ratio (respectively), is presented. CPX-351 is a promising drug candidate currently in two concurrent Phase 2 trials for treatment of acute myeloid leukemia. Its therapeutic activity is dependent on maintenance of the synergistic 5:1 drug:drug ratio in vivo. CPX-351 liposomes have a mean diameter of 107 nm, a single phase transition temperature of 55.3 degrees C, entrapped volume of 1.5 microL/micromol lipid and a zeta potential of -33 mV. Characterization of these physicochemical properties led to identification of an internal structure within the liposomes, later shown to be produced during the cytarabine loading procedure. Fluorescence labeling studies are presented that definitively show that the structure is composed of lipid and represents a second lamella. Extensive spectroscopic studies of the drug-excipient interactions within the liposome and in solution reveal that interactions of both cytarabine and daunorubicin with the copper(II) gluconate/triethanolamine-based buffer system play a role in maintenance of the 5:1 cytarabine:daunorubicin ratio within the formulation. These studies demonstrate the importance of extensive biophysical study of liposomal drug products to elucidate the key physicochemical properties that may impact their in vivo performance.",
"title": ""
},
{
"docid": "a3ebadf449537b5df8de3c5ab96c74cb",
"text": "Do conglomerate firms have the ability to allocate resources efficiently across business segments? We address this question by comparing the performance of firms that follow passive benchmark strategies in their capital allocation process to those that actively deviate from those benchmarks. Using three measures of capital allocation style to capture various aspects of activeness, we show that active firms have a lower average industry-adjusted profitability than passive firms. This result is robust to controlling for potential endogeneity using matching analysis and regression analysis with firm fixed effects. Moreover, active firms obtain lower valuation and lower excess stock returns in subsequent periods. Our findings suggest that, on average, conglomerate firms that actively allocate resources across their business segments do not do so efficiently and that the stock market does not fully incorporate information revealed in the internal capital allocation process. Guedj and Huang are from the McCombs School of Business, University of Texas at Austin. Guedj: guedj@mail.utexas.edu and (512) 471-5781. Huang: jennifer.huang@mccombs.utexas.edu and (512) 232-9375. Sulaeman is from the Cox School of Business, Southern Methodist University, sulaeman@smu.edu and (214) 768-8284. The authors thank Alexander Butler, Amar Gande, Mark Leary, Darius Miller, Maureen O’Hara, Owen Lamont, Gordon Phillips, Mike Roberts, Oleg Rytchkov, Gideon Saar, Zacharias Sautner, Clemens Sialm, Rex Thompson, Sheridan Titman, Yuhai Xuan, participants at the Financial Research Association meeting and seminars at Cornell University, Southern Methodist University, the University of Texas at Austin, and the University of Texas at Dallas for their helpful comments.",
"title": ""
},
{
"docid": "c101290e355e76df7581a4500c111c86",
"text": "The Internet of Things (IoT) is a network of physical things, objects, or devices, such as radio-frequency identification tags, sensors, actuators, mobile phones, and laptops. The IoT enables objects to be sensed and controlled remotely across existing network infrastructure, including the Internet, thereby creating opportunities for more direct integration of the physical world into the cyber world. The IoT becomes an instance of cyberphysical systems (CPSs) with the incorporation of sensors and actuators in IoT devices. Objects in the IoT have the potential to be grouped into geographical or logical clusters. Various IoT clusters generate huge amounts of data from diverse locations, which creates the need to process these data more efficiently. Efficient processing of these data can involve a combination of different computation models, such as in situ processing and offloading to surrogate devices and cloud-data centers.",
"title": ""
},
{
"docid": "486d31b962600141ba75dfde718f5b3d",
"text": "The design, fabrication, and measurement of a coax to double-ridged waveguide launcher and horn antenna is presented. The novel launcher design employs two symmetric field probes across the ridge gap to minimize spreading inductance in the transition, and achieves better than 15 dB return loss over a 10:1 bandwidth. The aperture-matched horn uses a half-cosine transition into a linear taper for the outer waveguide dimensions and ridge width, and a power-law scaled gap to realize monotonically varying cutoff frequencies, thus avoiding the appearance of trapped mode resonances. It achieves a nearly constant beamwidth in both E- and H-planes for an overall directivity of about 16.5 dB from 10-100 GHz.",
"title": ""
},
{
"docid": "2b9b7b218e112447fa4cdd72085d3916",
"text": "A 48-year-old female patient presented with gigantomastia. The sternal notch-nipple distance was 55 cm for the right breast and 50 cm for the left. Vertical mammaplasty based on the superior pedicle was performed. The resected tissue weighed 3400 g for the right breast and 2800 g for the left breast. The outcome was excellent with respect to symmetry, shape, size, residual scars, and sensitivity of the nipple-areola complex. Longer pedicles or larger resections were not found in the literature on vertical mammaplasty applications. In our opinion, by using the vertical mammaplasty technique in gigantomastia it is possible to achieve a well-projecting shape and preserve NAC sensitivity.",
"title": ""
},
{
"docid": "188c55ef248f7021a66c1f2e05c2fc98",
"text": "The objective of the proposed study is to explore the performance of credit scoring using a two-stage hybrid modeling procedure with artificial neural networks and multivariate adaptive regression splines (MARS). The rationale under the analyses is firstly to use MARS in building the credit scoring model, the obtained significant variables are then served as the input nodes of the neural networks model. To demonstrate the effectiveness and feasibility of the proposed modeling procedure, credit scoring tasks are performed on one bank housing loan dataset using cross-validation approach. As the results reveal, the proposed hybrid approach outperforms the results using discriminant analysis, logistic regression, artificial neural networks and MARS and hence provides an alternative in handling credit scoring tasks. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3b72a89cdd3194f29ebf5db2085cb855",
"text": "Spiking neural network (SNN) models describe key aspects of neural function in a computationally efficient manner and have been used to construct large-scale brain models. Large-scale SNNs are challenging to implement, as they demand high-bandwidth communication, a large amount of memory, and are computationally intensive. Additionally, tuning parameters of these models becomes more difficult and time-consuming with the addition of biologically accurate descriptions. To meet these challenges, we have developed CARLsim 3, a user-friendly, GPU-accelerated SNN library written in C/C++ that is capable of simulating biologically detailed neural models. The present release of CARLsim provides a number of improvements over our prior SNN library to allow the user to easily analyze simulation data, explore synaptic plasticity rules, and automate parameter tuning. In the present paper, we provide examples and performance benchmarks highlighting the library's features.",
"title": ""
},
{
"docid": "3d53bc218b6fe1f3358d70ebdea84d94",
"text": "Although the negotiations literature identifies a variety of approaches for improving one's power position, the relative benefits of these approaches remain largely unexplored. The empirical study presented in this article begins to address this issue by examining how the size of the bargaining zone affects the relative benefit of an advantage in one's BATNA (i.e., having a better alternative than one's counterpart) versus contribution (i.e., contributing more to the relationship than one's counterpart) for negotiator performance. Results indicate that whereas BATNAs exerted a stronger effect on resource allocations than contributions when the bargaining zone was small, an advantage in contributions exerted a stronger effect on resource allocations than BATNAs when the bargaining zone was large. These findings provide needed insight and supporting evidence for how to alter one's power relationship in negotiation.",
"title": ""
},
{
"docid": "39a394f6c7f42f3a5e1451b0337584ed",
"text": "Surveys throughout the world have shown consistently that persons over 65 are far less likely to be victims of crime than younger age groups. However, many elderly people are unduly fearful about crime which has an adverse effect on their quality of life. This Trends and Issues puts this matter into perspective, but also discusses the more covert phenomena of abuse and neglect of the elderly. Our senior citizens have earned the right to live in dignity and without fear: the community as a whole should contribute to this process. Duncan Chappell Director",
"title": ""
},
{
"docid": "aa79a571a84c5c6bed721648f365b9a6",
"text": "Green Security Games (GSGs) have been proposed and applied to optimize patrols conducted by law enforcement agencies in green security domains such as combating poaching, illegal logging and overfishing. However, real-time information such as footprints and agents’ subsequent actions upon receiving the information, e.g., rangers following the footprints to chase the poacher, have been neglected in previous work. To fill the gap, we first propose a new game model GSG-I which augments GSGs with sequential movement and the vital element of real-time information. Second, we design a novel deep reinforcement learning-based algorithm, DeDOL, to compute a patrolling strategy that adapts to the real-time information against a best-responding attacker. DeDOL is built upon the double oracle framework and the policy-space response oracle, solving a restricted game and iteratively adding best response strategies to it through training deep Q-networks. Exploring the game structure, DeDOL uses domain-specific heuristic strategies as initial strategies and constructs several local modes for efficient and parallelized training. To our knowledge, this is the first attempt to use Deep Q-Learning for security games.",
"title": ""
},
{
"docid": "67f73a57040f6d2a5ea79d7ad2693f2f",
"text": "This protocol details a method to immunostain organotypic slice cultures from mouse hippocampus. The cultures are based on the interface method, which does not require special equipment, is easy to execute and yields slice cultures that can be imaged repeatedly, from the time of isolation at postnatal day 6–9 up to 6 months in vitro. The preserved tissue architecture facilitates the analysis of defined hippocampal synapses, cells and entire projections. Time-lapse imaging is based on transgenes expressed in the mice or on constructs introduced through transfection or viral vectors; it can reveal processes that develop over periods ranging from seconds to months. Subsequent to imaging, the slices can be processed for immunocytochemistry to collect further information about the imaged structures. This protocol can be completed in 3 d.",
"title": ""
},
{
"docid": "9827845631238f79060345a4e86bd185",
"text": "We formulate and investigate the novel problem of finding the skyline k-tuple groups from an n-tuple dataset - i.e., groups of k tuples which are not dominated by any other group of equal size, based on aggregate-based group dominance relationship. The major technical challenge is to identify effective anti-monotonic properties for pruning the search space of skyline groups. To this end, we show that the anti-monotonic property in the well-known Apriori algorithm does not hold for skyline group pruning. We then identify order-specific property which applies to SUM, MIN, and MAX and weak candidate-generation property which applies to MIN and MAX only. Experimental results on both real and synthetic datasets verify that the proposed algorithms achieve orders of magnitude performance gain over a baseline method.",
"title": ""
},
{
"docid": "35792db324d1aaf62f19bebec6b1e825",
"text": "Keyphrases: Global Vectors for Word Representation (GloVe). Intrinsic and extrinsic evaluations. Effect of hyperparameters on analogy evaluation tasks. Correlation of human judgment with word vector distances. Dealing with ambiguity in word using contexts. Window classification. This set of notes first introduces the GloVe model for training word vectors. Then it extends our discussion of word vectors (interchangeably called word embeddings) by seeing how they can be evaluated intrinsically and extrinsically. As we proceed, we discuss the example of word analogies as an intrinsic evaluation technique and how it can be used to tune word embedding techniques. We then discuss training model weights/parameters and word vectors for extrinsic tasks. Lastly we motivate artificial neural networks as a class of models for natural language processing tasks.",
"title": ""
}
] |
scidocsrr
|
0686896e215d6e3f368823203f36b8c8
|
Embodied Energy : a Case for Wood Construction
|
[
{
"docid": "647ba490d8507eeefb50387ab95bf59c",
"text": "This study compares the cradle-to-gate total energy and major emissions for the extraction of raw materials, production, and transportation of the common wood building materials from the CORRIM 2004 reports. A life-cycle inventory produced the raw materials, including fuel resources and emission to air, water, and land for glued-laminated timbers, kiln-dried and green softwood lumber, laminated veneer lumber, softwood plywood, and oriented strandboard. Major findings from these comparisons were that the production of wood products, by the nature of the industry, uses a third of their energy consumption from renewable resources and the remainder from fossil-based, non-renewable resources when the system boundaries consider forest regeneration and harvesting, wood products and resin production, and transportation life-cycle stages. When the system boundaries are reduced to a gate-to-gate (manufacturing life-cycle stage) model for the wood products, the biomass component of the manufacturing energy increases to nearly 50% for most products and as high as 78% for lumber production from the Southeast. The manufacturing life-cycle stage consumed the most energy over all the products when resin is considered part of the production process. Extraction of log resources and transportation of raw materials for production had the least environmental impact.",
"title": ""
}
] |
[
{
"docid": "23fc59a5a53906429a9e5d9cfb54bdc4",
"text": "The greater palatine canal is an important anatomical structure that is often utilized as a pathway for infiltration of local anesthesia to affect sensation and hemostasis. Increased awareness of the length and anatomic variation in the anatomy of this structure is important when performing surgical procedures in this area (e.g., placement of osseointegrated dental implants). We examined the anatomy of the greater palatine canal using data obtained from CBCT scans of 500 subjects. Both right and left canals were viewed (N = 1000) in coronal and sagittal planes, and their paths and lengths determined. The average length of the greater palatine canal was 29 mm (±3 mm), with a range from 22 to 40 mm. Coronally, the most common anatomic pattern consisted of the canal traveling inferior-laterally for a distance then directly inferior for the remainder (43.3%). In the sagittal view, the canal traveled most frequently at an anterior-inferior angle (92.9%).",
"title": ""
},
{
"docid": "84e6a26a267c2196870f7f93e0a32e97",
"text": "One of the important tasks for bridge maintenance is bridge deck crack inspection. Traditionally, a human inspector detects cracks using his/her eyes and finds the location of cracks manually. Thus the accuracy of the inspection result is low due to the subjective nature of human judgement. We propose a system that uses a mobile robot to conduct the inspection, where the robot collects bridge deck images with a high resolution camera. In this method, the Laplacian of Gaussian algorithm is used to detect cracks and the global crack map is obtained through camera calibration and robot localization. To ensure that the robot collects all the images on the bridge deck, we develop a complete coverage path planning algorithm for the mobile robot. We compare it with other path planning strategies. Finally, we validate our proposed system through experiments and simulation.",
"title": ""
},
{
"docid": "9673939625a3caafecf3da68a19742b0",
"text": "Automatic detection of road regions in aerial images remains a challenging research topic. Most existing approaches work well on the requirement of users to provide some seedlike points/strokes in the road area as the initial location of road regions, or detecting particular roads such as well-paved roads or straight roads. This paper presents a fully automatic approach that can detect generic roads from a single unmanned aerial vehicles (UAV) image. The proposed method consists of two major components: automatic generation of road/nonroad seeds and seeded segmentation of road areas. To know where roads probably are (i.e., road seeds), a distinct road feature is proposed based on the stroke width transformation (SWT) of road image. To the best of our knowledge, it is the first time to introduce SWT as road features, which show the effectiveness on capturing road areas in images in our experiments. Different road features, including the SWT-based geometry information, colors, and width, are then combined to classify road candidates. Based on the candidates, a Gaussian mixture model is built to produce road seeds and background seeds. Finally, starting from these road and background seeds, a convex active contour model segmentation is proposed to extract whole road regions. Experimental results on varieties of UAV images demonstrate the effectiveness of the proposed method. Comparison with existing techniques shows the robustness and accuracy of our method to different roads.",
"title": ""
},
{
"docid": "d2f7b25a45d3706ef7bbdc2764bc129b",
"text": "In this paper, we present results from a qualitative study of collocated group console gaming. We focus on motivations for, perceptions of, and practices surrounding the shared use of console games by a variety of established groups of gamers. These groups include both intragenerational groups of youth, adults, and elders as well as intergenerational families. Our analysis highlights the numerous ways that console games serve as a computational meeting place for a diverse population of gamers.",
"title": ""
},
{
"docid": "cb04479a6157d9fe0d6c6e092a6b190a",
"text": "During the late 1990s, Huang introduced the algorithm called Empirical Mode Decomposition, which is widely used today to recursively decompose a signal into different modes of unknown but separate spectral bands. EMD is known for limitations like sensitivity to noise and sampling. These limitations could only partially be addressed by more mathematical attempts to this decomposition problem, like synchrosqueezing, empirical wavelets or recursive variational decomposition. Here, we propose an entirely non-recursive variational mode decomposition model, where the modes are extracted concurrently. The model looks for an ensemble of modes and their respective center frequencies, such that the modes collectively reproduce the input signal, while each being smooth after demodulation into baseband. In Fourier domain, this corresponds to a narrow-band prior. We show important relations to Wiener filter denoising. Indeed, the proposed method is a generalization of the classic Wiener filter into multiple, adaptive bands. Our model provides a solution to the decomposition problem that is theoretically well founded and still easy to understand. The variational model is efficiently optimized using an alternating direction method of multipliers approach. Preliminary results show attractive performance with respect to existing mode decomposition models. In particular, our proposed model is much more robust to sampling and noise. Finally, we show promising practical decomposition results on a series of artificial and real data.",
"title": ""
},
{
"docid": "2a4ea3452fe02605144a569797bead9a",
"text": "A novel proximity-coupled probe-fed stacked patch antenna is proposed for Global Navigation Satellite Systems (GNSS) applications. The antenna has been designed to operate for the satellite navigation frequencies in L-band including GPS, GLONASS, Galileo, and Compass (1164-1239 MHz and 1559-1610 MHz). A key feature of our design is the proximity-coupled probe feeds to increase impedance bandwidth and the integrated 90deg broadband balun to improve polarization purity. The final antenna exhibits broad pattern coverage, high gain at the low angles (more than -5 dBi), and VSWR <1.5 for all the operating bands. The design procedures and employed tuning techniques to achieve the desired performance are presented.",
"title": ""
},
{
"docid": "74421de5dedd1f06e94e3ad215a49043",
"text": "Input is a significant problem for wearable systems, particularly for head mounted virtual and augmented reality displays. Existing input techniques either lack expressive power or may not be socially acceptable. As an alternative, thumb-to-finger touches present a promising input mechanism that is subtle yet capable of complex interactions. We present DigiTouch, a reconfigurable glove-based input device that enables thumb-to-finger touch interaction by sensing continuous touch position and pressure. Our novel sensing technique improves the reliability of continuous touch tracking and estimating pressure on resistive fabric interfaces. We demonstrate DigiTouch’s utility by enabling a set of easily reachable and reconfigurable widgets such as buttons and sliders. Since DigiTouch senses continuous touch position, widget layouts can be customized according to user preferences and application needs. As an example of a real-world application of this reconfigurable input device, we examine a split-QWERTY keyboard layout mapped to the user’s fingers. We evaluate DigiTouch for text entry using a multi-session study. With our continuous sensing method, users reliably learned to type and achieved a mean typing speed of 16.0 words per minute at the end of ten 20-minute sessions, an improvement over similar wearable touch systems.",
"title": ""
},
{
"docid": "b1f0dbf303028211c028df13ef431f48",
"text": "Dealing with uncertainty is essential for e cient reinforcement learning. There is a growing literature on uncertainty estimation for deep learning from fixed datasets, but many of the most popular approaches are poorlysuited to sequential decision problems. Other methods, such as bootstrap sampling, have no mechanism for uncertainty that does not come from the observed data. We highlight why this can be a crucial shortcoming and propose a simple remedy through addition of a randomized untrainable ‘prior’ network to each ensemble member. We prove that this approach is e cient with linear representations, provide simple illustrations of its e cacy with nonlinear representations and show that this approach scales to large-scale problems far better than previous attempts.",
"title": ""
},
{
"docid": "6d76c28d29438d87a3815bd4029df63f",
"text": "We use the full query set of the TPC-H Benchmark as a case study for the efficient implementation of decision support queries on main memory column-store databases. Instead of splitting a query into separate independent operators, we consider the query as a whole and translate the execution plan into a single function performing the query. This allows highly efficient CPU utilization, minimal materialization, and execution in a single pass over the data for most queries. The single pass is performed in parallel and scales near-linearly with the number of cores. The resulting query plans for most of the 22 queries are remarkably simple and are suited for automatic generation and fast compilation. Using a data-parallel, NUMA-aware many-core implementation with block summaries, inverted index data structures, and efficient aggregation algorithms, we achieve one to two orders of magnitude better performance than the current record holders of the TPC-H Benchmark.",
"title": ""
},
{
"docid": "931af201822969eb10871ccf10d47421",
"text": "Latent tree learning models represent sentences by composing their words according to an induced parse tree, all based on a downstream task. These models often outperform baselines which use (externally provided) syntax trees to drive the composition order. This work contributes (a) a new latent tree learning model based on shift-reduce parsing, with competitive downstream performance and non-trivial induced trees, and (b) an analysis of the trees learned by our shift-reduce model and by a chart-based model.",
"title": ""
},
{
"docid": "1ebc62dc8dfeaf9c547e7fe3d4d21ae7",
"text": "Electrically small antennas are generally presumed to exhibit high impedance mismatch (high VSWR), low efficiency, high quality factor (Q); and, therefore, narrow operating bandwidth. For an electric or magnetic dipole antenna, there is a fundamental lower bound for the quality factor that is determined as a function of the antenna's occupied physical volume. In this paper, the quality factor of a resonant, electrically small electric dipole is minimized by allowing the antenna geometry to utilize the occupied spherical volume to the greatest extent possible. A self-resonant, electrically small electric dipole antenna is presented that exhibits an impedance near 50 Ohms, an efficiency in excess of 95% and a quality factor that is within 1.5 times the fundamental lower bound at a value of ka less than 0.27. Through an arrangement of the antenna's wire geometry, the electrically small dipole's polarization is converted from linear to elliptical (with an axial ratio of 3 dB), resulting in a further reduction in the quality factor. The elliptically polarized, electrically small antenna exhibits an impedance near 50 Ohms, an efficiency in excess of 95% and it has an omnidirectional, figure-eight radiation pattern.",
"title": ""
},
{
"docid": "d655222bf22e35471b18135b67326ac5",
"text": "In this paper we approach the robust motion planning problem through the lens of perception-aware planning, whereby we seek a low-cost motion plan subject to a separate constraint on perception localization quality. To solve this problem we introduce the Multiobjective Perception-Aware Planning (MPAP) algorithm which explores the state space via a multiobjective search, considering both cost and a perception heuristic. This perception-heuristic formulation allows us to both capture the history dependence of localization drift and represent complex modern perception methods. The solution trajectory from this heuristic-based search is then certified via Monte Carlo methods to be robust. The additional computational burden of perception-aware planning is offset through massive parallelization on a GPU. Through numerical experiments the algorithm is shown to find robust solutions in about a second. Finally, we demonstrate MPAP on a quadrotor flying perceptionaware and perception-agnostic plans using Google Tango for localization, finding the quadrotor safely executes the perception-aware plan every time, while crashing over 20% of the time on the perception-agnostic due to loss of localization.",
"title": ""
},
{
"docid": "84667504b580443dcef79c7ca55c87d3",
"text": "This paper describes a framework for automatic brain tumor segmentation from MR images. The detection of edema is done simultaneously with tumor segmentation, as the knowledge of the extent of edema is important for diagnosis, planning, and treatment. Whereas many other tumor segmentation methods rely on the intensity enhancement produced by the gadolinium contrast agent in the T1-weighted image, the method proposed here does not require contrast enhanced image channels. The only required input for the segmentation procedure is the T2 MR image channel, but it can make use of any additional non-enhanced image channels for improved tissue segmentation. The segmentation framework is composed of three stages. First, we detect abnormal regions using a registered brain atlas as a model for healthy brains. We then make use of the robust estimates of the location and dispersion of the normal brain tissue intensity clusters to determine the intensity properties of the different tissue types. In the second stage, we determine from the T2 image intensities whether edema appears together with tumor in the abnormal regions. Finally, we apply geometric and spatial constraints to the detected tumor and edema regions. The segmentation procedure has been applied to three real datasets, representing different tumor shapes, locations, sizes, image intensities, and enhancement.",
"title": ""
},
{
"docid": "c536e79078d7d5778895e5ac7f02c95e",
"text": "Block-based programming languages like Scratch, Alice and Blockly are becoming increasingly common as introductory languages in programming education. There is substantial research showing that these visual programming environments are suitable for teaching programming concepts. But, what do people do when they use Scratch? In this paper we explore the characteristics of Scratch programs. To this end we have scraped the Scratch public repository and retrieved 250,000 projects. We present an analysis of these projects in three different dimensions. Initially, we look at the types of blocks used and the size of the projects. We then investigate complexity, used abstractions and programming concepts. Finally we detect code smells such as large scripts, dead code and duplicated code blocks. Our results show that 1) most Scratch programs are small, however Scratch programs consisting of over 100 sprites exist, 2) programming abstraction concepts like procedures are not commonly used and 3) Scratch programs do suffer from code smells including large scripts and unmatched broadcast signals.",
"title": ""
},
{
"docid": "658c7ae98ea4b0069a7a04af1e462307",
"text": "Exploiting packetspsila timing information for covert communication in the Internet has been explored by several network timing channels and watermarking schemes. Several of them embed covert information in the inter-packet delay. These channels, however, can be detected based on the perturbed traffic pattern, and their decoding accuracy could be degraded by jitter, packet loss and packet reordering events. In this paper, we propose a novel TCP-based timing channel, named TCPScript to address these shortcomings. TCPScript embeds messages in ldquonormalrdquo TCP data bursts and exploits TCPpsilas feedback and reliability service to increase the decoding accuracy. Our theoretical capacity analysis and extensive experiments have shown that TCPScript offers much higher channel capacity and decoding accuracy than an IP timing channel and JitterBug. On the countermeasure, we have proposed three new metrics to detect aggressive TCPScript channels.",
"title": ""
},
{
"docid": "eaa175d9bb7c86c1750936389439e208",
"text": "We present data from detailed observation of 24 information workers that shows that they experience work fragmentation as common practice. We consider that work fragmentation has two components: length of time spent in an activity, and frequency of interruptions. We examined work fragmentation along three dimensions: effect of collocation, type of interruption, and resumption of work. We found work to be highly fragmented: people average little time in working spheres before switching and 57% of their working spheres are interrupted. Collocated people work longer before switching but have more interruptions. Most internal interruptions are due to personal work whereas most external interruptions are due to central work. Though most interrupted work is resumed on the same day, more than two intervening activities occur before it is. We discuss implications for technology design: how our results can be used to support people to maintain continuity within a larger framework of their working spheres.",
"title": ""
},
{
"docid": "de93e539df9f0d372d55c9dde81fb0a4",
"text": "We review recent methods for learning with positive definite kernels. All these methods formulate learning and estimation problems as linear tasks in a reproducing kernel Hilbert space (RKHS) associated with a kernel. We cover a wide range of methods, ranging from simple classifiers to sophisticated methods for estimation with structured data. (AMS 2000 subject classifications: primary 30C40 Kernel functions and applications; secondary 68T05 Learning and adaptive systems. —",
"title": ""
},
{
"docid": "3b052425c5bde8d28d0a0e50b101d344",
"text": "A dual-band circularly polarized aperture coupled microstrip RFID reader antenna using a metamaterial (MTM) branch-line coupler has been designed, fabricated, and measured. The proposed antenna is fabricated on a FR-4 substrate with relative permittivity of 4.6 and thickness of 1.6 mm. The MTM coupler is designed employing the provided explicit closed-form formulas. The dual-band (UHF and ISM) circularly-polarized RFID reader antenna with separate Tx and Rx ports is connected to the designed metamaterial (MTM) branch-line coupler. The maximum measured LHCP antenna gain is 6.6 dBic at 920 MHz (UHF) and RHCP gain is 7.9 dBic at 2.45 GHz (ISM). The cross-polar CP gains near broadside of the RFID reader antenna are approximately less than - 20 dB compared with the mentioned co-polar CP gains in both bands. The isolations between the two ports are about 25 dB and 38 dB, at 920 MHz and 2.45 GHz, respectively. The measured axial ratios are less than 0.7 dB in the UHF band (917-923 MHz) and 1.5 dB in the ISM band (2.4-2.48 GHz).",
"title": ""
}
] |
scidocsrr
|
64206b98b6c86e3bf83dcd85bd3522ce
|
SenticNet 4: A Semantic Resource for Sentiment Analysis Based on Conceptual Primitives
|
[
{
"docid": "7f74c519207e469c39f81d52f39438a0",
"text": "Automatic sentiment classification has been extensively studied and applied in recent years. However, sentiment is expressed differently in different domains, and annotating corpora for every possible domain of interest is impractical. We investigate domain adaptation for sentiment classifiers, focusing on online reviews for different types of products. First, we extend to sentiment classification the recently-proposed structural correspondence learning (SCL) algorithm, reducing the relative error due to adaptation between domains by an average of 30% over the original SCL algorithm and 46% over a supervised baseline. Second, we identify a measure of domain similarity that correlates well with the potential for adaptation of a classifier from one domain to another. This measure could for instance be used to select a small set of domains to annotate whose trained classifiers would transfer well to many other domains.",
"title": ""
},
{
"docid": "742c0b15f6a466bfb4e5130b49f79e64",
"text": "There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks (DBNs); however, scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model that scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique that shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.",
"title": ""
}
] |
[
{
"docid": "4a2fcdf5394e220a579d1414588a124a",
"text": "In this paper we introduce AR Scratch, the first augmented-reality (AR) authoring environment designed for children. By adding augmented-reality functionality to the Scratch programming platform, this environment allows pre-teens to create programs that mix real and virtual spaces. Children can display virtual objects on a real-world space seen through a camera, and they can control the virtual world through interactions between physical objects. This paper describes the system design process, which focused on appropriately presenting the AR technology to the typical Scratch population (children aged 8-12), as influenced by knowledge of child spatial cognition, programming expertise, and interaction metaphors. Evaluation of this environment is proposed, accompanied by results from an initial pilot study, as well as discussion of foreseeable impacts on the Scratch user community.",
"title": ""
},
{
"docid": "0e48de6dc8d1f51eb2a7844d4d67b8f5",
"text": "Vygotsky asserted that the student who had mastered algebra had attained “a new higher plane of thought”, a level of abstraction and generalization which transformed the meaning of the lower (arithmetic) level. He also affirmed the importance of the mastery of scientific concepts for the development of the ability to think theoretically, and emphasized the mediating role of semiotic forms and symbol systems in developing this ability. Although historically in mathematics and traditionally in education, algebra followed arithmetic, Vygotskian theory supports the reversal of this sequence in the service of orienting children to the most abstract and general level of understanding initially. This organization of learning activity for the development of algebraic thinking is very different from the introduction of elements of algebra into the study of arithmetic in the early grades. The intended theoretical (algebraic) understanding is attained through appropriation of psychological tools, in the form of specially designed schematics, whose mastery is not merely incidental to but the explicit focus of instruction. The author’s research in implementing Davydov’s Vygotskian-based elementary mathematics curriculum in the U.S. suggests that these characteristics function synergistically to develop algebraic understanding and computational competence as well. Kurzreferat: Vygotsky ging davon aus, dass Lernende, denen es gelingt, Algebra zu beherrschen, „ein höheres gedankliches Niveau” erreicht hätten, eine Ebene von Abstraktion und Generalisierung, welche die Bedeutung der niederen (arithmetischen) Ebene verändert. Er bestätigte auch die Relevanz der Beherrschung von wissenschaftlichen Begriffen für die Entwicklung der Fähigkeit, theoretisch zu denken und betonte dabei die vermittelnde Rolle von semiotischen Formen und Symbolsystemen für die Ausformung dieser Fähigkeit. Obwohl mathematik-his tor isch und t radi t ionel l erziehungswissenschaftlich betrachtet, Algebra der Arithmetik folgte, stützt Vygotski’s Theorie die Umkehrung dieser Sequenz bei dem Bemühen, Kinder an das abstrakteste und allgemeinste Niveau des ersten Verstehens heranzuführen. Diese Organisation von Lernaktivitäten für die Ausbildung algebraischen Denkens unterscheidet sich erheblich von der Einführung von Algebra-Elementen in das Lernen von Arithmetik während der ersten Schuljahre. Das beabsichtigte theoretische (algebraische) Verstehen wird erreicht durch die Aneignung psychologischer Mittel, und zwar in Form von dafür speziell entwickelten Schemata, deren Beherrschung nicht nur beiläufig erfolgt, sondern Schwerpunkt des Unterrichts ist. Die im Beitrag beschriebenen Forschungen zur Implementierung von Davydov’s elementarmathematischen Curriculum in den Vereinigten Staaten, das auf Vygotsky basiert, legt die Vermutung nahe, dass diese Charakteristika bei der Entwicklung von algebraischem Verstehen und von Rechenkompetenzen synergetisch funktionieren. ZDM-Classification: C30, D30, H20 l. Historical Context Russian psychologist Lev Vygotsky stated clearly his perspective on algebraic thinking. Commenting on its development within the structure of the Russian curriculum in the early decades of the twentieth century,",
"title": ""
},
{
"docid": "764b13c0c5c8134edad4fac65af356d6",
"text": "This thesis introduces new methods for statistically modelling text using topic models. Topic models have seen many successes in recent years, and are used in a variety of applications, including analysis of news articles, topic-based search interfaces and navigation tools for digital libraries. Despite these recent successes, the field of topic modelling is still relatively new and there remains much to be explored. One noticeable absence from most of the previous work on topic modelling is consideration of language and document structure—from low-level structures, including word order and syntax, to higher-level structures, such as relationships between documents. The focus of this thesis is therefore structured topic models—models that combine latent topics with information about document structure, ranging from local sentence structure to inter-document relationships. These models draw on techniques from Bayesian statistics, including hierarchical Dirichlet distributions and processes, Pitman-Yor processes, and Markov chain Monte Carlo methods. Several methods for estimating the parameters of Dirichlet-multinomial distributions are also compared. The main contribution of this thesis is the introduction of three structured topic models. The first is a topic-based language model. This model captures both word order and latent topics by extending a Bayesian topic model to incorporate n-gram statistics. A bigram version of the new model does better at predicting future words than either a topic model or a trigram language model. It also provides interpretable topics. The second model arises from a Bayesian reinterpretation of a classic generative dependency parsing model. The new model demonstrates that parsing performance can be substantially improved by a careful choice of prior and by sampling hyperparameters. Additionally, the generative nature of the model facilitates the inclusion of latent state variables, which act as specialised part-of-speech tags or “syntactic topics”. The third is a model that captures high-level relationships between documents. This model uses nonparametric Bayesian priors and Markov chain Monte Carlo methods to infer topic-based document clusters. The model assigns a higher probability to unseen test documents than either a clustering model without topics or a Bayesian topic model without document clusters. The model can be extended to incorporate author information, resulting in finer-grained clusters and better predictive performance.",
"title": ""
},
{
"docid": "0358eea62c126243134ed1cd2ac97121",
"text": "In the absence of vision, grasping an object often relies on tactile feedback from the ngertips. As the nger pushes the object, the ngertip can feel the contact point move. If the object is known in advance, from this motion the nger may infer the location of the contact point on the object and thereby the object pose. This paper primarily investigates the problem of determining the pose (orientation and position) and motion (velocity and angular velocity) of a planar object with known geometry from such contact motion generated by pushing. A dynamic analysis of pushing yields a nonlinear system that relates through contact the object pose and motion to the nger motion. The contact motion on the ngertip thus encodes certain information about the object pose. Nonlinear observability theory is employed to show that such information is su cient for the nger to \\observe\" not only the pose but also the motion of the object. Therefore a sensing strategy can be realized as an observer of the nonlinear dynamical system. Two observers are subsequently introduced. The rst observer, based on the result of [15], has its \\gain\" determined by the solution of a Lyapunov-like equation; it can be activated at any time instant during a push. The second observer, based on Newton's method, solves for the initial (motionless) object pose from three intermediate contact points during a push. Under the Coulomb friction model, the paper copes with support friction in the plane and/or contact friction between the nger and the object. Extensive simulations have been done to demonstrate the feasibility of the two observers. Preliminary experiments (with an Adept robot) have also been conducted. A contact sensor has been implemented using strain gauges. Accepted by the International Journal of Robotics Research.",
"title": ""
},
{
"docid": "35d220680e18898d298809272619b1d6",
"text": "This paper proposes the use of a least mean fourth (LMF)-based algorithm for single-stage three-phase grid-integrated solar photovoltaic (SPV) system. It consists of an SPV array, voltage source converter (VSC), three-phase grid, and linear/nonlinear loads. This system has an SPV array coupled with a VSC to provide three-phase active power and also acts as a static compensator for the reactive power compensation. It also conforms to an IEEE-519 standard on harmonics by improving the quality of power in the three-phase distribution network. Therefore, this system serves to provide harmonics alleviation, load balancing, power factor correction and regulating the terminal voltage at the point of common coupling. In order to increase the efficiency and maximum power to be extracted from the SPV array at varying environmental conditions, a single-stage system is used along with perturb and observe method of maximum power point tracking (MPPT) integrated with the LMF-based control technique. The proposed system is modeled and simulated using MATLAB/Simulink with available simpower system toolbox and the behaviour of the system under different loads and environmental conditions are verified experimentally on a developed system in the laboratory.",
"title": ""
},
{
"docid": "06fdd2dae0aa83ec3697342d831da39f",
"text": "Traditionally, nostalgia has been conceptualized as a medical disease and a psychiatric disorder. Instead, we argue that nostalgia is a predominantly positive, self-relevant, and social emotion serving key psychological functions. Nostalgic narratives reflect more positive than negative affect, feature the self as the protagonist, and are embedded in a social context. Nostalgia is triggered by dysphoric states such as negative mood and loneliness. Finally, nostalgia generates positive affect, increases selfesteem, fosters social connectedness, and alleviates existential threat. KEYWORDS—nostalgia; positive affect; self-esteem; social connectedness; existential meaning The term nostalgia was inadvertedly inspired by history’s most famous itinerant. Emerging victoriously from the Trojan War, Odysseus set sail for his native island of Ithaca to reunite with his faithful wife, Penelope. For 3 years, our wandering hero fought monsters, assorted evildoers, and mischievous gods. For another 7 years, he took respite in the arms of the beautiful sea nymph Calypso. Possessively, she offered to make him immortal if he stayed with her on the island of Ogygia. ‘‘Full well I acknowledge,’’ Odysseus replied to his mistress, ‘‘prudent Penelope cannot compare with your stature or beauty, for she is only a mortal, and you are immortal and ageless. Nevertheless, it is she whom I daily desire and pine for. Therefore I long for my home and to see the day of returning’’ (Homer, 1921, Book V, pp. 78–79). This romantic declaration, along with other expressions of Odyssean longing in the eponymous Homeric epic, gave rise to the term nostalgia. It is a compound word, consisting of nostos (return) and algos (pain). Nostalgia, then, is literally the suffering due to relentless yearning for the homeland. The term nostalgia was coined in the 17th century by the Swiss physician Johaness Hofer (1688/1934), but references to the emotion it denotes can be found in Hippocrates, Caesar, and the Bible. HISTORICAL AND MODERN CONCEPTIONS OF NOSTALGIA From the outset, nostalgia was equated with homesickness. It was also considered a bad omen. In the 17th and 18th centuries, speculation about nostalgia was based on observations of Swiss mercenaries in the service of European monarchs. Nostalgia was regarded as a medical disease confined to the Swiss, a view that persisted through most of the 19th century. Symptoms— including bouts of weeping, irregular heartbeat, and anorexia— were attributed variously to demons inhabiting the middle brain, sharp differentiation in atmospheric pressure wreaking havoc in the brain, or the unremitting clanging of cowbells in the Swiss Alps, which damaged the eardrum and brain cells. By the beginning of the 20th century, nostalgia was regarded as a psychiatric disorder. Symptoms included anxiety, sadness, and insomnia. By the mid-20th century, psychodynamic approaches considered nostalgia a subconscious desire to return to an earlier life stage, and it was labeled as a repressive compulsive disorder. Soon thereafter, nostalgia was downgraded to a variant of depression, marked by loss and grief, though still equated with homesickness (for a historical review of nostalgia, see Sedikides, Wildschut, & Baden, 2004). By the late 20th century, there were compelling reasons for nostalgia and homesickness to finally part ways. Adult participants regard nostalgia as different from homesickness. 
For example, they associate the words warm, old times, childhood, and yearning more frequently with nostalgia than with homesickness (Davis, 1979). Furthermore, whereas homesickness research focused on the psychological problems (e.g., separation anxiety) that can arise when young people transition beyond the home environment, nostalgia transcends social groups and age. For example, nostalgia is found cross-culturally and among wellfunctioning adults, children, and dementia patients (Sedikides et al., 2004; Sedikides, Wildschut, Routledge, & Arndt, 2008; Zhou, Sedikides, Wildschut, & Gao, in press). Finally, although homesickness refers to one’s place of origin, nostalgia can refer Address correspondence to Constantine Sedikides, Center for Research on Self and Identity, School of Psychology, University of Southampton, Southampton SO17 1BJ, England, U.K.; e-mail: cs2@soton.ac.uk. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE 304 Volume 17—Number 5 Copyright r 2008 Association for Psychological Science to a variety of objects (e.g., persons, events, places; Wildschut, Sedikides, Arndt, & Routledge, 2006). It is in this light that we note the contemporary definition of nostalgia as a sentimental longing for one’s past. It is, moreover, a sentimentality that is pervasively experienced. Over 80% of British undergraduates reported experiencing nostalgia at least once a week (Wildschut et al., 2006). Given this apparent ubiquity, the time has come for an empirical foray into the content, causes, and functions of this emotion. THE EMPIRICAL BASIS FOR UNDERSTANDING NOSTALGIA The Canvas of Nostalgia What is the content of the nostalgic experience? Wildschut et al. (2006) analyzed the content of narratives submitted voluntarily by (American and Canadian) readers to the periodical Nostalgia. Also, Wildschut et al. asked British undergraduates to write a narrative account of a nostalgic experience. These narratives were also analyzed for content. Across both studies, the most frequently listed objects of nostalgic reverie were close others (family members, friends, partners), momentous events (birthdays, vacations), and settings (sunsets, lakes). Nostalgia has been conceptualized variously as a negative, ambivalent, or positive emotion (Sedikides et al., 2004). These conceptualizations were put to test. In a study by Wildschut, Stephan, Sedikides, Routledge, and Arndt (2008), British and American undergraduates wrote narratives about a ‘‘nostalgic event’’ (vs. an ‘‘ordinary event’’) in their lives and reflected briefly upon the event and how it made them feel. Content analysis revealed that the simultaneous expression of happiness and sadness was more common in narratives of nostalgic events than in narratives of ordinary events. Also in Wildschut et al., British undergraduates wrote about a nostalgic (vs. ordinary vs. simply positive) event in their lives and then rated their happiness and sadness. Although the recollection of ordinary and positive events rarely gave rise to both happiness and sadness, such coactivation occurred much more frequently following the recollection of a nostalgic event. Yet, nostalgic events featured more frequent expressions of happiness than of sadness and induced higher levels of happiness than of sadness. Wildschut et al. (2006) obtained additional evidence that nostalgia is mostly a positively toned emotion: The narratives included far more expressions of positive than negative affect. At the same time, though, there was evidence of bittersweetness. 
Many narratives contained descriptions of disappointments and losses, and some touched on such issues as separation and even the death of loved ones. Nevertheless, positive and negative elements were often juxtaposed to create redemption, a narrative pattern that progresses from a negative or undesirable state (e.g., suffering, pain, exclusion) to a positive or desirable state (e.g., acceptance, euphoria, triumph; McAdams, 2001). For example, although a family reunion started badly (e.g., an uncle insulting the protagonist), it nevertheless ended well (e.g., the family singing together after dinner). The strength of the redemption theme may explain why, despite the descriptions of sorrow, the overall affective signature of the nostalgic narratives was positive. Moreover, Wildschut et al. (2006) showed that nostalgia is a self-relevant and social emotion: The self almost invariably figured as the protagonist in the narratives and was almost always surrounded by close others. In all, the canvas of nostalgia is rich, reflecting themes of selfhood, sociality, loss, redemption, and ambivalent, yet mostly positive, affectivity. The Triggers of Nostalgia Wildschut et al. (2006) asked participants to describe when they become nostalgic. The most frequently reported trigger was negative affect (‘‘I think of nostalgic experiences when I am sad as they often make me feel better’’), and, within this category, loneliness was the most frequently reported discrete affective state (‘‘If I ever feel lonely or sad I tend to think of my friends or family who I haven’t seen in a long time’’). Given these initial reports, Wildschut et al. proceeded to test whether indeed negative mood and loneliness qualify as nostalgia triggers. British undergraduates read one of three news stories, each based on actual events, that were intended to influence their mood. In the negative-mood condition, they read about the Tsunami that struck coastal regions in Asia and Africa in December 2004. In the neutral-mood condition, they read about the January 2005 landing of the Huygens probe on Titan. In the positive-mood condition, they read about the November 2004 birth of a polar bear, ostensibly in the London Zoo (actually in the Detroit Zoo). Then they completed a measure of nostalgia, rating the extent to which they missed 18 aspects of their past (e.g., ‘‘holidays I went on,’’ ‘‘past TV shows, movies,’’ ‘‘someone I loved’’). Participants in the negativemood condition were more nostalgic (i.e., missed more aspects of their past) than were participants in the other two conditions. In another study, loneliness was successfully induced by giving participants false (high vs. low) feedback on a ‘‘loneliness’’ test (i.e., they were led to believe they were either lonely or not lonely based on the feedback). Subsequently, participants rated how much they missed 18 aspects of their past. Participants in the high-loneliness condition were more nostalgic than those in the low-loneliness condition. These findings were re",
"title": ""
},
{
"docid": "371ab18488da4e719eda8838d0d42ba8",
"text": "Research reveals dramatic differences in the ways that people from different cultures perceive the world around them. Individuals from Western cultures tend to focus on that which is object-based, categorically related, or self-relevant whereas people from Eastern cultures tend to focus more on contextual details, similarities, and group-relevant information. These different ways of perceiving the world suggest that culture operates as a lens that directs attention and filters the processing of the environment into memory. The present review describes the behavioral and neural studies exploring the contribution of culture to long-term memory and related processes. By reviewing the extant data on the role of various neural regions in memory and considering unifying frameworks such as a memory specificity approach, we identify some promising directions for future research.",
"title": ""
},
{
"docid": "8e3f8fca93ca3106b83cf85d20c061ca",
"text": "KeeLoq is a 528-round lightweight block cipher which has a 64-bit secret key and a 32-bit block length. The cube attack, proposed by Dinur and Shamir, is a new type of attacking method. In this paper, we investigate the security of KeeLoq against iterative side-channel cube attack which is an enhanced attack scheme. Based on structure of typical block ciphers, we give the model of iterative side-channel cube attack. Using the traditional single-bit leakage model, we assume that the attacker can exactly possess the information of one bit leakage after round 23. The new attack model costs a data complexity of 211.00 chosen plaintexts to recover the 23-bit key of KeeLoq. Our attack will reduce the key searching space to 241 by considering an error-free bit from internal states.",
"title": ""
},
{
"docid": "55507c03c5319de2806c0365accf2980",
"text": "Although latent factor models (e.g., matrix factorization) achieve good accuracy in rating prediction, they suffer from several problems including cold-start, non-transparency, and suboptimal recommendation for local users or items. In this paper, we employ textual review information with ratings to tackle these limitations. Firstly, we apply a proposed aspect-aware topic model (ATM) on the review text to model user preferences and item features from different aspects, and estimate the aspect importance of a user towards an item. The aspect importance is then integrated into a novel aspect-aware latent factor model (ALFM), which learns user’s and item’s latent factors based on ratings. In particular, ALFM introduces a weighted matrix to associate those latent factors with the same set of aspects discovered by ATM, such that the latent factors could be used to estimate aspect ratings. Finally, the overall rating is computed via a linear combination of the aspect ratings, which are weighted by the corresponding aspect importance. To this end, our model could alleviate the data sparsity problem and gain good interpretability for recommendation. Besides, an aspect rating is weighted by an aspect importance, which is dependent on the targeted user’s preferences and targeted item’s features. Therefore, it is expected that the proposed method can model a user’s preferences on an item more accurately for each user-item pair locally. Comprehensive experimental studies have been conducted on 19 datasets from Amazon and Yelp 2017 Challenge dataset. Results show that our method achieves significant improvement compared with strong baseline methods, especially for users with only few ratings. Moreover, our model could interpret the recommendation results in depth.",
"title": ""
},
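The final prediction described above is a linear combination of aspect ratings weighted by aspect importance for a given user-item pair. A minimal sketch of that combination step follows; the array names and the example aspects are illustrative, not the paper's notation:

```python
import numpy as np

def overall_rating(aspect_ratings, aspect_importance):
    """Combine per-aspect ratings using per-aspect importances for one user-item pair."""
    w = np.asarray(aspect_importance, dtype=float)
    w = w / w.sum()                            # normalize importances to sum to 1
    return float(np.dot(w, aspect_ratings))

# e.g. three aspects such as "food", "service", "price" for a restaurant review
print(overall_rating([4.5, 3.0, 4.0], [0.6, 0.1, 0.3]))
```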
{
"docid": "a7af0135b2214ca88883fe136bb13e70",
"text": "ITIL is one of the most used frameworks for IT service management. Implementing ITIL processes through an organization is not an easy task and present many difficulties. This paper explores the ITIL implementation's challenges and tries to experiment how Business Process Management Systems can help organization overtake those challenges.",
"title": ""
},
{
"docid": "69c253f895d2f886496332d1b3d22542",
"text": "In this paper, we present a novel refined fused model combining masked Res-C3D network and skeleton LSTM for abnormal gesture recognition in RGB-D videos. The key to our design is to learn discriminative representations of gesture sequences in particular abnormal gesture samples by fusing multiple features from different models. First, deep spatiotemporal features are well extracted by 3D convolutional neural networks with residual architecture (Res-C3D). As gestures are mainly derived from the arm or hand movements, a masked Res-C3D network is built to decrease the effect of background and other variations via exploiting the skeleton of the body to reserve arm regions with discarding other regions. And then, relative positions and angles of different key points are extracted and used to build a time-series model by long short-term memory network (LSTM). Based the above representations, a fusion scheme for blending classification results and remedy model disadvantage by abnormal gesture via a weight fusion layer is developed, in which the weights of each voting sub-classifier being advantage to a certain class in our ensemble model are adaptively obtained by training in place of fixed weights. Our experimental results show that the proposed method can distinguish the abnormal gesture samples effectively and achieve the state-of-the-art performance in the IsoGD dataset.",
"title": ""
},
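The weighted fusion layer described above blends the per-class scores of the two sub-models with learned, class-specific weights. A simplified stand-in is shown below; the example weights are fixed for illustration, whereas in the paper they are obtained by training:

```python
import numpy as np

def fuse_predictions(p_resc3d, p_lstm, w):
    """p_resc3d, p_lstm: per-class probability vectors; w: per-class weight of the Res-C3D vote."""
    p_resc3d, p_lstm, w = map(np.asarray, (p_resc3d, p_lstm, w))
    fused = w * p_resc3d + (1.0 - w) * p_lstm      # class-wise convex combination
    return int(np.argmax(fused)), fused

label, scores = fuse_predictions([0.7, 0.2, 0.1], [0.4, 0.5, 0.1], [0.8, 0.3, 0.5])
```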
{
"docid": "d29485bc844995b639bb497fb05fcb6a",
"text": "Vol. LII (June 2015), 375–393 375 © 2015, American Marketing Association ISSN: 0022-2437 (print), 1547-7193 (electronic) *Paul R. Hoban is Assistant Professor of Marketing, Wisconsin School of Business, University of Wisconsin–Madison (e-mail: phoban@ bus. wisc. edu). Randolph E. Bucklin is Professor of Marketing, Peter W. Mullin Chair in Management, UCLA Anderson School of Management, University of California, Los Angeles (e-mail: randy.bucklin@anderson. ucla. edu). Avi Goldfarb served as associate editor for this article. PAUL R. HOBAN and RANDOLPH E. BUCKLIN*",
"title": ""
},
{
"docid": "dc418c7add2456b08bc3a6f15b31da9f",
"text": "In professional search environments, such as patent search or legal search, search tasks have unique characteristics: 1) users interactively issue several queries for a topic, and 2) users are willing to examine many retrieval results, i.e., there is typically an emphasis on recall. Recent surveys have also verified that professional searchers continue to have a strong preference for Boolean queries because they provide a record of what documents were searched. To support this type of professional search, we propose a novel Boolean query suggestion technique. Specifically, we generate Boolean queries by exploiting decision trees learned from pseudo-labeled documents and rank the suggested queries using query quality predictors. We evaluate our algorithm in simulated patent and medical search environments. Compared with a recent effective query generation system, we demonstrate that our technique is effective and general.",
"title": ""
},
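The abstract above generates Boolean queries from decision trees learned over pseudo-labeled documents. The sketch below is a hedged, scikit-learn-based illustration of that idea only: each root-to-leaf path ending in a relevant leaf becomes a conjunction, and the conjunctions are ORed together. The ranking by query-quality predictors is omitted, and the toy documents and labels are made up:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

def boolean_query_from_tree(docs, labels, max_depth=4):
    vec = CountVectorizer(binary=True)
    X = vec.fit_transform(docs)
    terms = np.array(vec.get_feature_names_out())
    clf = DecisionTreeClassifier(max_depth=max_depth, random_state=0).fit(X, labels)
    tree = clf.tree_
    clauses = []

    def walk(node, conj):
        if tree.children_left[node] == -1:                        # leaf node
            if clf.classes_[np.argmax(tree.value[node][0])] == 1:  # majority class is "relevant"
                clauses.append(" AND ".join(conj) if conj else "<match all>")
            return
        term = terms[tree.feature[node]]
        walk(tree.children_left[node], conj + [f"NOT {term}"])     # term absent on the left branch
        walk(tree.children_right[node], conj + [term])             # term present on the right branch

    walk(0, [])
    return " OR ".join(f"({c})" for c in clauses)

# labels: 1 for pseudo-relevant documents, 0 otherwise
print(boolean_query_from_tree(["patent valve pump", "valve sealing", "pump motor"], [1, 1, 0]))
```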
{
"docid": "1ca9d06a2afdd63976976a14648bf5be",
"text": "Real-time solutions for noise reduction and signal processing represent a central challenge for the development of Brain Computer Interfaces (BCI). In this paper, we introduce the Moving Average Convergence Divergence (MACD) filter, a tunable digital passband filter for online noise reduction and onset detection without preliminary learning phase, used in economic markets analysis. MACD performance was tested and benchmarked with other filters using data collected with functional Near Infrared Spectoscopy (fNIRS) during a digit sequence memorization task. This filter has a good performance on filtering and real-time peak activity onset detection, compared to other techniques. Therefore, MACD could be implemented for efficient BCI design using fNIRS.",
"title": ""
},
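MACD is a standard construction: the difference of a fast and a slow exponential moving average, plus a smoothed signal line whose crossings can mark onsets. A generic pandas sketch is given below; the span parameters are the conventional trading defaults, not necessarily the values tuned in the paper:

```python
import pandas as pd

def macd(x, fast=12, slow=26, smooth=9):
    """Return (MACD line, signal line, histogram) for a 1-D sample sequence x."""
    s = pd.Series(x)
    ema_fast = s.ewm(span=fast, adjust=False).mean()
    ema_slow = s.ewm(span=slow, adjust=False).mean()
    macd_line = ema_fast - ema_slow
    signal = macd_line.ewm(span=smooth, adjust=False).mean()
    return macd_line, signal, macd_line - signal   # sign changes of the histogram approximate onsets
```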
{
"docid": "dcab5c32a037ac31f8a541458a2d72a6",
"text": "To determine the 3D orientation and 3D location of objects in the surroundings of a camera mounted on a robot or mobile device, we developed two powerful algorithms in object detection and temporal tracking that are combined seamlessly for robotic perception and interaction as well as Augmented Reality (AR). A separate evaluation of, respectively, the object detection and the temporal tracker demonstrates the important stride in research as well as the impact on industrial robotic applications and AR. When evaluated on a standard dataset, the detector produced the highest f1score with a large margin while the tracker generated the best accuracy at a very low latency of approximately 2 ms per frame with one CPU core – both algorithms outperforming the state of the art. When combined, we achieve a powerful framework that is robust to handle multiple instances of the same object under occlusion and clutter while attaining real-time performance. Aiming at stepping beyond the simple scenarios used by current systems, often constrained by having a single object in absence of clutter, averting to touch the object to prevent close-range partial occlusion, selecting brightly colored objects to easily segment them individually or assuming that the object has simple geometric structure, we demonstrate the capacity to handle challenging cases under clutter, partial occlusion and varying lighting conditions with objects of different shapes and sizes.",
"title": ""
},
{
"docid": "aeb4af864a4e2435486a69f5694659dc",
"text": "A great amount of research has been developed around the early cognitive impairments that best predict the onset of Alzheimer's disease (AD). Given that mild cognitive impairment (MCI) is no longer considered to be an intermediate state between normal aging and AD, new paths have been traced to acquire further knowledge about this condition and its subtypes, and to determine which of them have a higher risk of conversion to AD. It is now known that other deficits besides episodic and semantic memory impairments may be present in the early stages of AD, such as visuospatial and executive function deficits. Furthermore, recent investigations have proven that the hippocampus and the medial temporal lobe structures are not only involved in memory functioning, but also in visual processes. These early changes in memory, visual, and executive processes may also be detected with the study of eye movement patterns in pathological conditions like MCI and AD. In the present review, we attempt to explore the existing literature concerning these patterns of oculomotor changes and how these changes are related to the early signs of AD. In particular, we argue that deficits in visual short-term memory, specifically in iconic memory, attention processes, and inhibitory control, may be found through the analysis of eye movement patterns, and we discuss how they might help to predict the progression from MCI to AD. We add that the study of eye movement patterns in these conditions, in combination with neuroimaging techniques and appropriate neuropsychological tasks based on rigorous concepts derived from cognitive psychology, may highlight the early presence of cognitive impairments in the course of the disease.",
"title": ""
},
{
"docid": "92a00453bc0c2115a8b37e5acc81f193",
"text": "Choosing the appropriate software development methodology is something which continues to occupy the minds of many IT professionals. The introduction of “Agile” development methodologies such as XP and SCRUM held the promise of improved software quality and reduced delivery times. Combined with a Lean philosophy, there would seem to be potential for much benefit. While evidence does exist to support many of the Lean/Agile claims, we look here at how such methodologies are being adopted in the rigorous environment of safety-critical embedded software development due to its high regulation. Drawing on the results of a systematic literature review we find that evidence is sparse for Lean/Agile adoption in these domains. However, where it has been trialled, “out-of-the-box” Agile practices do not seem to fully suit these environments but rather tailored Agile versions combined with more planbased practices seem to be making inroads.",
"title": ""
},
{
"docid": "3f7c16788bceba51f0cbf0e9c9592556",
"text": "Centralised patient monitoring systems are in huge demand as they not only reduce the labour work and cost but also the time of the clinical hospitals. Earlier wired communication was used but now Zigbee which is a wireless mesh network is preferred as it reduces the cost. Zigbee is also preferred over Bluetooth and infrared wireless communication because it is energy efficient, has low cost and long distance range (several miles). In this paper we proposed wireless transmission of data between a patient and centralised unit using Zigbee module. The paper is divided into two sections. First is patient monitoring system for multiple patients and second is the centralised patient monitoring system. These two systems are communicating using wireless transmission technology i.e. Zigbee. In the first section we have patient monitoring of multiple patients. Each patient's multiple physiological parameters like ECG, temperature, heartbeat are measured at their respective unit. If any physiological parameter value exceeds the threshold value, emergency alarm and LED blinks at each patient unit. This allows a doctor to read various physiological parameters of a patient in real time. The values are displayed on the LCD at each patient unit. Similarly multiple patients multiple physiological parameters are being measured using particular sensors and multiple patient's patient monitoring system is made. In the second section centralised patient monitoring system is made in which all multiple patients multiple parameters are displayed on a central monitor using MATLAB. ECG graph is also displayed on the central monitor using MATLAB software. The central LCD also displays parameters like heartbeat and temperature. The module is less expensive, consumes low power and has good range.",
"title": ""
},
{
"docid": "05696249c57c4b0a52ddfd5598a34f00",
"text": "The quality of word representations is frequently assessed using correlation with human judgements of word similarity. Here, we question whether such intrinsic evaluation can predict the merits of the representations for downstream tasks. We study the correlation between results on ten word similarity benchmarks and tagger performance on three standard sequence labeling tasks using a variety of word vectors induced from an unannotated corpus of 3.8 billion words, and demonstrate that most intrinsic evaluations are poor predictors of downstream performance. We argue that this issue can be traced in part to a failure to distinguish specific similarity from relatedness in intrinsic evaluation datasets. We make our evaluation tools openly available to facilitate further study.",
"title": ""
},
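The intrinsic evaluation criticized above is typically implemented as a rank correlation between model similarities and human judgements. A minimal version of that benchmark computation, with illustrative argument names, looks like this:

```python
import numpy as np
from scipy.stats import spearmanr

def word_similarity_eval(vectors, pairs, human_scores):
    """vectors: dict word -> np.ndarray; pairs: list of (w1, w2); human_scores: gold ratings."""
    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    model_scores = [cosine(vectors[a], vectors[b]) for a, b in pairs]
    rho, _ = spearmanr(model_scores, human_scores)
    return rho   # the single score reported by most intrinsic evaluations
```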
{
"docid": "e9ff17015d40f5c6dd5091557f336f43",
"text": "Web sites that accept and display content such as wiki articles or comments typically filter the content to prevent injected script code from running in browsers that view the site. The diversity of browser rendering algorithms and the desire to allow rich content make filtering quite difficult, however, and attacks such as the Samy and Yamanner worms have exploited filtering weaknesses. This paper proposes a simple alternative mechanism for preventing script injection called Browser-Enforced Embedded Policies (BEEP). The idea is that a web site can embed a policy in its pages that specifies which scripts are allowed to run. The browser, which knows exactly when it will run a script, can enforce this policy perfectly. We have added BEEP support to several browsers, and built tools to simplify adding policies to web applications. We found that supporting BEEP in browsers requires only small and localized modifications, modifying web applications requires minimal effort, and enforcing policies is generally lightweight.",
"title": ""
}
] |
scidocsrr
|
b27001b8f4a0f7d2953e8b647afb775c
|
Physiotherapy Exercises Recognition Based on RGB-D Human Skeleton Models
|
[
{
"docid": "29e1ecb7b1dfbf4ca2a229726dcab12e",
"text": "The recently developed depth sensors, e.g., the Kinect sensor, have provided new opportunities for human-computer interaction (HCI). Although great progress has been made by leveraging the Kinect sensor, e.g., in human body tracking, face recognition and human action recognition, robust hand gesture recognition remains an open problem. Compared to the entire human body, the hand is a smaller object with more complex articulations and more easily affected by segmentation errors. It is thus a very challenging problem to recognize hand gestures. This paper focuses on building a robust part-based hand gesture recognition system using Kinect sensor. To handle the noisy hand shapes obtained from the Kinect sensor, we propose a novel distance metric, Finger-Earth Mover's Distance (FEMD), to measure the dissimilarity between hand shapes. As it only matches the finger parts while not the whole hand, it can better distinguish the hand gestures of slight differences. The extensive experiments demonstrate that our hand gesture recognition system is accurate (a 93.2% mean accuracy on a challenging 10-gesture dataset), efficient (average 0.0750 s per frame), robust to hand articulations, distortions and orientation or scale changes, and can work in uncontrolled environments (cluttered backgrounds and lighting conditions). The superiority of our system is further demonstrated in two real-life HCI applications.",
"title": ""
},
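FEMD as described matches only the finger parts of the hand shape. The toy sketch below is not the authors' FEMD, which is defined on a specific hand representation with further terms described in the paper; it only illustrates the underlying idea of an earth-mover cost between detected fingers, here 1-D angular positions weighted by finger size:

```python
from scipy.stats import wasserstein_distance

def finger_emd(fingers_a, fingers_b):
    """Each argument: (angular positions of detected fingers, relative finger sizes)."""
    pos_a, size_a = fingers_a
    pos_b, size_b = fingers_b
    return wasserstein_distance(pos_a, pos_b, u_weights=size_a, v_weights=size_b)

# two hands, each with three detected fingers (positions in degrees, sizes normalized)
print(finger_emd(([20, 90, 160], [0.3, 0.4, 0.3]),
                 ([25, 95, 150], [0.35, 0.35, 0.3])))
```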
{
"docid": "749728f5301311db9aec203ab54248c3",
"text": "Human posture recognition is an attractive and challenging topic in computer vision because of its wide range of application. The coming of low cost device Kinect with its SDK gives us a possibility to resolve with ease some difficult problems encountered when working with conventional cameras. In this paper, we explore the capacity of using skeleton information provided by Kinect for human posture recognition in a context of a health monitoring framework. We conduct 7 different experiments with 4 types of features extracted from human skeleton. The obtained results show that this device can detect with high accuracy four interested postures (lying, sitting, standing, bending).",
"title": ""
}
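The four postures above (lying, sitting, standing, bending) are separable with a few simple geometric features computed from the Kinect skeleton joints. A sketch of such features follows; the joint names and the particular feature set are hypothetical, since the paper compares four different feature types of its own:

```python
import numpy as np

def angle_deg(u, v):
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def posture_features(joints):
    """joints: dict of joint name -> np.array([x, y, z]) with the y axis pointing up."""
    vertical = np.array([0.0, 1.0, 0.0])
    torso = joints["shoulder_center"] - joints["hip_center"]
    thigh = joints["knee_right"] - joints["hip_right"]
    return np.array([
        angle_deg(torso, vertical),                    # near 0 when upright, near 90 when lying
        angle_deg(thigh, torso),                       # large when sitting
        joints["head"][1] - joints["ankle_right"][1],  # vertical head-to-ankle spread
    ])
```

Features of this kind can then feed any standard classifier, or simple thresholds for the four classes.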
] |
[
{
"docid": "fe014ab328ff093deadca25eab9d965f",
"text": "Since conventional microstrip hairpin filter and diplexer are inherently formed by coupled-line resonators, spurious response and poor isolation performance are unavoidable. This letter presents a simple technique that is suitable for an inhomogeneous structure such as microstrip to cure such poor performances. The technique is based on the stepped impedance coupled-line resonator and is verified by the experimental results of the designed 0.9GHz/1.8GHz microstrip hairpin diplexer.",
"title": ""
},
{
"docid": "06f99b18bae3f15e77db8ff2d8c159cc",
"text": "The exact nature of the relationship among species range sizes, speciation, and extinction events is not well understood. The factors that promote larger ranges, such as broad niche widths and high dispersal abilities, could increase the likelihood of encountering new habitats but also prevent local adaptation due to high gene flow. Similarly, low dispersal abilities or narrower niche widths could cause populations to be isolated, but such populations may lack advantageous mutations due to low population sizes. Here we present a large-scale, spatially explicit, individual-based model addressing the relationships between species ranges, speciation, and extinction. We followed the evolutionary dynamics of hundreds of thousands of diploid individuals for 200,000 generations. Individuals adapted to multiple resources and formed ecological species in a multidimensional trait space. These species varied in niche widths, and we observed the coexistence of generalists and specialists on a few resources. Our model shows that species ranges correlate with dispersal abilities but do not change with the strength of fitness trade-offs; however, high dispersal abilities and low resource utilization costs, which favored broad niche widths, have a strong negative effect on speciation rates. An unexpected result of our model is the strong effect of underlying resource distributions on speciation: in highly fragmented landscapes, speciation rates are reduced.",
"title": ""
},
{
"docid": "20fafc2ea5ae88eff0ed98ac031963ab",
"text": "Outpatient scheduling is considered as a complex problem. Efficient solutions to this problem are required by many health care facilities. This paper proposes an efficient approach to outpatient scheduling by specifying a bidding method and converting it to a group role assignment problem. The proposed approach is validated by conducting simulations and experiments with randomly generated patient requests for available time slots. The major contribution of this paper is an efficient outpatient scheduling approach making automatic outpatient scheduling practical. The exciting result is due to the consideration of outpatient scheduling as a collaborative activity and the creation of a qualification matrix in order to apply the group role assignment algorithm.",
"title": ""
},
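Once the bids are converted into a qualification matrix, the core matching step resembles a classic assignment problem, which can be solved optimally with the Hungarian algorithm. The reduction is illustrated below with made-up matrix values; the paper's group role assignment formulation is more general than this one-to-one case:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# qualification[i, j]: how well patient i's bid matches available time slot j (higher is better)
qualification = np.array([[0.8, 0.1, 0.5],
                          [0.3, 0.9, 0.4],
                          [0.6, 0.2, 0.7]])

rows, cols = linear_sum_assignment(-qualification)   # negate to maximize total qualification
schedule = {f"patient_{i}": f"slot_{j}" for i, j in zip(rows, cols)}
print(schedule, qualification[rows, cols].sum())
```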
{
"docid": "5e530aefee0a4b1ef986a086a17078fd",
"text": "One key property of word embeddings currently under study is their capacity to encode hypernymy. Previous works have used supervised models to recover hypernymy structures from embeddings. However, the overall results do not clearly show how well we can recover such structures. We conduct the first dataset-centric analysis that shows how only the Baroni dataset provides consistent results. We empirically show that a possible reason for its good performance is its alignment to dimensions specific of hypernymy: generality and similarity.",
"title": ""
},
{
"docid": "04065494023ed79211af3ba0b5bc4c7e",
"text": "The glucagon-like peptides include glucagon, GLP-1, and GLP-2, and exert diverse actions on nutrient intake, gastrointestinal motility, islet hormone secretion, cell proliferation and apoptosis, nutrient absorption, and nutrient assimilation. GIP, a related member of the glucagon peptide superfamily, also regulates nutrient disposal via stimulation of insulin secretion. The actions of these peptides are mediated by distinct members of the glucagon receptor superfamily of G protein-coupled receptors. These receptors exhibit unique patterns of tissue-specific expression, exhibit considerable amino acid sequence identity, and share similar structural and functional properties with respect to ligand binding and signal transduction. This article provides an overview of the biology of these receptors with an emphasis on understanding the unique actions of glucagon-related peptides through studies of the biology of their cognate receptors.",
"title": ""
},
{
"docid": "ec6b6463fdbabbaade4c9186b14e7acf",
"text": "In order for robots to learn from people with no machine learning expertise, robots should learn from natural human instruction. Most machine learning techniques that incorporate explanations require people to use a limited vocabulary and provide state information, even if it is not intuitive. This paper discusses a software agent that learned to play the Mario Bros. game using explanations. Our goals to improve learning from explanations were twofold: 1) to filter explanations into advice and warnings and 2) to learn policies from sentences without state information. We used sentiment analysis to filter explanations into advice of what to do and warnings of what to avoid. We developed object-focused advice to represent what actions the agent should take when dealing with objects. A reinforcement learning agent used object-focused advice to learn policies that maximized its reward. After mitigating false negatives, using sentiment as a filter was approximately 85% accurate. object-focused advice performed better than when no advice was given, the agent learned where to apply the advice, and the agent could recover from adversarial advice. We also found the method of interaction should be designed to ease the cognitive load of the human teacher or the advice may be of poor quality.",
"title": ""
},
{
"docid": "5bdf4585df04c00ebcf00ce94a86ab38",
"text": "High-voltage pulse-generators can be used effectively for bacterial decontamination in water treatment applications. Applying a pulsed electric field to the infected water sample guarantees killing of harmful germs and bacteria. In this paper, a modular high-voltage pulse-generator with sequential charging is proposed for water treatment via underwater pulsed streamer corona discharge. The proposed generator consists of series-connected modules similar to an arm of a modular multilevel converter. The modules' capacitors are charged sequentially from a relatively low-voltage dc supply, then they are connected in series and discharged into the load. Two configurations are proposed in this paper, one for low repetitive pulse rate applications, and the other for high repetitive pulse rates. In the first topology, the equivalent resistance of the infected water sample is used as a charging resistance for the generator's capacitors during the charging process. While in the second topology, the water resistance is bypassed during the charging process, and an external charging resistance with proper value is used instead. In this paper, detailed designs for the proposed pulse-generators are presented and validated by simulation results using MATLAB. A scaled down experimental setup has been built to show the viability of the proposed concept.",
"title": ""
},
{
"docid": "1364758783c75a39112d01db7e7cfc63",
"text": "Steganography plays an important role in secret communication in digital worlds and open environments like Internet. Undetectability and imperceptibility of confidential data are major challenges of steganography methods. This article presents a secure steganography method in frequency domain based on partitioning approach. The cover image is partitioned into 8×8 blocks and then integer wavelet transform through lifting scheme is performed for each block. The symmetric RC4 encryption method is applied to secret message to obtain high security and authentication. Tree Scan Order is performed in frequency domain to find proper location for embedding secret message. Secret message is embedded in cover image with minimal degrading of the quality. Experimental results demonstrate that the proposed method has achieved superior performance in terms of high imperceptibility of stego-image and it is secure against statistical attack in comparison with existing methods.",
"title": ""
},
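The RC4 step mentioned above is a standard stream cipher and easy to reproduce; the wavelet-domain embedding itself is not shown here. A self-contained RC4 sketch follows (encryption and decryption are the same operation); note that RC4 is no longer considered secure for new designs and appears here only because the abstract uses it:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                               # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                                  # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

cipher = rc4(b"secret-key", b"hidden message")
assert rc4(b"secret-key", cipher) == b"hidden message"
```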
{
"docid": "1c9a14804cd1bd673c2547642f9b6683",
"text": "In this paper we applied multilabel classification algorithms to the EUR-Lex database of legal documents of the European Union. On this document collection, we studied three different multilabel classification problems, the largest being the categorization into the EUROVOC concept hierarchy with almost 4000 classes. We evaluated three algorithms: (i) the binary relevance approach which independently trains one classifier per label; (ii) the multiclass multilabel perceptron algorithm, which respects dependencies between the base classifiers; and (iii) the multilabel pairwise perceptron algorithm, which trains one classifier for each pair of labels. All algorithms use the simple but very efficient perceptron algorithm as the underlying classifier, which makes them very suitable for large-scale multilabel classification problems. The main challenge we had to face was that the almost 8,000,000 perceptrons that had to be trained in the pairwise setting could no longer be stored in memory. We solve this problem by resorting to the dual representation of the perceptron, which makes the pairwise approach feasible for problems of this size. The results on the EUR-Lex database confirm the good predictive performance of the pairwise approach and demonstrates the feasibility of this approach for large-scale tasks.",
"title": ""
},
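The memory trick mentioned above relies on the dual form of the perceptron, where a classifier is represented by per-example mistake counts instead of an explicit weight vector. A minimal binary dual perceptron is sketched below; the pairwise multilabel machinery of the paper is omitted:

```python
import numpy as np

def train_dual_perceptron(X, y, epochs=10):
    """X: (n, d) array; y: labels in {-1, +1}. Returns per-example mistake counts alpha."""
    n = X.shape[0]
    alpha = np.zeros(n)
    K = X @ X.T                                    # Gram matrix (linear kernel)
    for _ in range(epochs):
        for i in range(n):
            score = (alpha * y) @ K[:, i]
            if y[i] * score <= 0:                  # mistake: strengthen example i
                alpha[i] += 1.0
    return alpha

def predict(alpha, X, y, x_new):
    return 1 if (alpha * y) @ (X @ x_new) > 0 else -1
```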
{
"docid": "b69e6bf80ad13a60819ae2ebbcc93ae0",
"text": "Computational manufacturing technologies such as 3D printing hold the potential for creating objects with previously undreamed-of combinations of functionality and physical properties. Human designers, however, typically cannot exploit the full geometric (and often material) complexity of which these devices are capable. This STAR examines recent systems developed by the computer graphics community in which designers specify higher-level goals ranging from structural integrity and deformation to appearance and aesthetics, with the final detailed shape and manufacturing instructions emerging as the result of computation. It summarizes frameworks for interaction, simulation, and optimization, as well as documents the range of general objectives and domain-specific goals that have been considered. An important unifying thread in this analysis is that different underlying geometric and physical representations are necessary for different tasks: we document over a dozen classes of representations that have been used for fabrication-aware design in the literature. We analyze how these classes possess obvious advantages for some needs, but have also been used in creative manners to facilitate unexpected problem solutions.",
"title": ""
},
{
"docid": "ff952443eef41fb430ff2831b5ee33d5",
"text": "The increasing activity in the Intelligent Transportation Systems (ITS) area faces a strong limitation: the slow pace at which the automotive industry is making cars \"smarter\". On the contrary, the smartphone industry is advancing quickly. Existing smartphones are endowed with multiple wireless interfaces and high computational power, being able to perform a wide variety of tasks. By combining smartphones with existing vehicles through an appropriate interface we are able to move closer to the smart vehicle paradigm, offering the user new functionalities and services when driving. In this paper we propose an Android-based application that monitors the vehicle through an On Board Diagnostics (OBD-II) interface, being able to detect accidents. Our proposed application estimates the G force experienced by the passengers in case of a frontal collision, which is used together with airbag triggers to detect accidents. The application reacts to positive detection by sending details about the accident through either e-mail or SMS to pre-defined destinations, immediately followed by an automatic phone call to the emergency services. Experimental results using a real vehicle show that the application is able to react to accident events in less than 3 seconds, a very low time, validating the feasibility of smartphone based solutions for improving safety on the road.",
"title": ""
},
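The frontal-collision heuristic above estimates the G force experienced by the occupants. A simplified estimate from two consecutive speed readings is sketched below; the data source, sampling interval, and threshold in the actual app may differ, and the 4 g cut-off here is purely illustrative:

```python
G = 9.81  # m/s^2

def longitudinal_g(speed_prev_kmh, speed_curr_kmh, dt_s):
    """Deceleration between two speed samples, expressed in g."""
    dv_ms = (speed_curr_kmh - speed_prev_kmh) / 3.6   # km/h -> m/s
    return abs(dv_ms / dt_s) / G

# e.g. 50 km/h to standstill in 0.2 s is roughly a 7 g event
suspected_crash = longitudinal_g(50, 0, 0.2) > 4.0
```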
{
"docid": "f79b5057cf1bd621f8a3a69efcd5e100",
"text": "A novel, tri-band, planar plate-type antenna made of a compact metal plate for wireless local area network (WLAN) applications in the 2.4GHz (2400–2484MHz), 5.2GHz (5150– 5350MHz), and 5.8 GHz (5725–5825 MHz) bands is presented. The antenna was designed in a way that the operating principle includes dipole and loop resonant modes to cover the 2.4/5.2 and 5.8 GHz bands, respectively. The antenna comprises a larger radiating arm and a smaller loop radiating arm, which are connected to each other at the signal ground point. The antenna can easily be fed by using a 50 Ω mini-coaxial cable and shows good radiation performance. Details of the design are described and discussed in the article.",
"title": ""
},
{
"docid": "0c67bd1867014053a5bec3869f3b4f8c",
"text": "BACKGROUND AND PURPOSE\nConstraint-induced movement therapy (CI therapy) has previously been shown to produce large improvements in actual amount of use of a more affected upper extremity in the \"real-world\" environment in patients with chronic stroke (ie, >1 year after the event). This work was carried out in an American laboratory. Our aim was to determine whether these results could be replicated in another laboratory located in Germany, operating within the context of a healthcare system in which administration of conventional types of physical therapy is generally more extensive than in the United States.\n\n\nMETHODS\nFifteen chronic stroke patients were given CI therapy, involving restriction of movement of the intact upper extremity by placing it in a sling for 90% of waking hours for 12 days and training (by shaping) of the more affected extremity for 7 hours on the 8 weekdays during that period.\n\n\nRESULTS\nPatients showed a significant and very large degree of improvement from before to after treatment on a laboratory motor test and on a test assessing amount of use of the affected extremity in activities of daily living in the life setting (effect sizes, 0.9 and 2.2, respectively), with no decrement in performance at 6-month follow-up. During a pretreatment control test-retest interval, there were no significant changes on these tests.\n\n\nCONCLUSIONS\nResults replicate in Germany the findings with CI therapy in an American laboratory, suggesting that the intervention has general applicability.",
"title": ""
},
{
"docid": "bbedbe2d901f63e3f163ea0f24a2e2d7",
"text": "a r t i c l e i n f o a b s t r a c t The leader trait perspective is perhaps the most venerable intellectual tradition in leadership research. Despite its early prominence in leadership research, it quickly fell out of favor among leadership scholars. Thus, despite recent empirical support for the perspective, conceptual work in the area lags behind other theoretical perspectives. Accordingly, the present review attempts to place the leader trait perspective in the context of supporting intellectual traditions, including evolutionary psychology and behavioral genetics. We present a conceptual model that considers the source of leader traits, mediators and moderators of their effects on leader emergence and leadership effectiveness, and distinguish between perceived and actual leadership effectiveness. We consider both the positive and negative effects of specific \" bright side \" personality traits: the Big Five traits, core self-evaluations, intelligence, and charisma. We also consider the positive and negative effects of \" dark side \" leader traits: Narcissism, hubris, dominance, and Machiavellianism. If one sought to find singular conditions that existed across species, one might find few universals. One universal that does exist, at least those species that have brains and nervous systems, is leadership. From insects to reptiles to mammals, leadership exists as surely as collective activity exists. There is the queen bee, and there is the alpha male. Though the centrality of leadership may vary by species (it seems more important to mammals than, say, to avians and reptiles), it is fair to surmise that whenever there is social activity, a social structure develops, and one (perhaps the) defining characteristic of that structure is the emergence of a leader or leaders. The universality of leadership, however, does not deny the importance of individual differences — indeed the emergence of leadership itself is proof of individual differences. Moreover, even casual observation of animal (including human) collective behavior shows the existence of a leader. Among a herd of 100 cattle or a pride of 20 lions, one is able to detect a leadership structure (especially at times of eating, mating, and attack). One quickly wonders: What has caused this leadership structure to emerge? Why has one animal (the alpha) emerged to lead the collective? And how does this leadership cause this collective to flourish — or founder? Given these questions, it is of no surprise that the earliest conceptions of leadership focused on individual …",
"title": ""
},
{
"docid": "f5e6df40898a5b84f8e39784f9b56788",
"text": "OBJECTIVE\nTo determine the prevalence of anxiety and depression among medical students at Nishtar Medical College, Multan.\n\n\nMETHODS\nA cross-sectional study was carried out at Nishtar Medical College, Multan in 2008. The questionnaire was administered to 815 medical students who had spent more than 6 months in college and had no self reported physical illness. They were present at the time of distribution of the questionnaires and consented. Prevalence of anxiety and depression was assessed using a structured validated questionnaire, the Aga Khan University Anxiety and Depression Scale with a cut-off score of 19. Data Analysis was done using SPSS v. 14.\n\n\nRESULTS\nOut of 815 students, 482 completed the questionnaire with a response rate of 59.14%. The mean age of students was 20.66 +/- 1.8 years. A high prevalence of anxiety and depression (43.89%) was found amongst medical students. Prevalence of anxiety and depression among students of first, second, third, fourth and final years was 45.86%, 52.58%, 47.14%, 28.75% and 45.10% respectively. Female students were found to be more depressed than male students (OR = 2.05, 95% CI = 1.42-2.95, p = 0.0001). There was a significant association between the prevalence of anxiety and depression and the respective year of medical college (p = 0.0276). It was seen that age, marital status, locality and total family income did not significantly affect the prevalence of anxiety and depression.\n\n\nCONCLUSIONS\nThe results showed that medical students constitute a vulnerable group that has a high prevalence of psychiatric morbidity comprising of anxiety and depression.",
"title": ""
},
{
"docid": "3f45d5b611b59e0bcaa0ff527d11f5af",
"text": "Ensemble methods use multiple models to get better performance. Ensemble methods have been used in multiple research fields such as computational intelligence, statistics and machine learning. This paper reviews traditional as well as state-of-the-art ensemble methods and thus can serve as an extensive summary for practitioners and beginners. The ensemble methods are categorized into conventional ensemble methods such as bagging, boosting and random forest, decomposition methods, negative correlation learning methods, multi-objective optimization based ensemble methods, fuzzy ensemble methods, multiple kernel learning ensemble methods and deep learning based ensemble methods. Variations, improvements and typical applications are discussed. Finally this paper gives some recommendations for future research directions.",
"title": ""
},
{
"docid": "171fd68f380f445723b024f290a02d69",
"text": "Cytokines, produced at the site of entry of a pathogen, drive inflammatory signals that regulate the capacity of resident and newly arrived phagocytes to destroy the invading pathogen. They also regulate antigen presenting cells (APCs), and their migration to lymph nodes to initiate the adaptive immune response. When naive CD4+ T cells recognize a foreign antigen-derived peptide presented in the context of major histocompatibility complex class II on APCs, they undergo massive proliferation and differentiation into at least four different T-helper (Th) cell subsets (Th1, Th2, Th17, and induced T-regulatory (iTreg) cells in mammals. Each cell subset expresses a unique set of signature cytokines. The profile and magnitude of cytokines produced in response to invasion of a foreign organism or to other danger signals by activated CD4+ T cells themselves, and/or other cell types during the course of differentiation, define to a large extent whether subsequent immune responses will have beneficial or detrimental effects to the host. The major players of the cytokine network of adaptive immunity in fish are described in this review with a focus on the salmonid cytokine network. We highlight the molecular, and increasing cellular, evidence for the existence of T-helper cells in fish. Whether these cells will match exactly to the mammalian paradigm remains to be seen, but the early evidence suggests that there will be many similarities to known subsets. Alternative or additional Th populations may also exist in fish, perhaps influenced by the types of pathogen encountered by a particular species and/or fish group. These Th cells are crucial for eliciting disease resistance post-vaccination, and hopefully will help resolve some of the difficulties in producing efficacious vaccines to certain fish diseases.",
"title": ""
},
{
"docid": "ba69b4c09bbcd6cfd50632a8d4bea877",
"text": "In this report we consider the current status of the coverage of computer science in education at the lowest levels of education in multiple countries. Our focus is on computational thinking (CT), a term meant to encompass a set of concepts and thought processes that aid in formulating problems and their solutions in different fields in a way that could involve computers [130].\n The main goal of this report is to help teachers, those involved in teacher education, and decision makers to make informed decisions about how and when CT can be included in their local institutions. We begin by defining CT and then discuss the current state of CT in K-9 education in multiple countries in Europe as well as the United States. Since many students are exposed to CT outside of school, we also discuss the current state of informal educational initiatives in the same set of countries.\n An important contribution of the report is a survey distributed to K-9 teachers, aiming at revealing to what extent different aspects of CT are already part of teachers' classroom practice and how this is done. The survey data suggest that some teachers are already involved in activities that have strong potential for introducing some aspects of CT. In addition to the examples given by teachers participating in the survey, we present some additional sample activities and lesson plans for working with aspects of CT in different subjects. We also discuss ways in which teacher training can be coordinated as well as the issue of repositories. We conclude with future directions for research in CT at school.",
"title": ""
},
{
"docid": "3907bddf6a56b96c4e474d46ddd04359",
"text": "The aim of this review is to discuss the accumulating evidence that suggests that grape extracts and purified grape polyphenols possess a diverse array of biological actions and may be beneficial in the prevention of some inflammatory-mediated diseases including cardiovascular disease. The active components from grape extracts, which include the grape seed, grape skin, and grape juice, that have been identified thus far include polyphenols such as resveratrol, phenolic acids, anthocyanins, and flavonoids. All possess potent antioxidant properties and have been shown to decrease low-density lipoprotein-cholesterol oxidation and platelet aggregation. These compounds also possess a range of additional cardioprotective and vasoprotective properties including antiatherosclerotic, antiarrhythmic, and vasorelaxation actions. Although not exclusive, antioxidant properties of grape polyphenols are likely to be central to their mechanism(s) of action, which also include cellular signaling mechanisms and interactions at the genomic level. This review discusses some of the evidence favoring the consumption of grape extracts rich in polyphenols in the prevention of cardiovascular disease. Consumption of grape and grape extracts and/or grape products such as red wine may be beneficial in preventing the development of chronic degenerative diseases such as cardiovascular disease.",
"title": ""
},
{
"docid": "c700a8a3dc4aa81c475e84fc1bbf9516",
"text": "A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect.",
"title": ""
}
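The test of joint significance recommended above simply requires both paths of the intervening-variable effect (X to M, and M to Y controlling for X) to be individually significant. A sketch with statsmodels follows; the column indexing assumes a single predictor plus the mediator:

```python
import numpy as np
import statsmodels.api as sm

def joint_significance(x, m, y, alpha=0.05):
    """Return True if both the a path (M ~ X) and the b path (Y ~ X + M) are significant."""
    a_fit = sm.OLS(m, sm.add_constant(x)).fit()
    b_fit = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
    p_a = a_fit.pvalues[1]        # coefficient of X in the mediator model
    p_b = b_fit.pvalues[2]        # coefficient of M in the outcome model
    return bool(p_a < alpha and p_b < alpha)
```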
] |
scidocsrr
|
7daf08238d130b9662bf4b08386d1cfd
|
A new infrared image enhancement algorithm
|
[
{
"docid": "82592f60e0039089e3c16d9534780ad5",
"text": "A model for grey-tone image enhancement using the concept of fuzzy sets is suggested. It involves primary enhancement, smoothing, and then final enhancement. The algorithm for both the primary and final enhancements includes the extraction of fuzzy properties corresponding to pixels and then successive applications of the fuzzy operator \"contrast intensifier\" on the property plane. The three different smoothing techniques considered in the experiment are defocussing, averaging, and max-min rule over the neighbors of a pixel. The reduction of the \"index of fuzziness\" and \"entropy\" for different enhanced outputs (corresponding to different values of fuzzifiers) is demonstrated for an English script input. Enhanced output as obtained by histogram modification technique is also presented for comparison.",
"title": ""
}
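The contrast intensifier referred to above has the classic piecewise-quadratic form, and the property plane is obtained with a membership mapping that can be inverted after intensification. A compact sketch along those lines is given below; the fuzzifier values Fd and Fe are illustrative, not taken from the paper:

```python
import numpy as np

def intensify(mu, passes=2):
    """Successive applications of the INT operator on membership values in [0, 1]."""
    for _ in range(passes):
        mu = np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)
    return mu

def fuzzy_enhance(img, fd=128.0, fe=2.0):
    x = img.astype(float)
    mu = (1.0 + (x.max() - x) / fd) ** (-fe)        # grey level -> fuzzy property plane
    mu = intensify(mu)                              # contrast intensification
    out = x.max() - fd * (mu ** (-1.0 / fe) - 1.0)  # back to the grey-level plane
    return np.clip(out, 0, 255).astype(np.uint8)
```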
] |
[
{
"docid": "814aa0089ce9c5839d028d2e5aca450d",
"text": "Espresso is a document-oriented distributed data serving platform that has been built to address LinkedIn's requirements for a scalable, performant, source-of-truth primary store. It provides a hierarchical document model, transactional support for modifications to related documents, real-time secondary indexing, on-the-fly schema evolution and provides a timeline consistent change capture stream. This paper describes the motivation and design principles involved in building Espresso, the data model and capabilities exposed to clients, details of the replication and secondary indexing implementation and presents a set of experimental results that characterize the performance of the system along various dimensions.\n When we set out to build Espresso, we chose to apply best practices in industry, already published works in research and our own internal experience with different consistency models. Along the way, we built a novel generic distributed cluster management framework, a partition-aware change- capture pipeline and a high-performance inverted index implementation.",
"title": ""
},
{
"docid": "4fb5658723d791803c1fe0fdbd7ebdeb",
"text": "WAP-8294A2 (lotilibcin, 1) is a potent antibiotic with superior in vivo efficacy to vancomycin against methicillin-resistant Staphylococcus aureus (MRSA). Despite the great medical importance, its molecular mode of action remains unknown. Here we report the total synthesis of complex macrocyclic peptide 1 comprised of 12 amino acids with a β-hydroxy fatty-acid chain, and its deoxy analogue 2. A full solid-phase synthesis of 1 and 2 enabled their rapid assembly and the first detailed investigation of their functions. Compounds 1 and 2 were equipotent against various strains of Gram-positive bacteria including MRSA. We present evidence that the antimicrobial activities of 1 and 2 are due to lysis of the bacterial membrane, and their membrane-disrupting effects depend on the presence of menaquinone, an essential factor for the bacterial respiratory chain. The established synthetic routes and the menaquinone-targeting mechanisms provide valuable information for designing and developing new antibiotics based on their structures.",
"title": ""
},
{
"docid": "caaec31a08d530071bd87e936eda79f4",
"text": "A string dictionary is a basic tool for storing a set of strings in many kinds of applications. Recently, many applications need space-efficient dictionaries to handle very large datasets. In this paper, we propose new compressed string dictionaries using improved double-array tries. The double-array trie is a data structure that can implement a string dictionary supporting extremely fast lookup of strings, but its space efficiency is low. We introduce approaches for improving the disadvantage. From experimental evaluations, our dictionaries can provide the fastest lookup compared to state-of-the-art compressed string dictionaries. Moreover, the space efficiency is competitive in many cases.",
"title": ""
},
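Lookup in a double-array trie follows the standard BASE/CHECK rule: from state s with input character c, the candidate next state is base[s] + code(c), and the move is valid only if the check entry of that state points back to s. A schematic lookup is shown below; the arrays and character coding are assumed to have been built elsewhere, and termination handling is simplified:

```python
def da_contains(base, check, code, key, root=0):
    """base, check: int lists; code: dict char -> int; returns True if the path for key exists."""
    s = root
    for ch in key:
        t = base[s] + code[ch]
        if t >= len(check) or check[t] != s:
            return False
        s = t
    return True   # a real dictionary would additionally test a terminal flag or leaf value at s
```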
{
"docid": "40ba65504518383b4ca2a6fabff261fe",
"text": "Fig. 1. Noirot and Quennedey's original classification of insect exocrine glands, based on a rhinotermitid sternal gland. The asterisk indicates a subcuticular space. Abbreviations: C, cuticle; D, duct cells; G1, secretory cells class 1; G2, secretory cells class 2; G3, secretory cells class 3; S, campaniform sensilla (modified after Noirot and Quennedey, 1974). ‘Describe the differences between endocrine and exocrine glands’, it sounds a typical exam question from a general biology course during our time at high school. Because of their secretory products being released to the outside world, exocrine glands definitely add flavour to our lives. Everybody is familiar with their secretions, from the salty and perhaps unpleasantly smelling secretions from mammalian sweat glands to the sweet exudates of the honey glands used by some caterpillars to attract ants, from the most painful venoms of bullet ants and scorpions to the precious wax that honeybees use to make their nest combs. Besides these functions, exocrine glands are especially known for the elaboration of a broad spectrum of pheromonal substances, and can also be involved in the production of antibiotics, lubricants, and digestive enzymes. Modern research in insect exocrinology started with the classical works of Charles Janet, who introduced a histological approach to the insect world (Billen and Wilson, 2007). The French school of insect anatomy remained strong since then, and the commonly used classification of insect exocrine glands generally follows the pioneer paper of Charles Noirot and Andr e Quennedey (1974). These authors were leading termite researchers using their extraordinary knowledge on termite glands to understand related phenomena, such as foraging and reproductive behaviour. They distinguish between class 1 with secretory cells adjoining directly to the cuticle, and class 3 with bicellular units made up of a large secretory cell and its accompanying duct cell that carries the secretion to the exterior (Fig. 1). The original classification included also class 2 secretory cells, but these are very rare and are only found in sternal and tergal glands of a cockroach and many termites (and also in the novel nasus gland described in this issue!). This classification became universally used, with the rather strange consequence that the vast majority of insect glands is illogically made up of class 1 and class 3 cells. In a follow-up paper, the uncommon class 2 cells were re-considered as oenocyte homologues (Noirot and Quennedey, 1991). Irrespectively of these objections, their 1974 pioneer paper is a cornerstone of modern works dealing with insect exocrine glands, as is also obvious in the majority of the papers in this special issue. This paper already received 545 citations at Web of Science and 588 at Google Scholar (both on 24 Aug 2015), so one can easily say that all researchers working on insect glands consider this work truly fundamental. Exocrine glands are organs of cardinal importance in all insects. The more common ones include mandibular and labial",
"title": ""
},
{
"docid": "6eb4eb9b80b73bdcd039dfc8e07c3f5a",
"text": "Code duplication or copying a code fragment and then reuse by pasting with or without any modifications is a well known code smell in software maintenance. Several studies show that about 5% to 20% of a software systems can contain duplicated code, which is basically the results of copying existing code fragments and using then by pasting with or without minor modifications. One of the major shortcomings of such duplicated fragments is that if a bug is detected in a code fragment, all the other fragments similar to it should be investigated to check the possible existence of the same bug in the similar fragments. Refactoring of the duplicated code is another prime issue in software maintenance although several studies claim that refactoring of certain clones are not desirable and there is a risk of removing them. However, it is also widely agreed that clones should at least be detected. In this paper, we survey the state of the art in clone detection research. First, we describe the clone terms commonly used in the literature along with their corresponding mappings to the commonly used clone types. Second, we provide a review of the existing clone taxonomies, detection approaches and experimental evaluations of clone detection tools. Applications of clone detection research to other domains of software engineering and in the same time how other domain can assist clone detection research have also been pointed out. Finally, this paper concludes by pointing out several open problems related to clone detection research. ∗This document represents our initial findings and a further study is being carried on. Reader’s feedback is welcome at croy@cs.queensu.ca.",
"title": ""
},
{
"docid": "cf264a124cc9f68cf64cacb436b64fa3",
"text": "Clustering validation has long been recognized as one of the vital issues essential to the success of clustering applications. In general, clustering validation can be categorized into two classes, external clustering validation and internal clustering validation. In this paper, we focus on internal clustering validation and present a detailed study of 11 widely used internal clustering validation measures for crisp clustering. From five conventional aspects of clustering, we investigate their validation properties. Experiment results show that S\\_Dbw is the only internal validation measure which performs well in all five aspects, while other measures have certain limitations in different application scenarios.",
"title": ""
},
{
"docid": "356a2c0b4837cf3d001068d43cb2b633",
"text": "A design is described of a broadband circularly-polarized (CP) slot antenna. A conventional annular-ring slot antenna is first analyzed, and it is found that two adjacent CP modes can be simultaneously excited through the proximity coupling of an L-shaped feed line. By tuning the dimensions of this L-shaped feed line, the two CP modes can be coupled together and a broad CP bandwidth is thus formed. The design method is also valid when the inner circular patch of the annular-ring slot antenna is vertically raised from the ground plane. In this case, the original band-limited ring slot antenna is converted into a wide-band structure that is composed of a circular wide slot and a parasitic patch, and consequently the CP bandwidth is further enhanced. For the patch-loaded wide slot antenna, its key parameters are investigated to show how to couple the two CP modes and achieve impedance matching. The effects of the distance between the parasitic patch and wide slot on the CP bandwidth and antenna gain are also presented and discussed in details.",
"title": ""
},
{
"docid": "784dc5ac8e639e3ba4103b4b8653b1ff",
"text": "Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We propose an alternate approach using L/sub 1/ norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.",
"title": ""
},
{
"docid": "b4d7a17eb034bcf5f6616d9338fe4265",
"text": "Accessory breasts, usually with a protuberant appearance, are composed of both the central accessory breast tissue and adjacent fat tissue. They are a palpable convexity and cosmetically unsightly. Consequently, patients often desire cosmetic improvement. The traditional general surgical treatment for accessory breasts is removal of the accessory breast tissue, fat tissue, and covering skin as a whole unit. A rather long ugly scar often is left after this operation. A minimally invasive method frequently used by the plastic surgeon is to “dig out” the accessory breast tissue. A central depression appearance often is left due to the adjacent fat tissue remnant. From the cosmetic point of view, neither a long scar nor a bulge is acceptable. A minimal incision is made, and the tumescent liposuction technique is used to aspirate out both the central accessory breast tissue and adjacent fat tissue. If there is an areola or nipple in the accessory breast, either the areola or nipple is excised after liposuction during the same operation. For patients who have too much extra skin in the accessory breast area, a small fusiform incision is made to remove the extra skin after the accessory breast tissue and fat tissue have been aspirated out. From August 2003 to January 2008, 51 patients underwent surgery using the described technique. All were satisfied with their appearance after their initial surgery except for two patients with minimal associated morbidity. This report describes a new approach for treating accessory breasts that results in minimal scarring and a better appearance than can be achieved with traditional methods.",
"title": ""
},
{
"docid": "4d3468bb14b7ad933baac5c50feec496",
"text": "Conventional material removal techniques, like CNC milling, have been proven to be able to tackle nearly any machining challenge. On the other hand, the major drawback of using conventional CNC machines is the restricted working area and their produced shape limitation limitations. From a conceptual point of view, industrial robot technology could provide an excellent base for machining being both flexible and cost efficient. However, industrial machining robots lack absolute positioning accuracy, are unable to reject/absorb disturbances in terms of process forces and lack reliable programming and simulation tools to ensure right first time machining, at production startups. This paper reviews the penetration of industrial robots in the challenging field of machining.",
"title": ""
},
{
"docid": "c6befaca710e45101b9a12dbc8110a0b",
"text": "The realized strategy contents of information systems (IS) strategizing are a result of both deliberate and emergent patterns of action. In this paper, we focus on emergent patterns of action by studying the formation of strategies that build on local technology-mediated practices. This is done through case study research of the emergence of a sustainability strategy at a European automaker. Studying the practices of four organizational sub-communities, we develop a process perspective of sub-communities’ activity-based production of strategy contents. The process model explains the contextual conditions that make subcommunities initiate SI strategy contents production, the activity-based process of strategy contents production, and the IS strategy outcome. The process model, which draws on Jarzabkowski’s strategy-as-practice lens and Mintzberg’s strategy typology, contributes to the growing IS strategizing literature that examines local practices in IS efforts of strategic importance. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3caa8fc1ea07fcf8442705c3b0f775c5",
"text": "Recent research in the field of computational social science have shown how data resulting from the widespread adoption and use of social media channels such as twitter can be used to predict outcomes such as movie revenues, election winners, localized moods, and epidemic outbreaks. Underlying assumptions for this research stream on predictive analytics are that social media actions such as tweeting, liking, commenting and rating are proxies for user/consumer's attention to a particular object/product and that the shared digital artefact that is persistent can create social influence. In this paper, we demonstrate how social media data from twitter can be used to predict the sales of iPhones. Based on a conceptual model of social data consisting of social graph (actors, actions, activities, and artefacts) and social text (topics, keywords, pronouns, and sentiments), we develop and evaluate a linear regression model that transforms iPhone tweets into a prediction of the quarterly iPhone sales with an average error close to the established prediction models from investment banks. This strong correlation between iPhone tweets and iPhone sales becomes marginally stronger after incorporating sentiments of tweets. We discuss the findings and conclude with implications for predictive analytics with big social data.",
"title": ""
},
{
"docid": "f9571dc9a91dd8c2c6495814c44c88c0",
"text": "Automatic number plate recognition is the task of extracting vehicle registration plates and labeling it for its underlying identity number. It uses optical character recognition on images to read symbols present on the number plates. Generally, numberplate recognition system includes plate localization, segmentation, character extraction and labeling. This research paper describes machine learning based automated Nepali number plate recognition model. Various image processing algorithms are implemented to detect number plate and to extract individual characters from it. Recognition system then uses Support Vector Machine (SVM) based learning and prediction on calculated Histograms of Oriented Gradients (HOG) features from each character. The system is evaluated on self-created Nepali number plate dataset. Evaluation accuracy of number plate character dataset is obtained as; 6.79% of average system error rate, 87.59% of average precision, 98.66% of average recall and 92.79% of average f-score. The accuracy of the complete number plate labeling experiment is obtained as 75.0%. Accuracy of the automatic number plate recognition is greatly influenced by the segmentation accuracy of the individual characters along with the size, resolution, pose, and illumination of the given image. Keywords—Nepali License Plate Recognition, Number Plate Detection, Feature Extraction, Histograms of Oriented Gradients, Optical Character Recognition, Support Vector Machines, Computer Vision, Machine Learning",
"title": ""
},
{
"docid": "6a0c269074d80f26453d1fec01cafcec",
"text": "Advances in neurobiology permit neuroscientists to manipulate specific brain molecules, neurons and systems. This has lead to major advances in the neuroscience of reward. Here, it is argued that further advances will require equal sophistication in parsing reward into its specific psychological components: (1) learning (including explicit and implicit knowledge produced by associative conditioning and cognitive processes); (2) affect or emotion (implicit 'liking' and conscious pleasure) and (3) motivation (implicit incentive salience 'wanting' and cognitive incentive goals). The challenge is to identify how different brain circuits mediate different psychological components of reward, and how these components interact.",
"title": ""
},
{
"docid": "0d2e9d514586f083007f5e93d8bb9844",
"text": "Recovering Matches: Analysis-by-Synthesis Results Starting point: Unsupervised learning of image matching Applications: Feature matching, structure from motion, dense optical flow, recognition, motion segmentation, image alignment Problem: Difficult to do directly (e.g. based on video) Insights: Image matching is a sub-problem of frame interpolation Frame interpolation can be learned from natural video sequences",
"title": ""
},
{
"docid": "c28b48557a4eda0d29200170435f2935",
"text": "An important role is reserved for nuclear imaging techniques in the imaging of neuroendocrine tumors (NETs). Somatostatin receptor scintigraphy (SRS) with (111)In-DTPA-octreotide is currently the most important tracer in the diagnosis, staging and selection for peptide receptor radionuclide therapy (PRRT). In the past decade, different positron-emitting tomography (PET) tracers have been developed. The largest group is the (68)Gallium-labeled somatostatin analogs ((68)Ga-SSA). Several studies have demonstrated their superiority compared to SRS in sensitivity and specificity. Furthermore, patient comfort and effective dose are favorable for (68)Ga-SSA. Other PET targets like β-[(11)C]-5-hydroxy-L-tryptophan ((11)C-5-HTP) and 6-(18)F-L-3,4-dihydroxyphenylalanine ((18)F-DOPA) were developed recently. For insulinomas, glucagon-like peptide-1 receptor imaging is a promising new technique. The evaluation of response after PRRT and other therapies is a challenge. Currently, the official follow-up is performed with radiological imaging techniques. The role of nuclear medicine may increase with the newest tracers for PET. In this review, the different nuclear imaging techniques and tracers for the imaging of NETs will be discussed.",
"title": ""
},
{
"docid": "b3f5d9335cccf62797c86b76fa2c9e7e",
"text": "For most families with elderly relatives, care within their own home is by far the most preferred option both for the elderly and their carers. However, frequently these carers are the partners of the person with long-term care needs, and themselves are elderly and in need of support to cope with the burdens and stress associated with these duties. When it becomes too much for them, they may have to rely on professional care services, or even use residential care for a respite. In order to support the carers as well as the elderly person, an ambient assisted living platform has been developed. The system records information about the activities of daily living using unobtrusive sensors within the home, and allows the carers to record their own wellbeing state. By providing facilities to schedule and monitor the activities of daily care, and providing orientation and advice to improve the care given and their own wellbeing, the system helps to reduce the burden on the informal carers. Received on 30 August 2016; accepted on 03 February 2017; published on 21 March 2017",
"title": ""
},
{
"docid": "dcec6ef9e08d7bcfa86aca8d045b6bd4",
"text": "This article examines the intellectual and institutional factors that contributed to the collaboration of neuropsychiatrist Warren McCulloch and mathematician Walter Pitts on the logic of neural networks, which culminated in their 1943 publication, \"A Logical Calculus of the Ideas Immanent in Nervous Activity.\" Historians and scientists alike often refer to the McCulloch-Pitts paper as a landmark event in the history of cybernetics, and fundamental to the development of cognitive science and artificial intelligence. This article seeks to bring some historical context to the McCulloch-Pitts collaboration itself, namely, their intellectual and scientific orientations and backgrounds, the key concepts that contributed to their paper, and the institutional context in which their collaboration was made. Although they were almost a generation apart and had dissimilar scientific backgrounds, McCulloch and Pitts had similar intellectual concerns, simultaneously motivated by issues in philosophy, neurology, and mathematics. This article demonstrates how these issues converged and found resonance in their model of neural networks. By examining the intellectual backgrounds of McCulloch and Pitts as individuals, it will be shown that besides being an important event in the history of cybernetics proper, the McCulloch-Pitts collaboration was an important result of early twentieth-century efforts to apply mathematics to neurological phenomena.",
"title": ""
},
{
"docid": "5a912359338b6a6c011e0d0a498b3e8d",
"text": "Learning Granger causality for general point processes is a very challenging task. In this paper, we propose an effective method, learning Granger causality, for a special but significant type of point processes — Hawkes process. According to the relationship between Hawkes process’s impact function and its Granger causality graph, our model represents impact functions using a series of basis functions and recovers the Granger causality graph via group sparsity of the impact functions’ coefficients. We propose an effective learning algorithm combining a maximum likelihood estimator (MLE) with a sparsegroup-lasso (SGL) regularizer. Additionally, the flexibility of our model allows to incorporate the clustering structure event types into learning framework. We analyze our learning algorithm and propose an adaptive procedure to select basis functions. Experiments on both synthetic and real-world data show that our method can learn the Granger causality graph and the triggering patterns of the Hawkes processes simultaneously.",
"title": ""
},
{
"docid": "e13d6cd043ea958e9731c99a83b6de18",
"text": "In this article, an overview and an in-depth analysis of the most discussed 5G waveform candidates are presented. In addition to general requirements, the nature of each waveform is revealed including the motivation, the underlying methodology, and the associated advantages and disadvantages. Furthermore, these waveform candidates are categorized and compared both qualitatively and quantitatively. By doing all these, the study in this work offers not only design guidelines but also operational suggestions for the 5G waveform.",
"title": ""
}
] |
scidocsrr
|
51daa90398d59d92015166b7fbbfd226
|
Data-driven advice for applying machine learning to bioinformatics problems
|
[
{
"docid": "40f21a8702b9a0319410b716bda0a11e",
"text": "A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90's. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance. An important aspect of our study is the use of a variety of performance criteria to evaluate the learning methods.",
"title": ""
},
{
"docid": "71b5c8679979cccfe9cad229d4b7a952",
"text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"title": ""
}
] |
[
{
"docid": "27bed0efd42918f783e16ca0cf0b8c4a",
"text": "This report documents the program and the outcomes of Dagstuhl Seminar 17301 “User-Generated Content in Social Media”. Social media have a profound impact on individuals, businesses, and society. As users post vast amounts of text and multimedia content every minute, the analysis of this user generated content (UGC) can offer insights to individual and societal concerns and could be beneficial to a wide range of applications. In this seminar, we brought together researchers from different subfields of computer science, such as information retrieval, multimedia, natural language processing, machine learning and social media analytics. We discussed the specific properties of UGC, the general research tasks currently operating on this type of content, identifying their limitations, and imagining new types of applications. We formed two working groups, WG1 “Fake News and Credibility”, WG2 “Summarizing and Story Telling from UGC”. WG1 invented an “Information Nutrition Label” that characterizes a document by different features such as e.g. emotion, opinion, controversy, and topicality; For computing these feature values, available methods and open research issues were identified. WG2 developed a framework for summarizing heterogeneous, multilingual and multimodal data, discussed key challenges and applications of this framework. Seminar July 23–28, 2017 – http://www.dagstuhl.de/17301 1998 ACM Subject Classification H Information Systems, H.5 Information Interfaces and Presentation, H.5.1 Multimedia Information Systems, H.3 Information Storage and Retrieval, H.1 Models and principles, I Computing methodologies, I.2 Artificial Intelligence, I.2.6 Learning, I.2.7 Natural language processing, J Computer Applications, J.4 Social and behavioural sciences, K Computing Milieux, K.4 Computers and Society, K.4.1 Public policy issues",
"title": ""
},
{
"docid": "69bb10420be07fe9fb0fd372c606d04e",
"text": "Contextual text mining is concerned with extracting topical themes from a text collection with context information (e.g., time and location) and comparing/analyzing the variations of themes over different contexts. Since the topics covered in a document are usually related to the context of the document, analyzing topical themes within context can potentially reveal many interesting theme patterns. In this paper, we generalize some of these models proposed in the previous work and we propose a new general probabilistic model for contextual text mining that can cover several existing models as special cases. Specifically, we extend the probabilistic latent semantic analysis (PLSA) model by introducing context variables to model the context of a document. The proposed mixture model, called contextual probabilistic latent semantic analysis (CPLSA) model, can be applied to many interesting mining tasks, such as temporal text mining, spatiotemporal text mining, author-topic analysis, and cross-collection comparative analysis. Empirical experiments show that the proposed mixture model can discover themes and their contextual variations effectively.",
"title": ""
},
{
"docid": "52e1acca8a09cec2a97822dc24d0ed7b",
"text": "In this paper virtual teams are defined as living systems and as such made up of people with different needs and characteristics. Groups generally perform better when they are able to establish a high level of group cohesion. According to Druskat and Wolff [2001] this status can be reached by establishing group emotional intelligence. Group emotional intelligence is reached via interactions among members and the interactions are allowed through the disposable linking factors. Virtual linking factors differ from traditional linking factors; therefore, the concept of Virtual Emotional Intelligence is here introduced in order to distinguish the group cohesion reaching process in virtual teams.",
"title": ""
},
{
"docid": "9de00d8cf6b3001f976fa49c42875620",
"text": "This paper is a preliminary report on the efficiency of two strategies of data reduction in a data preprocessing stage. In the first experiment, we apply the Count-Min sketching algorithm, while in the second experiment we discretize our data prior to applying the Count-Min algorithm. By conducting a discretization before sketching, the need for the increased number of buckets in sketching is reduced. This preliminary attempt of combining two methods with the same purpose has shown potential. In our experiments, we use sensor data collected to study the environmental fluctuation and its impact on the quality of fresh peaches and nectarines in cold chain.",
"title": ""
},
{
"docid": "1c1cc9d6b538fda6d2a38ff1dcce7085",
"text": "Major speech production models from speech science literature and a number of popular statistical “generative” models of speech used in speech technology are surveyed. Strengths and weaknesses of these two styles of speech models are analyzed, pointing to the need to integrate the respective strengths while eliminating the respective weaknesses. As an example, a statistical task-dynamic model of speech production is described, motivated by the original deterministic version of the model and targeted for integrated-multilingual speech recognition applications. Methods for model parameter learning (training) and for likelihood computation (recognition) are described based on statistical optimization principles integrated in neural network and dynamic system theories.",
"title": ""
},
{
"docid": "f4bdd6416013dfd2b552efef9c1b22e9",
"text": "ABSTRACT\nTraumatic hemipelvectomy is an uncommon and life threatening injury. We report a case of a 16-year-old boy involved in a traffic accident who presented with an almost circumferential pelvic wound with wide diastasis of the right sacroiliac joint and symphysis pubis. The injury was associated with complete avulsion of external and internal iliac vessels as well as the femoral and sciatic nerves. He also had ipsilateral open comminuted fractures of the femur and tibia. Emergency debridement and completion of amputation with preservation of the posterior gluteal flap and primary anastomosis of the inferior gluteal vessels to the internal iliac artery stump were performed. A free fillet flap was used to close the massive exposed area.\n\n\nKEY WORDS\ntraumatic hemipelvectomy, amputation, and free gluteus maximus fillet flap.",
"title": ""
},
{
"docid": "4e46fb5c1abb3379519b04a84183b055",
"text": "Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.",
"title": ""
},
{
"docid": "5cd48ee461748d989c40f8e0f0aa9581",
"text": "Being able to identify which rhetorical relations (e.g., contrast or explanation) hold between spans of text is important for many natural language processing applications. Using machine learning to obtain a classifier which can distinguish between different relations typically depends on the availability of manually labelled training data, which is very time-consuming to create. However, rhetorical relations are sometimes lexically marked, i.e., signalled by discourse markers (e.g., because, but, consequently etc.), and it has been suggested (Marcu and Echihabi, 2002) that the presence of these cues in some examples can be exploited to label them automatically with the corresponding relation. The discourse markers are then removed and the automatically labelled data are used to train a classifier to determine relations even when no discourse marker is present (based on other linguistic cues such as word co-occurrences). In this paper, we investigate empirically how feasible this approach is. In particular, we test whether automatically labelled, lexically marked examples are really suitable training material for classifiers that are then applied to unmarked examples. Our results suggest that training on this type of data may not be such a good strategy, as models trained in this way do not seem to generalise very well to unmarked data. Furthermore, we found some evidence that this behaviour is largely independent of the classifiers used and seems to lie in the data itself (e.g., marked and unmarked examples may be too dissimilar linguistically and removing unambiguous markers in the automatic labelling process may lead to a meaning shift in the examples).",
"title": ""
},
{
"docid": "601748e27c7b3eefa4ff29252b42bf93",
"text": "A simple, fast method is presented for the interpolation of texture coordinates and shading parameters for polygons viewed in perspective. The method has application in scan conversion algorithms like z-bu er and painter's algorithms that perform screen space interpolation of shading parameters such as texture coordinates, colors, and normal vectors. Some previous methods perform linear interpolation in screen space, but this is rotationally variant, and in the case of texture mapping, causes a disturbing \\rubber sheet\" e ect. To correctly compute the nonlinear, projective transformation between screen space and parameter space, we use rational linear interpolation across the polygon, performing several divisions at each pixel. We present simpler formulas for setting up these interpolation computations, reducing the setup cost per polygon to nil and reducing the cost per vertex to a handful of divisions. Additional keywords: incremental, perspective, projective, a ne.",
"title": ""
},
{
"docid": "c227f76c42ae34af11193e3ecb224ecb",
"text": "Antibiotics and antibiotic resistance determinants, natural molecules closely related to bacterial physiology and consistent with an ancient origin, are not only present in antibiotic-producing bacteria. Throughput sequencing technologies have revealed an unexpected reservoir of antibiotic resistance in the environment. These data suggest that co-evolution between antibiotic and antibiotic resistance genes has occurred since the beginning of time. This evolutionary race has probably been slow because of highly regulated processes and low antibiotic concentrations. Therefore to understand this global problem, a new variable must be introduced, that the antibiotic resistance is a natural event, inherent to life. However, the industrial production of natural and synthetic antibiotics has dramatically accelerated this race, selecting some of the many resistance genes present in nature and contributing to their diversification. One of the best models available to understand the biological impact of selection and diversification are β-lactamases. They constitute the most widespread mechanism of resistance, at least among pathogenic bacteria, with more than 1000 enzymes identified in the literature. In the last years, there has been growing concern about the description, spread, and diversification of β-lactamases with carbapenemase activity and AmpC-type in plasmids. Phylogenies of these enzymes help the understanding of the evolutionary forces driving their selection. Moreover, understanding the adaptive potential of β-lactamases contribute to exploration the evolutionary antagonists trajectories through the design of more efficient synthetic molecules. In this review, we attempt to analyze the antibiotic resistance problem from intrinsic and environmental resistomes to the adaptive potential of resistance genes and the driving forces involved in their diversification, in order to provide a global perspective of the resistance problem.",
"title": ""
},
{
"docid": "4927fee47112be3d859733c498fbf594",
"text": "To design effective tools for detecting and recovering from software failures requires a deep understanding of software bug characteristics. We study software bug characteristics by sampling 2,060 real world bugs in three large, representative open-source projects—the Linux kernel, Mozilla, and Apache. We manually study these bugs in three dimensions—root causes, impacts, and components. We further study the correlation between categories in different dimensions, and the trend of different types of bugs. The findings include: (1) semantic bugs are the dominant root cause. As software evolves, semantic bugs increase, while memory-related bugs decrease, calling for more research effort to address semantic bugs; (2) the Linux kernel operating system (OS) has more concurrency bugs than its non-OS counterparts, suggesting more effort into detecting concurrency bugs in operating system code; and (3) reported security bugs are increasing, and the majority of them are caused by semantic bugs, suggesting more support to help developers diagnose and fix security bugs, especially semantic security bugs. In addition, to reduce the manual effort in building bug benchmarks for evaluating bug detection and diagnosis tools, we use machine learning techniques to classify 109,014 bugs automatically.",
"title": ""
},
{
"docid": "089ef4e4469554a4d4ef75089fe9c7be",
"text": "The attention of software vendors has moved recently to SMEs (smallto medium-sized enterprises), offering them a vast range of enterprise systems (ES), which were formerly adopted by large firms only. From reviewing information technology innovation adoption literature, it can be argued that IT innovations are highly differentiated technologies for which there is not necessarily a single adoption model. Additionally, the question of why one SME adopts an ES while another does not is still understudied. This study intends to fill this gap by investigating the factors impacting SME adoption of ES. A qualitative approach was adopted in this study involving key decision makers in nine SMEs in the Northwest of England. The contribution of this study is twofold: it provides a framework that can be used as a theoretical basis for studying SME adoption of ES, and it empirically examines the impact of the factors within this framework on SME adoption of ES. The findings of this study confirm that factors impacting the adoption of ES are different from factors impacting SME adoption of other previously studied IT innovations. Contrary to large companies that are mainly affected by organizational factors, this study shows that SMEs are not only affected by environmental factors as previously established, but also affected by technological and organizational factors.",
"title": ""
},
{
"docid": "0bd3beaad8cd6d6f19603eca9320718d",
"text": "For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought. Vercellis, Carlo. Business intelligence : data mining and optimization for decision making / Carlo Vercellis. p. cm. Includes bibliographical references and index.",
"title": ""
},
{
"docid": "af2ccb9d51cd28426fd4f03e7454d7bf",
"text": "How we categorize certain objects depends on the processes they afford: something is a vehicle because it affords transportation, a house because it offers shelter or a watercourse because water can flow in it. The hypothesis explored here is that image schemas (such as LINK, CONTAINER, SUPPORT, and PATH) capture abstractions that are essential to model affordances and, by implication, categories. To test the idea, I develop an algebraic theory formalizing image schemas and accounting for the role of affordances in categorizing spatial entities.",
"title": ""
},
{
"docid": "ae9219c7e3d85b7b8f83569d000a02bb",
"text": "This paper proposes a bidirectional switched-capacitor dc-dc converter for applications that require high voltage gain. Some of conventional switched-capacitor dc-dc converters have diverse voltage or current stresses for the switching devices in the circuit, not suitable for modular configuration or for high efficiency demand; some suffer from relatively high power loss or large device count for high voltage gain, even if the device voltage stress could be low. By contrast, the proposed dc-dc converter features low component (switching device and capacitor) power rating, small switching device count, and low output capacitance requirement. In addition to its low current stress, the combination of two short symmetric paths of charge pumps further lowers power loss. Therefore, a small and light converter with high voltage gain and high efficiency can be achieved. Simulation and experimental results of a 450-W prototype with a voltage conversion ratio of six validate the principle and features of this topology.",
"title": ""
},
{
"docid": "eb8d681fcfd5b18c15dd09738ab4717c",
"text": "Building a dialogue agent to fulfill complex tasks, such as travel planning, is challenging because the agent has to learn to collectively complete multiple subtasks. For example, the agent needs to reserve a hotel and book a flight so that there leaves enough time for commute between arrival and hotel check-in. This paper addresses this challenge by formulating the task in the mathematical framework of options over Markov Decision Processes (MDPs), and proposing a hierarchical deep reinforcement learning approach to learning a dialogue manager that operates at different temporal scales. The dialogue manager consists of (1) a top-level dialogue policy that selects among subtasks or options, (2) a low-level dialogue policy that selects primitive actions to complete the subtask given by the top-level policy, and (3) a global state tracker that helps ensure all cross-subtask constraints be satisfied. Experiments on a travel planning task with simulated and real users show that our approach leads to significant improvements over two baselines, one based on handcrafted rules and the other based on flat deep reinforcement learning.",
"title": ""
},
{
"docid": "71bc346237c5f97ac245dd7b7bbb497f",
"text": "Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or property crime based on their sexual orientation; about half had experienced verbal harassment, and more than 1 in 10 reported having experienced employment or housing discrimination. Gay men were significantly more likely than lesbians or bisexuals to experience violence and property crimes. Employment and housing discrimination were significantly more likely among gay men and lesbians than among bisexual men and women. Implications for future research and policy are discussed.",
"title": ""
},
{
"docid": "4f40700ccdc1b6a8a306389f1d7ea107",
"text": "Skin cancer exists in different forms like Melanoma, Basal and Squamous cell Carcinoma among which Melanoma is the most dangerous and unpredictable. In this paper, we implement an image processing technique for the detection of Melanoma Skin Cancer using the software MATLAB which is easy for implementation as well as detection of Melanoma skin cancer. The input to the system is the skin lesion image. This image proceeds with the image pre-processing methods such as conversion of RGB image to Grayscale image, noise removal and so on. Further Otsu thresholding is used to segment the images followed by feature extraction that includes parameters like Asymmetry, Border Irregularity, Color and Diameter (ABCD) and then Total Dermatoscopy Score (TDS) is calculated. The calculation of TDS determines the presence of Melanoma skin cancer by classifying it as benign, suspicious or highly suspicious skin lesion.",
"title": ""
},
{
"docid": "419f031c3220676ba64c3ec983d4e160",
"text": "Volumetric muscle loss (VML) injuries exceed the considerable intrinsic regenerative capacity of skeletal muscle, resulting in permanent functional and cosmetic deficits. VML and VML-like injuries occur in military and civilian populations, due to trauma and surgery as well as due to a host of congenital and acquired diseases/syndromes. Current therapeutic options are limited, and new approaches are needed for a more complete functional regeneration of muscle. A potential solution is human hair-derived keratin (KN) biomaterials that may have significant potential for regenerative therapy. The goal of these studies was to evaluate the utility of keratin hydrogel formulations as a cell and/or growth factor delivery vehicle for functional muscle regeneration in a surgically created VML injury in the rat tibialis anterior (TA) muscle. VML injuries were treated with KN hydrogels in the absence and presence of skeletal muscle progenitor cells (MPCs), and/or insulin-like growth factor 1 (IGF-1), and/or basic fibroblast growth factor (bFGF). Controls included VML injuries with no repair (NR), and implantation of bladder acellular matrix (BAM, without cells). Initial studies conducted 8 weeks post-VML injury indicated that application of keratin hydrogels with growth factors (KN, KN+IGF-1, KN+bFGF, and KN+IGF-1+bFGF, n = 8 each) enabled a significantly greater functional recovery than NR (n = 7), BAM (n = 8), or the addition of MPCs to the keratin hydrogel (KN+MPC, KN+MPC+IGF-1, KN+MPC+bFGF, and KN+MPC+IGF-1+bFGF, n = 8 each) (p < 0.05). A second series of studies examined functional recovery for as many as 12 weeks post-VML injury after application of keratin hydrogels in the absence of cells. A significant time-dependent increase in functional recovery of the KN, KN+bFGF, and KN+IGF+bFGF groups was observed, relative to NR and BAM implantation, achieving as much as 90% of the maximum possible functional recovery. Histological findings from harvested tissue at 12 weeks post-VML injury documented significant increases in neo-muscle tissue formation in all keratin treatment groups as well as diminished fibrosis, in comparison to both BAM and NR. In conclusion, keratin hydrogel implantation promoted statistically significant and physiologically relevant improvements in functional outcomes post-VML injury to the rodent TA muscle.",
"title": ""
},
{
"docid": "f18a0ae573711eb97b9b4150d53182f3",
"text": "The Electrocardiogram (ECG) is commonly used to detect arrhythmias. Traditionally, a single ECG observation is used for diagnosis, making it difficult to detect irregular arrhythmias. Recent technology developments, however, have made it cost-effective to collect large amounts of raw ECG data over time. This promises to improve diagnosis accuracy, but the large data volume presents new challenges for cardiologists. This paper introduces ECGLens, an interactive system for arrhythmia detection and analysis using large-scale ECG data. Our system integrates an automatic heartbeat classification algorithm based on convolutional neural network, an outlier detection algorithm, and a set of rich interaction techniques. We also introduce A-glyph, a novel glyph designed to improve the readability and comparison of ECG signals. We report results from a comprehensive user study showing that A-glyph improves the efficiency in arrhythmia detection, and demonstrate the effectiveness of ECGLens in arrhythmia detection through two expert interviews.",
"title": ""
}
] |
scidocsrr
|
2c90d38baf7071352aa4a45ea975828a
|
Robust Extreme Multi-label Learning
|
[
{
"docid": "78f8d28f4b20abbac3ad848033bb088b",
"text": "Many real-world applications involve multilabel classification, in which the labels are organized in the form of a tree or directed acyclic graph (DAG). However, current research efforts typically ignore the label dependencies or can only exploit the dependencies in tree-structured hierarchies. In this paper, we present a novel hierarchical multilabel classification algorithm which can be used on both treeand DAG-structured hierarchies. The key idea is to formulate the search for the optimal consistent multi-label as the finding of the best subgraph in a tree/DAG. Using a simple greedy strategy, the proposed algorithm is computationally efficient, easy to implement, does not suffer from the problem of insufficient/skewed training data in classifier training, and can be readily used on large hierarchies. Theoretical results guarantee the optimality of the obtained solution. Experiments are performed on a large number of functional genomics data sets. The proposed method consistently outperforms the state-of-the-art method on both treeand DAG-structured hierarchies.",
"title": ""
},
{
"docid": "c6a44d2313c72e785ae749f667d5453c",
"text": "Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0; 1] from noisy data di = f(ti) + zi, i = 0; : : : ; n 1, ti = i=n, zi iid N(0; 1). The reconstruction f̂ n is de ned in the wavelet domain by translating all the empirical wavelet coe cients of d towards 0 by an amount p 2 log(n) = p n. We prove two results about that estimator. [Smooth]: With high probability f̂ n is at least as smooth as f , in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.",
"title": ""
},
{
"docid": "c60ffb344e85887e06ed178d4941eb0e",
"text": "Multi-label learning arises in many real-world tasks where an object is naturally associated with multiple concepts. It is well-accepted that, in order to achieve a good performance, the relationship among labels should be exploited. Most existing approaches require the label relationship as prior knowledge, or exploit by counting the label co-occurrence. In this paper, we propose the MAHR approach, which is able to automatically discover and exploit label relationship. Our basic idea is that, if two labels are related, the hypothesis generated for one label can be helpful for the other label. MAHR implements the idea as a boosting approach with a hypothesis reuse mechanism. In each boosting round, the base learner for a label is generated by not only learning on its own task but also reusing the hypotheses from other labels, and the amount of reuse across labels provides an estimate of the label relationship. Extensive experimental results validate that MAHR is able to achieve superior performance and discover reasonable label relationship. Moreover, we disclose that the label relationship is usually asymmetric.",
"title": ""
},
{
"docid": "2e8e601fd25bbee74b843af86eb98c5f",
"text": "In multi-label learning, each training example is associated with a set of labels and the task is to predict the proper label set for the unseen example. Due to the tremendous (exponential) number of possible label sets, the task of learning from multi-label examples is rather challenging. Therefore, the key to successful multi-label learning is how to effectively exploit correlations between different labels to facilitate the learning process. In this paper, we propose to use a Bayesian network structure to efficiently encode the conditional dependencies of the labels as well as the feature set, with the feature set as the common parent of all labels. To make it practical, we give an approximate yet efficient procedure to find such a network structure. With the help of this network, multi-label learning is decomposed into a series of single-label classification problems, where a classifier is constructed for each label by incorporating its parental labels as additional features. Label sets of unseen examples are predicted recursively according to the label ordering given by the network. Extensive experiments on a broad range of data sets validate the effectiveness of our approach against other well-established methods.",
"title": ""
}
] |
[
{
"docid": "a48ac362b2206e608303231593cf776b",
"text": "Model-based test case generation is gaining acceptance to the software practitioners. Advantages of this are the early detection of faults, reducing software development time etc. In recent times, researchers have considered different UML diagrams for generating test cases. Few work on the test case generation using activity diagrams is reported in literatures. However, the existing work consider activity diagrams in method scope and mainly follow UML 1.x for modeling. In this paper, we present an approach of generating test cases from activity diagrams using UML 2.0 syntax and with use case scope. We consider a test coverage criterion, called activity path coverage criterion. The test cases generated using our approach are capable of detecting more faults like synchronization faults, loop faults unlike the existing approaches.",
"title": ""
},
{
"docid": "1eea81ad47613c7cd436af451aea904d",
"text": "The Internet of Things (IoT) brings together a large variety of devices of different platforms, computational capacities and functionalities. The network heterogeneity and the ubiquity of IoT devices introduce increased demands on both security and privacy protection. Therefore, the cryptographic mechanisms must be strong enough to meet these increased requirements but, at the same time, they must be efficient enough for the implementation on constrained devices. In this paper, we present a detailed assessment of the performance of the most used cryptographic algorithms on constrained devices that often appear in IoT networks. We evaluate the performance of symmetric primitives, such as block ciphers, hash functions, random number generators, asymmetric primitives, such as digital signature schemes, and privacyenhancing schemes on various microcontrollers, smart-cards and mobile devices. Furthermore, we provide the analysis of the usability of upcoming schemes, such as the homomorphic encryption schemes, group signatures and attribute-based schemes. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "349f24f645b823a7b0cc411d5e2a308e",
"text": "In this paper, the analysis and design of an asymmetrical half bridge flyback DC-DC converter is presented, which can minimize the switching power loss by realizing the zero-voltage switching (ZVS) during the transition between the two switches and the zero-current-switching (ZCS) on the output diode. As a result, high efficiency can be achieved. The principle of the converter operation is explained and analyzed. In order to ensure the realization of ZVS in operation, the required interlock delay time between the gate signals of the two switches, the transformer leakage inductance, and the ZVS range of the output current variation are properly calculated. Experimental results from a 8 V/8 A, 200 kHz circuit are also presented, which verify the theoretical analysis.",
"title": ""
},
{
"docid": "e1958dc823feee7f88ab5bf256655bee",
"text": "We describe an approach for testing a software system for possible securi ty flaws. Traditionally, security testing is done using penetration analysis and formal methods. Based on the observation that most security flaws are triggered due to a flawed interaction with the envi ronment, we view the security testing problem as the problem of testing for the fault-tolerance prop erties of a software system. We consider each environment perturbation as a fault and the resulting security ompromise a failure in the toleration of such faults. Our approach is based on the well known techn ique of fault-injection. Environment faults are injected into the system under test and system beha vior observed. The failure to tolerate faults is an indicator of a potential security flaw in the syst em. An Environment-Application Interaction (EAI) fault model is proposed. EAI allows us to decide what f aults to inject. Based on EAI, we present a security-flaw classification scheme. This scheme was used to classif y 142 security flaws in a vulnerability database. This classification revealed that 91% of the security flaws in the database are covered by the EAI model.",
"title": ""
},
{
"docid": "79ca455db7e7348000c6590a442f9a4c",
"text": "This paper considers the electrical actuation of aircraft wing surfaces, with particular emphasis upon flap systems. It discusses existing electro-hydraulic systems and proposes an electrical alternative, examining the potential system benefits in terms of increased functionality, maintenance and life cycle costs. The paper then progresses to describe a full scale actuation demonstrator of the flap system, including the high speed electrical drive, step down gearbox and flaps. Detailed descriptions are given of the fault tolerant motor, power electronics, control architecture and position sensor systems, along with a range of test results, demonstrating the system in operation",
"title": ""
},
{
"docid": "d277a7e6a819af474b31c7a35b9c840f",
"text": "Blending face geometry in different expressions is a popular approach for facial animation in films and games. The quality of the animation relies on the set of blend shape expressions, and creating sufficient blend shapes takes a large amount of time and effort. This paper presents a complete pipeline to create a set of blend shapes in different expressions for a face mesh having only a neutral expression. A template blend shapes model having sufficient expressions is provided and the neutral expression of the template mesh model is registered into the target face mesh using a non-rigid ICP (iterative closest point) algorithm. Deformation gradients between the template and target neutral mesh are then transferred to each expression to form a new set of blend shapes for the target face. We solve optimization problem to consistently map the deformation of the source blend shapes to the target face model. The result is a new set of blend shapes for a target mesh having triangle-wise correspondences between the source face and target faces. After creating blend shapes, the blend shape animation of the source face is retargeted to the target mesh automatically.",
"title": ""
},
{
"docid": "43e3d3639d30d9e75da7e3c5a82db60a",
"text": "This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. With the outputs of the individual streams, we propose a simple and effective fusion method to generate the final predictions, where the optimal fusion weights are learned adaptively for each class, and the learning process is regularized by automatically estimated class relationships. Our contributions are two-fold. First, the proposed multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, we demonstrate that the adaptive fusion method using the class relationship as a regularizer outperforms traditional alternatives that estimate the weights in a “free” fashion. Our framework produces significantly better results than the state of the arts on two popular benchmarks, 92.2% on UCF-101 (without using audio) and 84.9% on Columbia Consumer Videos.",
"title": ""
},
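As a rough sketch of the adaptive fusion idea in the preceding abstract, the snippet below learns one weight per (stream, class) pair on held-out per-stream class scores, with a plain L2 penalty standing in for the class-relationship regularizer described by the authors. The data shapes, the toy data and the regularizer are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def learn_fusion_weights(stream_scores, labels, n_classes, lr=0.1, steps=200, l2=1e-3):
    """stream_scores: (n_streams, n_samples, n_classes) per-stream class scores.
    Learns per-(stream, class) fusion weights by gradient descent on cross-entropy."""
    n_streams, n_samples, _ = stream_scores.shape
    W = np.ones((n_streams, n_classes)) / n_streams        # start from uniform fusion
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        fused = np.einsum("snc,sc->nc", stream_scores, W)   # class-wise weighted sum of streams
        p = np.exp(fused - fused.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)                   # softmax over classes
        grad = np.einsum("snc,nc->sc", stream_scores, p - onehot) / n_samples
        W -= lr * (grad + l2 * W)                           # L2 term as a stand-in regularizer
    return W

# Toy usage with random scores from three hypothetical streams (spatial, motion, audio).
rng = np.random.default_rng(0)
scores = rng.random((3, 100, 5))
labels = rng.integers(0, 5, size=100)
weights = learn_fusion_weights(scores, labels, n_classes=5)
print(weights.shape)  # (3, 5): one fusion weight per stream and class
```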
{
"docid": "9f6f00bf0872c54fbf2ec761bf73f944",
"text": "Nanoscience emerged in the late 1980s and is developed and applied in China since the middle of the 1990s. Although nanotechnologies have been less developed in agronomy than other disciplines, due to less investment, nanotechnologies have the potential to improve agricultural production. Here, we review more than 200 reports involving nanoscience in agriculture, livestock, and aquaculture. The major points are as follows: (1) nanotechnologies used for seeds and water improved plant germination, growth, yield, and quality. (2) Nanotechnologies could increase the storage period for vegetables and fruits. (3) For livestock and poultry breeding, nanotechnologies improved animals immunity, oxidation resistance, and production and decreased antibiotic use and manure odor. For instance, the average daily gain of pig increased by 9.9–15.3 %, the ratio of feedstuff to weight decreased by 7.5–10.3 %, and the diarrhea rate decreased by 55.6–66.7 %. (4) Nanotechnologies for water disinfection in fishpond increased water quality and increased yields and survivals of fish and prawn. (5) Nanotechnologies for pesticides increased pesticide performance threefold and reduced cost by 50 %. (6) Nano urea increased the agronomic efficiency of nitrogen fertilization by 44.5 % and the grain yield by 10.2 %, versus normal urea. (7) Nanotechnologies are widely used for rapid detection and diagnosis, notably for clinical examination, food safety testing, and animal epidemic surveillance. (8) Nanotechnologies may also have adverse effects that are so far not well known.",
"title": ""
},
{
"docid": "9c85f1543c688d4fda2124f9d282264f",
"text": "Many modern sensors used for mapping produce 3D point clouds, which are typically registered together using the iterative closest point (ICP) algorithm. Because ICP has many variants whose performances depend on the environment and the sensor, hundreds of variations have been published. However, no comparison frameworks are available, leading to an arduous selection of an appropriate variant for particular experimental conditions. The first contribution of this paper consists of a protocol that allows for a comparison between ICP variants, taking into account a broad range of inputs. The second contribution is an open-source ICP library, which is fast enough to be usable in multiple real-world applications, while being modular enough to ease comparison of multiple solutions. This paper presents two examples of these field applications. The last contribution is the comparison of two baseline ICP variants using data sets that cover a rich variety of environments. Besides demonstrating the need for improved ICP methods for natural, unstructured and information-deprived environments, these baseline variants also provide a solid basis to which novel solutions could be compared. The combination of our protocol, software, and baseline results demonstrate convincingly how open-source software can push forward the research in mapping and navigation. F. Pomerleau (B) · F. Colas · R. Siegwart · S. Magnenat Autonomous System Lab, ETH Zurich, Tannenstrasse 3, 8092 Zurich, Switzerland e-mail: f.pomerleau@gmail.com F. Colas e-mail: francis.colas@mavt.ethz.ch R. Siegwart e-mail: rsiegwart@ethz.ch S. Magnenat e-mail: stephane@magnenat.net",
"title": ""
},
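For readers unfamiliar with the baseline being compared above, here is a bare-bones point-to-point ICP iteration (nearest-neighbour matching followed by an SVD-based rigid alignment). It is a didactic sketch only and omits the filters, sampling strategies and error metrics that the modular library is designed to compare.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    """Point-to-point ICP: returns a copy of src rigidly aligned to dst."""
    tree = cKDTree(dst)
    current = src.copy()
    for _ in range(iters):
        _, idx = tree.query(current)              # nearest-neighbour correspondences
        R, t = best_rigid_transform(current, dst[idx])
        current = current @ R.T + t
    return current
```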
{
"docid": "eb4fa30a38e27a27dc02c60e007d1f01",
"text": "In this paper the design and kinematic performances are presented for a low-cost parallel manipulator with 4 driven cables. It has been conceived for an easy programming of its operation by properly formulating the Kinematics of the parallel architecture that uses cables. A prototype has been built and tests have experienced the feasibility of the system design and its operation.",
"title": ""
},
{
"docid": "434ee509ddfe4afde1407aa3ea7ce9ca",
"text": "Phonocardiogram (PCG) signal is used as a diagnostic test in ambulatory monitoring in order to evaluate the heart hemodynamic status and to detect a cardiovascular disease. The objective of this study is to develop an automatic classification method for anomaly (normal vs. abnormal) and quality (good vs. bad) detection of PCG recordings without segmentation. For this purpose, a subset of 18 features is selected among 40 features based on a wrapper feature selection scheme. These features are extracted from time, frequency, and time-frequency domains without any segmentation. The selected features are fed into an ensemble of 20 feedforward neural networks for classification task. The proposed algorithm achieved the overall score of 91.50% (94.23% sensitivity and 88.76% specificity) and 85.90% (86.91% sensitivity and 84.90% specificity) on the train and unseen test datasets, respectively. The proposed method got the second best score in the PhysioNet/CinC Challenge 2016.",
"title": ""
},
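A toy version of the segmentation-free pipeline in the entry above: a handful of time- and frequency-domain features per recording, fed to a small ensemble of feedforward networks whose class probabilities are averaged. The feature choices, network sizes and ensemble size of 5 (rather than 20) are illustrative assumptions, not the challenge entry's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def pcg_features(signal, fs=2000):
    """A few segmentation-free time/frequency features of a PCG recording."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([signal.mean(), signal.std(),
                     np.mean(np.abs(np.diff(signal))),   # crude activity measure
                     centroid,
                     spectrum.max()])

def train_ensemble(X, y, n_models=5):
    """Train several small feedforward networks with different random seeds."""
    models = []
    for seed in range(n_models):
        clf = MLPClassifier(hidden_layer_sizes=(25,), max_iter=500, random_state=seed)
        models.append(clf.fit(X, y))
    return models

def predict_ensemble(models, X):
    # Average the per-model class probabilities (e.g., normal vs. abnormal).
    return np.mean([m.predict_proba(X) for m in models], axis=0)
```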
{
"docid": "85f41be6bac18846634c725505d78239",
"text": "We propose SmartEscape, a real-time, dynamic, intelligent and user-specific evacuation system with a mobile interface for emergency cases such as fire. Unlike past work, we explore dynamically changing conditions and calculate a personal route for an evacuee by considering his/her individual features. SmartEscape, which is fast, low-cost, low resource-consuming and mobile supported, collects various environmental sensory data and takes evacuees’ individual features into account, uses an artificial neural network (ANN) to calculate personal usage risk of each link in the building, eliminates the risky ones, and calculates an optimum escape route under existing circumstances. Then, our system guides the evacuee to the exit through the calculated route with vocal and visual instructions on the smartphone. While the position of the evacuee is detected by RFID (Radio-Frequency Identification) technology, the changing environmental conditions are measured by the various sensors in the building. Our ANN (Artificial Neural Network) predicts dynamically changing risk states of all links according to changing environmental conditions. Results show that SmartEscape, with its 98.1% accuracy for predicting risk levels of links for each individual evacuee in a building, is capable of evacuating a great number of people simultaneously, through the shortest and the safest route.",
"title": ""
},
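The routing step of the system above can be pictured as: predict a usage risk for every link, drop links whose risk exceeds a threshold, then run a shortest-path search on what remains. The sketch below uses a plain Dijkstra over an edge list; the graph, the risk scores and the threshold are made-up stand-ins for the ANN's output.

```python
import heapq

def safest_shortest_path(edges, risks, source, exit_node, risk_threshold=0.7):
    """edges: {(u, v): length}; risks: {(u, v): predicted usage risk in [0, 1]}."""
    graph = {}
    for (u, v), length in edges.items():
        if risks.get((u, v), 0.0) < risk_threshold:    # eliminate risky links
            graph.setdefault(u, []).append((v, length))
            graph.setdefault(v, []).append((u, length))
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == exit_node:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [], exit_node
    while node != source:
        path.append(node)
        node = prev[node]          # raises KeyError if no safe route exists
    return [source] + path[::-1]
```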
{
"docid": "d2e078d0e40b4be456c57f288c7aaa95",
"text": "This study examines the factors influencing online shopping behavior of urban consumers in the State of Andhra Pradesh, India and provides a better understanding of the potential of electronic marketing for both researchers and online retailers. Data from a sample of 1500 Internet users (distributed evenly among six selected major cities) was collected by a structured questionnaire covering demographic profile and the factors influencing online shopping. Factor analysis and multiple regression analysis are used to establish relationship between the factors influencing online shopping and online shopping behavior. The study identified that perceived risk and price positively influenced online shopping behavior. Results also indicated that positive attitude, product risk and financial risk affect negatively the online shopping behavior. Factors Influencing Online Shopping Behavior of Urban Consumers in India",
"title": ""
},
{
"docid": "0a5df67766cd1027913f7f595950754c",
"text": "While a number of efficient sequential pattern mining algorithms were developed over the years, they can still take a long time and produce a huge number of patterns, many of which are redundant. These properties are especially frustrating when the goal of pattern mining is to find patterns for use as features in classification problems. In this paper, we describe BIDE-Discriminative, a modification of BIDE that uses class information for direct mining of predictive sequential patterns. We then perform an extensive evaluation on nine real-life datasets of the different ways in which the basic BIDE-Discriminative can be used in real multi-class classification problems, including 1-versus-rest and model-based search tree approaches. The results of our experiments show that 1-versus-rest provides an efficient solution with good classification performance.",
"title": ""
},
{
"docid": "8240df0c9498482522ef86b4b1e924ab",
"text": "The advent of the IT-led era and the increased competition have forced companies to react to the new changes in order to remain competitive. Enterprise resource planning (ERP) systems offer distinct advantages in this new business environment as they lower operating costs, reduce cycle times and (arguably) increase customer satisfaction. This study examines, via an exploratory survey of 26 companies, the underlying reasons why companies choose to convert from conventional information systems (IS) to ERP systems and the changes brought in, particularly in the accounting process. The aim is not only to understand the changes and the benefits involved in adopting ERP systems compared with conventional IS, but also to establish the best way forward in future ERP applications. The empirical evidence confirms a number of changes in the accounting process introduced with the adoption of ERP systems.",
"title": ""
},
{
"docid": "6633bf4bf80c4c0a9ceb6024297476ce",
"text": "Software Testing In The Real World provides the reader with a tool-box for effectively improving the software testing process. The book gives the practicing. Improving software practices, delivering more customer value, and. The outsourcing process, Martin shares a real-life case study, including a.This work offers a toolbox for the practical implementation of the software testing process and how to improve it. Based on real-world issues and examples.Software Testing in the Real World provides the reader with a tool-box for effectively improving the software testing process. The book contains many testing.Software testing is a process, or a series of processes, designed to make sure. From working with this example, that thoroughly testing a complex, real-world.",
"title": ""
},
{
"docid": "a78782e389313600620bfb68fc57a81f",
"text": "Online consumer reviews reflect the testimonials of real people, unlike advertisements. As such, they have critical impact on potential consumers, and indirectly on businesses. According to a Harvard study (Luca 2011), +1 rise in star-rating increases revenue by 5–9%. Problematically, such financial incentives have created a market for spammers to fabricate reviews, to unjustly promote or demote businesses, activities known as opinion spam (Jindal and Liu 2008). A vast majority of existing work on this problem have formulations based on static review data, with respective techniques operating in an offline fashion. Spam campaigns, however, are intended to make most impact during their course. Abnormal events triggered by spammers’ activities could be masked in the load of future events, which static analysis would fail to identify. In this work, we approach the opinion spam problem with a temporal formulation. Specifically, we monitor a list of carefully selected indicative signals of opinion spam over time and design efficient techniques to both detect and characterize abnormal events in real-time. Experiments on datasets from two different review sites show that our approach is fast, effective, and practical to be deployed in real-world systems.",
"title": ""
},
{
"docid": "197e64b55c60c684cfd9696652df7a2e",
"text": "We describe a method to estimate the power spectral density of nonstationary noise when a noisy speech signal is given. The method can be combined with any speech enhancement algorithm which requires a noise power spectral density estimate. In contrast to other methods, our approach does not use a voice activity detector. Instead it tracks spectral minima in each frequency band without any distinction between speech activity and speech pause. By minimizing a conditional mean square estimation error criterion in each time step we derive the optimal smoothing parameter for recursive smoothing of the power spectral density of the noisy speech signal. Based on the optimally smoothed power spectral density estimate and the analysis of the statistics of spectral minima an unbiased noise estimator is developed. The estimator is well suited for real time implementations. Furthermore, to improve the performance in nonstationary noise we introduce a method to speed up the tracking of the spectral minima. Finally, we evaluate the proposed method in the context of speech enhancement and low bit rate speech coding with various noise types.",
"title": ""
},
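A stripped-down version of the minimum-tracking idea in the abstract above: recursively smooth the noisy-speech periodogram in each frequency band and take the minimum over a sliding window as the noise estimate. Using a fixed smoothing constant and omitting the bias compensation and the optimal time-varying smoothing parameter are simplifications made purely for illustration.

```python
import numpy as np

def minimum_statistics_noise(power_spectra, alpha=0.85, window=96):
    """power_spectra: (n_frames, n_bins) periodograms of noisy speech.
    Returns a (n_frames, n_bins) noise power estimate via minimum tracking."""
    n_frames, n_bins = power_spectra.shape
    smoothed = np.empty_like(power_spectra)
    noise = np.empty_like(power_spectra)
    smoothed[0] = power_spectra[0]
    for t in range(1, n_frames):
        # Recursive smoothing of the noisy-speech PSD in every band.
        smoothed[t] = alpha * smoothed[t - 1] + (1 - alpha) * power_spectra[t]
    for t in range(n_frames):
        lo = max(0, t - window + 1)
        # Spectral minima over the window track the noise floor, with or without speech activity.
        noise[t] = smoothed[lo:t + 1].min(axis=0)
    return noise
```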
{
"docid": "04f939d59dcfdca93bbc60577c78e073",
"text": "This paper presents a k-nearest neighbors (kNN) method to detect outliers in large-scale traffic data collected daily in every modern city. Outliers include hardware and data errors as well as abnormal traffic behaviors. The proposed kNN method detects outliers by exploiting the relationship among neighborhoods in data points. The farther a data point is beyond its neighbors, the more possible the data is an outlier. Traffic data here was recorded in a video format, and converted to spatial-temporal (ST) traffic signals by statistics. The ST signals are then transformed to a two-dimensional (2D) (x, y) -coordinate plane by Principal Component Analysis (PCA) for dimension reduction. The distance-based kNN method is evaluated by unsupervised and semi-supervised approaches. The semi-supervised approach reaches 96.19% accuracy.",
"title": ""
},
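The detection step described above boils down to: project the spatial-temporal signals to 2D with PCA, then score each point by its distance to its k nearest neighbours. A minimal sketch under those assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def knn_outlier_scores(traffic_signals, k=5, n_components=2):
    """traffic_signals: (n_samples, n_features) ST signals; returns one outlier score per sample."""
    xy = PCA(n_components=n_components).fit_transform(traffic_signals)
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(xy)      # +1 because each point is its own neighbour
    distances, _ = nbrs.kneighbors(xy)
    scores = distances[:, 1:].mean(axis=1)                  # mean distance to the k nearest neighbours
    return scores                                           # the larger the score, the more likely an outlier
```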
{
"docid": "8ad1d9fe113f2895e29860ebf773a502",
"text": "Recent advances in sensor technologies and instrumentation have led to an extraordinary growth of data sources and streaming applications. A wide variety of devices, from smart phones to dedicated sensors, have the capability of collecting and streaming large amounts of data at unprecedented rates. A number of distinct streaming data models have been proposed. Typical applications for this include smart cites & built environments for instance, where sensor-based infrastructures continue to increase in scale and variety. Understanding how such streaming content can be processed within some time threshold remains a non-trivial and important research topic. We investigate how a cloud-based computational infrastructure can autonomically respond to such streaming content, offering Quality of Service guarantees. We propose an autonomic controller (based on feedback control and queueing theory) to elastically provision virtual machines to meet performance targets associated with a particular data stream. Evaluation is carried out using a federated Cloud-based infrastructure (implemented using CometCloud)-where the allocation of new resources can be based on: (i) differences between sites, i.e., types of resources supported (e.g., GPU versus CPU only), (ii) cost of execution; (iii) failure rate and likely resilience, etc. In particular, we demonstrate how Little's Law-a widely used result in queuing theory-can be adapted to support dynamic control in the context of such resource provisioning.",
"title": ""
}
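Little's Law (L = λW) is the piece of the controller above that lends itself to a tiny worked example: given a measured arrival rate and a per-item processing time on one VM, the expected number of in-flight items tells you how many VMs keep up with the stream. The numbers, the headroom factor and the helper function are illustrative, not taken from the paper.

```python
import math

def vms_needed(arrival_rate, service_time, concurrency_per_vm=1.0, headroom=0.8):
    """arrival_rate: items/s entering the stream; service_time: seconds per item on one VM.
    Little's Law: in-flight items L = arrival_rate * service_time."""
    in_flight = arrival_rate * service_time
    capacity = concurrency_per_vm * headroom        # keep each VM below full utilisation
    return max(1, math.ceil(in_flight / capacity))

# 120 sensor readings/s, 50 ms of processing each -> L = 6 items in flight.
print(vms_needed(arrival_rate=120, service_time=0.05))    # 8 VMs at 80% headroom
```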
] |
scidocsrr
|
c58039248b9e300d92b209ed56a523c0
|
Penile Dysmorphic Disorder: Development of a Screening Scale.
|
[
{
"docid": "35725331e4abd61ed311b14086dd3d5c",
"text": "BACKGROUND\nBody dysmorphic disorder (BDD) consists of a preoccupation with an 'imagined' defect in appearance which causes significant distress or impairment in functioning. There has been little previous research into BDD. This study replicates a survey from the USA in a UK population and evaluates specific measures of BDD.\n\n\nMETHOD\nCross-sectional interview survey of 50 patients who satisfied DSM-IV criteria for BDD as their primary disorder.\n\n\nRESULTS\nThe average age at onset was late adolescence and a large proportion of patients were either single or divorced. Three-quarters of the sample were female. There was a high degree of comorbidity with the most common additional Axis l diagnosis being either a mood disorder (26%), social phobia (16%) or obsessive-compulsive disorder (6%). Twenty-four per cent had made a suicide attempt in the past. Personality disorders were present in 72% of patients, the most common being paranoid, avoidant and obsessive-compulsive.\n\n\nCONCLUSIONS\nBDD patients had a high associated comorbidity and previous suicide attempts. BDD is a chronic handicapping disorder and patients are not being adequately identified or treated by health professionals.",
"title": ""
}
] |
[
{
"docid": "7c1bc4197f3924e499f63cecb1e548ff",
"text": "This study examined the effect of varying rearing and testing conditions on guinea pig aggression, courting behavior, endocrine responses and body weight. Pairs of 7-8-month-old males were placed in chronic confrontations for 6-50 days in 2 m2 enclosures. Social behavior was recorded with a total of 882 h observation time. Body weight as well as plasma glucocorticoid, testosterone and norepinephrine titers were determined for each male 20 h before, and 4, 52 and 124 h after, the onset of the chronic encounters. Three experiments were conducted: in Experiment I, 7 pairs of males, each male raised singly with one female (FRM), were confronted in the presence of an unfamiliar female, in Experiment II, 6 pairs of FRM were confronted with no female present, and in Experiment III, 7 pairs of males which were raised in different large colonies were confronted in the presence of an unfamiliar female. In Experiment II and III low levels of aggression, no distinct endocrine changes and no indications of physical injury occurred in winners or losers, whereas in Experiment I high levels of aggression and courting behavior, extreme increases in glucocorticoid titers and distinct decreases in body weights were found in both males. Losers, however, were affected to a much greater extent than winners. These findings suggest that in guinea pigs a causal relationship exists between social rearing conditions, behavior as adults and degree of social stress in chronic encounters.",
"title": ""
},
{
"docid": "9f6fb1de80f4500384097978c3712c68",
"text": "Reflection is a language feature which allows to analyze and transform the behavior of classes at the runtime. Reflection is used for software debugging and testing. Malware authors can leverage reflection to subvert the malware detection by static analyzers. Reflection initializes the class, invokes any method of class, or accesses any field of class. But, instead of utilizing usual programming language syntax, reflection passes classes/methods etc. as parameters to reflective APIs. As a consequence, these parameters can be constructed dynamically or can be encrypted by malware. These cannot be detected by state-of-the-art static tools. We propose EspyDroid, a system that combines dynamic analysis with code instrumentation for a more precise and automated detection of malware employing reflection. We evaluate EspyDroid on 28 benchmark apps employing major reflection categories. Our technique show improved results over FlowDroid via detection of additional undetected flows. These flows have potential to leak sensitive and private information of the users, through various sinks.",
"title": ""
},
{
"docid": "5eab71f546a7dc8bae157a0ca4dd7444",
"text": "We introduce a new usability inspection method called HED (heuristic evaluation during demonstrations) for measuring and comparing usability of competing complex IT systems in public procurement. The method presented enhances traditional heuristic evaluation to include the use context, comprehensive view of the system, and reveals missing functionality by using user scenarios and demonstrations. HED also quantifies the results in a comparable way. We present findings from a real-life validation of the method in a large-scale procurement project of a healthcare and social welfare information system. We analyze and compare the performance of HED to other usability evaluation methods used in procurement. Based on the analysis HED can be used to evaluate the level of usability of an IT system during procurement correctly, comprehensively and efficiently.",
"title": ""
},
{
"docid": "a86c79f52fc8399ab00430459d4f0737",
"text": "Complex networks have emerged as a simple yet powerful framework to represent and analyze a wide range of complex systems. The problem of ranking the nodes and the edges in complex networks is critical for a broad range of real-world problems because it affects how we access online information and products, how success and talent are evaluated in human activities, and how scarce resources are allocated by companies and policymakers, among others. This calls for a deep understanding of how existing ranking algorithmsperform, andwhich are their possible biases thatmay impair their effectiveness. Many popular ranking algorithms (such as Google’s PageRank) are static in nature and, as a consequence, they exhibit important shortcomings when applied to real networks that rapidly evolve in time. At the same time, recent advances in the understanding and modeling of evolving networks have enabled the development of a wide and diverse range of ranking algorithms that take the temporal dimension into account. The aim of this review is to survey the existing ranking algorithms, both static and time-aware, and their applications to evolving networks.We emphasize both the impact of network evolution on well-established static algorithms and the benefits from including the temporal dimension for tasks such as prediction of network traffic, prediction of future links, and identification of significant nodes. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
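As a concrete anchor for the static side of the review above, here is a minimal PageRank by power iteration on a column-stochastic matrix. The damping factor of 0.85 and the uniform handling of dangling nodes are the usual textbook choices, not anything specific to the survey.

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-9, max_iter=200):
    """adjacency[i, j] = 1 if node j links to node i. Returns the PageRank vector."""
    A = np.asarray(adjacency, dtype=float)
    n = A.shape[0]
    out_degree = A.sum(axis=0)
    # Columns are normalised to probabilities; dangling nodes spread their weight uniformly.
    M = np.where(out_degree > 0, A / np.where(out_degree > 0, out_degree, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = damping * M @ r + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r
```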
{
"docid": "cb9f89949979f2144e45e06dccdde2e8",
"text": "This paper describes the double mode surface acoustic wave (DMS) filter design techniques for achieving the ultra-steep cut-off characteristics and low insertion loss required for the Rx filter in the personal communications services (PCS) duplexer. Simulations demonstrate that the optimal combination of the additional common ground inductance Lg and the coupling capacitance Cc between the input and output terminals of the DMS filters drastically enhances the skirt steepness and attenuation for the lower frequency side of the passband. Based on this result, we propose a novel DMS filter structure that utilizes the parasitic reactance generated in bonding wires and interdigital transducer (IDT) busbars as Lg and Cc, respectively. Because the proposed structure does not need any additional reactance component, the filter size can be small. Moreover, we propose a compact multiple-connection configuration for low insertion loss. Applying these technologies to the Rx filter, we successfully develop a PCS SAW duplexer.",
"title": ""
},
{
"docid": "c9f7aa228bb7615e29e67b0653f47848",
"text": "This paper proposes an algorithm that improves the locality of a loop nest by transforming the code via interchange, reversal, skewing and tiling. The loop transformation algorithm is based on two concepts: a mathematical formulation of reuse and locality, and a loop transformation theory that unifies the various transforms as unimodular matrix transformations.The algorithm has been implemented in the SUIF (Stanford University Intermediate Format) compiler, and is successful in optimizing codes such as matrix multiplication, successive over-relaxation (SOR), LU decomposition without pivoting, and Givens QR factorization. Performance evaluation indicates that locality optimization is especially crucial for scaling up the performance of parallel code.",
"title": ""
},
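To make the locality transformations above concrete, here is the tiling part of the story applied by hand to a plain matrix multiplication. Python is used only to keep the illustration in a single language for this document; the compiler described in the paper performs the equivalent restructuring automatically on loop nests in the source program.

```python
def matmul_tiled(A, B, tile=32):
    """C = A @ B with loop tiling (blocking), so that tile-sized blocks of A, B and C
    are reused while they are still hot in cache."""
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, k, tile):
            for jj in range(0, m, tile):
                # The three innermost loops touch only one tile of each matrix.
                for i in range(ii, min(ii + tile, n)):
                    for p in range(kk, min(kk + tile, k)):
                        a_ip = A[i][p]
                        for j in range(jj, min(jj + tile, m)):
                            C[i][j] += a_ip * B[p][j]
    return C
```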
{
"docid": "70df369be2c95afd04467cd291e60175",
"text": "In this paper, we introduce two novel metric learning algorithms, χ-LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. The two approaches achieve this goal in fundamentally different ways: χ-LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear χ-distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient-boosting to learn non-linear mappings directly in function space and takes advantage of this approach’s robustness, speed, parallelizability and insensitivity towards the single additional hyperparameter. On various benchmark data sets, we demonstrate these methods not only match the current state-of-the-art in terms of kNN classification error, but in the case of χ-LMNN, obtain best results in 19 out of 20 learning settings.",
"title": ""
},
{
"docid": "ebc7f54b969eb491afb7032f6c2a46b6",
"text": "The Wi-Fi fingerprinting (WF) technique normally suffers from the RSS (Received Signal Strength) variance problem caused by environmental changes that are inherent in both the training and localization phases. Several calibration algorithms have been proposed but they only focus on the hardware variance problem. Moreover, smartphones were not evaluated and these are now widely used in WF systems. In this paper, we analyze various aspect of the RSS variance problem when using smartphones for WF: device type, device placement, user direction, and environmental changes over time. To overcome the RSS variance problem, we also propose a smartphone-based, indoor pedestrian-tracking system. The scheme uses the location where the maximum RSS is observed, which is preserved even though RSS varies significantly. We experimentally validate that the proposed system is robust to the RSS variance problem.",
"title": ""
},
{
"docid": "b440605c81b9a0e14b568704f522ab5c",
"text": "Smoking-induced diseases are known to be the leading cause of death in the United States. In this work, we design RisQ, a mobile solution that leverages a wristband containing a 9-axis inertial measurement unit to capture changes in the orientation of a person's arm, and a machine learning pipeline that processes this data to accurately detect smoking gestures and sessions in real-time. Our key innovations are four-fold: a) an arm trajectory-based method that extracts candidate hand-to-mouth gestures, b) a set of trajectory-based features to distinguish smoking gestures from confounding gestures including eating and drinking, c) a probabilistic model that analyzes sequences of hand-to-mouth gestures and infers which gestures are part of individual smoking sessions, and d) a method that leverages multiple IMUs placed on a person's body together with 3D animation of a person's arm to reduce burden of self-reports for labeled data collection. Our experiments show that our gesture recognition algorithm can detect smoking gestures with high accuracy (95.7%), precision (91%) and recall (81%). We also report a user study that demonstrates that we can accurately detect the number of smoking sessions with very few false positives over the period of a day, and that we can reliably extract the beginning and end of smoking session periods.",
"title": ""
},
{
"docid": "f24f686a705a1546d211ac37d5cc2fdb",
"text": "In commercial-off-the-shelf (COTS) multi-core systems, a task running on one core can be delayed by other tasks running simultaneously on other cores due to interference in the shared DRAM main memory. Such memory interference delay can be large and highly variable, thereby posing a significant challenge for the design of predictable real-time systems. In this paper, we present techniques to provide a tight upper bound on the worst-case memory interference in a COTS-based multi-core system. We explicitly model the major resources in the DRAM system, including banks, buses and the memory controller. By considering their timing characteristics, we analyze the worst-case memory interference delay imposed on a task by other tasks running in parallel. To the best of our knowledge, this is the first work bounding the request re-ordering effect of COTS memory controllers. Our work also enables the quantification of the extent by which memory interference can be reduced by partitioning DRAM banks. We evaluate our approach on a commodity multi-core platform running Linux/RK. Experimental results show that our approach provides an upper bound very close to our measured worst-case interference.",
"title": ""
},
{
"docid": "9d0b7f84d0d326694121a8ba7a3094b4",
"text": "Passive sensing of human hand and limb motion is important for a wide range of applications from human-computer interaction to athletic performance measurement. High degree of freedom articulated mechanisms like the human hand are di cult to track because of their large state space and complex image appearance. This article describes a model-based hand tracking system, called DigitEyes, that can recover the state of a 27 DOF hand model from ordinary gray scale images at speeds of up to 10 Hz.",
"title": ""
},
{
"docid": "00904281e8f6d5770e1ba3ff7febd20b",
"text": "This paper proposes a data-driven method for concept-to-text generation, the task of automatically producing textual output from non-linguistic input. A key insight in our approach is to reduce the tasks of content selection (“what to say”) and surface realization (“how to say”) into a common parsing problem. We define a probabilistic context-free grammar that describes the structure of the input (a corpus of database records and text describing some of them) and represent it compactly as a weighted hypergraph. The hypergraph structure encodes exponentially many derivations, which we rerank discriminatively using local and global features. We propose a novel decoding algorithm for finding the best scoring derivation and generating in this setting. Experimental evaluation on the ATIS domain shows that our model outperforms a competitive discriminative system both using BLEU and in a judgment elicitation study.",
"title": ""
},
{
"docid": "2271dd42ca1f9682dc10c9832387b55f",
"text": "People who score low on a performance test overestimate their own performance relative to others, whereas high scorers slightly underestimate their own performance. J. Kruger and D. Dunning (1999) attributed these asymmetric errors to differences in metacognitive skill. A replication study showed no evidence for mediation effects for any of several candidate variables. Asymmetric errors were expected because of statistical regression and the general better-than-average (BTA) heuristic. Consistent with this parsimonious model, errors were no longer asymmetric when either regression or the BTA effect was statistically removed. In fact, high rather than low performers were more error prone in that they were more likely to neglect their own estimates of the performance of others when predicting how they themselves performed relative to the group.",
"title": ""
},
{
"docid": "96e10f0858818ce150dba83882557aee",
"text": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem since such visualization can reveal deep insights out of complex data. Most of the existing embedding approaches, however, run on an excessively high precision, ignoring the fact that at the end, embedding outputs are converted into coarsegrained discrete pixel coordinates in a screen space. Motivated by such an observation and directly considering pixel coordinates in an embedding optimization process, we accelerate Barnes-Hut tree-based t-distributed stochastic neighbor embedding (BH-SNE), known as a state-of-the-art 2D embedding method, and propose a novel method called PixelSNE, a highly-efficient, screen resolution-driven 2D embedding method with a linear computational complexity in terms of the number of data items. Our experimental results show the significantly fast running time of PixelSNE by a large margin against BH-SNE, while maintaining the minimal degradation in the embedding quality. Finally, the source code of our method is publicly available at https: //github.com/awesome-davian/sasne.",
"title": ""
},
{
"docid": "abdd8eb3c08b63762cb0a0dffdbade12",
"text": "Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by describing some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time.",
"title": ""
},
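The usual way the batch-to-online step is realised for bagging in this line of work is to replace sampling-with-replacement by giving each incoming example a Poisson(1) weight per base model. The sketch below follows that scheme with scikit-learn base learners that support `partial_fit`; treat it as an illustration of the idea rather than the authors' exact algorithm.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class OnlineBagging:
    """Online bagging: each example is shown to each base model k ~ Poisson(1) times."""
    def __init__(self, n_models=10, classes=(0, 1), seed=0):
        self.models = [SGDClassifier(random_state=i) for i in range(n_models)]
        self.classes = np.asarray(classes)
        self.rng = np.random.default_rng(seed)
        self._seen_first = False

    def partial_fit(self, x, y):
        x = np.asarray(x).reshape(1, -1)
        for model in self.models:
            k = self.rng.poisson(1.0)       # how many times this model sees the example
            if not self._seen_first:
                k = max(k, 1)               # make sure every model is initialised once
            for _ in range(k):
                model.partial_fit(x, [y], classes=self.classes)
        self._seen_first = True

    def predict(self, x):
        x = np.asarray(x).reshape(1, -1)
        votes = [model.predict(x)[0] for model in self.models]
        values, counts = np.unique(votes, return_counts=True)
        return values[np.argmax(counts)]    # majority vote over the ensemble
```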
{
"docid": "e6869c4f8cdd0e321d59ff24c6b09ef2",
"text": "Economic pressures continue to force the petroleum industry to seek less expensive alternatives to conventional gravity based separation. The gas-liquid cylindrical cyclone (GLCC) is a simple, compact, low-cost separator that can provide an economically attractive alternative to conventional separators over a wide range of applications. Although cyclones have long been used for liquid/liquid, solid/liquid, and gas/solid separation, they have experienced only limited use in full range gas/liquid separation applications. The biggest impediment to the wide spread use of GLCCs has been the lack of reliable performance prediction tools in order to know where they can and cannot be successfully applied. This paper presents the status of the development of the GLCC, the state-of-the-art with respect to modeling the GLCC, and discusses current installations and potential applications. INTRODUCTION The GLCC is a simple, compact, low-weight and low-cost separator that is rapidly gaining popularity as an alternative to conventional gravity based separators. Shown in Fig 1 is a simple GLCC consisting of a vertical pipe with a tangential inlet and outlets for gas and liquid. The tangential flow from the inlet to the body of the GLCC causes the flow to swirl with sufficient tangential velocity to produce centripetal forces on the entrained gas which are an order of magnitude higher than the force of gravity. The combination of gravitational and centrifugal forces pushes the liquid radially outward and downward toward the liquid exit, while the gas is driven inward and upward toward the gas exit. The performance of a GLCC is characterized by it’s operational envelope which is bounded by lines of constant liquid carry-over in the gas stream and constant gas carryunder in the liquid stream. The onset to liquid carry-over is identified by the smallest flow of liquid observed in the gas stream. Similarly, the first observable bubbles in the liquid underflow mark the onset of gas carry-under. Despite the long history of cyclone technology and the seemingly simple design and operation of the GLCC, these cyclones have not been widely used for full-range gas-liquid separation. Part of the reluctance to use GLCCs must be attributed to the uncertainty in predicting performance of the GLCC over a full range of gas-liquid flows. The difficulty in developing accurate performance predictions is largely due to the variety of complex flow patterns that can occur in the GLCC. The flow patterns above the inlet can include bubble, slug, churn, mist and liquid ribbon. Below the inlet the flow generally consists of a liquid vortex with a gas core. At lower liquid levels, a region of annular swirl flow may exist between the inlet and the vortex. Further refinements of flow pattern definition below the inlet have not been made. This difficulty in predicting the hydrodynamic performance of the GLCC has been the single largest obstruction to broader usage of the GLCC. Even without tried and tested performance predictions, several successful applications of GLCCs have been reported. The development of reliable performance prediction tools will govern the speed and extent to which the GLCC technology can spread in existing and new applications. APPLICATIONS The GLCC has a distinct advantage over conventional gravity based separators when compactness, weight, and cost are the overriding considerations for separator selection. 
There are a variety of applications where requirements may vary from partial to complete gas-liquid separation. Below are some of the current installations and potential applications for the GLCC. Multiphase Measurement Loop: Figure 2 shows a GLCC in a multiphase metering loop configuration. This type of measurement loop configuration affords several advantages over either conventional separation and single phase measurement or nonseparating multiphase meters. The loop configuration is somewhat selfregulating which can reduce or even eliminate the need for active level control. The compactness of the GLCC allows the measurement loop to weigh less, occupy less space, and maintain less hydrocarbon inventory than a conventional test separator. Furthermore, complete or even partial gasliquid separation can improve the accuracy of each phase rate measurement in a multiphase metering system. When complete gas-liquid separation is achieved in the GLCC, several liquid metering options are available, e.g., bulk liquid metering and proportional sampling. Chevron has several multiphase metering loops in operation that use this standard liquid metering approach on the liquid leg of a GLCC. This is a very low cost option for multiphase measurement although sampling can be labor intensive. Two-phase liquid-liquid meters are also available for the liquid leg. Liu and Kouba have shown that Coriolis meters with the net oil computer (NOC) option can simultaneously measure oil and water flow rates with excellent accuracy for production allocation applications, such as well testing, provided there is no gas present in the meter. Chevron has deployed several multiphase metering loops with Coriolis NOCs on the liquid leg as shown in Fig.3. One of the main limitations of the Coriolis NOC in the measurement loop is the sensitivity of the Coriolis NOC to small amounts of gas that may carry-under with the liquid. The Accuflow multiphase measurement system, shown in Fig. 4, utilizes a second stage horizontal pipe separator between the GLCC and the Coriolis meter to prevent gas carry-under from reaching the Coriolis meter. When gas carry-under cannot be prevented, a three-phase metering system is required on the liquid leg. In general, the accuracy of a multiphase meter on the liquid leg will benefit significantly from removing some of the gas. Most multiphase meters have an upper limit on the gas volume fraction allowed through the meter in order to maintain their accuracy specifications. Beyond improved accuracy, partial gas separation provides the additional benefit of utilizing a smaller, less expensive, multiphase meter. For some multiphase meters whose price scales directly with size, the cost savings of using a smaller meter was over 4 times the cost of the GLCC. The effect of partial gas separation on multiphase metering can be so pronounced that several multiphase meter manufacturers are configuring their meters in multiphase measurement loops utilizing compact gas-liquid separation. At least two manufacturers are supporting further research on GLCCs. Preseparation: A compact GLCC is often very appropriate for applications where only partial separation of gas from liquid is required. One such application is the partial separation of raw gas from high pressure wells to use for gas lift of low pressure wells. Weingarten et al. developed a gas-liquid cyclone separator with an auger internal for downhole and surface separation of raw gas. 
They showed that the auger cyclone could successfully separate up to 80% of the gas without significant liquid carry-over into raw lift gas stream. The cost of the auger separator was reported to be about 2% of a conventional separator. The real savings in this sort of application comes from reducing or eliminating gas compression facilities. Separating a significant portion of the gas will reduce fluctuations in the liquid flow and may result in improved performance of other downstream separation devices. Krebs Petroleum Technologies is investigating the use of a GLCC in series with other compact separation devices such as a wellhead desanding hydrocyclone and a free water knockout hydrocyclone. Chevron is investigating the series combination of a GLCC with a freewater knockout hydrocyclone and a deoiling hydrocyclone in an effort to improve discharge water quality, as shown in Fig. 5. Arato and Barnes investigated the use of GLCC to control GLR to a multiphase pump to improve pumping efficiency. Sarshar et al showed several combinations of GLCC and jet pumps that could be used to extract energy from high pressure multiphase wells to enhance production from low pressure wells. Production Separation: Vertical separators with tangential inlets are fairly common in the oil field. These predecessors of the GLCC are often big and bulky with low velocity perpendicular tangential pipe inlets. The tangential velocities are usually so low that gravitational and centrifugal forces contribute roughly equally to separation. Technological developments (discussed in later sections) in both GLCC hardware and software should reduce the size and improve the performance of vertical separators. One challenge in optimizing the size of a GLCC for production separation is designing a system that can respond quickly to surges without serious upsets. Cyclone separation has already proven useful in internal separation devices for large horizontal separators. The GLCC may also provide a useful external preseparation device to enhance performance of existing horizontal separators, as in Fig. 6. By separating part of the gas, the separator level might be raised to increase residence time without encountering the mist flow regime in the vessel. The biggest impact to the petroleum industry from GLCC technology may be in subsea separation applications. Baker and Entress have concluded that “wellhead separation and pumping is the most thermodynamically efficient method for wellstream transfer over long distances, particularly from deep water”. No doubt, the development of marginal offshore fields will depend upon developing efficient and profit effective technologies. Subsea applications demand a high degree of confidence in separator design and performance while demanding that the equipment be simple, compact, robust and economical. Here again the virtues of the GLCC should place it in good standing among competing technologies. DEVELOPMENTS Few systematic studies of design configurations of different GLCC physical features have been conducted. La",
"title": ""
},
{
"docid": "a6287828106cdfa0360607504016eff1",
"text": "Predicting emotion categories, such as anger, joy, and anxiety, expressed by a sentence is challenging due to its inherent multi-label classification difficulty and data sparseness. In this paper, we address above two challenges by incorporating the label dependence among the emotion labels and the context dependence among the contextual instances into a factor graph model. Specifically, we recast sentence-level emotion classification as a factor graph inferring problem in which the label and context dependence are modeled as various factor functions. Empirical evaluation demonstrates the great potential and effectiveness of our proposed approach to sentencelevel emotion classification. 1",
"title": ""
},
{
"docid": "4800fd4c07c97f139d01f9d41398dd27",
"text": "Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc.). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: “different instances but a similar viewpoint and category” and “different viewpoints of the same instance”. By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.",
"title": ""
},
{
"docid": "78d879c810c64413825d7a243c9de78c",
"text": "Algebra greatly broadened the very notion of algebra in two ways. First, the traditional numerical domains such as Z, Q R, and C, were now seen as instances of more general concepts of equationally-defined algebraic structure, which did not depend on any particular representation for their elements, but only on abstract sets of elements, operations on such elements, and equational properties satisfied by such operations. In this way, the integers Z were seen as an instance of the ring algebraic structure, that is, a set R with constants 0 and 1, and with addition + and mutiplication ∗ operations satisfying the equational axioms of the theory of rings, along with other rings such as the ring Zk of the residue classes of integers modulo k, the ring Z[x1, . . . , xn] of polynomials on n variables, and so on. Likewise, Q, R, and C were viewed as instances of the field structure, that is, a ring F together with a division operator / , so that each nonzero element x has an inverse 1/x with x ∗ (1/x) = 1, along with other fields such as the fields Zp, with p prime, the fields of rational functions Q(x1, . . . , xn), R(x1, . . . , xn), and C(x1, . . . , xn) (whose elements are quotients p/q with p, q polynomials and q , 0), and so on. A second way in which Abstract Algebra broadened the notion of algebra was by considering other equationally-defined structures besides rings and fields, such as monoids, groups, modules, vector spaces, and so on. This intimately connected algebra with other areas of mathematics such as geometry, analysis and topology in new ways, besides the already well-known connections with geometic figures defined as solutions of polynomal equations (the so-called algebraic varieties, such as algebraic curves or surfaces). Universal Algebra (the seminal paper is the one by Garett Birkhoff [4]), takes one more step in this line of generalization: why considering only the usual suspects: monoids, groups, rings, fields, modules, and vector spaces? Why not considering any algebraic structure defined by an arbitrary collection Σ of function symbols (called a signature), and obeying an arbitrary set E of equational axioms? And why not developing algebra in this much more general setting? That is, Universal Algebra is just Abstract Algebra brought to its full generality. Of course, generalization never stops, so that Universal Algebra itself has been further generalized in various directions. One of them, which we will fully pursue in this Part II and which, as we shall see, has many applications to Computer Science, is from considering a single set of data elements (unsorted algebras) to considering a family of such sets (many-sorted algebras), or a family of such sets but allowing subtype inclusions (order-sorted algebras). Three other, are: (i) replacing the underlying sets by richer structures such as posets, topological spaces, sheaves, or algebraic varieties, leading to notions such as those of an ordered algebra, a topological algebra, or an algebraic structure on a sheaf or on an algebraic variety; for example, an elliptic curve is a cubic curve having a commutative group structure; (ii) allowing not only finitary operations but also infinitary ones (we have already seen examples of such algebras with infinitary operations —namely, complete lattices and complete semi-lattices— in §7.5); and (iii) allowing operations to be partial functions, leading to the notion of a partial algebra. 
Order-sorted algebras already provide quite useful support for certain forms of partiality; and their generalization to algebras in membership equational logic provides full support for partiality (see [36, 39]).",
"title": ""
},
{
"docid": "1be58e70089b58ca3883425d1a46b031",
"text": "In this work, we propose a novel way to consider the clustering and the reduction of the dimension simultaneously. Indeed, our approach takes advantage of the mutual reinforcement between data reduction and clustering tasks. The use of a low-dimensional representation can be of help in providing simpler and more interpretable solutions. We show that by doing so, our model is able to better approximate the relaxed continuous dimension reduction solution by the true discrete clustering solution. Experiment results show that our method gives better results in terms of clustering than the state-of-the-art algorithms devoted to similar tasks for data sets with different proprieties.",
"title": ""
}
] |
scidocsrr
|
f7fd3a416267e67ea3a2a781ba019f5d
|
Association Toward a Theory of Culturally Relevant Pedagogy
|
[
{
"docid": "622823023c038a60113a41ba9350b077",
"text": "This seminal work was published in 1968 in Portuguese. The author, Paulo Freire, was an educationalist working in Brazil, though for political reasons, (he was imprisoned by a military junta in 1964) he spent time in other countries including a period in Geneva where he worked as an adviser on education for the World Council of Churches. This book itself was written while he was in Chile. After his return to Brazil in 1979 he became involved with a socialist political party and eventually came to hold an administrative position as Secretary of Education for São Paulo city.",
"title": ""
}
] |
[
{
"docid": "b733ffe2cf4e0ee19b07614075c091a8",
"text": "BACKGROUND\nPENS is a rare neuro-cutaneous syndrome that has been recently described. It involves one or more congenital epidermal hamartomas of the papular epidermal nevus with \"skyline\" basal cell layer type (PENS) as well as non-specific neurological anomalies. Herein, we describe an original case in which the epidermal hamartomas are associated with autism spectrum disorder (ASD).\n\n\nPATIENTS AND METHODS\nA 6-year-old boy with a previous history of severe ASD was referred to us for asymptomatic pigmented congenital plaques on the forehead and occipital region. Clinical examination revealed a light brown verrucous mediofrontal plaque in the form of an inverted comma with a flat striated surface comprising coalescent polygonal papules, and a clinically similar round occipital plaque. Repeated biopsies revealed the presence of acanthotic epidermis covered with orthokeratotic hyperkeratosis with occasionally broadened epidermal crests and basal hyperpigmentation, pointing towards an anatomoclinical diagnosis of PENS.\n\n\nDISCUSSION\nA diagnosis of PENS hamartoma was made on the basis of the clinical characteristics and histopathological analysis of the skin lesions. This condition is defined clinically as coalescent polygonal papules with a flat or rough surface, a round or comma-like shape and light brown coloring. Histopathological examination showed the presence of a regular palisade \"skyline\" arrangement of basal cell epidermal nuclei which, while apparently pathognomonic, is neither a constant feature nor essential for diagnosis. Association of a PENS hamartoma and neurological disorders allows classification of PENS as a new keratinocytic epidermal hamartoma syndrome. The early neurological signs, of varying severity, are non-specific and include psychomotor retardation, learning difficulties, dyslexia, hyperactivity, attention deficit disorder and epilepsy. There have been no reports hitherto of the presence of ASD as observed in the case we present.\n\n\nCONCLUSION\nThis new case report of PENS confirms the autonomous nature of this neuro-cutaneous disorder associated with keratinocytic epidermal hamartoma syndromes.",
"title": ""
},
{
"docid": "f0242a2a54b1c4538abdd374c74f69f6",
"text": "Background: An increasing research effort has devoted to just-in-time (JIT) defect prediction. A recent study by Yang et al. at FSE'16 leveraged individual change metrics to build unsupervised JIT defect prediction model. They found that many unsupervised models performed similarly to or better than the state-of-the-art supervised models in effort-aware JIT defect prediction. Goal: In Yang et al.'s study, code churn (i.e. the change size of a code change) was neglected when building unsupervised defect prediction models. In this study, we aim to investigate the effectiveness of code churn based unsupervised defect prediction model in effort-aware JIT defect prediction. Methods: Consistent with Yang et al.'s work, we first use code churn to build a code churn based unsupervised model (CCUM). Then, we evaluate the prediction performance of CCUM against the state-of-the-art supervised and unsupervised models under the following three prediction settings: cross-validation, time-wise cross-validation, and cross-project prediction. Results: In our experiment, we compare CCUM against the state-of-the-art supervised and unsupervised JIT defect prediction models. Based on six open-source projects, our experimental results show that CCUM performs better than all the prior supervised and unsupervised models. Conclusions: The result suggests that future JIT defect prediction studies should use CCUM as a baseline model for comparison when a novel model is proposed.",
"title": ""
},
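The model evaluated in the entry above is simple enough to state in a few lines: under the effort-aware unsupervised scheme used in this line of work, changes are ranked by the reciprocal of a single metric, here code churn, so smaller changes are inspected first. This is one plausible reading of CCUM rather than a verified reimplementation, and the field names below are hypothetical.

```python
def ccum_rank(changes):
    """changes: list of dicts with a 'churn' field (lines added + deleted).
    Returns changes ordered for inspection: descending 1/churn, i.e. ascending churn,
    so that (under the effort-aware assumption) more defects are found per line inspected."""
    return sorted(changes, key=lambda c: 1.0 / max(c["churn"], 1), reverse=True)

# Toy usage with three hypothetical commits.
commits = [{"id": "a1", "churn": 420}, {"id": "b2", "churn": 12}, {"id": "c3", "churn": 57}]
print([c["id"] for c in ccum_rank(commits)])   # ['b2', 'c3', 'a1']
```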
{
"docid": "0c991f86cee8ab7be1719831161a3fec",
"text": "Conversational systems have become increasingly popular as a way for humans to interact with computers. To be able to provide intelligent responses, conversational systems must correctly model the structure and semantics of a conversation. We introduce the task of measuring semantic (in)coherence in a conversation with respect to background knowledge, which relies on the identification of semantic relations between concepts introduced during a conversation. We propose and evaluate graph-based and machine learning-based approaches for measuring semantic coherence using knowledge graphs, their vector space embeddings and word embedding models, as sources of background knowledge. We demonstrate how these approaches are able to uncover different coherence patterns in conversations on the Ubuntu Dialogue Corpus.",
"title": ""
},
{
"docid": "960d9adecf6807a9b25ebea5119c1345",
"text": "In this paper, a hybrid network combining visible light communication (VLC) with a radio frequency (RF) wireless local area network (WLAN) is considered. In indoor scenarios, a light fidelity (Li-Fi) access point (AP) can provide very high throughput and satisfy any illumination demands while wireless fidelity (Wi-Fi) offers basic coverage. Such a hybrid network with both fixed and mobile users has the problem of variable user locations, and thus large fluctuations in spatially distributed traffic demand. Generally, a handover occurs in such a method when a user is allocated by the central controller unit to a different AP which is better placed to serve the user. In order to be representative of real deployments, this paper studies the problem of load balancing of a dynamic system where we consider the signalling overhead for handover. We propose a scheme for dynamic allocation of resources to users, where the utility function takes into account both throughput and fairness. The simulation results show that there is a trade off between the aggregate throughput and user fairness when handover overhead is considered. The proposed dynamic scheme always outperforms the considered benchmarks in terms of fairness and can achieve better aggregate throughput in the case of low user density.",
"title": ""
},
{
"docid": "8eb598be861e261605bfbea4f28204de",
"text": "Recently, deep neural networks based hashing methods have greatly improved the multimedia retrieval performance by simultaneously learning feature representations and binary hash functions. Inspired by the latest advance in the asymmetric hashing scheme, in this work, we propose a novel Deep Asymmetric Pairwise Hashing approach (DAPH) for supervised hashing. The core idea is that two deep convolutional models are jointly trained such that their output codes for a pair of images can well reveal the similarity indicated by their semantic labels. A pairwise loss is elaborately designed to preserve the pairwise similarities between images as well as incorporating the independence and balance hash code learning criteria. By taking advantage of the flexibility of asymmetric hash functions, we devise an efficient alternating algorithm to optimize the asymmetric deep hash functions and high-quality binary code jointly. Experiments on three image benchmarks show that DAPH achieves the state-of-the-art performance on large-scale image retrieval.",
"title": ""
},
{
"docid": "4ad7cf99a6a67748a9cc98b99c12c1b9",
"text": "During social interaction humans extract important information from tactile stimuli that can improve their understanding of the interaction. The development of a similar capability in a robot will contribute to the future success of intuitive human–robot interaction. This paper presents a thin, flexible and stretchable artificial skin for robotics based on the principle of electrical impedance tomography. This skin, which can be used to extract information such as location, duration and intensity of touch, was used to cover the forearm and upper arm of a full-size mannequin. A classifier based on the ‘LogitBoost’ algorithm was used to classify the modality of eight different types of touch applied by humans to the mannequin arm. Experiments showed that the modality of touch was correctly classified in approximately 71% of the trials. This was shown to be comparable to the accuracy of humans when identifying touch. The classification accuracies obtained represent significant improvements over previous classification algorithms applied to artificial sensitive skins. It is shown that features based on touch duration and intensity are sufficient to provide a good classification of touch modality. Gender and cultural background were examined and found to have no statistically significant effect on the classification results.",
"title": ""
},
{
"docid": "d1fd4d535052a1c2418259c9b2abed66",
"text": "BACKGROUND\nSit-to-stand tests (STST) have recently been developed as easy-to-use field tests to evaluate exercise tolerance in COPD patients. As several modalities of the test exist, this review presents a synthesis of the advantages and limitations of these tools with the objective of helping health professionals to identify the STST modality most appropriate for their patients.\n\n\nMETHOD\nSeventeen original articles dealing with STST in COPD patients have been identified and analysed including eleven on 1min-STST and four other versions of the test (ranging from 5 to 10 repetitions and from 30 s to 3 min). In these studies the results obtained in sit-to-stand tests and the recorded physiological variables have been correlated with the results reported in other functional tests.\n\n\nRESULTS\nA good set of correlations was achieved between STST performances and the results reported in other functional tests, as well as quality of life scores and prognostic index. According to the different STST versions the processes involved in performance are different and consistent with more or less pronounced associations with various physical qualities. These tests are easy to use in a home environment, with excellent metrological properties and responsiveness to pulmonary rehabilitation, even though repetition of the same movement remains a fragmented and restrictive approach to overall physical evaluation.\n\n\nCONCLUSIONS\nThe STST appears to be a relevant and valid tool to assess functional status in COPD patients. While all versions of STST have been tested in COPD patients, they should not be considered as equivalent or interchangeable.",
"title": ""
},
{
"docid": "34d8bd1dd1bbe263f04433a6bf7d1b29",
"text": "algorithms for image processing and computer vision algorithms for image processing and computer vision exploring computer vision and image processing algorithms free ebooks algorithms for image processing and computer parallel algorithms for digital image processing computer algorithms for image processing and computer vision pdf algorithms for image processing and computer vision computer vision: algorithms and applications brown gpu algorithms for image processing and computer vision high-end computer vision algorithms image processing handbook of computer vision algorithms in image algebra the university of cs 4487/9587 algorithms for image analysis an analysis of rigid image alignment computer vision computer vision with matlab massachusetts institute of handbook of computer vision algorithms in image algebra tips and tricks for image processing and computer vision limitations of human vision what is computer vision algorithms for image processing and computer vision gbv algorithms for image processing and computer vision. 2nd computer vision for nanoscale imaging algorithms for image processing and computer vision a survey of distributed computer vision algorithms computer vision: algorithms and applications sci home algorithms for image processing and computer vision ebook engineering of computer vision algorithms using algorithms for image processing and computer vision by j real-time algorithms: prom signal processing to computer expectationmaximization algorithms for image processing automated techniques for detection and recognition of algorithms for image processing and computer vision dictionary of computer vision and image processing implementing video image processing algorithms on fpga open source libraries for image processing computer vision and image processing: a practical approach computer vision i algorithms and applications: image algorithms for image processing and computer vision algorithms for image processing and computer vision j. r",
"title": ""
},
{
"docid": "1d1651943403ba91927553d24627f5f0",
"text": "BACKGROUND\nObesity is a growing epidemic in the United States, with waistlines expanding (overweight) for almost 66% of the population (National Health and Nutrition Examination Survey 1999-2004). The attitude of society, which includes healthcare providers, toward people of size has traditionally been negative, regardless of their own gender, age, experience, and occupation. The purpose of the present study was to determine whether bariatric sensitivity training could improve nursing attitudes and beliefs toward adult obese patients and whether nurses' own body mass index (BMI) affected their attitude and belief scores.\n\n\nMETHODS\nAn on-line survey was conducted of nursing attitudes and beliefs regarding adult obese patients. The responses were compared between 1 hospital that offered bariatric sensitivity training and 1 that did not. The primary study measures were 2 scales that have been validated to assess weight bias: Attitudes Toward Obese Persons (ATOP) and Beliefs Against Obese Persons (BAOP). The primary outcome measures were the scores derived from the ATOP and BAOP scales.\n\n\nRESULTS\nData were obtained from 332 on-line surveys, to which 266 nurses responded with complete data, 145 from hospital 1 (intervention) and 121 from hospital 2 (control). The mean ATOP scores for hospital 1 were modestly greater than those for hospital 2 (18.0 versus 16.1, P = .03). However, no differences were found between the 2 hospitals for the mean BAOP scores (67.1 versus 67.1, P = .86). No statistically significant differences were found between the 2 hospitals among the BMI groups for either ATOP or BAOP. Within each hospital, no statistically significant trend was found among the BMI groups for either ATOP or BAOP. The association of BMI with the overall ATOP (r = .13, P = .04) and BOAP (r = .12, P = .05) scores was very weak, although marginally significant. The association of the overall ATOP score with the BAOP score was weak, although significant (r = .26, P < .001).\n\n\nCONCLUSION\nAnnual bariatric sensitivity training might improve nursing attitudes toward obese patients, but it does not improve nursing beliefs, regardless of the respondent's BMI.",
"title": ""
},
{
"docid": "03bd569e01c0f508dc2d63002389ec7d",
"text": "In this paper we propose a framework for procedural text understanding. Procedural texts are relatively clear without modality nor dependence on viewpoints, etc. and have many potential applications in artificial intelligence. Thus they are suitable as the first target of natural language understanding. As our framework we extend parsing technologies to connect important concepts in a text. Our framework first tokenizes the input text, a sequence of sentences, then recognizes important concepts like named entity recognition, and finally connect them like a sentence parser but dealing all the concepts in the text at once. We tested our framework on cooking recipe texts annotated with a directed acyclic graph as their meaning. We present experimental results and evaluate our framework.",
"title": ""
},
{
"docid": "c57d4b7ea0e5f7126329626408f1da2d",
"text": "Educational Data Mining (EDM) is an interdisciplinary ingenuous research area that handles the development of methods to explore data arising in a scholastic fields. Computational approaches used by EDM is to examine scholastic data in order to study educational questions. As a result, it provides intrinsic knowledge of teaching and learning process for effective education planning. This paper conducts a comprehensive study on the recent and relevant studies put through in this field to date. The study focuses on methods of analysing educational data to develop models for improving academic performances and improving institutional effectiveness. This paper accumulates and relegates literature, identifies consequential work and mediates it to computing educators and professional bodies. We identify research that gives well-fortified advice to amend edifying and invigorate the more impuissant segment students in the institution. The results of these studies give insight into techniques for ameliorating pedagogical process, presaging student performance, compare the precision of data mining algorithms, and demonstrate the maturity of open source implements.",
"title": ""
},
{
"docid": "d849872fad3a96fc9959c62adc5ab96f",
"text": "A Gaussian Process Regression model is equivalent to an infinitely wide neural network with single hidden layer and similarly a DGP is a multi-layer neural network with multiple infinitely wide hidden layers [Neal, 1995]. DGPs employ a hierarchical structural of GP mappings and therefore are arguably more flexible, have a greater capacity to generalize, and are able to provide better predictive performance [Damianou, 2015]. Then it comes into my mind that why we would like to proceed to be deep and what are the benefits about being deep. It has been argued that the addition of non-linear hidden layers can also potentially overcome practical limitations of shallow GPs [Bui et al., 2016]. So what are the limitations exactly? Actually a GPR model is fully specified by a mean function E [·] and the covariance function cov [·, ·]. Conventionally we manually set the mean function to be 0. Then we can say that a GPR model is fully specified by its covariance function which also can be denoted as the kernel. Let us briefly examine the priors on functions encoded by some commonly used kernels 2 Expressing Structure with Kernels",
"title": ""
},
{
"docid": "d7f4fe72783c9eb7ebcb948f40823323",
"text": "Complex networks are dynamic, evolving structures that can host a great number of dynamical processes. In this thesis, we address current challenges regarding the dynamics of and dynamical processes on complex networks. First, we study complex network dynamics from the standpoint of network growth. As a quantitative measure of the complexity and information content of networks generated by growing network models, we define and evaluate their entropy rate. We propose stochastic growth models inspired by the duplication-divergence mechanism to generate epistatic interaction networks and find that they exhibit the property of monochromaticity as a result of their dynamical evolution. Second, we explore the dynamics of quantum mechanical processes on complex networks. We investigate the Bose-Hubbard model on annealed and quenched scale-free networks as well as Apollonian networks and show that their phase diagram changes significantly in the presence of complex topologies, depending on the second degree of the degree distribution and the maximal eigenvalue of the adjacency matrix. We then study the Jaynes-Cummings-Hubbard model on various complex topologies and demonstrate the importance of the maximal eigenvalue of the hopping matrix in determining the phase diagram of the model. Third, we investigate dynamical processes on interacting and multiplex networks. We study opinion dynamics in a simulated setting of two antagonistically interacting networks and recover the importance of connectivity and committed agents. We propose a multiplex centrality measure that takes into account the connectivity patterns within and across different layers and find that the dynamics of biased random walks on multiplex networks gives rise to a centrality ranking that is different from univariate centrality measures. Finally, we study the statistical mechanics of multilayered spatial networks and demonstrate the emergence of significant link overlap and improved navigability in multiplex and interacting spatial networks.",
"title": ""
},
{
"docid": "60c8a0ba8087c6aa81af672318f616c7",
"text": "Information retrieval evaluation based on the pooling method is inherently biased against systems that did not contribute to the pool of judged documents. This may distort the results obtained about the relative quality of the systems evaluated and thus lead to incorrect conclusions about the performance of a particular ranking technique.\n We examine the magnitude of this effect and explore how it can be countered by automatically building an unbiased set of judgements from the original, biased judgements obtained through pooling. We compare the performance of this method with other approaches to the problem of incomplete judgements, such as bpref, and show that the proposed method leads to higher evaluation accuracy, especially if the set of manual judgements is rich in documents, but highly biased against some systems.",
"title": ""
},
{
"docid": "bea3238013d93210d38db5abcea6cefa",
"text": "Changes from the fourth edition of the Wechsler Intelligence Scale for Children (WISC) to the fifth edition are discussed, with particular emphasis on how the electronic administration facilitated assessment. The hierarchical organization and conceptualization of primary indices have been adjusted, based on recent theory and research on the construct of intelligence. Changes also include updates to psychometric properties and consideration of cultural bias. The scoring program allows intelligence scores to be linked statistically to achievement measures to aid in diagnoses of learning disabilities. Electronic assessment was clunky at times but overall delivered on its promise of quicker and more accurate administration and scoring.",
"title": ""
},
{
"docid": "efde92d1e86ff0b5f91b006521935621",
"text": "Sizing equations for electrical machinery are developed from basic principles. The technique provides new insights into: 1. The effect of stator inner and outer diameters. 2. The amount of copper and steel used. 3. A maximizing function. 4. Equivalent slot dimensions in terms of diameters and flux density distribution. 5. Pole number effects. While the treatment is analytical, the scope is broad and intended to assist in the design of electrical machinery. Examples are given showing how the machine's internal geometry can assume extreme proportions through changes in basic variables.",
"title": ""
},
{
"docid": "57502ae793808fded7d446a3bb82ca74",
"text": "Over the last decade, the “digitization” of the electron enterprise has grown at exponential rates. Utility, industrial, commercial, and even residential consumers are transforming all aspects of their lives into the digital domain. Moving forward, it is expected that every piece of equipment, every receptacle, every switch, and even every light bulb will possess some type of setting, monitoring and/or control. In order to be able to manage the large number of devices and to enable the various devices to communicate with one another, a new communication model was needed. That model has been developed and standardized as IEC61850 – Communication Networks and Systems in Substations. This paper looks at the needs of next generation communication systems and provides an overview of the IEC61850 protocol and how it meets these needs. I. Communication System Needs Communication has always played a critical role in the real-time operation of the power system. In the beginning, the telephone was used to communicate line loadings back to the control center as well as to dispatch operators to perform switching operations at substations. Telephoneswitching based remote control units were available as early as the 1930’s and were able to provide status and control for a few points. As digital communications became a viable option in the 1960’s, data acquisition systems (DAS) were installed to automatically collect measurement data from the substations. Since bandwidth was limited, DAS communication protocols were optimized to operate over low-bandwidth communication channels. The “cost” of this optimization was the time it took to configure, map, and document the location of the various data bits received by the protocol. As we move into the digital age, literally thousands of analog and digital data points are available in a single Intelligent Electronic Device (IED) and communication bandwidth is no longer a limiting factor. Substation to master communication data paths operating at 64,000 bits per second are becoming commonplace with an obvious migration path to much high rates. With this migration in technology, the “cost” component of a data acquisition system has now become the configuration and documentation component. Consequently, a key component of a communication system is the ability to describe themselves from both a data and services (communication functions that an IED performs) perspective. Other “key” requirements include: • High-speed IED to IED communication",
"title": ""
},
{
"docid": "34d024643d687d092c0859497ab0001c",
"text": "BACKGROUND\nHealth IT is expected to have a positive impact on the quality and efficiency of health care. But reports on negative impact and patient harm continue to emerge. The obligation of health informatics is to make sure that health IT solutions provide as much benefit with as few negative side effects as possible. To achieve this, health informatics as a discipline must be able to learn, both from its successes as well as from its failures.\n\n\nOBJECTIVES\nTo present motivation, vision, and history of evidence-based health informatics, and to discuss achievements, challenges, and needs for action.\n\n\nMETHODS\nReflections on scientific literature and on own experiences.\n\n\nRESULTS\nEight challenges on the way towards evidence-based health informatics are identified and discussed: quality of studies; publication bias; reporting quality; availability of publications; systematic reviews and meta-analysis; training of health IT evaluation experts; translation of evidence into health practice; and post-market surveillance. Identified needs for action comprise: establish health IT study registers; increase the quality of publications; develop a taxonomy for health IT systems; improve indexing of published health IT evaluation papers; move from meta-analysis to meta-summaries; include health IT evaluation competencies in curricula; develop evidence-based implementation frameworks; and establish post-marketing surveillance for health IT.\n\n\nCONCLUSIONS\nThere has been some progress, but evidence-based health informatics is still in its infancy. Building evidence in health informatics is our obligation if we consider medical informatics a scientific discipline.",
"title": ""
},
{
"docid": "4465a375859cfe6ed4c242d6896a1042",
"text": "Despite tremendous variation in the appearance of visual objects, primates can recognize a multitude of objects, each in a fraction of a second, with no apparent effort. However, the brain mechanisms that enable this fundamental ability are not understood. Drawing on ideas from neurophysiology and computation, we present a graphical perspective on the key computational challenges of object recognition, and argue that the format of neuronal population representation and a property that we term 'object tangling' are central. We use this perspective to show that the primate ventral visual processing stream achieves a particularly effective solution in which single-neuron invariance is not the goal. Finally, we speculate on the key neuronal mechanisms that could enable this solution, which, if understood, would have far-reaching implications for cognitive neuroscience.",
"title": ""
}
] |
scidocsrr
|
2faf063cd213d639c8aaad3b0a2722e4
|
Gender identity development in adolescence
|
[
{
"docid": "1cdd599b49d9122077a480a75391aae8",
"text": "Two aspects of children's early gender development-the spontaneous production of gender labels and gender-typed play-were examined longitudinally in a sample of 82 children. Survival analysis, a statistical technique well suited to questions involving developmental transitions, was used to investigate the timing of the onset of children's gender labeling as based on mothers' biweekly telephone interviews regarding their children's language from 9 through 21 months. Videotapes of children's play both alone and with mother during home visits at 17 and 21 months were independently analyzed for play with gender-stereotyped and gender-neutral toys. Finally, the relation between gender labeling and gender-typed play was examined. Children transitioned to using gender labels at approximately 19 months, on average. Although girls and boys showed similar patterns in the development of gender labeling, girls began labeling significantly earlier than boys. Modest sex differences in play were present at 17 months and increased at 21 months. Gender labeling predicted increases in gender-typed play, suggesting that knowledge of gender categories might influence gender typing before the age of 2.",
"title": ""
},
{
"docid": "558abc8028d1d5b6956d2cf046efb983",
"text": "A key question concerns the extent to which sexual differentiation of human behavior is influenced by sex hormones present during sensitive periods of development (organizational effects), as occurs in other mammalian species. The most important sensitive period has been considered to be prenatal, but there is increasing attention to puberty as another organizational period, with the possibility of decreasing sensitivity to sex hormones across the pubertal transition. In this paper, we review evidence that sex hormones present during the prenatal and pubertal periods produce permanent changes to behavior. There is good evidence that exposure to high levels of androgens during prenatal development results in masculinization of activity and occupational interests, sexual orientation, and some spatial abilities; prenatal androgens have a smaller effect on gender identity, and there is insufficient information about androgen effects on sex-linked behavior problems. There is little good evidence regarding long-lasting behavioral effects of pubertal hormones, but there is some suggestion that they influence gender identity and perhaps some sex-linked forms of psychopathology, and there are many opportunities to study this issue.",
"title": ""
},
{
"docid": "6d45e9d4d1f46debcbf1b95429be60fd",
"text": "Sex differences in cortical thickness (CTh) have been extensively investigated but as yet there are no reports on CTh in transsexuals. Our aim was to determine whether the CTh pattern in transsexuals before hormonal treatment follows their biological sex or their gender identity. We performed brain magnetic resonance imaging on 94 subjects: 24 untreated female-to-male transsexuals (FtMs), 18 untreated male-to-female transsexuals (MtFs), and 29 male and 23 female controls in a 3-T TIM-TRIO Siemens scanner. T1-weighted images were analyzed to obtain CTh and volumetric subcortical measurements with FreeSurfer software. CTh maps showed control females have thicker cortex than control males in the frontal and parietal regions. In contrast, males have greater right putamen volume. FtMs had a similar CTh to control females and greater CTh than males in the parietal and temporal cortices. FtMs had larger right putamen than females but did not differ from males. MtFs did not differ in CTh from female controls but had greater CTh than control males in the orbitofrontal, insular, and medial occipital regions. In conclusion, FtMs showed evidence of subcortical gray matter masculinization, while MtFs showed evidence of CTh feminization. In both types of transsexuals, the differences with respect to their biological sex are located in the right hemisphere.",
"title": ""
},
{
"docid": "2b8296f8760e826046cd039c58026f83",
"text": "This study provided a descriptive and quantitative comparative analysis of data from an assessment protocol for adolescents referred clinically for gender identity disorder (n = 192; 105 boys, 87 girls) or transvestic fetishism (n = 137, all boys). The protocol included information on demographics, behavior problems, and psychosexual measures. Gender identity disorder and transvestic fetishism youth had high rates of general behavior problems and poor peer relations. On the psychosexual measures, gender identity disorder patients had considerably greater cross-gender behavior and gender dysphoria than did transvestic fetishism youth and other control youth. Male gender identity disorder patients classified as having a nonhomosexual sexual orientation (in relation to birth sex) reported more indicators of transvestic fetishism than did male gender identity disorder patients classified as having a homosexual sexual orientation (in relation to birth sex). The percentage of transvestic fetishism youth and male gender identity disorder patients with a nonhomosexual sexual orientation self-reported similar degrees of behaviors pertaining to transvestic fetishism. Last, male and female gender identity disorder patients with a homosexual sexual orientation had more recalled cross-gender behavior during childhood and more concurrent cross-gender behavior and gender dysphoria than did patients with a nonhomosexual sexual orientation. The authors discuss the clinical utility of their assessment protocol.",
"title": ""
}
] |
[
{
"docid": "ee19f23ddd9aaf77923cb3a7607b67fa",
"text": "With worldwide shipments of smartphones (487.7 million) exceeding PCs (414.6 million including tablets) in 2011, and in the US alone, more users predicted to access the Internet from mobile devices than from PCs by 2015, clearly there is a desire to be able to use mobile devices and networks like we use PCs and wireline networks today. However, in spite of advances in the capabilities of mobile devices, a gap will continue to exist, and may even widen, with the requirements of rich multimedia applications. Mobile cloud computing can help bridge this gap, providing mobile applications the capabilities of cloud servers and storage together with the benefits of mobile devices and mobile connectivity, possibly enabling a new generation of truly ubiquitous multimedia applications on mobile devices: Cloud Mobile Media (CMM) applications.",
"title": ""
},
{
"docid": "66d24e13c8ac0dc5c0e85b3e2873346c",
"text": "In advanced CMOS technologies, the negative bias temperature instability (NBTI) phenomenon in pMOSFETs is a major reliability concern as well as a limiting factor in future device scaling. Recently, much effort has been expended to further the basic understanding of this mechanism. This tutorial gives an overview of the physics of NBTI. Discussions include such topics as the impact of NBTI on the observed changes in the device characteristics as well as the impact of gate oxide processes on the physics of NBTI. Current experimental results, exploring various NBTI effects such as frequency dependence and relaxation, are also discussed. Since some of the recent work on the various NBTI effects seems contradictory, focus is placed on highlighting our current understanding, our open questions and our future challenges.",
"title": ""
},
{
"docid": "e7f771269ee99c04c69d1a7625a4196f",
"text": "This report is a summary of Device-associated (DA) Module data collected by hospitals participating in the National Healthcare Safety Network (NHSN) for events occurring from January through December 2010 and re ported to the Centers for Disease Control and Prevention (CDC) by July 7, 2011. This report updates previously published DA Module data from the NHSN and provides contemporary comparative rates. This report comple ments other NHSN reports, including national and state-specific reports of standardized infection ratios for select health care-associated infections (HAIs). The NHSN was established in 2005 to integrate and supersede 3 legacy surveillance systems at the CDC: the National Nosocomial Infections Surveillance system, the Dialysis Surveillance Network, and the National Sur veillance System for Healthcare Workers. NHSN data col lection, reporting, and analysis are organized into 3 components—Patient Safety, Healthcare Personnel",
"title": ""
},
{
"docid": "28a86caf1d86c58941f72c71699fabb1",
"text": "Dicing of ultrathin (e.g. <; 75um thick) “via-middle” 3DI/TSV semiconductor wafers proves to be challenging because the process flow requires the dicing step to occur after wafer thinning and back side processing. This eliminates the possibility of using any type of “dice-before-grind” techniques. In addition, the presence of back side alignment marks, TSVs, or other features in the dicing street can add challenges for the dicing process. In this presentation, we will review different dicing processes used for 3DI/TSV via-middle products. Examples showing the optimization process for a 3DI/TSV memory device wafer product are provided.",
"title": ""
},
{
"docid": "6087ad77caa9947591eb9a3f8b9b342d",
"text": "Geobacter sulfurreducens is a well-studied representative of the Geobacteraceae, which play a critical role in organic matter oxidation coupled to Fe(III) reduction, bioremediation of groundwater contaminated with organics or metals, and electricity production from waste organic matter. In order to investigate G. sulfurreducens central metabolism and electron transport, a metabolic model which integrated genome-based predictions with available genetic and physiological data was developed via the constraint-based modeling approach. Evaluation of the rates of proton production and consumption in the extracellular and cytoplasmic compartments revealed that energy conservation with extracellular electron acceptors, such as Fe(III), was limited relative to that associated with intracellular acceptors. This limitation was attributed to lack of cytoplasmic proton consumption during reduction of extracellular electron acceptors. Model-based analysis of the metabolic cost of producing an extracellular electron shuttle to promote electron transfer to insoluble Fe(III) oxides demonstrated why Geobacter species, which do not produce shuttles, have an energetic advantage over shuttle-producing Fe(III) reducers in subsurface environments. In silico analysis also revealed that the metabolic network of G. sulfurreducens could synthesize amino acids more efficiently than that of Escherichia coli due to the presence of a pyruvate-ferredoxin oxidoreductase, which catalyzes synthesis of pyruvate from acetate and carbon dioxide in a single step. In silico phenotypic analysis of deletion mutants demonstrated the capability of the model to explore the flexibility of G. sulfurreducens central metabolism and correctly predict mutant phenotypes. These results demonstrate that iterative modeling coupled with experimentation can accelerate the understanding of the physiology of poorly studied but environmentally relevant organisms and may help optimize their practical applications.",
"title": ""
},
{
"docid": "b1789c3522ae188b3838a09d764e460f",
"text": "Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. Conventional saliency models typically rely on low-level image statistics to predict human fixations. While these models perform significantly better than chance, there is still a large gap between model prediction and human behavior. This gap is largely due to the limited capability of models in predicting eye fixations with strong semantic content, the so-called semantic gap. This paper presents a focused study to narrow the semantic gap with an architecture based on Deep Neural Network (DNN). It leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition. Two key components are fine-tuning the DNNs fully convolutionally with an objective function based on the saliency evaluation metrics, and integrating information at different image scales. We compare our method with 14 saliency models on 6 public eye tracking benchmark datasets. Results demonstrate that our DNNs can automatically learn features particularly for saliency prediction that surpass by a big margin the state-of-the-art. In addition, our model ranks top to date under all seven metrics on the MIT300 challenge set.",
"title": ""
},
{
"docid": "bb547f90a98aa25d0824dc63b9de952d",
"text": "When designing distributed web services, there are three properties that are commonly desired: consistency, availability, and partition tolerance. It is impossible to achieve all three. In this note, we prove this conjecture in the asynchronous network model, and then discuss solutions to this dilemma in the partially synchronous model.",
"title": ""
},
{
"docid": "e05f857b063275500cf54d4596c646d4",
"text": "This paper is a contribution to the electric modeling of electrochemical cells. Specifically, cells for a new copper electrowinning process, which uses bipolar electrodes, are studied. Electrowinning is used together with solvent extraction and has gained great importance, due to its significant cost and environmental advantages, as compared to other copper reduction methods. Current electrowinning cells use unipolar electrodes connected electrically in parallel. Instead, bipolar electrodes, are connected in series. They are also called floating, because they are not wire-connected, but just immersed in the electrolyte. The main advantage of this technology is that, for the same copper production, a cell requires a much lower DC current, as compared with the unipolar case. This allows the cell to be supplied from a modular and compact PWM rectifier instead of a bulk high current thyristor rectifier, having a significant economic impact. In order to study the quality of the copper, finite difference algorithms in two dimensions are derived to obtain the distribution of the potential and the electric field inside the cell. Different geometrical configurations of cell and floating electrodes are analyzed. The proposed method is a useful tool for analysis and design of electrowinning cells, reducing the time-consuming laboratory implementations.",
"title": ""
},
{
"docid": "ee4288bcddc046ae5e9bcc330264dc4f",
"text": "Emerging recognition of two fundamental errors underpinning past polices for natural resource issues heralds awareness of the need for a worldwide fundamental change in thinking and in practice of environmental management. The first error has been an implicit assumption that ecosystem responses to human use are linear, predictable and controllable. The second has been an assumption that human and natural systems can be treated independently. However, evidence that has been accumulating in diverse regions all over the world suggests that natural and social systems behave in nonlinear ways, exhibit marked thresholds in their dynamics, and that social-ecological systems act as strongly coupled, complex and evolving integrated systems. This article is a summary of a report prepared on behalf of the Environmental Advisory Council to the Swedish Government, as input to the process of the World Summit on Sustainable Development (WSSD) in Johannesburg, South Africa in 26 August 4 September 2002. We use the concept of resilience--the capacity to buffer change, learn and develop--as a framework for understanding how to sustain and enhance adaptive capacity in a complex world of rapid transformations. Two useful tools for resilience-building in social-ecological systems are structured scenarios and active adaptive management. These tools require and facilitate a social context with flexible and open institutions and multi-level governance systems that allow for learning and increase adaptive capacity without foreclosing future development options.",
"title": ""
},
{
"docid": "a208187fc81a633ac9332ee11567b1a7",
"text": "Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain-machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin-Huxley models to bi-dimensional generalized adaptive integrate and fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips.",
"title": ""
},
{
"docid": "c0e99b3b346ef219e8898c3608d2664f",
"text": "A depth image-based rendering (DIBR) technique is one of the rendering processes of virtual views with a color image and the corresponding depth map. The most important issue of DIBR is that the virtual view has no information at newly exposed areas, so called disocclusion. The general solution is to smooth the depth map using a Gaussian smoothing filter before 3D warping. However, the filtered depth map causes geometric distortion and the depth quality is seriously degraded. Therefore, we propose a new depth map filtering algorithm to solve the disocclusion problem while maintaining the depth quality. In order to preserve the visual quality of the virtual view, we smooth the depth map with further reduced deformation. After extracting object boundaries depending on the position of the virtual view, we apply a discontinuity-adaptive smoothing filter according to the distance of the object boundary and the amount of depth discontinuities. Finally, we obtain the depth map with higher quality compared to other methods. Experimental results showed that the disocclusion is efficiently removed and the visual quality of the virtual view is maintained.",
"title": ""
},
{
"docid": "cc220d8ae1fa77b9e045022bef4a6621",
"text": "Cuneiform tablets appertain to the oldest textual artifacts and are in extent comparable to texts written in Latin or ancient Greek. The Cuneiform Commentaries Project (CPP) from Yale University provides tracings of cuneiform tablets with annotated transliterations and translations. As a part of our work analyzing cuneiform script computationally with 3D-acquisition and word-spotting, we present a first approach for automatized learning of transliterations of cuneiform tablets based on a corpus of parallel lines. These consist of manually drawn cuneiform characters and their transliteration into an alphanumeric code. Since the Cuneiform script is only available as raster-data, we segment lines with a projection profile, extract Histogram of oriented Gradients (HoG) features, detect outliers caused by tablet damage, and align those features with the transliteration. We apply methods from part-of-speech tagging to learn a correspondence between features and transliteration tokens. We evaluate point-wise classification with K-Nearest Neighbors (KNN) and a Support Vector Machine (SVM); sequence classification with a Hidden Markov Model (HMM) and a Structured Support Vector Machine (SVM-HMM). Analyzing our findings, we reach the conclusion that the sparsity of data, inconsistent labeling and the variety of tracing styles do currently not allow for fully automatized transliterations with the presented approach. However, the pursuit of automated learning of transliterations is of great relevance as manual annotation in larger quantities is not viable, given the few experts capable of transcribing cuneiform tablets.",
"title": ""
},
{
"docid": "5b9ca6d2cec03c771e89fe8e5dd23012",
"text": "Posttraumatic agitation is a challenging problem for acute and rehabilitation staff, persons with traumatic brain injury, and their families. Specific variables for evaluation and care remain elusive. Clinical trials have not yielded a strong foundation for evidence-based practice in this arena. This review seeks to evaluate the present literature (with a focus on the decade 1995-2005) and employ previous clinical experience to deliver a review of the topic. We will discuss definitions, pathophysiology, evaluation techniques, and treatment regimens. A recommended approach to the evaluation and treatment of the person with posttraumatic agitation will be presented. The authors hope that this review will spur discussion and assist in facilitating clinical care paradigms and research programs.",
"title": ""
},
{
"docid": "39991ac199197e44aaf1a0d656175963",
"text": "Weakly-supervised object localization methods tend to fail for object classes that consistently co-occur with the same background elements, e.g. trains on tracks. We propose a method to overcome these failures by adding a very small amount of modelspecific additional annotation. The main idea is to cluster a deep network’s mid-level representations and assign object or distractor labels to each cluster. Experiments show substantially improved localization results on the challenging ILSVC2014 dataset for bounding box detection and the PASCAL VOC2012 dataset for semantic segmentation.",
"title": ""
},
{
"docid": "64c44342abbce474e21df67c0a5cc646",
"text": "In this paper it is shown that the principal eigenvector is a necessary representation of the priorities derived from a positive reciprocal pairwise comparison judgment matrix A 1⁄4 ðaijÞ when A is a small perturbation of a consistent matrix. When providing numerical judgments, an individual attempts to estimate sequentially an underlying ratio scale and its equivalent consistent matrix of ratios. Near consistent matrices are essential because when dealing with intangibles, human judgment is of necessity inconsistent, and if with new information one is able to improve inconsistency to near consistency, then that could improve the validity of the priorities of a decision. In addition, judgment is much more sensitive and responsive to large rather than to small perturbations, and hence once near consistency is attained, it becomes uncertain which coefficients should be perturbed by small amounts to transform a near consistent matrix to a consistent one. If such perturbations were forced, they could be arbitrary and thus distort the validity of the derived priority vector in representing the underlying decision. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "0bce954374d27d4679eb7562350674fc",
"text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.",
"title": ""
},
{
"docid": "ebd8e2cfc51e78fbf6772128d8e4e479",
"text": "This paper uses delaying functions, functions that require signiicant calculation time, in the development of a one-pass lottery scheme in which winners are chosen fairly using only internal information. Since all this information may be published (even before the lottery closes), anyone can do the calculation and therefore verify that the winner was chosen correctly. Since the calculation uses a delaying function, ticket purchasers cannot take advantage of this information. Fraud on the part of the lottery agent is detectable and no single ticket purchaser needs to be trusted. Coalitions of purchasers attempting to control the winning ticket calculation are either unsuccessful or are detected. The scheme can be made resistant to coalitions of arbitrary size. Since we assume that coalitions of larger size are harder to assemble, the probability that the lottery is fair can be made arbitrarily high. The paper deenes delaying functions and contrasts them with pricing functions 8] and time-lock puzzles 16].",
"title": ""
},
{
"docid": "f4e7e0ea60d9697e8fea434990409c16",
"text": "Prognostics is very useful to predict the degradation trend of machinery and to provide an alarm before a fault reaches critical levels. This paper proposes an ARIMA approach to predict the future machine status with accuracy improvement by an improved forecasting strategy and an automatic prediction algorithm. Improved forecasting strategy increases the times of model building and creates datasets for modeling dynamically to avoid using the previous values predicted to forecast and generate the predictions only based on the true observations. Automatic prediction algorithm can satisfy the requirement of real-time prognostics by automates the whole process of ARIMA modeling and forecasting based on the Box-Jenkins's methodology and the improved forecasting strategy. The feasibility and effectiveness of the approach proposed is demonstrated through the prediction of the vibration characteristic in rotating machinery. The experimental results show that the approach can be applied successfully and effectively for prognostics of machine health condition.",
"title": ""
},
{
"docid": "06907205e1fd513f0d1ddef33b92e40c",
"text": "Better shape priors improve the mask accuracy and reduce false removal. Moving down the table from the no prior case to the box priors and then to the class specific shape priors from the Pascal dataset masks the masks smaller, improves the mIoU and also reduces the false removal rate. Input image Global loss Local loss Qualitative comparison of global vs local loss. Local real-fake loss improves the in-painting results producing sharper, texture-rich images, compared to smooth blurry results obtained by the global loss. References",
"title": ""
},
{
"docid": "1fe0bfec531eac34bd81a11b3d5cf1ab",
"text": "We demonstrate an advanced ReRAM based analog artificial synapse for neuromorphic systems. Nitrogen doped TiN/PCMO based artificial synapse is proposed to improve the performance and reliability of the neuromorphic systems by using simple identical spikes. For the first time, we develop fully unsupervised learning with proposed analog synapses which is illustrated with the help of auditory and electroencephalography (EEG) applications.",
"title": ""
}
] |
scidocsrr
|
e214d123af838f90c0eefe1a854bde52
|
A novel optimization strategy for the design of large tolerance circular waveguide septum polarizer
|
[
{
"docid": "9df78ef5769ed4da768d1a7b359794ab",
"text": "We describe a computer-aided optimization technique for the efficient and reliable design of compact wide-band waveguide septum polarizers (WSP). Wide-band performance is obtained by a global optimization which considers not only the septum section but also several step discontinuities placed before the ridge-to-rectangular bifurcation and the square-to-circular discontinuity. The proposed technique mnakes use of a dynamical optimization procedure which has been tested by designing several WSP operating in different frequency bands. In this work two examples are reported, one operating at Ku band and a very wideband prototype (3.4-4.2 GHz) operating in the C band. The component design, entirely carried out at computer level, has demonstrated significant advantages in terms of development times and no need of post manufacturing adjustments. The very satisfactory agreement between experimental and theoretical results further confirm the validity of the proposed technique.",
"title": ""
}
] |
[
{
"docid": "c86c204dabfad62246b7f04559513df7",
"text": "A better understanding of disease progression is beneficial for early diagnosis and appropriate individual therapy. There are many different approaches for statistical modelling of disease progression proposed in the literature, including simple path models up to complex restricted Bayesian networks. Important fields of application are diseases like cancer and HIV. Tumour progression is measured by means of chromosome aberrations, whereas people infected with HIV develop drug resistances because of genetic changes of the HI-virus. These two very different diseases have typical courses of disease progression, which can be modelled partly by consecutive and partly by independent steps. This paper gives an overview of the different progression models and points out their advantages and drawbacks. Different models are compared via simulations to analyse how they work if some of their assumptions are violated. So far, such a comparison has not been done and there are no established methods to compare different progression models. This paper is a step into both directions.",
"title": ""
},
{
"docid": "a14656cc178eeffb5327c74649fdb456",
"text": "White light emitting diode (LED) with high brightness has attracted a lot of attention from both industry and academia for its high efficiency, ease to drive, environmental friendliness, and long lifespan. They become possible applications to replace the incandescent bulbs and fluorescent lamps in residential, industrial and commercial lighting. The realization of this new lighting source requires both tight LED voltage regulation and high power factor as well. This paper proposed a single-stage flyback converter for the LED lighting applications and input power factor correction. A type-II compensator has been inserted in the voltage loop providing sufficient bandwidth and stable phase margin. The flyback converter is controlled with voltage mode pulse width modulation (PWM) and run in discontinuous conduction mode (DCM) so that the inductor current follows the rectified input voltage, resulting in high power factor. A prototype topology of closed-loop, single-stage flyback converter for LED driver circuit designed for an 18W LED lighting source is constructed and tested to verify the theoretical predictions. The measured performance of the LED lighting fixture can achieve a high power factor greater than 0.998 and a low total harmonic distortion less than 5.0%. Experimental results show the functionality of the overall system and prove it to be an effective solution for the new lighting applications.",
"title": ""
},
{
"docid": "a693eeae7abe600c11da8d5dedabbcf9",
"text": "Objectives: This study was designed to investigate psychometric properties of the Jefferson Scale of Patient Perceptions of Physician Empathy (JSPPPE), and to examine correlations between its scores and measures of overall satisfaction with physicians, personal trust, and indicators of patient compliance. Methods: Research participants included 535 out-patients (between 18-75 years old, 66% female). A survey was mailed to participants which included the JSPPPE (5-item), a scale for measuring overall satisfaction with the primary care physician (10-item), and demographic questions. Patients were also asked about compliance with their physician’s recommendation for preventive tests (colonoscopy, mammogram, and PSA for age and gender appropriate patients). Results: Factor analysis of the JSPPPE resulted in one prominent component. Corrected item-total score correlations ranged from .88 to .94. Correlation between scores of the JSPPPE and scores on the patient satisfaction scale was 0.93. Scores of the JSPPPE were highly correlated with measures of physician-patient trust (r >.73). Higher scores of the JSPPPE were significantly associated with physicians’ recommendations for preventive tests (colonoscopy, mammogram, and PSA) and with compliance rates which were > .80). Cronbach’s coefficient alpha for the JSPPPE ranged from .97 to .99 for the total sample and for patients in different gender and age groups. Conclusions: Empirical evidence supported the psychometrics of the JSPPPE, and confirmed significant links with patients’ satisfaction with their physicians, interpersonal trust, and compliance with physicians’ recommendations. Availability of this psychometrically sound instrument will facilitate empirical research on empathy in patient care in different countries.",
"title": ""
},
{
"docid": "0039f089fa355bb1e6c980e1d6fb1b64",
"text": "YouTube, with millions of content creators, has become the preferred destination for watching videos online. Through the Partner program, YouTube allows content creators to monetize their popular videos. Of significant importance for content creators is which meta-level features (e.g. title, tag, thumbnail) are most sensitive for promoting video popularity. The popularity of videos also depends on the social dynamics, i.e. the interaction of the content creators (or channels) with YouTube users. Using real-world data consisting of about 6 million videos spread over 25 thousand channels, we empirically examine the sensitivity of YouTube meta-level features and social dynamics. The key meta-level features that impact the view counts of a video include: first day view count , number of subscribers, contrast of the video thumbnail, Google hits, number of keywords, video category, title length, and number of upper-case letters in the title respectively and illustrate that these meta-level features can be used to estimate the popularity of a video. In addition, optimizing the meta-level features after a video is posted increases the popularity of videos. In the context of social dynamics, we discover that there is a causal relationship between views to a channel and the associated number of subscribers. Additionally, insights into the effects of scheduling and video playthrough in a channel are also provided. Our findings provide a useful understanding of user engagement in YouTube.",
"title": ""
},
{
"docid": "56699ed886613a07fc1aa8d666a00585",
"text": "The unfolding-type flyback inverter operating in discontinuous conduction mode (DCM) is popular as a low-cost solution for a photovoltaic (PV) ac module application. This paper aims to improve the efficiency by using a scheme based on continuous conduction mode (CCM) for this application. Design issues, both for the power scheme and the control scheme, are identified and trade-offs investigated. An open-loop control of the secondary current, based on feedback control of the primary current, is proposed in order to bypass the difficulties posed by the moving right half plane zero in the duty cycle to secondary current transfer function. The results presented show an improvement of 8% in California efficiency compared to the benchmark DCM scheme for a 200-W PV module application. The output power quality at rated power level is capable of meeting IEC61727 requirements. The stability of the flyback inverter in CCM has been verified at selected working conditions.",
"title": ""
},
{
"docid": "189662abcea526192f46b620ed87ae13",
"text": "INTRODUCTION\nA key portion of medical simulation is self-reflection and instruction during a debriefing session; however, there have been surprisingly few direct comparisons of various approaches. The objective of this study was to compare two styles of managing a simulation session: postsimulation debriefing versus in-simulation debriefing.\n\n\nMETHODS\nOne hundred sixty-one students were randomly assigned to receive either postsimulation debriefing or in-simulation debriefing. Retrospective pre-post assessment was made through survey using Likert-scale questions assessing students' self-reported confidence and knowledge level as it relates to medical resuscitation and statements related to the simulation itself.\n\n\nRESULTS\nThere were statistically significant differences in the reliable self-reported results between the two groups for effectiveness of the debriefing style, debriefing leading to effective learning, and the debriefing helping them to understand the correct and incorrect actions, with the group that received postsimulation debriefing ranking all these measures higher. Both groups showed significantly higher posttest scores compared with their pretest scores for individual and overall measures.\n\n\nDISCUSSION\nStudents felt that a simulation experience followed by a debriefing session helped them learn more effectively, better understand the correct and incorrect actions, and was overall more effective compared with debriefing that occurred in-simulation. Students did not feel that interruptions during a simulation significantly altered the realism of the simulation.",
"title": ""
},
{
"docid": "78ca8024a825fc8d5539b899ad34fc18",
"text": "In this paper, we examine whether managers use optimistic and pessimistic language in earnings press releases to provide information about expected future firm performance to the market, and whether the market responds to optimistic and pessimistic language usage in earnings press releases after controlling for the earnings surprise and other factors likely to influence the market’s response to the earnings announcement. We use textual-analysis software to measure levels of optimistic and pessimistic language for a sample of approximately 24,000 earnings press releases issued between 1998 and 2003. We find a positive (negative) association between optimistic (pessimistic) language usage and future firm performance and a significant incremental market response to optimistic and pessimistic language usage in earnings press releases. Results suggest managers use optimistic and pessimistic language to provide credible information about expected future firm performance to the market, and that the market responds to managers’ language usage.",
"title": ""
},
{
"docid": "5f8fe83afe6870305536f29fa187e56e",
"text": "Textual grounding, i.e., linking words to objects in images, is a challenging but important task for robotics and human-computer interaction. Existing techniques benefit from recent progress in deep learning and generally formulate the task as a supervised learning problem, selecting a bounding box from a set of possible options. To train these deep net based approaches, access to a large-scale datasets is required, however, constructing such a dataset is time-consuming and expensive. Therefore, we develop a completely unsupervised mechanism for textual grounding using hypothesis testing as a mechanism to link words to detected image concepts. We demonstrate our approach on the ReferIt Game dataset and the Flickr30k data, outperforming baselines by 7.98% and 6.96% respectively.",
"title": ""
},
{
"docid": "f2fc46012fa4b767f514b9d145227ec7",
"text": "Derivation of backpropagation in convolutional neural network (CNN) is conducted based on an example with two convolutional layers. The step-by-step derivation is helpful for beginners. First, the feedforward procedure is claimed, and then the backpropagation is derived based on the example. 1 Feedforward",
"title": ""
},
{
"docid": "180e1eb6c7c9c752de5cfca2c2149d1d",
"text": "State-of-the-art CNN models for Image recognition use deep networks with small filters instead of shallow networks with large filters, because the former requires fewer weights. In the light of above trend, we present a fast and efficient FPGA based convolution engine to accelerate CNN models over small filters. The convolution engine implements Winograd minimal filtering algorithm to reduce the number of multiplications by 38% to 55% for state-of-the-art CNNs. We exploit the parallelism of the Winograd convolution engine to scale the overall performance. We show that our overall design sustains the peak throughput of the convolution engines. We propose a novel data layout to reduce the required memory bandwidth of our design by half. One noteworthy feature of our Winograd convolution engine is that it hides the computation latency of the pooling layer. As a case study we implement VGG16 CNN model and compare it with previous approaches. Compared with the state-of-the-art reduced precision VGG16 implementation, our implementation achieves 1.2× improvement in throughput by using 3× less multipliers and 2× less on-chip memory without impacting the classification accuracy. The improvements in throughput per multiplier and throughput per unit on-chip memory are 3.7× and 2.47× respectively, compared with the state-of-the-art design.",
"title": ""
},
{
"docid": "e08cfc5d9c67a5c806750dc7c747c52f",
"text": "To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the binary Hamming space. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bit) learning and hash function learning. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training a standard binary classifier. For solving large-scale binary code inference, we show how it is possible to ensure that the binary quadratic problems are submodular such that efficient graph cut methods may be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and are very fast to train and evaluate. Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data.",
"title": ""
},
{
"docid": "1e33efef22f44869fd4fe45c3504a2f0",
"text": "H.264/AVC as the most recent video coding standard delivers significantly better performance compared to previous standards, supporting higher video quality over lower bit rate channels. The H.264 in-loop deblocking filter is one of the several complex techniques that have realized this superior coding quality. The deblocking filter is a computationally and data intensive tool resulting in increased execution time of both the encoding and decoding processes. In this paper and in order to reduce the deblocking complexity, we propose a new 2D deblocking filtering algorithm based on the existing 1D method of the H.264/AVC standard. Simulation results indicate that the proposed technique achieves a 40% speed improvement compared to the existing 1D H.264/AVC deblocking filter, while affecting the SNR by 0.15% in average",
"title": ""
},
{
"docid": "eb32ce661a0d074ce90861793a2e4de7",
"text": "A new transfer function from control voltage to duty cycle, the closed-current loop, which captures the natural sampling effect is used to design a controller for the voltage-loop of a pulsewidth modulated (PWM) dc-dc converter operating in continuous-conduction mode (CCM) with peak current-mode control (PCM). This paper derives the voltage loop gain and the closed-loop transfer function from reference voltage to output voltage. The closed-loop transfer function from the input voltage to the output voltage, or the closed-loop audio-susceptibility is derived. The closed-loop transfer function from output current to output voltage, or the closed loop output impedance is also derived. The derivation is performed using an averaged small-signal model of the example boost converter for CCM. Experimental verification is presented. The theoretical and experimental results were in good agreement, confirming the validity of the transfer functions derived.",
"title": ""
},
{
"docid": "e3caf8dcb01139ae780616c022e1810d",
"text": "The relative age effect (RAE) and its relationships with maturation, anthropometry, and physical performance characteristics were examined across a representative sample of English youth soccer development programmes. Birth dates of 1,212 players, chronologically age-grouped (i.e., U9's-U18's), representing 17 professional clubs (i.e., playing in Leagues 1 & 2) were obtained and categorised into relative age quartiles from the start of the selection year (Q1 = Sep-Nov; Q2 = Dec-Feb; Q3 = Mar-May; Q4 = Jun-Aug). Players were measured for somatic maturation and performed a battery of physical tests to determine aerobic fitness (Multi-Stage Fitness Test [MSFT]), Maximal Vertical Jump (MVJ), sprint (10 & 20m), and agility (T-Test) performance capabilities. Odds ratio's (OR) revealed Q1 players were 5.3 times (95% confidence intervals [CI]: 4.08-6.83) more likely to be selected than Q4's, with a particularly strong RAE bias observed in U9 (OR: 5.56) and U13-U16 squads (OR: 5.45-6.13). Multivariate statistical models identified few between quartile differences in anthropometric and fitness characteristics, and confirmed chronological age-group and estimated age at peak height velocity (APHV) as covariates. Assessment of practical significance using magnitude-based inferences demonstrated body size advantages in relatively older players (Q1 vs. Q4) that were very-likely small (Effect Size [ES]: 0.53-0.57), and likely to very-likely moderate (ES: 0.62-0.72) in U12 and U14 squads, respectively. Relatively older U12-U14 players also demonstrated small advantages in 10m (ES: 0.31-0.45) and 20m sprint performance (ES: 0.36-0.46). The data identify a strong RAE bias at the entry-point to English soccer developmental programmes. RAE was also stronger circa-PHV, and relatively older players demonstrated anaerobic performance advantages during the pubescent period. Talent selectors should consider motor function and maturation status assessments to avoid premature and unwarranted drop-out of soccer players within youth development programmes.",
"title": ""
},
{
"docid": "6c8c21e7cc5a9cc88fa558d7917a81b2",
"text": "Recent engineering experiences with the Missile Defense Agency (MDA) Ballistic Missile Defense System (BMDS) highlight the need to analyze the BMDS System of Systems (SoS) including the numerous potential interactions between independently developed elements of the system. The term “interstitials” is used to define the domain of interfaces, interoperability, and integration between constituent systems in an SoS. The authors feel that this domain, at an SoS level, has received insufficient attention within systems engineering literature. The BMDS represents a challenging SoS case study as many of its initial elements were assembled from existing programs of record. The elements tend to perform as designed but their performance measures may not be consistent with the higher level SoS requirements. One of the BMDS challenges is interoperability, to focus the independent elements to interact in a number of ways, either subtle or overt, for a predictable and sustainable national capability. New capabilities desired by national leadership may involve modifications to kill chains, Command and Control (C2) constructs, improved coordination, and performance. These capabilities must be realized through modifications to programs of record and integration across elements of the system that have their own independent programmatic momentum. A challenge of SoS Engineering is to objectively evaluate competing solutions and assess the technical viability of tradeoff options. This paper will present a multifaceted technical approach for integrating a complex, adaptive SoS to achieve a functional capability. Architectural frameworks will be explored, a mathematical technique utilizing graph theory will be introduced, adjuncts to more traditional modeling and simulation techniques such as agent based modeling will be explored, and, finally, newly developed technical and managerial metrics to describe design maturity will be introduced. A theater BMDS construct will be used as a representative set of elements together with the *Author to whom all correspondence should be addressed (e-mail: DLGR_NSWC_G25@navy.mil; DLGR_NSWC_K@Navy.mil; DLGR_NSWC_W@navy.mil; DLGR_NSWC_W@Navy.mil). †Commanding Officer, 6149 Welsh Road, Suite 203, Dahlgren, VA 22448-5130",
"title": ""
},
{
"docid": "eb2459cbb99879b79b94653c3b9ea8ef",
"text": "Extending the success of deep neural networks to natural language understanding and symbolic reasoning requires complex operations and external memory. Recent neural program induction approaches have attempted to address this problem, but are typically limited to differentiable memory, and consequently cannot scale beyond small synthetic tasks. In this work, we propose the Manager-ProgrammerComputer framework, which integrates neural networks with non-differentiable memory to support abstract, scalable and precise operations through a friendly neural computer interface. Specifically, we introduce a Neural Symbolic Machine, which contains a sequence-to-sequence neural \"programmer\", and a nondifferentiable \"computer\" that is a Lisp interpreter with code assist. To successfully apply REINFORCE for training, we augment it with approximate gold programs found by an iterative maximum likelihood training process. NSM is able to learn a semantic parser from weak supervision over a large knowledge base. It achieves new state-of-the-art performance on WEBQUESTIONSSP, a challenging semantic parsing dataset, with weak supervision. Compared to previous approaches, NSM is end-to-end, therefore does not rely on feature engineering or domain specific knowledge.",
"title": ""
},
{
"docid": "c3eec24d9e7e051a34c72bdc301b3894",
"text": "Scheduling has a significant influence on application performance. Deciding on a quantum length can be very tricky, especially when concurrent applications have various characteristics. This is actually the case in virtualized cloud computing environments where virtual machines from different users are colocated on the same physical machine. We claim that in a multi-core virtualized platform, different quantum lengths should be associated with different application types. We apply this principle in a new scheduler called AQL_Sched. We identified 5 main application types and experimentally found the best quantum length for each of them. Dynamically, AQL_Sched associates an application type with each virtual CPU (vCPU) and schedules vCPUs according to their type on physical CPU (pCPU) pools with the best quantum length. Therefore, each vCPU is scheduled on a pCPU with the best quantum length. We implemented a prototype of AQL_Sched in Xen and we evaluated it with various reference benchmarks (SPECweb2009, SPECmail2009, SPEC CPU2006, and PARSEC). The evaluation results show that AQL_Sched outperforms Xen's credit scheduler. For instance, up to 20%, 10% and 15% of performance improvements have been obtained with SPECweb2009, SPEC CPU2006 and PARSEC, respectively.",
"title": ""
},
{
"docid": "cff5ceab3d0b181e5278688371652495",
"text": "The redesign of business processes has a huge potential in terms of reducing costs and throughput times, as well as improving customer satisfaction. Despite rapid developments in the business process management discipline during the last decade, a comprehensive overview of the options to methodologically support a team to move from as-is process insights to to-be process alternatives is lacking. As such, no safeguard exists that a systematic exploration of the full range of redesign possibilities takes place by practitioners. Consequently, many attractive redesign possibilities remain unidentified and the improvement potential of redesign initiatives is not fulfilled. This systematic literature review establishes a comprehensive methodological framework, which serves as a catalog for process improvement use cases. The framework contains an overview of all the method options regarding the generation of process improvement ideas. This is established by identifying six key methodological decision areas, e.g. the human actors who can be invited to generate these ideas or the information that can be collected prior to this act. This framework enables practitioners to compose a well-considered method to generate process improvement ideas themselves. Based on a critical evaluation of the framework, the authors also offer recommendations that support academic researchers in grounding and improving methods for generating process Accepted after two revisions by the editors of the special issue. Electronic supplementary material The online version of this article (doi:10.1007/s12599-015-0417-x) contains supplementary material, which is available to authorized users. ir. R. J. B. Vanwersch (&) Dr. ir. I. Vanderfeesten Prof. Dr. ir. P. Grefen School of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, De Lismortel, room K.3, P.O. Box 513, 5600 MB Eindhoven, The Netherlands e-mail: r.j.b.vanwersch@tue.nl Dr. K. Shahzad College of Information Technology, University of the Punjab, Lahore, Pakistan Dr. K. Vanhaecht Department of Public Health and Primary Care, KU Leuven, University of Leuven, Leuven, Belgium Dr. K. Vanhaecht Department of Quality Management, University Hospitals KU Leuven, Leuven, Belgium Prof. Dr. ir. L. Pintelon Centre for Industrial Management/Traffic and Infrastructure, KU Leuven, University of Leuven, Leuven, Belgium Prof. Dr. J. Mendling Institute for Information Business, Vienna University of Economics and Business, Vienna, Austria Prof. Dr. G. G. van Merode Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands Prof. Dr. ir. H. A. Reijers Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands Prof. Dr. ir. H. A. Reijers Department of Computer Science, VU University Amsterdam, Amsterdam, The Netherlands 123 Bus Inf Syst Eng 58(1):43–53 (2016) DOI 10.1007/s12599-015-0417-x",
"title": ""
},
{
"docid": "8ed89ceb6456ef4d32dc639c62346b1a",
"text": "Neural machine translation (NMT) often makes mistakes in translating low-frequency content words that are essential to understanding the meaning of the sentence. We propose a method to alleviate this problem by augmenting NMT systems with discrete translation lexicons that efficiently encode translations of these low-frequency words. We describe a method to calculate the lexicon probability of the next word in the translation candidate by using the attention vector of the NMT model to select which source word lexical probabilities the model should focus on. We test two methods to combine this probability with the standard NMT probability: (1) using it as a bias, and (2) linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3 BLEU and 0.13-0.44 NIST score, and faster convergence time. 1",
"title": ""
},
{
"docid": "debd60cb7f7c7746a0ceb333068f7057",
"text": "Previous work has shown that playing violent video games can stimulate aggression toward others. The current research has identified a potential exception. Participants who played a violent game in which the violence had an explicitly prosocial motive (i.e., protecting a friend and furthering his nonviolent goals) were found to show lower short-term aggression (Study 1) and show higher levels of prosocial cognition (Study 2) than individuals who played a violent game in which the violence was motivated by more morally ambiguous motives. Thus, violent video games that are framed in an explicitly prosocial context may evoke more prosocial sentiments and thereby mitigate some of the short-term effects on aggression observed in previous research. While these findings are promising regarding the potential aggression-reducing effects of prosocial context, caution is still warranted as a small effect size difference (d = .2-.3), although nonsignificant, was still observed between those who played the explicitly prosocial violent game and those who played a nonviolent game; indicating that aggressive behavior was not completely eliminated by the inclusion of a prosocial context for the violence.",
"title": ""
}
] |
scidocsrr
|
0e99be3ad1f88954c9ab5b4d486fac5c
|
Disentangled Sequential Autoencoder
|
[
{
"docid": "9869bc5dfc8f20b50608f0d68f7e49ba",
"text": "Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of “objectness”.",
"title": ""
},
{
"docid": "e95541d0401a196b03b94dd51dd63a4b",
"text": "In the information age, computer applications have become part of modern life and this has in turn encouraged the expectations of friendly interaction with them. Speech, as “the” communication mode, has seen the successful development of quite a number of applications using automatic speech recognition (ASR), including command and control, dictation, dialog systems for people with impairments, translation, etc. But the actual challenge goes beyond the use of speech in control applications or to access information. The goal is to use speech as an information source, competing, for example, with text online. Since the technology supporting computer applications is highly dependent on the performance of the ASR system, research into ASR is still an active topic, as is shown by the range of research directions suggested in (Baker et al., 2009a, 2009b). Automatic speech recognition – the recognition of the information embedded in a speech signal and its transcription in terms of a set of characters, (Junqua & Haton, 1996) – has been object of intensive research for more than four decades, achieving notable results. It is only to be expected that speech recognition advances make spoken language as convenient and accessible as online text when the recognizers reach error rates near zero. But while digit recognition has already reached a rate of 99.6%, (Li, 2008), the same cannot be said of phone recognition, for which the best rates are still under 80% 1,(Mohamed et al., 2011; Siniscalchi et al., 2007). Speech recognition based on phones is very attractive since it is inherently free from vocabulary limitations. Large Vocabulary ASR (LVASR) systems’ performance depends on the quality of the phone recognizer. That is why research teams continue developing phone recognizers, in order to enhance their performance as much as possible. Phone recognition is, in fact, a recurrent problem for the speech recognition community. Phone recognition can be found in a wide range of applications. In addition to typical LVASR systems like (Morris & Fosler-Lussier, 2008; Scanlon et al., 2007; Schwarz, 2008), it can be found in applications related to keyword detection, (Schwarz, 2008), language recognition, (Matejka, 2009; Schwarz, 2008), speaker identification, (Furui, 2005) and applications for music identification and translation, (Fujihara & Goto, 2008; Gruhne et al., 2007). The challenge of building robust acoustic models involves applying good training algorithms to a suitable set of data. The database defines the units that can be trained and",
"title": ""
},
{
"docid": "2021b780b666751b19928307bd69ea2c",
"text": "We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning of latent Markovian state space models. Leveraging recent advances in Stochastic Gradient Variational Bayes, DVBF can overcome intractable inference distributions by means of variational inference. Thus, it can handle highly nonlinear input data with temporal and spatial dependencies such as image sequences without domain knowledge. Our experiments show that enabling backpropagation through transitions enforces state space assumptions and significantly improves information content of the latent embedding. This also enables realistic long-term prediction.",
"title": ""
},
{
"docid": "102bec350390b46415ae07128cb4e77f",
"text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.",
"title": ""
}
] |
[
{
"docid": "700670962242f53b665df0d0df3cdea8",
"text": "Simulation Optimization (SO) refers to the optimization of an objective function subject to constraints, both of which can be evaluated through a stochastic simulation. To address specific features of a particular simulation—discrete or continuous decisions, expensive or cheap simulations, single or multiple outputs, homogeneous or heterogeneous noise—various algorithms have been proposed in the literature. As one can imagine, there exist several competing algorithms for each of these classes of problems. This document emphasizes the difficulties in simulation optimization as compared to mathematical programming, makes reference to state-of-the-art algorithms in the field, examines and contrasts the different approaches used, reviews some of the diverse applications that have been tackled by these methods, and speculates on future directions in the field.",
"title": ""
},
{
"docid": "5404c00708c64d9f254c25f0065bc13c",
"text": "In this paper, we discuss the problem of automatic skin lesion analysis, specifically melanoma detection and semantic segmentation. We accomplish this by using deep learning techniques to perform classification on publicly available dermoscopic images. Skin cancer, of which melanoma is a type, is the most prevalent form of cancer in the US and more than four million cases are diagnosed in the US every year. In this work, we present our efforts towards an accessible, deep learning-based system that can be used for skin lesion classification, thus leading to an improved melanoma screening system. For classification, a deep convolutional neural network architecture is first implemented over the raw images. In addition, hand-coded features such as 166-D color histogram distribution, edge histogram and Multiscale Color local binary patterns are extracted from the images and presented to a random forest classifier. The average of the outputs from the two mentioned classifiers is taken as the final classification result. The classification task achieves an accuracy of 80.3%, AUC score of 0.69 and a precision score of 0.81. For segmentation, we implement a convolutional-deconvolutional architecture and the segmentation model achieves a Dice coefficient of 73.5%.",
"title": ""
},
{
"docid": "9089a8cc12ffe163691d81e319ec0f25",
"text": "Complex problem solving (CPS) emerged in the last 30 years in Europe as a new part of the psychology of thinking and problem solving. This paper introduces into the field and provides a personal view. Also, related concepts like macrocognition or operative intelligence will be explained in this context. Two examples for the assessment of CPS, Tailorshop and MicroDYN, are presented to illustrate the concept by means of their measurement devices. Also, the relation of complex cognition and emotion in the CPS context is discussed. The question if CPS requires complex cognition is answered with a tentative “yes.”",
"title": ""
},
{
"docid": "2affffd57677d58df6fc63cc4a83da5d",
"text": "Dealing with failure is easy: Work hard to improve. Success is also easy to handle: You've solved the wrong problem. Work hard to improve.",
"title": ""
},
{
"docid": "259df0ad497b5fc3318dfca7f8ee1f9a",
"text": "BACKGROUND\nColorectal cancer is a leading cause of morbidity and mortality, especially in the Western world. The human and financial costs of this disease have prompted considerable research efforts to evaluate the ability of screening tests to detect the cancer at an early curable stage. Tests that have been considered for population screening include variants of the faecal occult blood test, flexible sigmoidoscopy and colonoscopy. Reducing mortality from colorectal cancer (CRC) may be achieved by the introduction of population-based screening programmes.\n\n\nOBJECTIVES\nTo determine whether screening for colorectal cancer using the faecal occult blood test (guaiac or immunochemical) reduces colorectal cancer mortality and to consider the benefits, harms and potential consequences of screening.\n\n\nSEARCH STRATEGY\nPublished and unpublished data for this review were identified by: Reviewing studies included in the previous Cochrane review; Searching several electronic databases (Cochrane Library, Medline, Embase, CINAHL, PsychInfo, Amed, SIGLE, HMIC); and Writing to the principal investigators of potentially eligible trials.\n\n\nSELECTION CRITERIA\nWe included in this review all randomised trials of screening for colorectal cancer that compared faecal occult blood test (guaiac or immunochemical) on more than one occasion with no screening and reported colorectal cancer mortality.\n\n\nDATA COLLECTION AND ANALYSIS\nData from the eligible trials were independently extracted by two reviewers. The primary data analysis was performed using the group participants were originally randomised to ('intention to screen'), whether or not they attended screening; a secondary analysis adjusted for non-attendence. We calculated the relative risks and risk differences for each trial, and then overall, using fixed and random effects models (including testing for heterogeneity of effects). We identified nine articles concerning four randomised controlled trials and two controlled trials involving over 320,000 participants with follow-up ranging from 8 to 18 years.\n\n\nMAIN RESULTS\nCombined results from the 4 eligible randomised controlled trials shows that participants allocated to screening had a 16% reduction in the relative risk of colorectal cancer mortality (RR 0.84, CI: 0.78-0.90). In the 3 studies that used biennial screening (Funen, Minnesota, Nottingham) there was a 15% relative risk reduction (RR 0.85, CI: 0.78-0.92) in colorectal cancer mortality. When adjusted for screening attendance in the individual studies, there was a 25% relative risk reduction (RR 0.75, CI: 0.66 - 0.84) for those attending at least one round of screening using the faecal occult blood test.\n\n\nAUTHORS' CONCLUSIONS\nBenefits of screening include a modest reduction in colorectal cancer mortality, a possible reduction in cancer incidence through the detection and removal of colorectal adenomas, and potentially, the less invasive surgery that earlier treatment of colorectal cancers may involve. Harmful effects of screening include the psycho-social consequences of receiving a false-positive result, the potentially significant complications of colonoscopy or a false-negative result, the possibility of overdiagnosis (leading to unnecessary investigations or treatment) and the complications associated with treatment.",
"title": ""
},
{
"docid": "b68a716a1ef3e7970b94ad7cda366b8b",
"text": "The underlying mechanisms and neuroanatomical correlates of theory of mind (ToM), the ability to make inferences on others' mental states, remain largely unknown. While numerous studies have implicated the ventromedial (VM) frontal lobes in ToM, recent findings have questioned the role of the prefrontal cortex. We designed two novel tasks that examined the hypothesis that affective ToM processing is distinct from that related to cognitive ToM and depends in part on separate anatomical substrates. The performance of patients with localized lesions in the VM was compared to responses of patients with dorsolateral lesions, mixed prefrontal lesions, and posterior lesions and with healthy control subjects. While controls made fewer errors on affective as compared to cognitive ToM conditions in both tasks, patients with VM damage showed a different trend. Furthermore, while affective ToM was mostly impaired by VM damage, cognitive ToM was mostly impaired by extensive prefrontal damage, suggesting that cognitive and affective mentalizing abilities are partly dissociable. By introducing the concept of 'affective ToM' to the study of social cognition, these results offer new insights into the mediating role of the VM in the affective facets of social behavior that may underlie the behavioral disturbances observed in these patients.",
"title": ""
},
{
"docid": "d639525be41a05f1aec5d0637eff79ac",
"text": "We analyze X-COM: UFO Defense and its successful remake XCOM: Enemy Unknown to understand how remakes can repropose a concept across decades, updating most mechanics, and yet retain the dynamic and aesthetic values that defined the original experience. We use gameplay design patterns along with the MDA framework to understand the changes, identifying an unchanged core among a multitude of differences. We argue that two forces polarize the context within which the new game was designed, simultaneously guaranteeing a sameness of experience across the two games and at the same time pushing for radical changes. The first force, which resists the push for an updated experience, can be described as experiential isomorphism, or “sameness of form” in terms of related Gestalt qualities. The second force is generated by the necessity to update the usability of the design, aligning it to a current usability paradigm. We employ game usability heuristics (PLAY) to evaluate aesthetic patterns present in both games, and to understand the implicit vector for change. Our finding is that while patterns on the mechanical and to a slight degree the dynamic levels change between the games, the same aesthetic patterns are present in both, but produced through different means. The method we use offers new understanding of how sequels and remakes of games can change significantly from their originals while still giving rise to similar experiences.",
"title": ""
},
{
"docid": "080032ded41edee2a26320e3b2afb123",
"text": "The aim of this study was to evaluate the effects of calisthenic exercises on psychological status in patients with ankylosing spondylitis (AS) and multiple sclerosis (MS). This study comprised 40 patients diagnosed with AS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based) and 40 patients diagnosed with MS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based). The exercise programme was completed by 73 participants (hospital-based = 34, home-based = 39). Mean age was 33.75 ± 5.77 years. After the 8-week exercise programme in the AS group, the home-based exercise group showed significant improvements in erythrocyte sedimentation rates (ESR). The hospital-based exercise group showed significant improvements in terms of the Bath AS Metrology Index (BASMI) and Hospital Anxiety and Depression Scale-Anxiety (HADS-A) scores. After the 8-week exercise programme in the MS group, the home-based and hospital-based exercise groups showed significant improvements in terms of the 10-m walking test, Berg Balance Scale (BBS), HADS-A, and MS international Quality of Life (MusiQoL) scores. There was a significant improvement in the hospital-based and a significant deterioration in the home-based MS patients according to HADS-Depression (HADS-D) score. The positive effects of exercises on neurologic and rheumatic chronic inflammatory processes associated with disability should not be underestimated. Ziel der vorliegenden Studie war die Untersuchung der Wirkungen von gymnastischen Übungen auf die psychische Verfassung von Patienten mit Spondylitis ankylosans (AS) und multipler Sklerose (MS). Die Studie umfasste 40 Patienten mit der Diagnose AS, die randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant), und 40 Patienten mit der Diagnose MS, die ebenfalls randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant). Vollständig absolviert wurde das Übungsprogramm von 73 Patienten (stationär: 34, ambulant: 39). Das Durchschnittsalter betrug 33,75 ± 5,77 Jahre. Nach dem 8-wöchigen Übungsprogramm in der AS-Gruppe zeigten sich bei der ambulanten Übungsgruppe signifikante Verbesserungen bei der Blutsenkungsgeschwindigkeit (BSG). Die stationäre Übungsgruppe wies signifikante Verbesserungen in Bezug auf den BASMI-Score (Bath AS Metrology Index) und den HADS-A-Score (Hospital Anxiety and Depression Scale-Anxiety) auf. Nach dem 8-wöchigen Übungsprogramm in der MS-Gruppe zeigten sich sowohl in der ambulanten als auch in der stationären Übungsgruppe signifikante Verbesserungen hinsichtlich des 10-m-Gehtests, des BBS-Ergebnisses (Berg Balance Scale), des HADS-A- sowie des MusiQoL-Scores (MS international Quality of Life). Beim HADS-D-Score (HADS-Depression) bestand eine signifikante Verbesserung bei den stationären und eine signifikante Verschlechterung bei den ambulanten MS-Patienten. Die positiven Wirkungen von gymnastischen Übungen auf neurologische und rheumatische chronisch entzündliche Prozesse mit Behinderung sollten nicht unterschätzt werden.",
"title": ""
},
{
"docid": "9c5d3f89d5207b42d7e2c8803b29994c",
"text": "With the advent of data mining, machine learning has come of age and is now a critical technology in many businesses. However, machine learning evolved in a different research context to that in which it now finds itself employed. A particularly important problem in the data mining world is working effectively with large data sets. However, most machine learning research has been conducted in the context of learning from very small data sets. To date most approaches to scaling up machine learning to large data sets have attempted to modify existing algorithms to deal with large data sets in a more computationally efficient and effective manner. But is this necessarily the best method? This paper explores the possibility of designing algorithms specifically for large data sets. Specifically, the paper looks at how increasing data set size affects bias and variance error decompositions for classification algorithms. Preliminary results of experiments to determine these effects are presented, showing that, as hypothesised variance can be expected to decrease as training set size increases. No clear effect of training set size on bias was observed. These results have profound implications for data mining from large data sets, indicating that developing effective learning algorithms for large data sets is not simply a matter of finding computationally efficient variants of existing learning algorithms.",
"title": ""
},
{
"docid": "19ab044ed5154b4051cae54387767c9b",
"text": "An approach is presented for minimizing power consumption for digital systems implemented in CMOS which involves optimization at all levels of the design. This optimization includes the technology used to implement the digital circuits, the circuit style and topology, the architecture for implementing the circuits and at the highest level the algorithms that are being implemented. The most important technology consideration is the threshold voltage and its control which allows the reduction of supply voltage without signijcant impact on logic speed. Even further supply reductions can be made by the use of an architecture-based voltage scaling strategy, which uses parallelism and pipelining, to tradeoff silicon area and power reduction. Since energy is only consumed when capacitance is being switched, power can be reduced by minimizing this capacitance through operation reduction, choice of number representation, exploitation of signal correlations, resynchronization to minimize glitching, logic design, circuit design, and physical design. The low-power techniques that are presented have been applied to the design of a chipset for a portable multimedia terminal that supports pen input, speech I/O and fullmotion video. The entire chipset that perjorms protocol conversion, synchronization, error correction, packetization, buffering, video decompression and D/A conversion operates from a 1.1 V supply and consumes less than 5 mW.",
"title": ""
},
{
"docid": "b2e8d42c86b2ee63c36ecc6123736f8b",
"text": "The balance between detrimental, pro-aging, often stochastic processes and counteracting homeostatic mechanisms largely determines the progression of aging. There is substantial evidence suggesting that the endocannabinoid system (ECS) is part of the latter system because it modulates the physiological processes underlying aging. The activity of the ECS declines during aging, as CB1 receptor expression and coupling to G proteins are reduced in the brain tissues of older animals and the levels of the major endocannabinoid 2-arachidonoylglycerol (2-AG) are lower. However, a direct link between endocannabinoid tone and aging symptoms has not been demonstrated. Here we show that a low dose of Δ9-tetrahydrocannabinol (THC) reversed the age-related decline in cognitive performance of mice aged 12 and 18 months. This behavioral effect was accompanied by enhanced expression of synaptic marker proteins and increased hippocampal spine density. THC treatment restored hippocampal gene transcription patterns such that the expression profiles of THC-treated mice aged 12 months closely resembled those of THC-free animals aged 2 months. The transcriptional effects of THC were critically dependent on glutamatergic CB1 receptors and histone acetylation, as their inhibition blocked the beneficial effects of THC. Thus, restoration of CB1 signaling in old individuals could be an effective strategy to treat age-related cognitive impairments.",
"title": ""
},
{
"docid": "f3b9269e3d6e6098384eda277129864c",
"text": "Action planning using learned and differentiable forward models of the world is a general approach which has a number of desirable properties, including improved sample complexity over modelfree RL methods, reuse of learned models across different tasks, and the ability to perform efficient gradient-based optimization in continuous action spaces. However, this approach does not apply straightforwardly when the action space is discrete. In this work, we show that it is in fact possible to effectively perform planning via backprop in discrete action spaces, using a simple paramaterization of the actions vectors on the simplex combined with input noise when training the forward model. Our experiments show that this approach can match or outperform model-free RL and discrete planning methods on gridworld navigation tasks in terms of performance and/or planning time while using limited environment interactions, and can additionally be used to perform model-based control in a challenging new task where the action space combines discrete and continuous actions. We furthermore propose a policy distillation approach which yields a fast policy network which can be used at inference time, removing the need for an iterative planning procedure.",
"title": ""
},
{
"docid": "b2e1b184096433db2bbd46cf01ef99c6",
"text": "This is a short overview of a totally ordered broadcast protocol used by ZooKeeper, called Zab. It is conceptually easy to understand, is easy to implement, and gives high performance. In this paper we present the requirements ZooKeeper makes on Zab, we show how the protocol is used, and we give an overview of how the protocol works.",
"title": ""
},
{
"docid": "517916f4c62bc7b5766efa537359349d",
"text": "Document-level sentiment classification aims to predict user’s overall sentiment in a document about a product. However, most of existing methods only focus on local text information and ignore the global user preference and product characteristics. Even though some works take such information into account, they usually suffer from high model complexity and only consider wordlevel preference rather than semantic levels. To address this issue, we propose a hierarchical neural network to incorporate global user and product information into sentiment classification. Our model first builds a hierarchical LSTM model to generate sentence and document representations. Afterwards, user and product information is considered via attentions over different semantic levels due to its ability of capturing crucial semantic components. The experimental results show that our model achieves significant and consistent improvements compared to all state-of-theart methods. The source code of this paper can be obtained from https://github. com/thunlp/NSC.",
"title": ""
},
{
"docid": "2ef2e4f2d001ab9221b3d513627bcd0b",
"text": "Semantic segmentation is in-demand in satellite imagery processing. Because of the complex environment, automatic categorization and segmentation of land cover is a challenging problem. Solving it can help to overcome many obstacles in urban planning, environmental engineering or natural landscape monitoring. In this paper, we propose an approach for automatic multi-class land segmentation based on a fully convolutional neural network of feature pyramid network (FPN) family. This network is consisted of pre-trained on ImageNet Resnet50 encoder and neatly developed decoder. Based on validation results, leaderboard score and our own experience this network shows reliable results for the DEEPGLOBE - CVPR 2018 land cover classification sub-challenge. Moreover, this network moderately uses memory that allows using GTX 1080 or 1080 TI video cards to perform whole training and makes pretty fast predictions.",
"title": ""
},
{
"docid": "727a53dad95300ee9749c13858796077",
"text": "Device to device (D2D) communication underlaying LTE can be used to distribute traffic loads of eNBs. However, a conventional D2D link is controlled by an eNB, and it still remains burdens to the eNB. We propose a completely distributed power allocation method for D2D communication underlaying LTE using deep learning. In the proposed scheme, a D2D transmitter can decide the transmit power without any help from other nodes, such as an eNB or another D2D device. Also, the power set, which is delivered from each D2D node independently, can optimize the overall cell throughput. We suggest a distirbuted deep learning architecture in which the devices are trained as a group, but operate independently. The deep learning can optimize total cell throughput while keeping constraints such as interference to eNB. The proposed scheme, which is implemented model using Tensorflow, can provide same throughput with the conventional method even it operates completely on distributed manner.",
"title": ""
},
{
"docid": "67033d89acee89763fa1b2a06fe00dc4",
"text": "We demonstrate a novel query interface that enables users to construct a rich search query without any prior knowledge of the underlying schema or data. The interface, which is in the form of a single text input box, interacts in real-time with the users as they type, guiding them through the query construction. We discuss the issues of schema and data complexity, result size estimation, and query validity; and provide novel approaches to solving these problems. We demonstrate our query interface on two popular applications; an enterprise-wide personnel search, and a biological information database.",
"title": ""
},
{
"docid": "df16624d219181c4af5f8fc3f7fd0ce5",
"text": "This paper presents results from an electronic interface that significantly improves the stability, power output, and spectral flexibility of light emitting diode (LED)-based systems used to excite fluorescence or other forms of luminescence. LEDs are an attractive alternative to conventional white-light sources used in fluorescence analysis because of reduced power of operation, enhanced modularity, reduced optical loss, fewer imaging artifacts, and increased flexibility in spectral control without the need for high overhead optics. Drawbacks of previously presented LED-based systems include insufficient light output, instability (poor lifetime), and limited flexibility in broadband spectral output. T ld increase i le spectral o on in power, s appropriate o igns. ©",
"title": ""
},
{
"docid": "d763947e969ade3c54c18f0b792a0f7b",
"text": "Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal. Information theory provides alternative quantization strategies, but they come at the cost of much greater estimation complexity.",
"title": ""
}
] |
scidocsrr
|
3ac26b503b33bc09d6d95c2d36e7d9e4
|
Interaction techniques for older adults using touchscreen devices: a literature review
|
[
{
"docid": "708309417183398e86ab537158459a98",
"text": "Despite the demonstrated benefits of bimanual interaction, most tablets use just one hand for interaction, to free the other for support. In a preliminary study, we identified five holds that permit simultaneous support and interaction, and noted that users frequently change position to combat fatigue. We then designed the BiTouch design space, which introduces a support function in the kinematic chain model for interacting with hand-held tablets, and developed BiPad, a toolkit for creating bimanual tablet interaction with the thumb or the fingers of the supporting hand. We ran a controlled experiment to explore how tablet orientation and hand position affect three novel techniques: bimanual taps, gestures and chords. Bimanual taps outperformed our one-handed control condition in both landscape and portrait orientations; bimanual chords and gestures in portrait mode only; and thumbs outperformed fingers, but were more tiring and less stable. Together, BiTouch and BiPad offer new opportunities for designing bimanual interaction on hand-held tablets.",
"title": ""
}
] |
[
{
"docid": "48716199f7865e8cf16fc723b897bb13",
"text": "The current study aimed to review studies on computational thinking (CT) indexed in Web of Science (WOS) and ERIC databases. A thorough search in electronic databases revealed 96 studies on computational thinking which were published between 2006 and 2016. Studies were exposed to a quantitative content analysis through using an article control form developed by the researchers. Studies were summarized under several themes including the research purpose, design, methodology, sampling characteristics, data analysis, and main findings. The findings were reported using descriptive statistics to see the trends. It was observed that there was an increase in the number of CT studies in recent years, and these were mainly conducted in the field of computer sciences. In addition, CT studies were mostly published in journals in the field of Education and Instructional Technologies. Theoretical paradigm and literature review design were preferred more in previous studies. The most commonly used sampling method was the purposive sampling. It was also revealed that samples of previous CT studies were generally pre-college students. Written data collection tools and quantitative analysis were mostly used in reviewed papers. Findings mainly focused on CT skills. Based on current findings, recommendations and implications for further researches were provided.",
"title": ""
},
{
"docid": "277071a4a2dde56c13ca2be8abd4b73d",
"text": "Most state-of-the-art information extraction approaches rely on token-level labels to find the areas of interest in text. Unfortunately, these labels are time-consuming and costly to create, and consequently, not available for many real-life IE tasks. To make matters worse, token-level labels are usually not the desired output, but just an intermediary step. End-to-end (E2E) models, which take raw text as input and produce the desired output directly, need not depend on token-level labels. We propose an E2E model based on pointer networks, which can be trained directly on pairs of raw input and output text. We evaluate our model on the ATIS data set, MIT restaurant corpus and the MIT movie corpus and compare to neural baselines that do use token-level labels. We achieve competitive results, within a few percentage points of the baselines, showing the feasibility of E2E information extraction without the need for token-level labels. This opens up new possibilities, as for many tasks currently addressed by human extractors, raw input and output data are available, but not token-level labels.",
"title": ""
},
{
"docid": "8730b884da4444c9be6d8c13d7b983e1",
"text": "The design and structure of a self-assembly modular robot (Sambot) are presented in this paper. Each module has its own autonomous mobility and can connect with other modules to form robotic structures with different manipulation abilities. Sambot has a versatile, robust, and flexible structure. The computing platform provided for each module is distributed and consists of a number of interlinked microcontrollers. The interaction and connectivity between different modules is achieved through infrared sensors and Zigbee wireless communication in discrete state and control area network bus communication in robotic configuration state. A new mechanical design is put forth to realize the autonomous motion and docking of Sambots. It is a challenge to integrate actuators, sensors, microprocessors, power units, and communication elements into a highly compact and flexible module with the overall size of 80 mm × 80 mm × 102 mm. The work describes represents a mature development in the area of self-assembly distributed robotics.",
"title": ""
},
{
"docid": "df354ff3f0524d960af7beff4ec0a68b",
"text": "The paper presents digital beamforming for Passive Coherent Location (PCL) radar. The considered circular antenna array is a part of a passive system developed at Warsaw University of Technology. The system is based on FM radio transmitters. The array consists of eight half-wave dipoles arranged in a circular array covering 360deg with multiple beams. The digital beamforming procedure is presented, including mutual coupling correction and antenna pattern optimization. The results of field calibration and measurements are also shown.",
"title": ""
},
{
"docid": "8ee0a87116d700c8ad982f08d8215c1d",
"text": "Game generation systems perform automated, intelligent design of games (i.e. videogames, boardgames), reasoning about both the abstract rule system of the game and the visual realization of these rules. Although, as an instance of the problem of creative design, game generation shares some common research themes with other creative AI systems such as story and art generators, game generation extends such work by having to reason about dynamic, playable artifacts. Like AI work on creativity in other domains, work on game generation sheds light on the human game design process, offering opportunities to make explicit the tacit knowledge involved in game design and test game design theories. Finally, game generation enables new game genres which are radically customized to specific players or situations; notable examples are cell phone games customized for particular users and newsgames providing commentary on current events. We describe an approach to formalizing game mechanics and generating games using those mechanics, using WordNet and ConceptNet to assist in performing common-sense reasoning about game verbs and nouns. Finally, we demonstrate and describe in detail a prototype that designs micro-games in the style of Nintendo’s",
"title": ""
},
{
"docid": "34919dc04bab57299c22d709902aea68",
"text": "In the rank join problem, we are given a set of relations and a scoring function, and the goal is to return the join results with the top k scores. It is often the case in practice that the inputs may be accessed in ranked order and the scoring function is monotonic. These conditions allow for efficient algorithms that solve the rank join problem without reading all of the input. In this article, we present a thorough analysis of such rank join algorithms. A strong point of our analysis is that it is based on a more general problem statement than previous work, making it more relevant to the execution model that is employed by database systems. One of our results indicates that the well-known HRJN algorithm has shortcomings, because it does not stop reading its input as soon as possible. We find that it is NP-hard to overcome this weakness in the general case, but cases of limited query complexity are tractable. We prove the latter with an algorithm that infers provably tight bounds on the potential benefit of reading more input in order to stop as soon as possible. As a result, the algorithm achieves a cost that is within a constant factor of optimal.",
"title": ""
},
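The rank join stopping bound discussed in the abstract above can be illustrated with a small, self-contained sketch. This is not code from the article: it assumes two inputs sorted by descending score, an equi-join on keys that are unique within each input, and a summation scoring function; the names (rank_join, left, right) are invented for the example.

```python
# HRJN-style rank join sketch: pull ranked tuples, maintain an upper bound on the
# score of any join result not yet formed, and stop once the current top-k beat it.
import heapq

def rank_join(left, right, k):
    """left/right: lists of (key, score) sorted by score descending, keys unique per side."""
    seen_left, seen_right = {}, {}
    results = []  # heap of (-join_score, key)
    i = j = 0
    while i < len(left) or j < len(right):
        # Pull one tuple from each input that still has tuples (round-robin style).
        pulls = []
        if i < len(left):
            pulls.append(('L', left[i])); i += 1
        if j < len(right):
            pulls.append(('R', right[j])); j += 1
        for side, (key, score) in pulls:
            if side == 'L':
                seen_left[key] = score
                if key in seen_right:
                    heapq.heappush(results, (-(score + seen_right[key]), key))
            else:
                seen_right[key] = score
                if key in seen_left:
                    heapq.heappush(results, (-(score + seen_left[key]), key))
        # Upper bound on the score of any join result that has not been formed yet.
        top_l = left[0][1] if left else float('-inf')
        top_r = right[0][1] if right else float('-inf')
        last_l = left[i - 1][1] if i > 0 else top_l
        last_r = right[j - 1][1] if j > 0 else top_r
        threshold = max(last_l + top_r, top_l + last_r)
        if len(results) >= k:
            best_k = heapq.nsmallest(k, results)   # k highest join scores found so far
            if -best_k[-1][0] >= threshold:        # k-th best already beats anything unseen
                return [(key, -s) for s, key in best_k]
    return [(key, -s) for s, key in heapq.nsmallest(k, results)]

print(rank_join([('a', 9), ('b', 7), ('c', 1)], [('b', 8), ('a', 5), ('c', 4)], k=2))
```

As the abstract notes, stopping as early as the bound allows is exactly where algorithms of this family differ; the sketch stops as soon as the k-th best found result dominates the threshold.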
{
"docid": "605c6b431b336ebe2ed07e7fcf529121",
"text": "Standard approaches to probabilistic reasoning require that one possesses an explicit model of the distribution in question. But, the empirical learning of models of probability distributions from partial observations is a problem for which efficient algorithms are generally not known. In this work we consider the use of bounded-degree fragments of the “sum-of-squares” logic as a probability logic. Prior work has shown that we can decide refutability for such fragments in polynomial-time. We propose to use such fragments to decide queries about whether a given probability distribution satisfies a given system of constraints and bounds on expected values. We show that in answering such queries, such constraints and bounds can be implicitly learned from partial observations in polynomial-time as well. It is known that this logic is capable of deriving many bounds that are useful in probabilistic analysis. We show here that it furthermore captures key polynomial-time fragments of resolution. Thus, these fragments are also quite expressive.",
"title": ""
},
{
"docid": "40c93dacc8318bc440d23fedd2acbd47",
"text": "An electrical-balance duplexer uses series connected step-down transformers to enhance linearity and power handling capability by reducing the voltage swing across nonlinear components. Wideband, dual-notch Tx-to-Rx isolation is demonstrated experimentally with a planar inverted-F antenna. The 0.18μm CMOS prototype achieves >50dB isolation for 220MHz aggregated bandwidth or >40dB dual-notch isolation for 160MHz bandwidth, +49dBm Tx-path IIP3 and -48dBc ACLR1 for +27dBm at the antenna.",
"title": ""
},
{
"docid": "ba10bfce4c5deabb663b5ca490c320c9",
"text": "OBJECTIVE\nAlthough the relationship between religious practice and health is well established, the relationship between spirituality and health is not as well studied. The objective of this study was to ascertain whether participation in the mindfulness-based stress reduction (MBSR) program was associated with increases in mindfulness and spirituality, and to examine the associations between mindfulness, spirituality, and medical and psychological symptoms.\n\n\nMETHODS\nForty-four participants in the University of Massachusetts Medical School's MBSR program were assessed preprogram and postprogram on trait (Mindful Attention and Awareness Scale) and state (Toronto Mindfulness Scale) mindfulness, spirituality (Functional Assessment of Chronic Illness Therapy--Spiritual Well-Being Scale), psychological distress, and reported medical symptoms. Participants also kept a log of daily home mindfulness practice. Mean changes in scores were computed, and relationships between changes in variables were examined using mixed-model linear regression.\n\n\nRESULTS\nThere were significant improvements in spirituality, state and trait mindfulness, psychological distress, and reported medical symptoms. Increases in both state and trait mindfulness were associated with increases in spirituality. Increases in trait mindfulness and spirituality were associated with decreases in psychological distress and reported medical symptoms. Changes in both trait and state mindfulness were independently associated with changes in spirituality, but only changes in trait mindfulness and spirituality were associated with reductions in psychological distress and reported medical symptoms. No association was found between outcomes and home mindfulness practice.\n\n\nCONCLUSIONS\nParticipation in the MBSR program appears to be associated with improvements in trait and state mindfulness, psychological distress, and medical symptoms. Improvements in trait mindfulness and spirituality appear, in turn, to be associated with improvements in psychological and medical symptoms.",
"title": ""
},
{
"docid": "4ddf4cf69d062f7ea1da63e68c316f30",
"text": "The Di†use Infrared Background Experiment (DIRBE) on the Cosmic Background Explorer (COBE) spacecraft was designed primarily to conduct a systematic search for an isotropic cosmic infrared background (CIB) in 10 photometric bands from 1.25 to 240 km. The results of that search are presented here. Conservative limits on the CIB are obtained from the minimum observed brightness in all-sky maps at each wavelength, with the faintest limits in the DIRBE spectral range being at 3.5 km (lIl \\ 64 nW m~2 sr~1, 95% conÐdence level) and at 240 km nW m~2 sr~1, 95% conÐdence level). The (lIl\\ 28 bright foregrounds from interplanetary dust scattering and emission, stars, and interstellar dust emission are the principal impediments to the DIRBE measurements of the CIB. These foregrounds have been modeled and removed from the sky maps. Assessment of the random and systematic uncertainties in the residuals and tests for isotropy show that only the 140 and 240 km data provide candidate detections of the CIB. The residuals and their uncertainties provide CIB upper limits more restrictive than the dark sky limits at wavelengths from 1.25 to 100 km. No plausible solar system or Galactic source of the observed 140 and 240 km residuals can be identiÐed, leading to the conclusion that the CIB has been detected at levels of and 14^ 3 nW m~2 sr~1 at 140 and 240 km, respectively. The intelIl\\ 25 ^ 7 grated energy from 140 to 240 km, 10.3 nW m~2 sr~1, is about twice the integrated optical light from the galaxies in the Hubble Deep Field, suggesting that star formation might have been heavily enshrouded by dust at high redshift. The detections and upper limits reported here provide new constraints on models of the history of energy-releasing processes and dust production since the decoupling of the cosmic microwave background from matter. Subject headings : cosmology : observations È di†use radiation È infrared : general",
"title": ""
},
{
"docid": "7d62ae437a6b77e19f0d3292954a8471",
"text": "A numerical tool for the optimisation of the scantlings of a ship is extended by considering production cost, weight and moment of inertia in the objective function. A multi-criteria optimisation of a passenger ship is conducted to illustrate the analysis process. Pareto frontiers are obtained and results are verified with Bureau Veritas rules.",
"title": ""
},
{
"docid": "93bebbc1112dbfd34fce1b3b9d228f9a",
"text": "UNLABELLED\nThere has been no established qualitative system of interpretation for therapy response assessment using PET/CT for head and neck cancers. The objective of this study was to validate the Hopkins interpretation system to assess therapy response and survival outcome in head and neck squamous cell cancer patients (HNSCC).\n\n\nMETHODS\nThe study included 214 biopsy-proven HNSCC patients who underwent a posttherapy PET/CT study, between 5 and 24 wk after completion of treatment. The median follow-up was 27 mo. PET/CT studies were interpreted by 3 nuclear medicine physicians, independently. The studies were scored using a qualitative 5-point scale, for the primary tumor, for the right and left neck, and for overall assessment. Scores 1, 2, and 3 were considered negative for tumors, and scores 4 and 5 were considered positive for tumors. The Cohen κ coefficient (κ) was calculated to measure interreader agreement. Overall survival (OS) and progression-free survival (PFS) were analyzed by Kaplan-Meier plots with a Mantel-Cox log-rank test and Gehan Breslow Wilcoxon test for comparisons.\n\n\nRESULTS\nOf the 214 patients, 175 were men and 39 were women. There was 85.98%, 95.33%, 93.46%, and 87.38% agreement between the readers for overall, left neck, right neck, and primary tumor site response scores, respectively. The corresponding κ coefficients for interreader agreement between readers were, 0.69-0.79, 0.68-0.83, 0.69-0.87, and 0.79-0.86 for overall, left neck, right neck, and primary tumor site response, respectively. The sensitivity, specificity, positive predictive value, negative predictive value, and overall accuracy of the therapy assessment were 68.1%, 92.2%, 71.1%, 91.1%, and 86.9%, respectively. Cox multivariate regression analysis showed human papillomavirus (HPV) status and PET/CT interpretation were the only factors associated with PFS and OS. Among the HPV-positive patients (n = 123), there was a significant difference in PFS (hazard ratio [HR], 0.14; 95% confidence interval, 0.03-0.57; P = 0.0063) and OS (HR, 0.01; 95% confidence interval, 0.00-0.13; P = 0.0006) between the patients who had a score negative for residual tumor versus positive for residual tumor. A similar significant difference was observed in PFS and OS for all patients. There was also a significant difference in the PFS of patients with PET-avid residual disease in one site versus multiple sites in the neck (HR, 0.23; log-rank P = 0.004).\n\n\nCONCLUSION\nThe Hopkins 5-point qualitative therapy response interpretation criteria for head and neck PET/CT has substantial interreader agreement and excellent negative predictive value and predicts OS and PFS in patients with HPV-positive HNSCC.",
"title": ""
},
{
"docid": "86aaee95a4d878b53fd9ee8b0735e208",
"text": "The tensegrity concept has long been considered as a basis for lightweight and compact packaging deployable structures, but very few studies are available. This paper presents a complete design study of a deployable tensegrity mast with all the steps involved: initial formfinding, structural analysis, manufacturing and deployment. Closed-form solutions are used for the formfinding. A manufacturing procedure in which the cables forming the outer envelope of the mast are constructed by two-dimensional weaving is used. The deployment of the mast is achieved through the use of self-locking hinges. A stiffness comparison between the tensegrity mast and an articulated truss mast shows that the tensegrity mast is weak in bending.",
"title": ""
},
{
"docid": "046207a87b7b01f6bc12f08a195670b9",
"text": "Text normalization is the task of transforming lexical variants to their canonical forms. We model the problem of text normalization as a character-level sequence to sequence learning problem and present a neural encoder-decoder model for solving it. To train the encoder-decoder model, many sentences pairs are generally required. However, Japanese non-standard canonical pairs are scarce in the form of parallel corpora. To address this issue, we propose a method of data augmentation to increase data size by converting existing resources into synthesized non-standard forms using handcrafted rules. We conducted an experiment to demonstrate that the synthesized corpus contributes to stably train an encoder-decoder model and improve the performance of Japanese text normalization.",
"title": ""
},
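The handcrafted-rule augmentation described in the abstract above can be sketched as follows. The specific substitution rules below are invented toy examples for illustration, not the rules used in the paper; only the overall recipe (apply noisy rules to canonical sentences to obtain (non-standard, canonical) training pairs) is what the abstract describes.

```python
# Synthesize non-standard variants of canonical sentences to build training pairs
# for a character-level encoder-decoder. Rules and probabilities are placeholders.
import random

def elongate_marks(text, p=0.3):
    # Randomly stretch long-vowel marks and sentence-final punctuation.
    out = []
    for ch in text:
        out.append(ch)
        if ch in "ー!?" and random.random() < p:
            out.append(ch * random.randint(1, 3))
    return "".join(out)

def to_small_kana(text, p=0.3):
    # Swap some kana for their "small" variants, a common informal spelling.
    small = {"あ": "ぁ", "い": "ぃ", "う": "ぅ", "え": "ぇ", "お": "ぉ", "つ": "っ"}
    return "".join(small[ch] if ch in small and random.random() < p else ch for ch in text)

RULES = [elongate_marks, to_small_kana]

def synthesize_pairs(canonical_sentences, n_variants=3, seed=0):
    random.seed(seed)
    pairs = []
    for sent in canonical_sentences:
        for _ in range(n_variants):
            noisy = sent
            for rule in random.sample(RULES, k=len(RULES)):  # apply rules in random order
                noisy = rule(noisy)
            if noisy != sent:
                pairs.append((noisy, sent))  # (non-standard input, canonical target)
    return pairs

print(synthesize_pairs(["ありがとう!", "すごいですね?"]))
```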
{
"docid": "a5cd94446abfc46c6d5c4e4e376f1e0a",
"text": "Commitment problem in credit market and its eãects on economic growth are discussed. Completions of investment projects increase capital stock of the economy. These projects require credits which are ånanced by ånacial intermediaries. A simpliåed credit model of Dewatripont and Maskin is used to describe the ånancing process, in which the commitment problem or the \\soft budget constraint\" problem arises. However, in dynamic general equilibrium setup with endougenous determination of value and cost of projects, there arise multiple equilibria in the project ånancing model, namely reånancing equilirium and no-reånancing equilibrium. The former leads the economy to the stationary state with smaller capital stock level than the latter. Both the elimination of reånancing equilibrium and the possibility of \\Animal Spirits Cycles\" equilibrium are also discussed.",
"title": ""
},
{
"docid": "43831e29e62c574a93b6029409690bfe",
"text": "We present a convolutional network that is equivariant to rigid body motions. The model uses scalar-, vector-, and tensor fields over 3D Euclidean space to represent data, and equivariant convolutions to map between such representations. These SE(3)-equivariant convolutions utilize kernels which are parameterized as a linear combination of a complete steerable kernel basis, which is derived analytically in this paper. We prove that equivariant convolutions are the most general equivariant linear maps between fields over R. Our experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.",
"title": ""
},
{
"docid": "257eca5511b1657f4a3cd2adff1989f8",
"text": "The monitoring of volcanoes is mainly performed by sensors installed on their structures, aiming at recording seismic activities and reporting them to observatories to be later analyzed by specialists. However, due to the high volume of data continuously collected, the use of automatic techniques is an important requirement to support real time analyses. In this sense, a basic but challenging task is the classification of seismic activities to identify signals yielded by different sources as, for instance, the movement of magmatic fluids. Although there exists several approaches proposed to perform such task, they were mainly designed to deal with raw signals. In this paper, we present a 2D approach developed considering two main steps. Firstly, spectrograms for every collected signal are calculated by using Fourier Transform. Secondly, we set a deep neural network to discriminate seismic activities by analyzing the spectrogram shapes. As a consequence, our classifier provided outstanding results with accuracy rates greater than 95%.",
"title": ""
},
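The first step described in the abstract above, turning a 1-D seismic trace into a spectrogram image a convolutional classifier can consume, can be sketched in a few lines. The sampling rate, window length, and the synthetic signal below are placeholders, not values from the paper.

```python
# Short-time Fourier analysis of a synthetic trace, normalized as a single-channel image.
import numpy as np
from scipy.signal import spectrogram

fs = 100.0                                   # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)                 # one minute of synthetic "seismic" data
trace = np.sin(2 * np.pi * 3 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

# Frequencies x time bins, log-scaled as is typical for spectrogram inputs.
f, tt, Sxx = spectrogram(trace, fs=fs, nperseg=256, noverlap=128)
log_spec = np.log1p(Sxx)

# Normalize to [0, 1] so the array can be fed to a CNN as an image.
image = (log_spec - log_spec.min()) / (log_spec.max() - log_spec.min() + 1e-12)
print(image.shape)                           # (n_freq_bins, n_time_bins)
```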
{
"docid": "c6347c06d84051023baaab39e418fb65",
"text": "This paper presents a complete approach to a successful utilization of a high-performance extreme learning machines (ELMs) Toolbox for Big Data. It summarizes recent advantages in algorithmic performance; gives a fresh view on the ELM solution in relation to the traditional linear algebraic performance; and reaps the latest software and hardware performance achievements. The results are applicable to a wide range of machine learning problems and thus provide a solid ground for tackling numerous Big Data challenges. The included toolbox is targeted at enabling the full potential of ELMs to the widest range of users.",
"title": ""
},
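The core ELM solution referenced in the abstract above is commonly written as a closed-form least-squares problem: random, untrained hidden weights produce an activation matrix H, and the output weights are beta = pinv(H) T. The sketch below is a minimal numpy illustration under that standard formulation; the hyperparameters and toy data are placeholders.

```python
# Minimal ELM: random hidden layer plus pseudo-inverse output weights.
import numpy as np

def elm_train(X, T, n_hidden=256, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                  # closed-form least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: regress y = sin(x) from noisy samples.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X) + 0.05 * np.random.default_rng(1).normal(size=X.shape)
W, b, beta = elm_train(X, T)
print(float(np.mean((elm_predict(X, W, b, beta) - np.sin(X)) ** 2)))
```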
{
"docid": "b50ea06c20fb22d7060f08bc86d9d6ca",
"text": "The advent of the Social Web has provided netizens with new tools for creating and sharing, in a time- and cost-efficient way, their contents, ideas, and opinions with virtually the millions of people connected to the World Wide Web. This huge amount of information, however, is mainly unstructured as specifically produced for human consumption and, hence, it is not directly machine-processable. In order to enable a more efficient passage from unstructured information to structured data, aspect-based opinion mining models the relations between opinion targets contained in a document and the polarity values associated with these. Because aspects are often implicit, however, spotting them and calculating their respective polarity is an extremely difficult task, which is closer to natural language understanding rather than natural language processing. To this end, Sentic LDA exploits common-sense reasoning to shift LDA clustering from a syntactic to a semantic level. Rather than looking at word co-occurrence frequencies, Sentic LDA leverages on the semantics associated with words and multi-word expressions to improve clustering and, hence, outperform state-of-the-art techniques for aspect extraction.",
"title": ""
},
{
"docid": "ef7e0be7ec3af89c5f8f5a050c52ed9a",
"text": "We approach recognition in the framework of deformable shape matching, relying on a new algorithm for finding correspondences between feature points. This algorithm sets up correspondence as an integer quadratic programming problem, where the cost function has terms based on similarity of corresponding geometric blur point descriptors as well as the geometric distortion between pairs of corresponding feature points. The algorithm handles outliers, and thus enables matching of exemplars to query images in the presence of occlusion and clutter. Given the correspondences, we estimate an aligning transform, typically a regularized thin plate spline, resulting in a dense correspondence between the two shapes. Object recognition is handled in a nearest neighbor framework where the distance between exemplar and query is the matching cost between corresponding points. We show results on two datasets. One is the Caltech 101 dataset (Li, Fergus and Perona), a challenging dataset with large intraclass variation. Our approach yields a 45% correct classification rate in addition to localization. We also show results for localizing frontal and profile faces that are comparable to special purpose approaches tuned to faces.",
"title": ""
}
] |
scidocsrr
|
7145210831784609fa954b021f5bedad
|
Language-driven synthesis of 3D scenes from scene databases
|
[
{
"docid": "3f1a841a1ca29d94ee7a26a3fdd613aa",
"text": "We introduce a contextual descriptor which aims to provide a geometric description of the functionality of a 3D object in the context of a given scene. Differently from previous works, we do not regard functionality as an abstract label or represent it implicitly through an agent. Our descriptor, called interaction context or ICON for short, explicitly represents the geometry of object-to-object interactions. Our approach to object functionality analysis is based on the key premise that functionality should mainly be derived from interactions between objects and not objects in isolation. Specifically, ICON collects geometric and structural features to encode interactions between a central object in a 3D scene and its surrounding objects. These interactions are then grouped based on feature similarity, leading to a hierarchical structure. By focusing on interactions and their organization, ICON is insensitive to the numbers of objects that appear in a scene, the specific disposition of objects around the central object, or the objects' fine-grained geometry. With a series of experiments, we demonstrate the potential of ICON in functionality-oriented shape processing, including shape retrieval (either directly or by complementing existing shape descriptors), segmentation, and synthesis.",
"title": ""
}
] |
[
{
"docid": "8948409bbfe3e4d7a9384ef85383679e",
"text": "The security of today's Web rests in part on the set of X.509 certificate authorities trusted by each user's browser. Users generally do not themselves configure their browser's root store but instead rely upon decisions made by the suppliers of either the browsers or the devices upon which they run. In this work we explore the nature and implications of these trust decisions for Android users. Drawing upon datasets collected by Netalyzr for Android and ICSI's Certificate Notary, we characterize the certificate root store population present in mobile devices in the wild. Motivated by concerns that bloated root stores increase the attack surface of mobile users, we report on the interplay of certificate sets deployed by the device manufacturers, mobile operators, and the Android OS. We identify certificates installed exclusively by apps on rooted devices, thus breaking the audited and supervised root store model, and also discover use of TLS interception via HTTPS proxies employed by a market research company.",
"title": ""
},
{
"docid": "d8df2d714bb9a00f8fc953a5b3c1acdd",
"text": "CARBON nanotubes are predicted to have interesting mechanical properties—in particular, high stiffness and axial strength—as a result of their seamless cylindrical graphitic structure1–5. Their mechanical properties have so far eluded direct measurement, however, because of the very small dimensions of nanotubes. Here we estimate the Young's modulus of isolated nanotubes by measuring, in the transmission electron microscope, the amplitude of their intrinsic thermal vibrations. We find that carbon nanotubes have exceptionally high Young's moduli, in the terapascal (TPa) range. Their high stiffness, coupled with their low density, implies that nanotubes might be useful as nanoscale fibres in strong, lightweight composite materials.",
"title": ""
},
{
"docid": "098b9b80d27fddd6407ada74a8fd4590",
"text": "We have developed a 1.55-μm 40 Gbps electro-absorption modulator laser (EML)-based transmitter optical subassembly (TOSA) using a novel flexible printed circuit (FPC). The return loss at the junctions of the printed circuit board and the FPC, and of the FPC and the ceramic feedthrough connection was held better than 20 dB at up to 40 GHz by a newly developed three-layer FPC. The TOSA was fabricated and demonstrated a mask margin of >16% and a path penalty of <;0.63 dB for a 43 Gbps signal after 2.4-km SMF transmission over the entire case temperature range from -5° to 80 °C, demonstrating compliance with ITU-T G.693. These results are comparable to coaxial connector type EML modules. This TOSA is expected to be a strong candidate for 40 Gbps EML modules with excellent operating characteristics, economy, and a small footprint.",
"title": ""
},
{
"docid": "a60d79008bfb7cccee262667b481d897",
"text": "It is well known that utterances convey a great deal of information about the speaker in addition to their semantic content. One such type of information consists of cues to the speaker’s personality traits, the most fundamental dimension of variation between humans. Recent work explores the automatic detection of other types of pragmatic variation in text and conversation, such as emotion, deception, speaker charisma, dominance, point of view, subjectivity, opinion and sentiment. Personality affects these other aspects of linguistic production, and thus personality recognition may be useful for these tasks, in addition to many other potential applications. However, to date, there is little work on the automatic recognition of personality traits. This article reports experimental results for recognition of all Big Five personality traits, in both conversation and text, utilising both self and observer ratings of personality. While other work reports classification results, we experiment with classification, regression and ranking models. For each model, we analyse the effect of different feature sets on accuracy. Results show that for some traits, any type of statistical model performs significantly better than the baseline, but ranking models perform best overall. We also present an experiment suggesting that ranking models are more accurate than multi-class classifiers for modelling personality. In addition, recognition models trained on observed personality perform better than models trained using selfreports, and the optimal feature set depends on the personality trait. A qualitative analysis of the learned models confirms previous findings linking language and personality, while revealing many new linguistic markers.",
"title": ""
},
{
"docid": "ec19c40473bb1316b9390b6d7bcaae7f",
"text": "Online crowdfunding platforms like DonorsChoose.org and Kickstarter allow specific projects to get funded by targeted contributions from a large number of people. Critical for the success of crowdfunding communities is recruitment and continued engagement of donors. With donor attrition rates above 70%, a significant challenge for online crowdfunding platforms as well as traditional offline non-profit organizations is the problem of donor retention. We present a large-scale study of millions of donors and donations on DonorsChoose.org, a crowdfunding platform for education projects. Studying an online crowdfunding platform allows for an unprecedented detailed view of how people direct their donations. We explore various factors impacting donor retention which allows us to identify different groups of donors and quantify their propensity to return for subsequent donations. We find that donors are more likely to return if they had a positive interaction with the receiver of the donation. We also show that this includes appropriate and timely recognition of their support as well as detailed communication of their impact. Finally, we discuss how our findings could inform steps to improve donor retention in crowdfunding communities and non-profit organizations.",
"title": ""
},
{
"docid": "054b5be56ae07c58b846cf59667734fc",
"text": "Optical motion capture systems have become a widely used technology in various fields, such as augmented reality, robotics, movie production, etc. Such systems use a large number of cameras to triangulate the position of optical markers. The marker positions are estimated with high accuracy. However, especially when tracking articulated bodies, a fraction of the markers in each timestep is missing from the reconstruction. In this paper, we propose to use a neural network approach to learn how human motion is temporally and spatially correlated, and reconstruct missing markers positions through this model. We experiment with two different models, one LSTM-based and one time-window-based. Both methods produce state-of-the-art results, while working online, as opposed to most of the alternative methods, which require the complete sequence to be known. The implementation is publicly available at https://github.com/Svitozar/NN-for-Missing-Marker-Reconstruction.",
"title": ""
},
{
"docid": "b4dd6c9634e86845795bcbe32216ee44",
"text": "Several program analysis tools - such as plagiarism detection and bug finding - rely on knowing a piece of code's relative semantic importance. For example, a plagiarism detector should not bother reporting two programs that have an identical simple loop counter test, but should report programs that share more distinctive code. Traditional program analysis techniques (e.g., finding data and control dependencies) are useful, but do not say how surprising or common a line of code is. Natural language processing researchers have encountered a similar problem and addressed it using an n-gram model of text frequency, derived from statistics computed over text corpora.\n We propose and compute an n-gram model for programming languages, computed over a corpus of 2.8 million JavaScript programs we downloaded from the Web. In contrast to previous techniques, we describe a code n-gram as a subgraph of the program dependence graph that contains all nodes and edges reachable in n steps from the statement. We can count n-grams in a program and count the frequency of n-grams in the corpus, enabling us to compute tf-idf-style measures that capture the differing importance of different lines of code. We demonstrate the power of this approach by implementing a plagiarism detector with accuracy that beats previous techniques, and a bug-finding tool that discovered over a dozen previously unknown bugs in a collection of real deployed programs.",
"title": ""
},
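The tf-idf weighting over code n-grams described in the abstract above can be illustrated with a toy sketch. For simplicity this sketch treats an n-gram as a token n-gram rather than a subgraph of the program dependence graph, and the corpus and helper names are invented for the example.

```python
# Weight code n-grams by tf-idf so distinctive fragments score high and common
# boilerplate (e.g., a simple loop header) scores low.
import math
from collections import Counter

def ngrams(tokens, n=2):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def tfidf_scores(programs, n=2):
    # programs: list of token lists, one per program in the corpus.
    doc_freq = Counter()
    per_doc = []
    for toks in programs:
        grams = Counter(ngrams(toks, n))
        per_doc.append(grams)
        doc_freq.update(grams.keys())
    N = len(programs)
    scores = []
    for grams in per_doc:
        total = sum(grams.values()) or 1
        scores.append({g: (c / total) * math.log(N / doc_freq[g]) for g, c in grams.items()})
    return scores

corpus = [
    "for i in range ( n ) : total += x [ i ]".split(),
    "for i in range ( n ) : print ( i )".split(),
    "hash = ( hash * 31 + ord ( ch ) ) % mod".split(),
]
for s in tfidf_scores(corpus):
    print(max(s, key=s.get))   # most distinctive bigram of each program
```

A plagiarism detector built on this idea would then compare programs by their high-weight fragments rather than by ubiquitous low-weight ones.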
{
"docid": "41261cf72d8ee3bca4b05978b07c1c4f",
"text": "The association of Sturge-Weber syndrome with naevus of Ota is an infrequently reported phenomenon and there are only four previously described cases in the literature. In this paper we briefly review the literature regarding the coexistence of vascular and pigmentary naevi and present an additional patient with the association of the Sturge-Weber syndrome and naevus of Ota.",
"title": ""
},
{
"docid": "9badb6e864118f1782d86486f6df9ff3",
"text": "The genera Opechona Looss and Prodistomum Linton are redefined: the latter is re-established, its diagnostic character being the lack of a uroproct. Pharyngora Lebour and Neopechona Stunkard are considered synonyms of Opechona, and Acanthocolpoides Travassos, Freitas & Bührnheim is considered a synonym of Prodistomum. Opechona bacillaris (Molin) and Prodistomum [originally Distomum] polonii (Molin) n. comb. are described from the NE Atlantic Ocean. Separate revisions with keys to Opechona, Prodistomum and ‘Opechona-like’ species incertae sedis are presented. Opechona is considered to contain: O. bacillaris (type-species), O. alaskensis Ward & Fillingham, O. [originally Neopechona] cablei (Stunkard) n. comb., O. chloroscombri Nahhas & Cable, O. occidentalis Montgomery, O. parvasoma Ching sp. inq., O. pharyngodactyla Manter, O. [originally Distomum] pyriforme (Linton) n. comb. and O. sebastodis (Yamaguti). Prodistomum includes: P. gracile Linton (type-species), P. [originally Opechona] girellae (Yamaguti) n. comb., P. [originally Opechona] hynnodi (Yamaguti) n. comb., P. [originally Opechona] menidiae (Manter) n. comb., P. [originally Pharyngora] orientalis (Layman) n. comb., P. polonii and P. [originally Opechona] waltairensis (Madhavi) n. comb. Some species are considered ‘Opechona-like’ species incertae sedis: O. formiae Oshmarin, O. siddiqii Ahmad, 1986 nec 1984, O. mohsini Ahmad, O. magnatestis Gaevskaya & Kovaleva, O. vinodae Ahmad, O. travassosi Ahmad, ‘Lepidapedon’ nelsoni Gupta & Mehrotra and O. siddiqi Ahmad, 1984 nec 1986. The related genera Cephalolepidapedon Yamaguti and Clavogalea Bray and the synonymies of their constituent species are discussed, and further comments are made on related genera and misplaced species. The new combination Clavogalea [originally Stephanostomum] trachinoti (Fischthal & Thomas) is made. The taxonomy, life-history, host-specificity and zoogeography of the genera are briefly discussed.",
"title": ""
},
{
"docid": "b01c1a2eb508ca1f4b2de3978b2fd821",
"text": "The chapter includes a description via examples of the: objectives of integrating programming and robotics in elementary school; the pedagogical infrastructure, including a description of constructionism and computational thinking; the hardware-software support of the projects with Scratch and WeDo; and the academic support to teachers and students with LearnScratch.org. Programming and Robotics are areas of knowledge that have been historically the domain of courses in higher education and more recently in secondary education and professional studies. Today, as a result of technological advances, we have access to graphic platforms of programming, specially designed for younger students, as well as construction kits with simple sensors and actuators that can be programmed from a computer.",
"title": ""
},
{
"docid": "04aee863f06b448b99fc0d7d9f829e48",
"text": "The need of a water quality monitoring system is crucial for aquaculture and environmental control evaluation. This paper focuses on the development of the Water Quality (WQ) monitoring module that consists of hardware and software components. It highlights the details of the hardware components and the algorithm as well as the software that is connected to the cloud. There are many works on storing environmental data in cloud storage in Malaysia. The new platform to date for the Internet of Things (IoT) and cloud database is Favoriot. Favoriot is a platform for IoT and machine-to-machine (M2M) development. For this project, Favoriot platform is used for real time data. The self-healing algorithm is design to reduce human intervention and continuous data collected in the remote areas. The result shows that the self-healing algorithm is able to recover itself without physical reseting, in case during distruption of wireless service connection failure.",
"title": ""
},
{
"docid": "9118de2f5c7deebb9c3c6175c0b507b2",
"text": "The integration of facts derived from information extraction systems into existing knowledge bases requires a system to disambiguate entity mentions in the text. This is challenging due to issues such as non-uniform variations in entity names, mention ambiguity, and entities absent from a knowledge base. We present a state of the art system for entity disambiguation that not only addresses these challenges but also scales to knowledge bases with several million entries using very little resources. Further, our approach achieves performance of up to 95% on entities mentioned from newswire and 80% on a public test set that was designed to include challenging queries.",
"title": ""
},
{
"docid": "be4d9686e2730b67a383d730c1761e8b",
"text": "Many factors have been cited for poor performance of students in CS1. To investigate how assessment mechanisms may impact student performance, nine experienced CS1 instructors reviewed final examinations from a variety of North American institutions. The majority of the exams reviewed were composed predominantly of high-value, integrative code-writing questions, and the reviewers regularly underestimated the number of CS1 concepts required to answer these questions. An evaluation of the content and cognitive requirements of individual questions suggests that in order to succeed, students must internalize a large amount of CS1 content. This emphasizes the need for focused assessment techniques to provide students with the opportunity to demonstrate their knowledge.",
"title": ""
},
{
"docid": "8ff481b3b35b74356d876c28513dc703",
"text": "This paper describes the ScratchJr research project, a collaboration between Tufts University's Developmental Technologies Research Group, MIT's Lifelong Kindergarten Group, and the Playful Invention Company. Over the past five years, dozens of ScratchJr prototypes have been designed and studied with over 300 K-2nd grade students, teachers and parents. ScratchJr allows children ages 5 to 7 years to explore concepts of computer programming and digital content creation in a safe and fun environment. This paper describes the progression of major prototypes leading to the current public version, as well as the educational resources developed for use with ScratchJr. Future directions and educational implications are also discussed.",
"title": ""
},
{
"docid": "e79083e2619792045b5d0536b6a003e0",
"text": "Intensive research efforts in the field of Parkinson's disease (PD) are focusing on identifying reliable biomarkers which possibly help physicians in predicting disease onset, diagnosis, and progression as well as evaluating the response to disease-modifying treatments. Given that abnormal alpha-synuclein (α-syn) accumulation is a primary component of PD pathology, this protein has attracted considerable interest as a potential biomarker for PD. Alpha-synuclein can be detected in several body fluids, including plasma, where it can be found as free form or in association with exosomes, small membranous vesicles secreted by virtually all cell types. Together with α-syn accumulation, lysosomal dysfunctions seem to play a central role in the pathogenesis of PD, given the crucial role of lysosomes in the α-syn degradation. In particular, heterozygous mutations in the GBA1 gene encoding lysosomal enzyme glucocerebrosidase (GCase) are currently considered as the most important risk factor for PD. Different studies have found that GCase deficiency leads to accumulation of α-syn; whereas at the same time, increased α-syn may inhibit GCase function, thus inducing a bidirectional pathogenic loop. In this study, we investigated whether changes in plasma total and exosome-associated α-syn could correlate with disease status and clinical parameters in PD and their relationship with GCase activity. We studied 39 PD patients (mean age: 65.2 ± 8.9; men: 25), without GBA1 mutations, and 33 age-matched controls (mean age: 61.9 ± 6.2; men: 15). Our results showed that exosomes from PD patients contain a greater amount of α-syn compared to healthy subjects (25.2 vs. 12.3 pg/mL, p < 0.001) whereas no differences were found in plasma total α-syn levels (15.7 vs. 14.8 ng/mL, p = 0.53). Moreover, we highlighted a significant increase of plasma exosomal α-syn/total α-syn ratio in PD patients (1.69 vs. 0.89, p < 0.001), which negatively correlates with disease severity (p = 0.014). Intriguingly, a significant inverse correlation between GCase activity and this ratio in PD subjects was found (p = 0.006). Additional and large-scale studies comparing GCase activity and pathological protein levels will be clearly needed to corroborate these data and determine whether the association between key players in the lysosomal system and α-syn can be used as diagnostic or prognostic biomarkers for PD.",
"title": ""
},
{
"docid": "c8967be119df778e98954a7e94bee4ca",
"text": "We consider the problem of predicting real valued scores for reviews based on various categories of features of the review text, and other metadata associated with the review, with the purpose of generating a rank for a given list of reviews. For this task, we explore various machine learning models and evaluate the effectiveness of them through a well known measure for goodness of fit. We also explored regularization methods to reduce variance in the model. Random forests was the most effective regressor in the end, outperforming all the other models that we have tried.",
"title": ""
},
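The ranking-by-regression setup described in the abstract above can be sketched with scikit-learn. The bag-of-words features, toy reviews, and hyperparameters below are placeholders, not those of the original study; the point is only the pipeline shape: featurize text, fit a random forest regressor, evaluate goodness of fit, and rank by predicted score.

```python
# Predict real-valued review scores with a random forest, then rank by prediction.
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

reviews = ["great food and friendly staff", "terrible service, cold food",
           "decent value, slow delivery", "amazing experience, will return",
           "mediocre at best", "worst meal I have had in years"]
scores = [4.5, 1.0, 3.0, 5.0, 2.5, 0.5]

X = TfidfVectorizer().fit_transform(reviews)          # simple text features
X_tr, X_te, y_tr, y_te = train_test_split(X, scores, test_size=0.33, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R^2 (goodness of fit):", r2_score(y_te, pred))
# Ranking a list of reviews then amounts to sorting them by the predicted score.
```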
{
"docid": "d0962cd3c6f5f1c5c932aa635e47e024",
"text": "This paper presents an extension to Bitcoin’s script language enabling covenants, a primitive that allows transactions to restrict how the value they transfer is used in the future. Covenants expand the set of financial instruments expressible in Bitcoin, and enable new powerful and novel use cases. We illustrate two novel security constructs built using covenants. The first, vaults, focuses on improving the security of private cryptographic keys. Historically, maintaining these keys securely and reliably has been a critical vulnerability for Bitcoin users. We show how covenants enable vaults, which disincentivize key theft by preventing an attacker from gaining full access to stolen funds. The second construct, poison transactions, is a generally useful mechanism for penalizing double-spending attacks. Bitcoin-NG, a protocol that has been recently proposed to improve Bitcoin’s throughput, latency and overall scalability, requires this feature. We show how covenants enable poison transactions, and detail how Bitcoin-NG can be implemented progressively as an overlay on top of the Bitcoin blockchain.",
"title": ""
},
{
"docid": "2e2e8219b7870529e8ca17025190aa1b",
"text": "M multitasking competes with television advertising for consumers’ attention, but may also facilitate immediate and measurable response to some advertisements. This paper explores whether and how television advertising influences online shopping. We construct a massive data set spanning $3.4 billion in spending by 20 brands, measures of brands’ website traffic and transactions, and ad content measures for 1,224 commercials. We use a quasi-experimental design to estimate whether and how TV advertising influences changes in online shopping within two-minute pre/post windows of time. We use nonadvertising competitors’ online shopping in a difference-in-differences approach to measure the same effects in two-hour windows around the time of the ad. The findings indicate that television advertising does influence online shopping and that advertising content plays a key role. Action-focus content increases direct website traffic and sales. Information-focus and emotion-focus ad content actually reduce website traffic while simultaneously increasing purchases, with a positive net effect on sales for most brands. These results imply that brands seeking to attract multitaskers’ attention and dollars must select their advertising copy carefully.",
"title": ""
},
{
"docid": "0580342f7efb379fc417d2e5e48c4b73",
"text": "The use of System Dynamics Modeling in Supply Chain Management has only recently re-emerged after a lengthy slack period. Current research on System Dynamics Modelling in supply chain management focuses on inventory decision and policy development, time compression, demand amplification, supply chain design and integration, and international supply chain management. The paper first gives an overview of recent research work in these areas, followed by a discussion of research issues that have evolved, and presents a taxonomy of research and development in System Dynamics Modelling in supply chain management.",
"title": ""
}
] |
scidocsrr
|
d50fcaf12b57ce6801b8d0eea3d9052e
|
NASA-TASK LOAD INDEX ( NASA-TLX ) ; 20 YEARS LATER
|
[
{
"docid": "c9dd964f5421171d4302d1b159c2b415",
"text": "The results of a multi-year research program to identify the factors associated with variations in subjective workload within and between different types of tasks are reviewed. Subjective evaluations of 10 workload-related factors were obtained from 16 different experiments. The experimental tasks included simple cognitive and manual control tasks, complex laboratory and supervisory control tasks, and aircraft simulation. Task-, behavior-, and subject-related correlates of subjective workload experiences varied as a function of difficulty manipulations within experiments, different sources of workload between experiments, and individual differences in workload definition. A multi-dimensional rating scale is proposed in which information about the magnitude and sources of six workload-related factors are combined to derive a sensitive and reliable estimate of workload. .",
"title": ""
}
] |
[
{
"docid": "6b97ad3fc20e56f28ae5bf7c6fd0eb57",
"text": "We propose a new model of steganography based on a list of pseudo-randomly sorted sequences of characters. Given a list L of m columns containing n distinct strings each, with low or no semantic relationship between columns taken two by two, and a secret message s ∈ {0, 1}∗, our model embeds s in L block by block, by generating, for each column of L, a permutation number and by reordering strings contained in it according to that number. Where, letting l be average bit length of a string, the embedding capacity is given by [(m − 1) ∗ log2(n! − 1)/n ∗ l]. We’ve shown that optimal efficiency of the method can be obtained with the condition that (n >> l). The results which has been obtained by experiments, show that our model performs a better hiding process than some of the important existing methods, in terms of hiding capacity.",
"title": ""
},
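The permutation-number idea behind the capacity figure above can be made concrete with a small sketch: a block of the secret message is read as an integer smaller than n!, mapped to a permutation via the factorial number system, and used to reorder a column's n strings; the receiver recovers the integer by comparing the received order with an agreed canonical (sorted) order. The function names and the example column are illustrative, not from the paper.

```python
# Encode an integer < n! as a permutation of a column's strings and decode it back.
from math import factorial

def int_to_permutation(value, items):
    items = sorted(items)                  # canonical order shared by both parties
    n = len(items)
    assert 0 <= value < factorial(n)
    perm = []
    for i in range(n, 0, -1):
        idx, value = divmod(value, factorial(i - 1))
        perm.append(items.pop(idx))        # pick the idx-th remaining string
    return perm

def permutation_to_int(perm):
    items = sorted(perm)
    value = 0
    for i, s in enumerate(perm):
        idx = items.index(s)
        value += idx * factorial(len(perm) - 1 - i)
        items.pop(idx)
    return value

column = ["apple", "pear", "plum", "fig", "kiwi"]
secret_block = 93                          # must be < 5! = 120, i.e. ~log2(n!) bits per column
stego = int_to_permutation(secret_block, column)
print(stego, permutation_to_int(stego) == secret_block)
```

Each reordered column thus carries roughly log2(n!) bits, which is where the per-column capacity in the abstract comes from.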
{
"docid": "306a833c0130678e1b2ece7e8b824d5e",
"text": "In many natural languages, there are clear syntactic and/or intonational differences between declarative sentences, which are primarily used to provide information, and interrogative sentences, which are primarily used to request information. Most logical frameworks restrict their attention to the former. Those that are concerned with both usually assume a logical language that makes a clear syntactic distinction between declaratives and interrogatives, and usually assign different types of semantic values to these two types of sentences. A different approach has been taken in recent work on inquisitive semantics. This approach does not take the basic syntactic distinction between declaratives and interrogatives as its starting point, but rather a new notion of meaning that captures both informative and inquisitive content in an integrated way. The standard way to treat the logical connectives in this approach is to associate them with the basic algebraic operations on these new types of meanings. For instance, conjunction and disjunction are treated as meet and join operators, just as in classical logic. This gives rise to a hybrid system, where sentences can be both informative and inquisitive at the same time, and there is no clearcut division between declaratives and interrogatives. It may seem that these two general approaches in the existing literature are quite incompatible. The main aim of this paper is to show that this is not the case. We develop an inquisitive semantics for a logical language that has a clearcut division between declaratives and interrogatives. We show that this language coincides in expressive power with the hybrid language that is standardly assumed in inquisitive semantics, we establish a sound and complete axiomatization for the associated logic, and we consider a natural enrichment of the system with presuppositional interrogatives.",
"title": ""
},
{
"docid": "73769f4540c326533fb78b8c48684833",
"text": "BACKGROUND\nThe importance of findings derived from syntheses of qualitative research has been increasingly acknowledged. Findings that arise from qualitative syntheses inform questions of practice and policy in their own right and are commonly used to complement findings from quantitative research syntheses. The GRADE approach has been widely adopted by international organisations to rate the quality and confidence of the findings of quantitative systematic reviews. To date, there has been no widely accepted corresponding approach to assist health care professionals and policy makers in establishing confidence in the synthesised findings of qualitative systematic reviews.\n\n\nMETHODS\nA methodological group was formed develop a process to assess the confidence in synthesised qualitative research findings and develop a Summary of Findings tables for meta-aggregative qualitative systematic reviews.\n\n\nRESULTS\nDependability and credibility are two elements considered by the methodological group to influence the confidence of qualitative synthesised findings. A set of critical appraisal questions are proposed to establish dependability, whilst credibility can be ranked according to the goodness of fit between the author's interpretation and the original data. By following the processes outlined in this article, an overall ranking can be assigned to rate the confidence of synthesised qualitative findings, a system we have labelled ConQual.\n\n\nCONCLUSIONS\nThe development and use of the ConQual approach will assist users of qualitative systematic reviews to establish confidence in the evidence produced in these types of reviews and can serve as a practical tool to assist in decision making.",
"title": ""
},
{
"docid": "d7573e7b3aac75b49132076ce9fc83e0",
"text": "The prevalent use of social media produces mountains of unlabeled, high-dimensional data. Feature selection has been shown effective in dealing with high-dimensional data for efficient data mining. Feature selection for unlabeled data remains a challenging task due to the absence of label information by which the feature relevance can be assessed. The unique characteristics of social media data further complicate the already challenging problem of unsupervised feature selection, (e.g., part of social media data is linked, which makes invalid the independent and identically distributed assumption), bringing about new challenges to traditional unsupervised feature selection algorithms. In this paper, we study the differences between social media data and traditional attribute-value data, investigate if the relations revealed in linked data can be used to help select relevant features, and propose a novel unsupervised feature selection framework, LUFS, for linked social media data. We perform experiments with real-world social media datasets to evaluate the effectiveness of the proposed framework and probe the working of its key components.",
"title": ""
},
{
"docid": "fb861d63ceba44dcbc713181b269c8a8",
"text": "MiloviC, B. and v. RadojeviC, 2015. application of data mining in agriculture. Bulg. J. Agric. Sci., 21: 26-34 Today, agricultural organizations work with large amounts of data. Processing and retrieval of significant data in this abundance of agricultural information is necessary. Utilization of information and communications technology enables automation of extracting significant data in an effort to obtain knowledge and trends, which enables the elimination of manual tasks and easier data extraction directly from electronic sources, transfer to secure electronic system of documentation which will enable production cost reduction, higher yield and higher market price. Data mining in addition to information about crops enables agricultural enterprises to predict trends about customer’s conditions or their behavior, which is achieved by analyzing data from different perspectives and finding connections and relationships in seemingly unrelated data. Raw data of agricultural enterprises are very ample and diverse. it is necessary to collect and store them in an organized form, and their integration enables the creation of agricultural information system. Data mining in agriculture provides many opportunities for exploring hidden patterns in these collections of data. These patterns can be used to determine the condition of customers in agricultural organizations.",
"title": ""
},
{
"docid": "4836103098abb22aaba8decc3d1de9e4",
"text": "Porcelain laminate veneers have been a common treatment strategy in dental clinics. It is a conservative method for treatment of esthetic and functional problems in anterior region of oral cavity. Wide range of dental ceramics is now available on market for fabrication of laminate veneers. Clinician should have enough knowledge regarding the composition and properties of these materials in order to be able to choose the appropriate one according to clinical situations.",
"title": ""
},
{
"docid": "bdb3bbeb2bdeeca00c29c4a47b25589b",
"text": "Bicondylar tibial plateau fractures involving four articular quadrants are severe and complex injuries, and they remain a challenging problem in orthopaedic trauma. The aim of this study was to introduce a new treatment protocol with dual-incision and multi-plate fixation in the floating supine patient position as well as to report the preliminary clinical results. From January 2006 to December 2011, 16 consecutive patients with closed bicondylar four-quadrant tibial plateau fractures (Schatzker type VI, OTA/AO 41C2/3) were treated with posteromedial inverted L-shaped and anterolateral incisions. With the posteromedial approach, three quadrants (posteromedial, anteromedial and posterolateral) can be exposed, reduced and fixed with multiple small antiglide plates and short screws in an enclosure pattern. With the anterolateral approach, after articular elevation and bone substitute grafting, a strong locking plate with long screws to the medial cortex is used to raft-buttress the reduced lateral plateau fracture, hold the entire reconstructed tibial condyles together, and contact the condyles with the tibial shaft. All patients were encouraged to exercise knee motion at an early stage. The outcome was evaluated clinically and radiologically after a minimum two-year follow-up. The average operation time was 98 ± 26 minutes (range 70–128) and the average duration of hospitalization was 29 ± 8.6 days (range 20–41). Three cases used five plates, nine cases used four plates, and four cases used three plates. All patients were followed for a mean of 28.7 ± 6.1 months (range 26–38). Fifteen incisions healed initially, while one patient developed a medial wound dehiscence and was successfully managed by debridement. All patients achieved radiological fracture union after an average of 20.2 weeks. At the two-year follow up, the average knee range of motion (ROM) was 98° ± 13.7 (range 88–125°), with a Hospital for Special Surgery (HSS) knee score of 87.7 ± 10.3 (range 75–95), and SMFA score of 21.3 ± 8.6 (range 12–33). For bicondylar four-quadrant tibial plateau fractures, the treatment protocol of multiple medial-posterior small plates combined with a lateral strong locking plate through dual incisions can provide stable fracture fixation to allow for early stage rehabilitation. Good clinical outcomes can be anticipated.",
"title": ""
},
{
"docid": "319f681b2956c058bd7777f0372c7e2c",
"text": "We present the data model, architecture, and evaluation of LightDB, a database management system designed to efficiently manage virtual, augmented, and mixed reality (VAMR) video content. VAMR video differs from its two-dimensional counterpart in that it is spherical with periodic angular dimensions, is nonuniformly and continuously sampled, and applications that consume such videos often have demanding latency and throughput requirements. To address these challenges, LightDB treats VAMR video data as a logically-continuous six-dimensional light field. Furthermore, LightDB supports a rich set of operations over light fields, and automatically transforms declarative queries into executable physical plans. We have implemented a prototype of LightDB and, through experiments with VAMR applications in the literature, we find that LightDB offers up to 4× throughput improvements compared with prior work. PVLDB Reference Format: Brandon Haynes, Amrita Mazumdar, Armin Alaghi, Magdalena Balazinska, Luis Ceze, Alvin Cheung. LightDB: A DBMS for Virtual Reality Video. PVLDB, 11 (10): 1192-1205, 2018. DOI: https://doi.org/10.14778/3231751.3231768",
"title": ""
},
{
"docid": "7a8ded6daecbee4492f19ef85c92b0fd",
"text": "Sleep problems bave become epidemic aod traditional research has discovered many causes of poor sleep. The purpose of this study was to complement existiog research by using a salutogenic or health origins framework to investigate the correlates of good sleep. The aoalysis for this study used the National College Health Assessment data that included 54,111 participaots at 71 institutions. Participaots were raodomly selected or were in raodomly selected classrooms. Results of these aoalyses indicated that males aod females who reported \"good sleep\" were more likely to have engaged regularly in physical activity, felt less exhausted, were more likely to have a healthy Body Mass Index (BMI), aod also performed better academically. In addition, good male sleepers experienced less anxietY aod had less back pain. Good female sleepers also had fewer abusive relationships aod fewer broken bones, were more likely to have been nonsmokers aod were not binge drinkers. Despite the limitations of this exploratory study, these results are compelling, however they suggest the need for future research to clarify the identified relationships.",
"title": ""
},
{
"docid": "7c829563e98a6c75eb9b388bf0627271",
"text": "Research in learning analytics and educational data mining has recently become prominent in the fields of computer science and education. Most scholars in the field emphasize student learning and student data analytics; however, it is also important to focus on teaching analytics and teacher preparation because of their key roles in student learning, especially in K-12 learning environments. Nonverbal communication strategies play an important role in successful interpersonal communication of teachers with their students. In order to assist novice or practicing teachers with exhibiting open and affirmative nonverbal cues in their classrooms, we have designed a multimodal teaching platform with provisions for online feedback. We used an interactive teaching rehearsal software, TeachLivE, as our basic research environment. TeachLivE employs a digital puppetry paradigm as its core technology. Individuals walk into this virtual environment and interact with virtual students displayed on a large screen. They can practice classroom management, pedagogy and content delivery skills with a teaching plan in the TeachLivE environment. We have designed an experiment to evaluate the impact of an online nonverbal feedback application. In this experiment, different types of multimodal data have been collected during two experimental settings. These data include talk-time and nonverbal behaviors of the virtual students, captured in log files; talk time and full body tracking data of the participant; and video recording of the virtual classroom with the participant. 34 student teachers participated in this 30-minute experiment. In each of the settings, the participants were provided with teaching plans from which they taught. All the participants took part in both of the experimental settings. In order to have a balanced experiment design, half of the participants received nonverbal online feedback in their first session and the other half received this feedback in the second session. A visual indication was used for feedback each time the participant exhibited a closed, defensive posture. Based on recorded full-body tracking data, we observed that only those who received feedback in their first session demonstrated a significant number of open postures in the session containing no feedback. However, the post-questionnaire information indicated that all participants were more mindful of their body postures while teaching after they had participated in the study.",
"title": ""
},
{
"docid": "2e07ca60f1b720c94eed8e9ca76afbdd",
"text": "This paper is concerned with the problem of how to better exploit 3D geometric information for dense semantic image labeling. Existing methods often treat the available 3D geometry information (e.g., 3D depth-map) simply as an additional image channel besides the R-G-B color channels, and apply the same technique for RGB image labeling. In this paper, we demonstrate that directly performing 3D convolution in the framework of a residual connected 3D voxel top-down modulation network can lead to superior results. Specifically, we propose a 3D semantic labeling method to label outdoor street scenes whenever a dense depth map is available. Experiments on the “Synthia” and “Cityscape” datasets show our method outperforms the state-of-the-art methods, suggesting such a simple 3D representation is effective in incorporating 3D geometric information.",
"title": ""
},
{
"docid": "22724325cdadd29a0d41498a44ab7aca",
"text": "INTRODUCTION: Traumatic loss of teeth in the esthetic zone commonly results in significant loss of buccal bone. This leads to reduced esthetics, problems with phonetics and reduction in function. Single tooth replacement has become an indication for implant-based restoration. In case of lack of bone volume the need of surgical reconstruction of the alveolar ridge is warranted. Several bone grafting techniques have been described to ensure sufficient bone volume for implantation. OBJECTIVES: Evaluation of using the zygomatic buttress as an intraoral bone harvesting donor site for pre-implant grafting. MATERIALS AND METHODS: Twelve patients were selected with limited alveolar ridge defect in the esthetic zone that needs bone grafting procedure prior to dental implants. Patients were treated using a 2-stage technique where bone blocks harvested from the zygomatic buttress region were placed as onlay grafts and fixed with osteosynthesis micro screws. After 4 months of healing, screws were removed for implant placement RESULTS: Harvesting of 12 bone blocks were performed for all patients indicating a success rate of 100% for the zygomatic buttress area as a donor site. Final rehabilitation with dental implants was possible in 11 of 12 patients, yielding a success rate of 91.6%. Three patients (25%) had postoperative complications at the donor site and one patient (8.3%) at the recipient site. The mean value of bone width pre-operatively was 3.64 ± .48 mm which increased to 5.47 ± .57 mm post-operatively, the increase in mean value of bone width was statistically significant (p < 0.001). CONCLUSIONS: Harvesting of intraoral bone blocks from the zygomatic buttress region is an effective and safe method to treat localized alveolar ridge defect before implant placement.",
"title": ""
},
{
"docid": "c58aaa7e1b197a1ee95fb343b0de8664",
"text": "Natural language understanding (NLU) is an important module of spoken dialogue systems. One of the difficulties when it comes to adapting NLU to new domains is the high cost of constructing new training data for each domain. To reduce this cost, we propose a zero-shot learning of NLU that takes into account the sequential structures of sentences together with general question types across different domains. Experimental results show that our methods achieve higher accuracy than baseline methods in two completely different domains (insurance and sightseeing).",
"title": ""
},
{
"docid": "e016c72bf2c3173d5c9f4973d03ab380",
"text": "SDN controllers demand tight performance guarantees over the control plane actions performed by switches. For example, traffic engineering techniques that frequently reconfigure the network require guarantees on the speed of reconfiguring the network. Initial experiments show that poor performance of Ternary Content-Addressable Memory (TCAM) control actions (e.g., rule insertion) can inflate application performance by a factor of 2x! Yet, modern switches provide no guarantees for these important control plane actions -- inserting, modifying, or deleting rules.\n In this paper, we present the design and evaluation of Hermes, a practical and immediately deployable framework that offers a novel method for partitioning and optimizing switch TCAM to enable performance guarantees. Hermes builds on recent studies on switch performance and provides guarantees by trading-off a nominal amount of TCAM space for assured performance. We evaluated Hermes using large-scale simulations. Our evaluations show that with less than 5% overheads, Hermes provides 5ms insertion guarantees that translates into an improvement of application level metrics by up to 80%. Hermes is more than 50% better than existing state of the art techniques and provides significant improvement for traditional networks running BGP.",
"title": ""
},
{
"docid": "9ae435f5169e867dc9d4dc0da56ec9fb",
"text": "Renewable energy is currently the main direction of development of electric power. Because of its own characteristics, the reliability of renewable energy generation is low. Renewable energy generation system needs lots of energy conversion devices which are made of power electronic devices. Too much power electronic components can damage power quality in microgrid. High Frequency AC (HFAC) microgrid is an effective way to solve the problems of renewable energy generation system. Transmitting electricity by means of HFAC is a novel idea in microgrid. Although the HFAC will cause more loss of power, it can improve the power quality in microgrid. HFAC can also reduce the impact of fluctuations of renewable energy in microgrid. This paper mainly simulates the HFAC with Matlab/Simulink and analyzes the feasibility of HFAC in microgrid.",
"title": ""
},
{
"docid": "11c3b4c63bb9cdc19f542bb477cca191",
"text": "Although there are many motion planning techniques, there is no single one that performs optimally in every environment for every movable object. Rather, each technique has different strengths and weaknesses which makes it best-suited for particular types of situations. Also, since a given environment can consist of vastly different regions, there may not even be a single planner that is well suited for the problem. Ideally, one would use a suite of planners in concert to solve the problem by applying the best-suited planner in each region. In this paper, we propose an automated framework for feature-sensitive motion planning. We use a machine learning approach to characterize and partition C-space into (possibly overlapping) regions that are well suited to one of the planners in our library of roadmap-based motion planning methods. After the best-suited method is applied in each region, their resulting roadmaps are combined to form a roadmap of the entire planning space. We demonstrate on a range of problems that our proposed feature-sensitive approach achieves results superior to those obtainable by any of the individual planners on their own. “A Machine Learning Approach for ...”, Morales et al. TR04-001, Parasol Lab, Texas A&M, February 2004 1",
"title": ""
},
{
"docid": "efa4f154549c81a31421d32ad44267b9",
"text": "PURPOSE OF REVIEW\nDespite the American public following recommendations to decrease absolute dietary fat intake and specifically decrease saturated fat intake, we have seen a dramatic rise over the past 40 years in the rates of non-communicable diseases associated with obesity and overweight, namely cardiovascular disease. The development of the diet-heart hypothesis in the mid twentieth century led to faulty but long-held beliefs that dietary intake of saturated fat led to heart disease. Saturated fat can lead to increased LDL cholesterol levels, and elevated plasma cholesterol levels have been shown to be a risk factor for cardiovascular disease; however, the correlative nature of their association does not assign causation.\n\n\nRECENT FINDINGS\nAdvances in understanding the role of various lipoprotein particles and their atherogenic risk have been helpful for understanding how different dietary components may impact CVD risk. Numerous meta-analyses and systematic reviews of both the historical and current literature reveals that the diet-heart hypothesis was not, and still is not, supported by the evidence. There appears to be no consistent benefit to all-cause or CVD mortality from the reduction of dietary saturated fat. Further, saturated fat has been shown in some cases to have an inverse relationship with obesity-related type 2 diabetes. Rather than focus on a single nutrient, the overall diet quality and elimination of processed foods, including simple carbohydrates, would likely do more to improve CVD and overall health. It is in the best interest of the American public to clarify dietary guidelines to recognize that dietary saturated fat is not the villain we once thought it was.",
"title": ""
},
{
"docid": "f63993e721a16ac0a06f0ffb3c01ed5d",
"text": "This paper explores temporary identities on social media platforms and individuals' uses of these identities with respect to their perceptions of anonymity. Given the research on multiple profile maintenance, little research has examined the role that some social media platforms play in affording users with temporary identities. Further, most of the research on anonymity stops short of the concept of varying perceptions of anonymity. This paper builds on these research areas by describing the phenomenon of temporary \"throwaway accounts\" and their uses on reddit.com, a popular social news site. In addition to ethnographic trace analysis to examine the contexts in which throwaway accounts are adopted, this paper presents a predictive model that suggests that perceptions of anonymity significantly shape the potential uses of throwaway accounts and that women are much more likely to adopt temporary identities than men.",
"title": ""
},
{
"docid": "5794c31579595f8267bbad9278fe5fd2",
"text": "Designed based on the underactuated mechanism, HIT/DLR Prosthetic Hand is a multi-sensory flve-flngered bio- prosthetic hand. Similarly with adult's hand, it is simple constructed and comprises 13 joints. Three motors actuate the thumb, index finger and the other three fingers each. Actuated by a motor, the thumb can move along cone surface, which resembles human thumb and is superior in the appearance. Driven by another motor and transmitted by springs, the mid finger, ring finger and little finger can envelop objects with complex shape. The appearance designation and sensory system are introduced. The grasp experiments are presented in detail. The hand has been greatly improved from HIT-ARhand. It was verified from experimentations, the hand has strong capability of self adaptation, can accomplish precise and power grasp for objects with complex shape.",
"title": ""
},
{
"docid": "2a2497839dafe8c2d2ea2b8404f7444b",
"text": "Face analysis in images in the wild still pose a challenge for automatic age and gender recognition tasks, mainly due to their high variability in resolution, deformation, and occlusion. Although the performance has highly increased thanks to Convolutional Neural Networks (CNNs), it is still far from optimal when compared to other image recognition tasks, mainly because of the high sensitiveness of CNNs to facial variations. In this paper, inspired by biology and the recent success of attention mechanisms on visual question answering and fine-grained recognition, we propose a novel feedforward attention mechanism that is able to discover the most informative and reliable parts of a given face for improving age and gender classification. In particular, given a downsampled facial image, the proposed model is trained based on a novel end-to-end learning framework to extract the most discriminative patches from the original high-resolution image. Experimental validation on the standard Adience, Images of Groups, and MORPH II benchmarks show Preprint submitted to Pattern Recognition June 30, 2017",
"title": ""
}
] |
scidocsrr
|
a9f1fadd61ef01ef76c985e57d9f5cc6
|
A Survey on Platoon-Based Vehicular Cyber-Physical Systems
|
[
{
"docid": "1927e46cd9a198b59b83dedd13881388",
"text": "Vehicle automation has been one of the fundamental applications within the field of intelligent transportation systems (ITS) since the start of ITS research in the mid-1980s. For most of this time, it has been generally viewed as a futuristic concept that is not close to being ready for deployment. However, recent development of “self-driving” cars and the announcement by car manufacturers of their deployment by 2020 show that this is becoming a reality. The ITS industry has already been focusing much of its attention on the concepts of “connected vehicles” (United States) or “cooperative ITS” (Europe). These concepts are based on communication of data among vehicles (V2V) and/or between vehicles and the infrastructure (V2I/I2V) to provide the information needed to implement ITS applications. The separate threads of automated vehicles and cooperative ITS have not yet been thoroughly woven together, but this will be a necessary step in the near future because the cooperative exchange of data will provide vital inputs to improve the performance and safety of the automation systems. Thus, it is important to start thinking about the cybersecurity implications of cooperative automated vehicle systems. In this paper, we investigate the potential cyberattacks specific to automated vehicles, with their special needs and vulnerabilities. We analyze the threats on autonomous automated vehicles and cooperative automated vehicles. This analysis shows the need for considerably more redundancy than many have been expecting. We also raise awareness to generate discussion about these threats at this early stage in the development of vehicle automation systems.",
"title": ""
},
{
"docid": "a8b8f36f7093c79759806559fb0f0cf4",
"text": "Cooperative adaptive cruise control (CACC) is an extension of ACC. In addition to measuring the distance to a predecessor, a vehicle can also exchange information with a predecessor by wireless communication. This enables a vehicle to follow its predecessor at a closer distance under tighter control. This paper focuses on the impact of CACC on traffic-flow characteristics. It uses the traffic-flow simulation model MIXIC that was specially designed to study the impact of intelligent vehicles on traffic flow. The authors study the impacts of CACC for a highway-merging scenario from four to three lanes. The results show an improvement of traffic-flow stability and a slight increase in traffic-flow efficiency compared with the merging scenario without equipped vehicles",
"title": ""
}
] |
[
{
"docid": "804cee969d47d912d8bdc40f3a3eeb32",
"text": "The problem of matching a forensic sketch to a gallery of mug shot images is addressed in this paper. Previous research in sketch matching only offered solutions to matching highly accurate sketches that were drawn while looking at the subject (viewed sketches). Forensic sketches differ from viewed sketches in that they are drawn by a police sketch artist using the description of the subject provided by an eyewitness. To identify forensic sketches, we present a framework called local feature-based discriminant analysis (LFDA). In LFDA, we individually represent both sketches and photos using SIFT feature descriptors and multiscale local binary patterns (MLBP). Multiple discriminant projections are then used on partitioned vectors of the feature-based representation for minimum distance matching. We apply this method to match a data set of 159 forensic sketches against a mug shot gallery containing 10,159 images. Compared to a leading commercial face recognition system, LFDA offers substantial improvements in matching forensic sketches to the corresponding face images. We were able to further improve the matching performance using race and gender information to reduce the target gallery size. Additional experiments demonstrate that the proposed framework leads to state-of-the-art accuracys when matching viewed sketches.",
"title": ""
},
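The LFDA record above describes per-band discriminant projections over partitioned feature vectors followed by minimum-distance matching. The sketch below only illustrates that idea: random vectors stand in for the SIFT/MLBP descriptors, a per-band LDA stands in for the paper's training procedure, and all names and dimensions are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_band_projections(feats, labels, n_bands):
    """Fit one discriminant projection per feature band (a slice of the vector)."""
    bands = np.array_split(np.arange(feats.shape[1]), n_bands)
    models = []
    for idx in bands:
        lda = LinearDiscriminantAnalysis()
        lda.fit(feats[:, idx], labels)
        models.append((idx, lda))
    return models

def project(feats, models):
    """Project each band and concatenate the projections."""
    return np.hstack([lda.transform(feats[:, idx]) for idx, lda in models])

# placeholder descriptors standing in for SIFT/MLBP vectors of photos and sketches
rng = np.random.default_rng(0)
n_subjects, dim = 10, 120
subject_means = rng.normal(size=(n_subjects, dim))
photos   = subject_means + 0.3 * rng.normal(size=(n_subjects, dim))
sketches = subject_means + 0.3 * rng.normal(size=(n_subjects, dim))

train_feats  = np.vstack([photos, sketches])
train_labels = np.tile(np.arange(n_subjects), 2)
models = fit_band_projections(train_feats, train_labels, n_bands=4)

gallery = project(photos, models)           # mug shot gallery
probe   = project(sketches[3:4], models)    # one query sketch
dists = np.linalg.norm(gallery - probe, axis=1)
print("matched subject:", int(np.argmin(dists)))   # ideally 3
```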
{
"docid": "6eb1fdb83d936b978429c4a014e2da59",
"text": "Marigold (Tagetes erecta), besides being an ornamental plant, has various medicinal properties—it is nematocidal, fungicidal, antibacterial and insecticidal and aids in wound healing. Our work is focused on the blood clotting activity of its leaf extracts. Extraction was done by conventional as well as the Soxhlet method, which was found to be much more efficient using a 1:1 ratio of ethanol to water as solvent. Blood clotting activity of the leaf extract was examined using prothrombin time test using the Owren method. For both extraction methods, the yield percentage and coagulation activity in terms of coagulation time were analysed. Marigold leaf extract obtained using the Soxhlet method has shown very good blood coagulation properties in lower quantities—in the range of microlitres. Further research is needed for identification and quantification of its bioactive compounds, which could be purified further and encapsulated. Since marigold leaf has antibacterial properties too, therefore, it might be possible in the future to develop an antiseptic with blood coagulation activity.",
"title": ""
},
{
"docid": "c7ea816f2bb838b8c5aac3cdbbd82360",
"text": "Semantic annotated parallel corpora, though rare, play an increasingly important role in natural language processing. These corpora provide valuable data for computational tasks like sense-based machine translation and word sense disambiguation, but also to contrastive linguistics and translation studies. In this paper we present the ongoing development of a web-based corpus semantic annotation environment that uses the Open Multilingual Wordnet (Bond and Foster, 2013) as a sense inventory. The system includes interfaces to help coordinating the annotation project and a corpus browsing interface designed specifically to meet the needs of a semantically annotated corpus. The tool was designed to build the NTU-Multilingual Corpus (Tan and Bond, 2012). For the past six years, our tools have been tested and developed in parallel with the semantic annotation of a portion of this corpus in Chinese, English, Japanese and Indonesian. The annotation system is released under an open source license (MIT).",
"title": ""
},
{
"docid": "3f1488c678933361bac4541a97f46a97",
"text": "computers in conversational speech has long been a favorite subject in science fiction, reflecting the persistent belief that spoken dialogue would be the most natural and powerful user interface to computers. With recent improvements in computer technology and in speech and language processing, such systems are starting to appear feasible. There are significant technical problems that still need to be solved before speech-driven interfaces become truly conversational. This article describes the results of a 10-year effort building robust spoken dialogue systems at the University of Rochester.",
"title": ""
},
{
"docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd",
"text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.",
"title": ""
},
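The latent embedding model in the record above scores an image against each class by taking the best of a collection of bilinear maps, with the choice of map acting as the latent variable. The snippet below sketches only that compatibility function; the matrices are random placeholders for learned parameters, the ranking-based training objective is omitted, and the dimensions and names are illustrative assumptions.

```python
import numpy as np

def compatibility(x, class_embs, Ws):
    """Latent bilinear compatibility: for each class embedding y_c, score
    max_i x^T W_i y_c over the collection of maps W_i (the latent choice)."""
    scores = np.stack([x @ W @ class_embs.T for W in Ws])  # shape (K, n_classes)
    return scores.max(axis=0)                              # best map per class

rng = np.random.default_rng(0)
d_img, d_cls, n_cls, K = 64, 16, 5, 3
Ws = [rng.normal(scale=0.1, size=(d_img, d_cls)) for _ in range(K)]  # placeholders for learned maps
class_embs = rng.normal(size=(n_cls, d_cls))   # e.g. attribute or word-embedding vectors per class
x = rng.normal(size=d_img)                     # image feature vector

scores = compatibility(x, class_embs, Ws)
print("predicted class:", int(np.argmax(scores)))
```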
{
"docid": "0918688b8d8fccc3d98ae790d42b3e01",
"text": "Structure-from-Motion for unordered image collections has significantly advanced in scale over the last decade. This impressive progress can be in part attributed to the introduction of efficient retrieval methods for those systems. While this boosts scalability, it also limits the amount of detail that the large-scale reconstruction systems are able to produce. In this paper, we propose a joint reconstruction and retrieval system that maintains the scalability of large-scale Structure-from-Motion systems while also recovering the often lost ability of reconstructing fine details of the scene. We demonstrate our proposed method on a large-scale dataset of 7.4 million images downloaded from the Internet.",
"title": ""
},
{
"docid": "b8cd2ce49efd26b08581bea5129dd663",
"text": "Automotive radar sensors are applied to measure the target range, azimuth angle and radial velocity simultaneously even in multiple target situations. The single target measured data are necessary for target tracking in advanced driver assistance systems (ADAS) e.g. in highway scenarios. In typical city traffic situations the radar measurement is also important but additionally even the lateral velocity component of each detected target such as a vehicle is of large interest in this case. It is shown in this paper that the lateral velocity of an extended target can be measured even in a mono observation situation. For an automotive radar sensor a high spectral resolution is required in this case which means the time on target should be sufficiently large",
"title": ""
},
{
"docid": "01bfdc1124bdab2efa56aba50180129d",
"text": "Outlier detection algorithms are often computationally intensive because of their need to score each point in the data. Even simple distance-based algorithms have quadratic complexity. High-dimensional outlier detection algorithms such as subspace methods are often even more computationally intensive because of their need to explore different subspaces of the data. In this paper, we propose an exceedingly simple subspace outlier detection algorithm, which can be implemented in a few lines of code, and whose complexity is linear in the size of the data set and the space requirement is constant. We show that this outlier detection algorithm is much faster than both conventional and high-dimensional algorithms and also provides more accurate results. The approach uses randomized hashing to score data points and has a neat subspace interpretation. Furthermore, the approach can be easily generalized to data streams. We present experimental results showing the effectiveness of the approach over other state-of-the-art methods.",
"title": ""
},
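A rough sketch in the spirit of the randomized-hashing outlier detector described in the record above: repeatedly hash each point's coordinates in a small random subspace onto a coarse grid and score it by how many points share its cell, so that consistently low counts flag likely outliers. The bin width, subspace size, and number of draws below are arbitrary illustrative choices, not the paper's algorithm or parameters.

```python
import numpy as np
from collections import Counter

def subspace_hash_scores(X, n_draws=100, subspace_dim=2, bin_width=0.25, seed=0):
    """Average, over random subspaces, the log-count of points sharing each
    point's grid cell. Small averages indicate likely outliers."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.zeros(n)
    # normalise each dimension to [0, 1] so one bin width fits all features
    mins, maxs = X.min(axis=0), X.max(axis=0)
    Z = (X - mins) / np.where(maxs > mins, maxs - mins, 1.0)
    for _ in range(n_draws):
        dims = rng.choice(d, size=min(subspace_dim, d), replace=False)
        shift = rng.uniform(0, bin_width, size=len(dims))   # random grid offset
        cells = np.floor((Z[:, dims] + shift) / bin_width).astype(int)
        counts = Counter(map(tuple, cells))
        scores += np.log1p([counts[tuple(c)] for c in cells])
    return scores / n_draws     # lower score = more outlying

rng = np.random.default_rng(1)
inliers = rng.normal(size=(300, 8))
outliers = rng.uniform(-6, 6, size=(5, 8))
X = np.vstack([inliers, outliers])
s = subspace_hash_scores(X)
print("lowest-scoring indices:", np.argsort(s)[:5])   # the injected outliers tend to appear here
```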
{
"docid": "aa30615991a1eaa8986c58954d4ca00c",
"text": "The real-time analyses of oscillatory EEG components during right and left hand movement imagination allows the control of an electric device. Such a system, called brain-computer interface (BCI), can be used e.g. by patients who are totally paralyzed (e.g. Amyotrophic Lateral Sclerosis) to communicate with their environment. The paper demonstrates a system that utilizes the EEG for the control of a hand prosthesis.",
"title": ""
},
{
"docid": "68c7cf8a10382fab04a7c851a9caebb0",
"text": "Circular economy (CE) is a term that exists since the 1970s and has acquired greater importance in the past few years, partly due to the scarcity of natural resources available in the environment and changes in consumer behavior. Cutting-edge technologies such as big data and internet of things (IoT) have the potential to leverage the adoption of CE concepts by organizations and society, becoming more present in our daily lives. Therefore, it is fundamentally important for researchers interested in this subject to understand the status quo of studies being undertaken worldwide and to have the overall picture of it. We conducted a bibliometric literature review from the Scopus Database over the period of 2006–2015 focusing on the application of big data/IoT on the context of CE. This produced the combination of 30,557 CE documents with 32,550 unique big data/IoT studies resulting in 70 matching publications that went through content and social network analysis with the use of ‘R’ statistical tool. We then compared it to some current industry initiatives. Bibliometrics findings indicate China and USA are the most interested countries in the area and reveal a context with significant opportunities for research. In addition, large producers of greenhouse gas emissions, such as Brazil and Russia, still lack studies in the area. Also, a disconnection between important industry initiatives and scientific research seems to exist. The results can be useful for institutions and researchers worldwide to understand potential research gaps and to focus future investments/studies in the field.",
"title": ""
},
{
"docid": "c39295b4334a22547b2c4370ef329a7c",
"text": "In this paper, we propose a Mobile Edge Internet of Things (MEIoT) architecture by leveraging the fiber-wireless access technology, the cloudlet concept, and the software defined networking framework. The MEIoT architecture brings computing and storage resources close to Internet of Things (IoT) devices in order to speed up IoT data sharing and analytics. Specifically, the IoT devices (belonging to the same user) are associated to a specific proxy Virtual Machine (VM) in the nearby cloudlet. The proxy VM stores and analyzes the IoT data (generated by its IoT devices) in realtime. Moreover, we introduce the semantic and social IoT technology in the context of MEIoT to solve the interoperability and inefficient access control problem in the IoT system. In addition, we propose two dynamic proxy VM migration methods to minimize the end-to-end delay between proxy VMs and their IoT devices and to minimize the total on-grid energy consumption of the cloudlets, respectively. Performance of the proposed methods is validated via extensive simulations. key words: Internet of Things, mobile edge computing, cloudlet, semantics, social network, green energy.",
"title": ""
},
{
"docid": "7749fd32da3e853f9e9cfea74ddda5f8",
"text": "This study describes the roles of architects in scaling agile frameworks with the help of a structured literature review. We aim to provide a primary analysis of 20 identified scaling agile frameworks. Subsequently, we thoroughly describe three popular scaling agile frameworks: Scaled Agile Framework, Large Scale Scrum, and Disciplined Agile 2.0. After specifying the main concepts of scaling agile frameworks, we characterize roles of enterprise, software, solution, and information architects, as identified in four scaling agile frameworks. Finally, we provide a discussion of generalizable findings on the role of architects in scaling agile frameworks.",
"title": ""
},
{
"docid": "98978373c863f49ed7cccda9867b8a5e",
"text": "Increasing vulnerability of plants to a variety of stresses such as drought, salt and extreme temperatures poses a global threat to sustained growth and productivity of major crops. Of these stresses, drought represents a considerable threat to plant growth and development. In view of this, developing staple food cultivars with improved drought tolerance emerges as the most sustainable solution toward improving crop productivity in a scenario of climate change. In parallel, unraveling the genetic architecture and the targeted identification of molecular networks using modern \"OMICS\" analyses, that can underpin drought tolerance mechanisms, is urgently required. Importantly, integrated studies intending to elucidate complex mechanisms can bridge the gap existing in our current knowledge about drought stress tolerance in plants. It is now well established that drought tolerance is regulated by several genes, including transcription factors (TFs) that enable plants to withstand unfavorable conditions, and these remain potential genomic candidates for their wide application in crop breeding. These TFs represent the key molecular switches orchestrating the regulation of plant developmental processes in response to a variety of stresses. The current review aims to offer a deeper understanding of TFs engaged in regulating plant's response under drought stress and to devise potential strategies to improve plant tolerance against drought.",
"title": ""
},
{
"docid": "ec490d7599370ab357336af33763a559",
"text": "A key challenge of entity set expansion is that multifaceted input seeds can lead to significant incoherence in the result set. In this paper, we present a novel solution to handling multifaceted seeds by combining existing user-generated ontologies with a novel word-similarity metric based on skip-grams. By blending the two resources we are able to produce sparse word ego-networks that are centered on the seed terms and are able to capture semantic equivalence among words. We demonstrate that the resulting networks possess internally-coherent clusters, which can be exploited to provide non-overlapping expansions, in order to reflect different semantic classes of the seeds. Empirical evaluation against state-of-the-art baselines shows that our solution, EgoSet, is able to not only capture multiple facets in the input query, but also generate expansions for each facet with higher precision.",
"title": ""
},
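A toy sketch of the ego-network idea from the EgoSet record above: take a seed's nearest neighbours under a word-similarity function, link neighbours whose pairwise similarity clears a threshold, and read each connected component off as one facet of the seed. The embeddings, vocabulary, and thresholds are invented for illustration; the actual system builds on skip-gram statistics and user-generated ontologies, which are not reproduced here.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cosine(M):
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    return (M / norms) @ (M / norms).T

def facets_for_seed(seed_idx, emb, vocab, k=12, edge_thr=0.5):
    """Build the seed's ego network from its k nearest neighbours, split it into
    connected components (facets), and return the members of each facet."""
    sims = cosine(emb)
    neigh = np.argsort(-sims[seed_idx])[1:k + 1]          # skip the seed itself
    sub = sims[np.ix_(neigh, neigh)]
    adj = csr_matrix((sub > edge_thr).astype(int))
    n_comp, comp = connected_components(adj, directed=False)
    return [[vocab[neigh[j]] for j in range(len(neigh)) if comp[j] == c]
            for c in range(n_comp)]

# toy embeddings with two facets around an ambiguous seed ("apple": fruit vs. company)
rng = np.random.default_rng(0)
vocab = ["apple", "pear", "banana", "grape", "microsoft", "google", "ibm", "plum"]
fruit_dir, tech_dir = rng.normal(size=50), rng.normal(size=50)
emb = np.array([fruit_dir + tech_dir] +                          # seed sits between facets
               [fruit_dir + 0.3 * rng.normal(size=50) for _ in range(3)] +
               [tech_dir + 0.3 * rng.normal(size=50) for _ in range(3)] +
               [fruit_dir + 0.3 * rng.normal(size=50)])
print(facets_for_seed(0, emb, vocab, k=7, edge_thr=0.6))
```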
{
"docid": "b6b58b7a1c5d9112ea24c74539c95950",
"text": "We describe a view-management component for interactive 3D user interfaces. By view management, we mean maintaining visual constraints on the projections of objects on the view plane, such as locating related objects near each other, or preventing objects from occluding each other. Our view-management component accomplishes this by modifying selected object properties, including position, size, and transparency, which are tagged to indicate their constraints. For example, some objects may have geometric properties that are determined entirely by a physical simulation and which cannot be modified, while other objects may be annotations whose position and size are flexible.We introduce algorithms that use upright rectangular extents to represent on the view plane a dynamic and efficient approximation of the occupied space containing the projections of visible portions of 3D objects, as well as the unoccupied space in which objects can be placed to avoid occlusion. Layout decisions from previous frames are taken into account to reduce visual discontinuities. We present augmented reality and virtual reality examples to which we have applied our approach, including a dynamically labeled and annotated environment.",
"title": ""
},
{
"docid": "6fa191434ae343d4d645587b5a240b1f",
"text": "An integrated framework for density-based cluster analysis, outlier detection, and data visualization is introduced in this article. The main module consists of an algorithm to compute hierarchical estimates of the level sets of a density, following Hartigan’s classic model of density-contour clusters and trees. Such an algorithm generalizes and improves existing density-based clustering techniques with respect to different aspects. It provides as a result a complete clustering hierarchy composed of all possible density-based clusters following the nonparametric model adopted, for an infinite range of density thresholds. The resulting hierarchy can be easily processed so as to provide multiple ways for data visualization and exploration. It can also be further postprocessed so that: (i) a normalized score of “outlierness” can be assigned to each data object, which unifies both the global and local perspectives of outliers into a single definition; and (ii) a “flat” (i.e., nonhierarchical) clustering solution composed of clusters extracted from local cuts through the cluster tree (possibly corresponding to different density thresholds) can be obtained, either in an unsupervised or in a semisupervised way. In the unsupervised scenario, the algorithm corresponding to this postprocessing module provides a global, optimal solution to the formal problem of maximizing the overall stability of the extracted clusters. If partially labeled objects or instance-level constraints are provided by the user, the algorithm can solve the problem by considering both constraints violations/satisfactions and cluster stability criteria. An asymptotic complexity analysis, both in terms of running time and memory space, is described. Experiments are reported that involve a variety of synthetic and real datasets, including comparisons with state-of-the-art, density-based clustering and (global and local) outlier detection methods.",
"title": ""
},
{
"docid": "cb667b5d3dd2e680f15b7167d20734cd",
"text": "In this letter, a low loss high isolation broadband single-port double-throw (SPDT) traveling-wave switch using 90 nm CMOS technology is presented. A body bias technique is utilized to enhance the circuit performance of the switch, especially for the operation frequency above 30 GHz. The parasitic capacitance between the drain and source of the NMOS transistor can be further reduced using the negative body bias technique. Moreover, the insertion loss, the input 1 dB compression point (P1 dB)> and the third-order intermodulation (IMD3) of the switch are all improved. With the technique, the switch demonstrates an insertion loss of 3 dB and an isolation of better than 48 dB from dc to 60 GHz. The chip size of the proposed switch is 0.68 × 0.87 mm2 with a core area of only 0.32 × 0.21 mm2.",
"title": ""
},
{
"docid": "ab7184c576396a1da32c92093d606a53",
"text": "Power electronics has progressively gained an important status in power generation, distribution, and consumption. With more than 70% of electricity processed through power electronics, recent research endeavors to improve the reliability of power electronic systems to comply with more stringent constraints on cost, safety, and availability in various applications. This paper serves to give an overview of the major aspects of reliability in power electronics and to address the future trends in this multidisciplinary research direction. The ongoing paradigm shift in reliability research is presented first. Then, the three major aspects of power electronics reliability are discussed, respectively, which cover physics-of-failure analysis of critical power electronic components, state-of-the-art design for reliability process and robustness validation, and intelligent control and condition monitoring to achieve improved reliability under operation. Finally, the challenges and opportunities for achieving more reliable power electronic systems in the future are discussed.",
"title": ""
},
{
"docid": "544591326b250f5d68a64f793d55539b",
"text": "Introduction: Exfoliative cheilitis, one of a spectrum of diseases that affect the vermilion border of the lips, is uncommon and has no known cause. It is a chronic superficial inflammatory disorder of the vermilion borders of the lips characterized by persistent scaling; it can be a difficult condition to manage. The diagnosis is now restricted to those few patients whose lesions cannot be attributed to other causes, such as contact sensitization or light. Case Report: We present a 17 year-old male presented to the out clinic in Baghdad with the chief complaint of a persistent scaly on his lower lips. The patient reported that the skin over the lip thickened gradually over a 3 days period and subsequently became loose, causing discomfort. Once he peeled away the loosened layer, a new layer began to form again. Conclusion: The lack of specific treatment makes exfoliative cheilitis a chronic disease that radically affects a person’s life. The aim of this paper is to describe a case of recurrent exfoliative cheilitis successfully treated with intralesional corticosteroids and to present possible hypotheses as to the cause.",
"title": ""
}
] |
scidocsrr
|
99858edac3919735f8c7442d07167f37
|
Genetic Algorithm for VRP with Constraints Based on Feasible Insertion
|
[
{
"docid": "636cb349f6a8dcdde70ee39b663dbdbe",
"text": "Estimation and modelling problems as they arise in many data analysis areas often turn out to be unstable and/or intractable by standard numerical methods. Such problems frequently occur in fitting of large data sets to a certain model and in predictive learning. Heuristics are general recommendations based on practical statistical evidence, in contrast to a fixed set of rules that cannot vary, although guarantee to give the correct answer. Although the use of these methods became more standard in several fields of sciences, their use for estimation and modelling in statistics appears to be still limited. This paper surveys a set of problem-solving strategies, guided by heuristic information, that are expected to be used more frequently. The use of recent advances in different fields of large-scale data analysis is promoted focusing on applications in medicine, biology and technology.",
"title": ""
}
] |
[
{
"docid": "bef86730221684b8e9236cb44179b502",
"text": "secure software. In order to find the real-life issues, this case study was initiated to investigate whether the existing FDD can withstand requirements change and software security altogether. The case study was performed in controlled environment – in a course called Application Development—a four credit hours course at UTM. The course began by splitting up the class to seven software development groups and two groups were chosen to implement the existing process of FDD. After students were given an introduction to FDD, they started to adapt the processes to their proposed system. Then students were introduced to the basic concepts on how to make software systems secure. Though, they were still new to security and FDD, however, this study produced a lot of interest among the students. The students seemed to enjoy the challenge of creating secure system using FDD model.",
"title": ""
},
{
"docid": "16c87d75564404d52fc2abac55297931",
"text": "SHADE is an adaptive DE which incorporates success-history based parameter adaptation and one of the state-of-the-art DE algorithms. This paper proposes L-SHADE, which further extends SHADE with Linear Population Size Reduction (LPSR), which continually decreases the population size according to a linear function. We evaluated the performance of L-SHADE on CEC2014 benchmarks and compared its search performance with state-of-the-art DE algorithms, as well as the state-of-the-art restart CMA-ES variants. The experimental results show that L-SHADE is quite competitive with state-of-the-art evolutionary algorithms.",
"title": ""
},
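The L-SHADE record above centres on Linear Population Size Reduction (LPSR). Below is a minimal sketch of just that schedule, with the DE mutation/crossover/selection step left as a comment; the parameter names (n_init, n_min, max_nfe) and the toy sphere objective are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def lpsr_size(nfe, max_nfe, n_init, n_min=4):
    """Linear population size reduction: interpolate the target population size
    from n_init (at nfe = 0) down to n_min (at nfe = max_nfe)."""
    frac = min(nfe, max_nfe) / max_nfe
    return int(round(n_init - (n_init - n_min) * frac))

def shrink_population(pop, fitness, target_size):
    """Drop the worst-ranked individuals until the population matches target_size."""
    order = np.argsort(fitness)          # ascending: best (lowest) first for minimisation
    keep = order[:target_size]
    return pop[keep], fitness[keep]

# toy usage on a sphere function
rng = np.random.default_rng(0)
dim, n_init, max_nfe = 10, 100, 10_000
pop = rng.uniform(-5, 5, size=(n_init, dim))
fitness = np.sum(pop ** 2, axis=1)
nfe = n_init

while nfe < max_nfe:
    # ... DE mutation/crossover/selection would update pop and fitness here ...
    nfe += len(pop)
    target = lpsr_size(nfe, max_nfe, n_init)
    if target < len(pop):
        pop, fitness = shrink_population(pop, fitness, target)

print(len(pop))  # approaches n_min as the evaluation budget is consumed
```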
{
"docid": "de8415d1674a0e5e84cfc067fd3940cc",
"text": "We apply the FLUSH+RELOAD side-channel attack based on cache hits/misses to extract a small amount of data from OpenSSL ECDSA signature requests. We then apply a “standard” lattice technique to extract the private key, but unlike previous attacks we are able to make use of the side-channel information from almost all of the observed executions. This means we obtain private key recovery by observing a relatively small number of executions, and by expending a relatively small amount of post-processing via lattice reduction. We demonstrate our analysis via experiments using the curve secp256k1 used in the Bitcoin protocol. In particular we show that with as little as 200 signatures we are able to achieve a reasonable level of success in recovering the secret key for a 256-bit curve. This is significantly better than prior methods of applying lattice reduction techniques to similar side channel information.",
"title": ""
},
{
"docid": "ba2597379304852f36c5b427eebc7223",
"text": "Constituent parsing is typically modeled by a chart-based algorithm under probabilistic context-free grammars or by a transition-based algorithm with rich features. Previous models rely heavily on richer syntactic information through lexicalizing rules, splitting categories, or memorizing long histories. However enriched models incur numerous parameters and sparsity issues, and are insufficient for capturing various syntactic phenomena. We propose a neural network structure that explicitly models the unbounded history of actions performed on the stack and queue employed in transition-based parsing, in addition to the representations of partially parsed tree structure. Our transition-based neural constituent parsing achieves performance comparable to the state-of-the-art parsers, demonstrating F1 score of 90.68% for English and 84.33% for Chinese, without reranking, feature templates or additional data to train model parameters.",
"title": ""
},
{
"docid": "f1b1dc51cf7a6d8cb3b644931724cad6",
"text": "OBJECTIVE\nTo evaluate the curing profile of bulk-fill resin-based composites (RBC) using micro-Raman spectroscopy (μRaman).\n\n\nMETHODS\nFour bulk-fill RBCs were compared to a conventional RBC. RBC blocks were light-cured using a polywave LED light-curing unit. The 24-h degree of conversion (DC) was mapped along a longitudinal cross-section using μRaman. Curing profiles were constructed and 'effective' (>90% of maximum DC) curing parameters were calculated. A statistical linear mixed effects model was constructed to analyze the relative effect of the different curing parameters.\n\n\nRESULTS\nCuring efficiency differed widely with the flowable bulk-fill RBCs presenting a significantly larger 'effective' curing area than the fibre-reinforced RBC, which on its turn revealed a significantly larger 'effective' curing area than the full-depth bulk-fill and conventional (control) RBC. A decrease in 'effective' curing depth within the light beam was found in the same order. Only the flowable bulk-fill RBCs were able to cure 'effectively' at a 4-mm depth for the whole specimen width (up to 4mm outside the light beam). All curing parameters were found to statistically influence the statistical model and thus the curing profile, except for the beam inhomogeneity (regarding the position of the 410-nm versus that of 470-nm LEDs) that did not significantly affect the model for all RBCs tested.\n\n\nCONCLUSIONS\nMost of the bulk-fill RBCs could be cured up to at least a 4-mm depth, thereby validating the respective manufacturer's recommendations.\n\n\nCLINICAL SIGNIFICANCE\nAccording to the curing profiles, the orientation and position of the light guide is less critical for the bulk-fill RBCs than for the conventional RBC.",
"title": ""
},
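For reference, the degree of conversion that μRaman studies of methacrylate composites typically report is computed from the ratio of the aliphatic C=C band (around 1640 cm^-1) to an internal aromatic reference band (around 1610 cm^-1), measured before and after curing. The formula and band positions below are the ones commonly used for Bis-GMA-type materials, not values taken from this abstract, and the peak intensities in the example are invented.

```python
def degree_of_conversion(aliphatic_cured, reference_cured,
                         aliphatic_uncured, reference_uncured):
    """DC% = (1 - R_cured / R_uncured) * 100, where R is the ratio of the
    aliphatic C=C band (~1640 cm^-1) to an internal reference band
    (~1610 cm^-1 aromatic C=C in Bis-GMA-type composites)."""
    r_cured = aliphatic_cured / reference_cured
    r_uncured = aliphatic_uncured / reference_uncured
    return (1.0 - r_cured / r_uncured) * 100.0

# invented peak intensities: DC = (1 - (0.30/0.95) / (0.80/0.85)) * 100, roughly 66.4 %
print(round(degree_of_conversion(0.30, 0.95, 0.80, 0.85), 1))
```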
{
"docid": "60ad412d0d6557d2a06e9914bbf3c680",
"text": "Helpfulness of online reviews is a multi-faceted concept that can be driven by several types of factors. This study was designed to extend existing research on online review helpfulness by looking at not just the quantitative factors (such as word count), but also qualitative aspects of reviewers (including reviewer experience, reviewer impact, reviewer cumulative helpfulness). This integrated view uncovers some insights that were not available before. Our findings suggest that word count has a threshold in its effects on review helpfulness. Beyond this threshold, its effect diminishes significantly or becomes near non-existent. Reviewer experience and their impact were not statistically significant predictors of helpfulness, but past helpfulness records tended to predict future helpfulness ratings. Review framing was also a strong predictor of helpfulness. As a result, characteristics of reviewers and review messages have a varying degree of impact on review helpfulness. Theoretical and practical implications are discussed. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "76a2c62999a256076cdff0fffefca1eb",
"text": "Learning a second language is challenging. Becoming fluent requires learning contextual information about how language should be used as well as word meanings and grammar. The majority of existing language learning applications provide only thin context around content. In this paper, we present Crystallize, a collaborative 3D game that provides rich context along with scaffolded learning and engaging gameplay mechanics. Players collaborate through joint tasks, or quests. We present a user study with 42 participants that examined the impact of low and high levels of task interdependence on language learning experience and outcomes. We found that requiring players to help each other led to improved collaborative partner interactions, learning outcomes, and gameplay. A detailed analysis of the chat-logs further revealed that changes in task interdependence affected learning behaviors.",
"title": ""
},
{
"docid": "a794be8dfed40c3de9e15715aa64cc79",
"text": "In the winter of 1991 I (GR) sent to Nature a report on a surprising set of neurons that we (Giuseppe Di Pellegrino, Luciano Fadiga, Leonardo Fogassi, Vittorio Gallese) had found in the ventral premotor cortex of the monkey. The fundamental characteristic of these neurons was that they discharged both when the monkey performed a certain motor act (e.g., grasping an object) and when it observed another individual (monkey or human) performing that or a similar motor act (Di Pellegrino et al. 1992). These neurons are now known as mirror neurons (Fig. 1). Nature rejected our paper for its “lack of general interest” and suggested publication in a specialized journal. At this point I called Prof. Otto Creutzfeld, the then Coordinating Editor of Experimental Brain Research. I told him that I thought we found something really interesting and asked him to read our manuscript before sending it to the referees. After a few days he called me back saying that indeed our Wndings were, according to him, of extraordinary interest. Our article appeared in Experimental Brain Research a few months later. The idea of sending our report on mirror neurons to Experimental Brain Research, rather than to another neuroscience journal, was motivated by a previous positive experience with that journal. A few years earlier, Experimental Brain Research accepted an article in which we presented (Rizzolatti et al. 1988) a new view (something that typically referees did not like) on the organization of the ventral premotor cortex of the monkey and reported the Wndings that paved the way for the discovery of mirror neurons. In that article we described how, in the ventral premotor cortex (area F5) of the monkey, there are neurons that respond both when the monkey performs a motor act (e.g., grasping or holding) and when it observes an object whose physical features Wt the type of grip coded by that neuron (e.g., precision grip/small objects; whole hand/large objects). These neurons (now known as “canonical neurons”, Murata et al. 1997) and neurons with similar properties, described by Sakata et al. (1995) in the parietal cortex are now universally considered the neural substrate of the mechanism through which object aVordances are translated into motor acts (see Jeannerod et al. 1995). We performed the experiments on the motor properties of F5 in 1988 using an approach that should almost necessarily lead to the discovery of mirror neurons if these neurons existed in area F5. In order to test the F5 neurons with objects that may interest the monkeys, we used pieces of food of diVerent size and shape. To give the monkey some food, we had, of course, to grasp it. To our surprise we found that some F5 neurons discharged not when the monkey looked at the food, but when the experimenter grasped it. The mirror mechanism was discovered. The next important role of Experimental Brain Research in the discovery of mirror neurons was its acceptance in G. Rizzolatti (&) · M. Fabbri-Destro Dipartimento di Neuroscienze, Sezione Fisiologia, Università di Parma, via Volturno, 39, 43100 Parma, Italy e-mail: giacomo.rizzolatti@unipr.it",
"title": ""
},
{
"docid": "aeb1dfa0f62722a2b8a736792d2408af",
"text": "In this paper, we demonstrate the application of Fuzzy Markup Language (FML) to construct an FMLbased Dynamic Assessment Agent (FDAA), and we present an FML-based Human–Machine Cooperative System (FHMCS) for the game of Go. The proposed FDAA comprises an intelligent decision-making and learning mechanism, an intelligent game bot, a proximal development agent, and an intelligent agent. The intelligent game bot is based on the open-source code of Facebook’s Darkforest, and it features a representational state transfer application programming interface mechanism. The proximal development agent contains a dynamic assessment mechanism, a GoSocket mechanism, and an FML engine with a fuzzy knowledge base and rule base. The intelligent agent contains a GoSocket engine and a summarization agent that is based on the estimated win rate, realtime simulation number, and matching degree of predicted moves. Additionally, the FML for player performance evaluation and linguistic descriptions for game results commentary are presented. We experimentally verify and validate the performance of the FDAA and variants of the FHMCS by testing five games in 2016 and 60 games of Google’s Master Go, a new version of the AlphaGo program, in January 2017. The experimental results demonstrate that the proposed FDAA can work effectively for Go applications.",
"title": ""
},
{
"docid": "6c3c88705b06657ae1ac4c9ff37e5263",
"text": "The Generative Adversarial Networks (GANs) have demonstrated impressive performance for data synthesis, and are now used in a wide range of computer vision tasks. In spite of this success, they gained a reputation for being difficult to train, what results in a time-consuming and human-involved development process to use them. We consider an alternative training process, named SGAN, in which several adversarial \"local\" pairs of networks are trained independently so that a \"global\" supervising pair of networks can be trained against them. The goal is to train the global pair with the corresponding ensemble opponent for improved performances in terms of mode coverage. This approach aims at increasing the chances that learning will not stop for the global pair, preventing both to be trapped in an unsatisfactory local minimum, or to face oscillations often observed in practice. To guarantee the latter, the global pair never affects the local ones. The rules of SGAN training are thus as follows: the global generator and discriminator are trained using the local discriminators and generators, respectively, whereas the local networks are trained with their fixed local opponent. Experimental results on both toy and real-world problems demonstrate that this approach outperforms standard training in terms of better mitigating mode collapse, stability while converging and that it surprisingly, increases the convergence speed as well.",
"title": ""
},
{
"docid": "a1cd5424dea527e365f038fce60fd821",
"text": "Producing literature reviews of complex evidence for policymaking questions is a challenging methodological area. There are several established and emerging approaches to such reviews, but unanswered questions remain, especially around how to begin to make sense of large data sets drawn from heterogeneous sources. Drawing on Kuhn's notion of scientific paradigms, we developed a new method-meta-narrative review-for sorting and interpreting the 1024 sources identified in our exploratory searches. We took as our initial unit of analysis the unfolding 'storyline' of a research tradition over time. We mapped these storylines by using both electronic and manual tracking to trace the influence of seminal theoretical and empirical work on subsequent research within a tradition. We then drew variously on the different storylines to build up a rich picture of our field of study. We identified 13 key meta-narratives from literatures as disparate as rural sociology, clinical epidemiology, marketing and organisational studies. Researchers in different traditions had conceptualised, explained and investigated diffusion of innovations differently and had used different criteria for judging the quality of empirical work. Moreover, they told very different over-arching stories of the progress of their research. Within each tradition, accounts of research depicted human characters emplotted in a story of (in the early stages) pioneering endeavour and (later) systematic puzzle-solving, variously embellished with scientific dramas, surprises and 'twists in the plot'. By first separating out, and then drawing together, these different meta-narratives, we produced a synthesis that embraced the many complexities and ambiguities of 'diffusion of innovations' in an organisational setting. We were able to make sense of seemingly contradictory data by systematically exposing and exploring tensions between research paradigms as set out in their over-arching storylines. In some traditions, scientific revolutions were identifiable in which breakaway researchers had abandoned the prevailing paradigm and introduced a new set of concepts, theories and empirical methods. We concluded that meta-narrative review adds value to the synthesis of heterogeneous bodies of literature, in which different groups of scientists have conceptualised and investigated the 'same' problem in different ways and produced seemingly contradictory findings. Its contribution to the mixed economy of methods for the systematic review of complex evidence should be explored further.",
"title": ""
},
{
"docid": "46b13741add1385269e18de2f8faf1f8",
"text": "It has been suggested that there are two forms of narcissism: a grandiose subtype and a vulnerable subtype. Although these forms of narcissism share certain similarities, it is believed that these subtypes may differ in the domains upon which their self-esteem is based. To explore this possibility, the present study examined the associations between these narcissistic subtypes and domain-specific contingencies of self-worth. The results show that vulnerable narcissism was positively associated with contingencies of self-worth across a variety of domains. In contrast, the associations between grandiose narcissism and domain-specific contingencies of self-worth were more complex and included both positive and negative relationships. These results provide additional support for the distinction between grandiose and vulnerable narcissism by showing that the domains of contingent self-esteem associated with grandiose narcissism may be more limited in scope than those associated with vulnerable narcissism.",
"title": ""
},
{
"docid": "5f563fd7eefd6d15951b4f47441daf36",
"text": "Sparse representation has recently attracted enormous interests in the field of image restoration. The conventional sparsity-based methods enforce sparse coding on small image patches with certain constraints. However, they neglected the characteristics of image structures both within the same scale and across the different scales for the image sparse representation. This drawback limits the modeling capability of sparsity-based super-resolution methods, especially for the recovery of the observed low-resolution images. In this paper, we propose a joint super-resolution framework of structure-modulated sparse representations to improve the performance of sparsity-based image super-resolution. The proposed algorithm formulates the constrained optimization problem for high-resolution image recovery. The multistep magnification scheme with the ridge regression is first used to exploit the multiscale redundancy for the initial estimation of the high-resolution image. Then, the gradient histogram preservation is incorporated as a regularization term in sparse modeling of the image super-resolution problem. Finally, the numerical solution is provided to solve the super-resolution problem of model parameter estimation and sparse representation. Extensive experiments on image super-resolution are carried out to validate the generality, effectiveness, and robustness of the proposed algorithm. Experimental results demonstrate that our proposed algorithm, which can recover more fine structures and details from an input low-resolution image, outperforms the state-of-the-art methods both subjectively and objectively in most cases.",
"title": ""
},
{
"docid": "f4639c2523687aa0d5bfdd840df9cfa4",
"text": "This established database of manufacturers and thei r design specification, determined the condition and design of the vehicle based on the perception and preference of jeepney drivers and passengers, and compared the pa rts of the jeepney vehicle using Philippine National Standards and international sta ndards. The study revealed that most jeepney manufacturing firms have varied specificati ons with regard to the capacity, dimensions and weight of the vehicle and similar sp ecification on the parts and equipment of the jeepney vehicle. Most of the jeepney drivers an d passengers want to improve, change and standardize the parts of the jeepney vehicle. The p arts of jeepney vehicles have similar specifications compared to the 4 out of 5 mandatory PNS and 22 out 32 UNECE Regulations applicable for jeepney vehicle. It is concluded tha t t e jeepney vehicle can be standardized in terms of design, safety and environmental concerns.",
"title": ""
},
{
"docid": "dc310f1a5fb33bd3cbe9de95b2a0159c",
"text": "The MYO armband from Thalmic Labs is a complete and wireless motion and muscle sensing platform. This paper evaluates the armband’s sensors and its potential for NIME applications. This is followed by a presentation of the prototype instrument MuMYO. We conclude that, despite some shortcomings, the armband has potential of becoming a new “standard” controller in the NIME community.",
"title": ""
},
{
"docid": "b590d144f65b6dc88b1ff6a4f5fb4378",
"text": "BACKGROUND\nIt is controversial whether maternal hyperglycemia less severe than that in diabetes mellitus is associated with increased risks of adverse pregnancy outcomes.\n\n\nMETHODS\nA total of 25,505 pregnant women at 15 centers in nine countries underwent 75-g oral glucose-tolerance testing at 24 to 32 weeks of gestation. Data remained blinded if the fasting plasma glucose level was 105 mg per deciliter (5.8 mmol per liter) or less and the 2-hour plasma glucose level was 200 mg per deciliter (11.1 mmol per liter) or less. Primary outcomes were birth weight above the 90th percentile for gestational age, primary cesarean delivery, clinically diagnosed neonatal hypoglycemia, and cord-blood serum C-peptide level above the 90th percentile. Secondary outcomes were delivery before 37 weeks of gestation, shoulder dystocia or birth injury, need for intensive neonatal care, hyperbilirubinemia, and preeclampsia.\n\n\nRESULTS\nFor the 23,316 participants with blinded data, we calculated adjusted odds ratios for adverse pregnancy outcomes associated with an increase in the fasting plasma glucose level of 1 SD (6.9 mg per deciliter [0.4 mmol per liter]), an increase in the 1-hour plasma glucose level of 1 SD (30.9 mg per deciliter [1.7 mmol per liter]), and an increase in the 2-hour plasma glucose level of 1 SD (23.5 mg per deciliter [1.3 mmol per liter]). For birth weight above the 90th percentile, the odds ratios were 1.38 (95% confidence interval [CI], 1.32 to 1.44), 1.46 (1.39 to 1.53), and 1.38 (1.32 to 1.44), respectively; for cord-blood serum C-peptide level above the 90th percentile, 1.55 (95% CI, 1.47 to 1.64), 1.46 (1.38 to 1.54), and 1.37 (1.30 to 1.44); for primary cesarean delivery, 1.11 (95% CI, 1.06 to 1.15), 1.10 (1.06 to 1.15), and 1.08 (1.03 to 1.12); and for neonatal hypoglycemia, 1.08 (95% CI, 0.98 to 1.19), 1.13 (1.03 to 1.26), and 1.10 (1.00 to 1.12). There were no obvious thresholds at which risks increased. Significant associations were also observed for secondary outcomes, although these tended to be weaker.\n\n\nCONCLUSIONS\nOur results indicate strong, continuous associations of maternal glucose levels below those diagnostic of diabetes with increased birth weight and increased cord-blood serum C-peptide levels.",
"title": ""
},
{
"docid": "6fe0c00d138165bbd3153c0cc4539c55",
"text": "A key skill for mobile robots is the ability to navigate e ciently through their environment. In the case of social or assistive robots, this involves navigating through human crowds. Typical performance criteria, such as reaching the goal using the shortest path, are not appropriate in such environments, where it is more important for the robot to move in a socially adaptive manner such as respecting comfort zones of the pedestrians. We propose a framework for socially adaptive path planning in dynamic environments, by generating human-like path trajectory. Our framework consists of three modules: a feature extraction module, Inverse Reinforcement Learning module, and a path planning module. The feature extraction module extracts features necessary to characterize the state information, such as density and velocity of surrounding obstacles, from a RGB-Depth sensor. The Inverse Reinforcement Learning module uses a set of demonstration trajectories generated by an expert to learn the expert’s behaviour when faced with di↵erent state features, and represent it as a cost function that respects social variables. Finally, the planning module integrates a threelayer architecture, where a global path is optimized according to a classical shortest-path objective using a global map known a priori, a local path is planned over a shorter distance using the features extracted from a RGB-D sensor and the cost function inferred from Inverse Reinforcement Learning module, and a low-level Beomjoon Kim E-mail: beomjoon.kim@mail.mcgill.ca Joelle Pineau School of Computer Science, McGill University, 3480 University, Canada Tel.: 514-398-5432 Fax: 514-398-3883 E-mail: jpineau@cs.mcgill.ca system handles avoidance of immediate obstacles. We evaluate our approach by deploying it on a real robotic wheelchair platform in various scenarios, and comparing the robot trajectories to human trajectories.",
"title": ""
},
{
"docid": "04e627bbb63da238d7d87e86a8eb9641",
"text": "Parsing sentences to linguisticallyexpressive semantic representations is a key goal of Natural Language Processing. Yet statistical parsing has focussed almost exclusively on bilexical dependencies or domain-specific logical forms. We propose a neural encoder-decoder transition-based parser which is the first full-coverage semantic graph parser for Minimal Recursion Semantics (MRS). The model architecture uses stack-based embedding features, predicting graphs jointly with unlexicalized predicates and their token alignments. Our parser is more accurate than attention-based baselines on MRS, and on an additional Abstract Meaning Representation (AMR) benchmark, and GPU batch processing makes it an order of magnitude faster than a high-precision grammar-based parser. Further, the 86.69% Smatch score of our MRS parser is higher than the upper-bound on AMR parsing, making MRS an attractive choice as a semantic representation.",
"title": ""
},
{
"docid": "38c25450464202b975e2ab1f54b70f3a",
"text": "A neonatal intensive care unit (NICU) provides critical services to preterm and high-risk infants. Over the years, many tools and techniques have been introduced to support the clinical decisions made by specialists in the NICU. This study systematically reviewed the different technologies used in neonatal decision support systems (DSS), including cognitive analysis, artificial neural networks, data mining techniques, multi-agent systems, and highlighted their role in patient diagnosis, prognosis, monitoring, and healthcare management. Articles on NICU DSS were surveyed, Searches were based on the PubMed, Science Direct, and IEEE databases and only English articles published after 1990 were included. The overall search strategy was to retrieve articles that included terms that were related to “NICU Decision Support Systems” or “Artificial Intelligence” and “Neonatal”. Different methods and artificial intelligence techniques used in NICU decision support systems were assessed and related outcomes, variables, methods and performance measures was reported and discussed. Because of the dynamic, heterogeneous, and real-time environment of the NICU, the processes and medical rules that are followed within a NICU are complicated, and the data records that are produced are complex and frequent. Therefore, a single tool or technology could not cover all the needs of a NICU. However, it is important to examine and deploy new temporal data mining approaches and system architectures, such as multi-agent systems, services, and sensors, to provide integrated real-time solutions for NICU.",
"title": ""
},
{
"docid": "916f6f0942a08501139f6d4d1750816d",
"text": "The development of local anesthesia in dentistry has marked the beginning of a new era in terms of pain control. Lignocaine is the most commonly used local anesthetic (LA) agent even though it has a vasodilative effect and needs to be combined with adrenaline. Centbucridine is a non-ester, non amide group LA and has not been comprehensively studied in the dental setting and the objective was to compare it to Lignocaine. This was a randomized study comparing the onset time, duration, depth and cardiovascular parameters between Centbucridine (0.5%) and Lignocaine (2%). The study was conducted in the dental outpatient department at the Government Dental College in India on patients attending for the extraction of lower molars. A total of 198 patients were included and there were no significant differences between the LAs except those who received Centbucridine reported a significantly longer duration of anesthesia compared to those who received Lignocaine. None of the patients reported any side effects. Centbucridine was well tolerated and its substantial duration of anesthesia could be attributed to its chemical compound. Centbucridine can be used for dental procedures and can confidently be used in patients who cannot tolerate Lignocaine or where adrenaline is contraindicated.",
"title": ""
}
] |
scidocsrr
|
b8e77079fe1a8aa50556500aa7a859af
|
Air interface design and ray tracing study for 5G millimeter wave communications
|
[
{
"docid": "292981db9a4f16e4ba7e02303cbee6c1",
"text": "The millimeter wave frequency spectrum offers unprecedented bandwidths for future broadband cellular networks. This paper presents the world's first empirical measurements for 28 GHz outdoor cellular propagation in New York City. Measurements were made in Manhattan for three different base station locations and 75 receiver locations over distances up to 500 meters. A 400 megachip-per-second channel sounder and directional horn antennas were used to measure propagation characteristics for future mm-wave cellular systems in urban environments. This paper presents measured path loss as a function of the transmitter - receiver separation distance, the angular distribution of received power using directional 24.5 dBi antennas, and power delay profiles observed in New York City. The measured data show that a large number of resolvable multipath components exist in both non line of sight and line of sight environments, with observed multipath excess delay spreads (20 dB) as great as 1388.4 ns and 753.5 ns, respectively. The widely diverse spatial channels observed at any particular location suggest that millimeter wave mobile communication systems with electrically steerable antennas could exploit resolvable multipath components to create viable links for cell sizes on the order of 200 m.",
"title": ""
},
{
"docid": "3585ee8052b23d2ea996dc8ad14cbb04",
"text": "The 5th generation (5G) of mobile radio access technologies is expected to become available for commercial launch around 2020. In this paper, we present our envisioned 5G system design optimized for small cell deployment taking a clean slate approach, i.e. removing most compatibility constraints with the previous generations of mobile radio access technologies. This paper mainly covers the physical layer aspects of the 5G concept design.",
"title": ""
}
] |
[
{
"docid": "8ad20ab4523e4cc617142a2de299dd4a",
"text": "OBJECTIVE\nTo determine the reliability and internal validity of the Hypospadias Objective Penile Evaluation (HOPE)-score, a newly developed scoring system assessing the cosmetic outcome in hypospadias.\n\n\nPATIENTS AND METHODS\nThe HOPE scoring system incorporates all surgically-correctable items: position of meatus, shape of meatus, shape of glans, shape of penile skin and penile axis. Objectivity was established with standardized photographs, anonymously coded patients, independent assessment by a panel, standards for a \"normal\" penile appearance, reference pictures and assessment of the degree of abnormality. A panel of 13 pediatric urologists completed 2 questionnaires, each consisting of 45 series of photographs, at an interval of at least 1 week. The inter-observer reliability, intra-observer reliability and internal validity were analyzed.\n\n\nRESULTS\nThe correlation coefficients for the HOPE-score were as follows: intra-observer reliability 0.817, inter-observer reliability 0.790, \"non-parametric\" internal validity 0.849 and \"parametric\" internal validity 0.842. These values reflect good reproducibility, sufficient agreement among observers and a valid measurement of differences and similarities in cosmetic appearance.\n\n\nCONCLUSIONS\nThe HOPE-score is the first scoring system that fulfills the criteria of a valid measurement tool: objectivity, reliability and validity. These favorable properties support its use as an objective outcome measure of the cosmetic result after hypospadias surgery.",
"title": ""
},
{
"docid": "993a81f3b0ea8bbf255209d240bbaa56",
"text": "Fingerprints give a lot of information about various factors related to an individual. The main characteristic is that they are unique from person to person in many ways. The size, shape, pattern are some of the uniqueness factors seen, so they are area of research and study. Forensic science makes use of different evidences obtained out of which fingerprints are the one to be considered. Fingerprints play a vital role in getting details through the exact identification. Gender identification can also be done easily and efficiently through the fingerprints. Forensic anthropology has gender identification from fingerprints as an important part in order to identify the gender of a criminal and minimize the list of suspects search. Identification of fingerprints is studied and researched a lot in past and is continuously increasing day by day. The gender identification from fingerprints is carried in both spatial domain and frequency domain by applying different techniques. This paper studies frequency domain methods applied for gender identification from fingerprints. A survey of techniques show that DWT is widely used and also in combination with SVD and PCA for gender identification from fingerprints. An overall comparison of frequency domain techniques mainly focusing on DWT and its combinations is presented in this paper with a proposed canny edge detector and Haar DWT based fingerprint gender classification technique.",
"title": ""
},
{
"docid": "2cbf690c565c6a201d4d8b6bda20b766",
"text": "Visualizations that can handle flat files, or simple table data are most often used in data mining. In this paper we survey most visualizations that can handle more than three dimensions and fit our definition of Table Visualizations. We define Table Visualizations and some additional terms needed for the Table Visualization descriptions. For a preliminary evaluation of some of these visualizations see “Benchmark Development for the Evaluation of Visualization for Data Mining” also included in this volume. Data Sets Used Most of the datasets for the visualization examples are either the automobile or the Iris flower dataset. Nearly every data mining package comes with at least one of these two datasets. The datasets are available UC Irvine Machine Learning Repository [Uci97]. • Iris Plant Flowers – from Fischer 1936, physical measurements from three types of flowers. • Car (Automobile) – data concerning cars manufactured in America, Japan and Europe from 1970 to 1982 Definition of Table Visualizations A two-dimensional table of data is defined by M rows and N columns. A visualization of this data is termed a Table Visualization. In our definition, we define the columns to be the dimensions or the variates (also called fields or attributes), and the rows to be the data records. The data records are sometimes called ndimensional points, or cases. For a more thorough discussion of the table model, see [Car99]. This very general definition only rules out some structured or hierarchical data. In the most general case, a visualization maps certain dimensions to certain features in the visualization. In geographical, scientific, and imaging visualizations, the spatial dimensions are normally assigned to the appropriate X, Y or Z spatial dimension. In a typical information visualization there is no inherent spatial dimension, but quite often the dimension mapped to height and width on the screen has a dominating effect. For example in a scatter plot of four-dimensional data one could map two features to the Xand Y-axis and the other two features to the color and shape of the plotted points. The dimensions assigned to the Xand Y-axis would dominate many aspects of analysis, such as clustering and outlier detection. Some Table Visualizations such as Parallel Coordinates, Survey Plots, or Radviz, treat all of the data dimensions equally. We call these Regular Table Visualizations (RTVs). The data in a Table Visualizations is discrete. The data can be represented by different types, such as integer, real, categorical, nominal, etc. In most visualizations all data is converted to a real type before rendering the visualization. We are concerned with issues that arise from the various types of data, and use the more general term “Table Visualization.” These visualizations can also be called “Array Visualizations” because all the data are of the same type. Table Visualization data is not hierarchical. It does not explicitly contain internal structure or links. The data has a finite size (N and M are bounded). The data can be viewed as M points having N dimensions or features. The order of the table can sometimes be considered another dimension, which is an ordered sequence of integer values from 1 to M. If the table represents points in some other sequence such as a time series, that information should be represented as another column.",
"title": ""
},
{
"docid": "fc522482dbbcdeaa06e3af9a2f82b377",
"text": "Background/Objectives:As rates of obesity have increased throughout much of the world, so too have bias and prejudice toward people with higher body weight (that is, weight bias). Despite considerable evidence of weight bias in the United States, little work has examined its extent and antecedents across different nations. The present study conducted a multinational examination of weight bias in four Western countries with comparable prevalence rates of adult overweight and obesity.Methods:Using comprehensive self-report measures with 2866 individuals in Canada, the United States, Iceland and Australia, the authors assessed (1) levels of explicit weight bias (using the Fat Phobia Scale and the Universal Measure of Bias) and multiple sociodemographic predictors (for example, sex, age, race/ethnicity and educational attainment) of weight-biased attitudes and (2) the extent to which weight-related variables, including participants’ own body weight, personal experiences with weight bias and causal attributions of obesity, play a role in expressions of weight bias in different countries.Results:The extent of weight bias was consistent across countries, and in each nation attributions of behavioral causes of obesity predicted stronger weight bias, as did beliefs that obesity is attributable to lack of willpower and personal responsibility. In addition, across all countries the magnitude of weight bias was stronger among men and among individuals without family or friends who had experienced this form of bias.Conclusions:These findings offer new insights and important implications regarding sociocultural factors that may fuel weight bias across different cultural contexts, and for targets of stigma-reduction efforts in different countries.",
"title": ""
},
{
"docid": "29c4156e966f2e177a71d604b1883204",
"text": "This paper discusses the use of factorization techniques in distributional semantic models. We focus on a method for redistributing the weight of latent variables, which has previously been shown to improve the performance of distributional semantic models. However, this result has not been replicated and remains poorly understood. We refine the method, and provide additional theoretical justification, as well as empirical results that demonstrate the viability of the proposed approach.",
"title": ""
},
{
"docid": "0b4f44030a922ba2c970c263583e8465",
"text": "BACKGROUND\nSmoking remains one of the few potentially preventable factors associated with low birthweight, preterm birth and perinatal death.\n\n\nOBJECTIVES\nTo assess the effects of smoking cessation programs implemented during pregnancy on the health of the fetus, infant, mother, and family.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Pregnancy and Childbirth Group trials register and the Cochrane Tobacco Addiction Group trials register (July 2003), MEDLINE (January 2002 to July 2003), EMBASE (January 2002 to July 2003), PsychLIT (January 2002 to July 2003), CINAHL (January 2002 to July 2003), and AUSTHEALTH (January 2002 to 2003). We contacted trial authors to locate additional unpublished data. We handsearched references of identified trials and recent obstetric journals.\n\n\nSELECTION CRITERIA\nRandomised and quasi-randomised trials of smoking cessation programs implemented during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nFour reviewers assessed trial quality and extracted data independently.\n\n\nMAIN RESULTS\nThis review included 64 trials. Fifty-one randomised controlled trials (20,931 women) and six cluster-randomised trials (over 7500 women) provided data on smoking cessation and/or perinatal outcomes. Despite substantial variation in the intensity of the intervention and the extent of reminders and reinforcement through pregnancy, there was an increase in the median intensity of both 'usual care' and interventions over time. There was a significant reduction in smoking in the intervention groups of the 48 trials included: (relative risk (RR) 0.94, 95% confidence interval (CI) 0.93 to 0.95), an absolute difference of six in 100 women continuing to smoke. The 36 trials with validated smoking cessation had a similar reduction (RR 0.94, 95% CI 0.92 to 0.95). Smoking cessation interventions reduced low birthweight (RR 0.81, 95% CI 0.70 to 0.94) and preterm birth (RR 0.84, 95% CI 0.72 to 0.98), and there was a 33 g (95% CI 11 g to 55 g) increase in mean birthweight. There were no statistically significant differences in very low birthweight, stillbirths, perinatal or neonatal mortality but these analyses had very limited power. One intervention strategy, rewards plus social support (two trials), resulted in a significantly greater smoking reduction than other strategies (RR 0.77, 95% CI 0.72 to 0.82). Five trials of smoking relapse prevention (over 800 women) showed no statistically significant reduction in relapse.\n\n\nREVIEWERS' CONCLUSIONS\nSmoking cessation programs in pregnancy reduce the proportion of women who continue to smoke, and reduce low birthweight and preterm birth. The pooled trials have inadequate power to detect reductions in perinatal mortality or very low birthweight.",
"title": ""
},
{
"docid": "0eff889c22f81264628ed21eec840011",
"text": "With the emergence of new technology-supported learning environments (e.g., MOOCs, mobile edu games), efficient and effective tutoring mechanisms remain relevant beyond traditional intelligent tutoring systems. This paper provides an approach to build and adapt a tutoring model by using both artificial neural networks and reinforcement learning. The underlying idea is that tutoring rules can be, firstly, learned by observing human tutors' behavior and, then, adapted, at run-time, by observing how each learner reacts within a learning environment at different states of the learning process. The Zone of Proximal Development has been adopted as the underlying theory to evaluate efficacy and efficiency of the learning experience.",
"title": ""
},
{
"docid": "86f82b7fc89fa5132f9784296a322e8c",
"text": "The Developmental Eye Movement Test (DEM) is a standardized test for evaluating saccadic eye movements in children. An adult version, the Adult Developmental Eye Movement Test (A-DEM), was recently developed for Spanish-speaking adults ages 14 to 68. No version yet exists for adults over the age of 68 and normative studies for English-speaking adults are absent. However, it is not clear if the single-digit format of the DEM or the double-digit A-DEM format should be used for further test develop-",
"title": ""
},
{
"docid": "4f059822d0da0ada039b11c1d65c7c32",
"text": "Lead time reduction is a key concern of many industrial buyers of capital facilities given current economic conditions. Supply chain initiatives in manufacturing settings have led owners to expect that dramatic reductions in lead time are possible in all phases of their business, including the delivery of capital materials. Further, narrowing product delivery windows and increasing pressure to be first-tomarket create significant external pressure to reduce lead time. In this paper, a case study is presented in which an owner entered the construction supply chain to procure and position key long-lead materials. The materials were held at a position in the supply chain selected to allow some flexibility for continued customization, but dramatic reduction in the time-to-site. Simulation was used as a tool to consider time-to-site tradeoffs for multiple inventory locations so as to better match the needs of the construction effort.",
"title": ""
},
{
"docid": "a0e0d3224cd73539e01f260d564109a7",
"text": "We are living in a world where there is an increasing need for evidence in organizations. Good digital evidence is becoming a business enabler. Very few organizations have the structures (management and infrastructure) in place to enable them to conduct cost effective, low-impact and fficient digital investigations [1]. Digital Forensics (DF) is a vehicle that organizations use to provide good and trustworthy evidence and processes. The current DF models concentrate on reactive investigations, with limited reference to DF readiness and live investigations. However, organizations use DF for other purposes for example compliance testing. The paper proposes that DF consists of three components: Pro-active (ProDF), Active (ActDF) and Re-active (ReDF). ProDF concentrates on DF readiness and the proactive responsible use of DF to demonstrate good governance and enhance governance structures. ActDF considers the gathering of live evidence during an ongoing attack with a limited live investigation element whilst ReDF deals with the traditional DF investigation. The paper discusses each component and the relationship between the components.",
"title": ""
},
{
"docid": "2321500a01873c1bc7cf3e0e0bdf6d41",
"text": "Advances in future computing to support emerging sensor applications are becoming more important as the need to better utilize computation and communication resources and make them energy efficient. As a result, it is predicted that intelligent devices and networks, including mobile wireless sensor networks (MWSN), will become the new interfaces to support future applications. In this paper, we propose a novel approach to minimize energy consumption of processing an application in MWSN while satisfying a certain completion time requirement. Specifically, by introducing the concept of cooperation, the logics and related computation tasks can be optimally partitioned, offloaded and executed with the help of peer sensor nodes, thus the proposed solution can be treated as a joint optimization of computing and networking resources. Moreover, for a network with multiple mobile wireless sensor nodes, we propose energy efficient cooperation node selection strategies to offer a tradeoff between fairness and energy consumption. Our performance analysis is supplemented by simulation results to show the significant energy saving of the proposed solution.",
"title": ""
},
{
"docid": "de1165d7ca962c5bbd141d571e50dbd3",
"text": "A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency based models. Results on natural images are compared with experimental eye tracking data revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts.",
"title": ""
},
{
"docid": "d0978cf13927b2693021a43da93a3bc9",
"text": "We explore the use of Large Amplitude Oscillatory Shear (LAOS) deformation to probe the dynamics of shear-banding in soft entangled materials, primarily wormlike micellar solutions which are prone to breakage and disentanglement under strong deformations. The state of stress in these complex fluids is described by a class of viscoelastic constitutive models which capture the key linear and nonlinear rheological features of wormlike micellar solutions, including the breakage and reforming of an entangled network. At a frequency-dependent critical strain, the imposed deformation field localizes to form a shear band, with a phase response that depends on the frequency and amplitude of the forcing. The different material responses are comPreprint submitted to Journal of Non-Newtonian Fluid Mechanics 19 June 2010 pactly represented in the form of Lissajous (phase plane) orbits and a corresponding strain-rate and frequency-dependent Pipkin diagram. Comparisons between the full network model predictions and those of a simpler, limiting case are presented.",
"title": ""
},
{
"docid": "db7426a1896920e0d2e3342d2df96401",
"text": "Nasal obstruction due to weakening of the nasal sidewall is a very common patient complaint. The conchal cartilage butterfly graft is a proven technique for the correction of nasal valve collapse. It allows for excellent functional results, and with experience and attention to technical detail, it may also provide excellent cosmetic results. While this procedure is most useful for restoring form and function in cases of secondary rhinoplasty following the reduction of nasal support structures, we have found it to be a very powerful and satisfying technique in primary rhinoplasty as well. This article aims to describe the butterfly graft, discuss its history, and detail the technical considerations which we have found useful.",
"title": ""
},
{
"docid": "ec0d1addabab76d9c2bd044f0bfe3153",
"text": "Much of scientific progress stems from previously published findings, but searching through the vast sea of scientific publications is difficult. We often rely on metrics of scholarly authority to find the prominent authors but these authority indices do not differentiate authority based on research topics. We present Latent Topical-Authority Indexing (LTAI) for jointly modeling the topics, citations, and topical authority in a corpus of academic papers. Compared to previous models, LTAI differs in two main aspects. First, it explicitly models the generative process of the citations, rather than treating the citations as given. Second, it models each author’s influence on citations of a paper based on the topics of the cited papers, as well as the citing papers. We fit LTAI into four academic corpora: CORA, Arxiv Physics, PNAS, and Citeseer. We compare the performance of LTAI against various baselines, starting with the latent Dirichlet allocation, to the more advanced models including author-link topic model and dynamic author citation topic model. The results show that LTAI achieves improved accuracy over other similar models when predicting words, citations and authors of publications.",
"title": ""
},
{
"docid": "fa99f24d38858b5951c7af587194f4e3",
"text": "Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.",
"title": ""
},
{
"docid": "4cf670f937921d4c5eec7e477c126eb9",
"text": "This paper presents particle swarm optimization based on learning from winner particle. (PSO-WS). Instead of considering gbest and pbest particle for position update, each particle considers its distance from immediate winner to update its position. Only winner particle follow general velocity and position update equation. If this strategy performs well for the particle, then that particle updates its position based on this strategy, otherwise its position is replaced by its immediate winner particle’s position. Dimension dependant swarm size is used for better exploration. Proposed method is compared with CSO and CCPSO2, which are available to solve large scale optimization problems. Statistical results show that proposed method performs well for separable as well as non separable problems.",
"title": ""
},
{
"docid": "e67b9b48507dcabae92debdb9df9cb08",
"text": "This paper presents an annotation scheme for events that negatively or positively affect entities (benefactive/malefactive events) and for the attitude of the writer toward their agents and objects. Work on opinion and sentiment tends to focus on explicit expressions of opinions. However, many attitudes are conveyed implicitly, and benefactive/malefactive events are important for inferring implicit attitudes. We describe an annotation scheme and give the results of an inter-annotator agreement study. The annotated corpus is available online.",
"title": ""
},
{
"docid": "251138d40df58395d42f66ff288685fc",
"text": "Recent ground-breaking works have shown that deep neural networks can be trained end-to-end to regress dense disparity maps directly from image pairs. Computer generated imagery is deployed to gather the large data corpus required to train such networks, an additional fine-tuning allowing to adapt the model to work well also on real and possibly diverse environments. Yet, besides a few public datasets such as Kitti, the ground-truth needed to adapt the network to a new scenario is hardly available in practice. In this paper we propose a novel unsupervised adaptation approach that enables to fine-tune a deep learning stereo model without any ground-truth information. We rely on off-the-shelf stereo algorithms together with state-of-the-art confidence measures, the latter able to ascertain upon correctness of the measurements yielded by former. Thus, we train the network based on a novel loss-function that penalizes predictions disagreeing with the highly confident disparities provided by the algorithm and enforces a smoothness constraint. Experiments on popular datasets (KITTI 2012, KITTI 2015 and Middlebury 2014) and other challenging test images demonstrate the effectiveness of our proposal.",
"title": ""
},
{
"docid": "883182582b2b62694e725e323e3eb88c",
"text": "With increasing use of mobile devices, photo sharing services are experiencing greater popularity. Aside from providing storage, photo sharing services enable bandwidth-efficient downloads to mobile devices by performing server-side image transformations (resizing, cropping). On the flip side, photo sharing services have raised privacy concerns such as leakage of photos to unauthorized viewers and the use of algorithmic recognition technologies by providers. To address these concerns, we propose a privacy-preserving photo encoding algorithm that extracts and encrypts a small, but significant, component of the photo, while preserving the remainder in a public, standards-compatible, part. These two components can be separately stored. This technique significantly reduces the accuracy of automated detection and recognition on the public part, while preserving the ability of the provider to perform server-side transformations to conserve download bandwidth usage. Our prototype privacy-preserving photo sharing system, P3, works with Facebook, and can be extended to other services as well. P3 requires no changes to existing services or mobile application software, and adds minimal photo storage overhead.",
"title": ""
}
] |
scidocsrr
|
29138495be0fcf49833b85c6b3ba3b1a
|
Government-Driven Participation and Collective Intelligence: A Case of the Government 3.0 Initiative in Korea
|
[
{
"docid": "0f208f26191386dd5c868fa3cc7c7b31",
"text": "This paper revisits the data–information–knowledge–wisdom (DIKW) hierarchy by examining the articulation of the hierarchy in a number of widely read textbooks, and analysing their statements about the nature of data, information, knowledge, and wisdom. The hierarchy referred to variously as the ‘Knowledge Hierarchy’, the ‘Information Hierarchy’ and the ‘Knowledge Pyramid’ is one of the fundamental, widely recognized and ‘taken-for-granted’ models in the information and knowledge literatures. It is often quoted, or used implicitly, in definitions of data, information and knowledge in the information management, information systems and knowledge management literatures, but there has been limited direct discussion of the hierarchy. After revisiting Ackoff’s original articulation of the hierarchy, definitions of data, information, knowledge and wisdom as articulated in recent textbooks in information systems and knowledge management are reviewed and assessed, in pursuit of a consensus on definitions and transformation processes. This process brings to the surface the extent of agreement and dissent in relation to these definitions, and provides a basis for a discussion as to whether these articulations present an adequate distinction between data, information, and knowledge. Typically information is defined in terms of data, knowledge in terms of information, and wisdom in terms of knowledge, but there is less consensus in the description of the processes that transform elements lower in the hierarchy into those above them, leading to a lack of definitional clarity. In addition, there is limited reference to wisdom in these texts.",
"title": ""
}
] |
[
{
"docid": "19c439bd0a7e9b5287ad56b9321dd081",
"text": "Recommendations of products to customers are proved to boost sales, increase customer satisfaction and improve user experience, making recommender systems an important tool for retail businesses. With recent technological advancements in AmI and Ubiquitous Computing, the benefits of recommender systems can be enjoyed not only in e-commerce, but in the physical store scenario as well. However, developing effective context-aware recommender systems by non-expert practitioners is not an easy task due to the complexity of building the necessary data models and selecting and configuring recommendation algorithms. In this paper we apply the Model Driven Development paradigm on the physical commerce recommendation domain by defining a UbiCARS Domain Specific Modelling Language, a modelling editor and a system, that aim to reduce complexity, abstract the technical details and expedite the development and application of State-of-the-Art recommender systems in ubiquitous environments (physical retail stores), as well as to enable practitioners to utilize additional data resulting from ubiquitous user-product interaction in the recommendation process to improve recommendation accuracy.",
"title": ""
},
{
"docid": "125353c682f076f7ad4f75b08b97280b",
"text": "This paper describes a novel conformal surface wave (CSW) launcher that can excite electromagnetic surface waves along unshielded power line cables nonintrusively. This CSW launcher can detect open circuit faults on power cables. Unlike conventional horn-type launchers, this CSW launcher is small, lightweight, and cost effective, and can be placed easily on a power cable. For a nonintrusive open fault detection, the error is <; 5% when the cable length is <; 10 m, which is comparable with other direct-connect fault-finding techniques. For a cable length of 15.14 m, 7.6% error is noted. Besides cable fault detection, the potential applications of the proposed launcher include broadband power line communication and high-frequency power transmission.",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "8ee9ae8afd88a761d9db6128f736bbea",
"text": "Semantic relatedness measures quantify the degree in which some words or concepts are related, considering not only similarity but any possible semantic relationship among them. Relatedness computation is of great interest in different areas, such as Natural Language Processing, Information Retrieval, or the Semantic Web. Different methods have been proposed in the past; however, current relatedness measures lack some desirable properties for a new generation of Semantic Web applications: maximum coverage, domain independence, and universality. In this paper, we explore the use of a semantic relatedness measure between words, that uses the Web as knowledge source. This measure exploits the information about frequencies of use provided by existing search engines. Furthermore, taking this measure as basis, we define a new semantic relatedness measure among ontology terms. The proposed measure fulfils the above mentioned desirable properties to be used on the Semantic Web. We have tested extensively this semantic measure to show that it correlates well with human judgment, and helps solving some particular tasks, as word sense disambiguation or ontology matching.",
"title": ""
},
{
"docid": "bc8780078bef1e7c602e16dcf3ccf0bc",
"text": "In this paper, we deal with the problem of authentication and tamper-proofing of text documents that can be distributed in electronic or printed forms. We advocate the combination of robust text hashing and text data-hiding technologies as an efficient solution to this problem. First, we consider the problem of text data-hiding in the scope of the Gel'fand-Pinsker data-hiding framework. For illustration, two modern text data-hiding methods, namely color index modulation (CIM) and location index modulation (LIM), are explained. Second, we study two approaches to robust text hashing that are well suited for the considered problem. In particular, both approaches are compatible with CIM and LIM. The first approach makes use of optical character recognition (OCR) and a classical cryptographic message authentication code (MAC). The second approach is new and can be used in some scenarios where OCR does not produce consistent results. The experimental work compares both approaches and shows their robustness against typical intentional/unintentional document distortions including electronic format conversion, printing, scanning, [...] VILLAN SEBASTIAN, Renato Fisher, et al. Tamper-proofing of Electronic and Printed Text Documents via Robust Hashing and Data-Hiding. In: Proceedings of SPIE-IS&T Electronic Imaging 2007, Security, Steganography, and Watermarking of Multimedia",
"title": ""
},
{
"docid": "0d669a684c2c65afef96438f88a9a84d",
"text": "STUDY OBJECTIVE\nTo describe the daily routine application of a new telemonitoring system in a large population of cardiac device recipients.\n\n\nMETHODS\nData transmitted daily and automatically by a remote, wireless Home Monitoring system (HM) were analyzed. The average time gained in the detection of events using HM versus standard practice and the impact of HM on physician workload were examined. The mean interval between device interrogations was used to compare the rates of follow-up visits versus that recommended in guidelines.\n\n\nRESULTS\n3,004,763 transmissions were made by 11,624 recipients of pacemakers (n = 4,631), defibrillators (ICD; n = 6,548), and combined ICD + cardiac resynchronization therapy (CRT-D) systems (n = 445) worldwide. The duration of monitoring/patient ranged from 1 to 49 months, representing 10,057 years. The vast majority (86%) of events were disease-related. The mean interval between last follow-up and occurrence of events notified by HM was 26 days, representing a putative temporal gain of 154 and 64 days in patients usually followed at 6- and 3-month intervals, respectively. The mean numbers of events per patient per month reported to the caregivers for the overall population was 0.6. On average, 47.6% of the patients were event-free. The mean interval between follow-up visits in patients with pacemakers, single-chamber ICDs, dual chamber ICDs, and CRT-D systems were 5.9 +/- 2.1, 3.6 +/- 3.3, 3.3 +/- 3.5, and 1.9 +/- 2.9 months, respectively.\n\n\nCONCLUSIONS\nThis broad clinical application of a new monitoring system strongly supports its capability to improve the care of cardiac device recipients, enhance their safety, and optimize the allocation of health resources.",
"title": ""
},
{
"docid": "4c48f4912937f429c80e52d66609f657",
"text": "Fetus in fetu is a rare developmental aberration, characterized by encasement of partially developed monozygotic, diamniotic, and monochorionic fetus into the normally developing host. A 4-month-old boy presented with abdominal mass. Radiological investigations gave the suspicion of fetus in fetu. At surgery a fetus enclosed in an amnion like membrane at upper retroperitoneal location was found and excised. The patient is doing well after the operation.",
"title": ""
},
{
"docid": "1352bb015fea7badea4e9d15f3af4030",
"text": "We present an overview of the QUT plant classification system submitted to LifeCLEF 2014. This system uses generic features extracted from a convolutional neural network previously used to perform general object classification. We examine the effectiveness of these features to perform plant classification when used in combination with an extremely randomised forest. Using this system, with minimal tuning, we obtained relatively good results with a score of 0.249 on the test set of LifeCLEF 2014.",
"title": ""
},
{
"docid": "44aa302a4fcb1793666b6aedc9aa5798",
"text": "Unite neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain's core algorithms.",
"title": ""
},
{
"docid": "e6c7713b9ff08aa01d98c9fec77ebf7a",
"text": "Everyday many users purchases product, book travel tickets, buy goods and services through web. Users also share their views about product, hotel, news, and topic on web in the form of reviews, blogs, comments etc. Many users read review information given on web to take decisions such as buying products, watching movie, going to restaurant etc. Reviews contain user's opinion about product, event or topic. It is difficult for web users to read and understand contents from large number of reviews. Important and useful information can be extracted from reviews through opinion mining and summarization process. We presented machine learning and Senti Word Net based method for opinion mining from hotel reviews and sentence relevance score based method for opinion summarization of hotel reviews. We obtained about 87% of accuracy of hotel review classification as positive or negative review by machine learning method. The classified and summarized hotel review information helps web users to understand review contents easily in a short time.",
"title": ""
},
{
"docid": "ad3437a7458e9152f3eb451e5c1af10f",
"text": "In recent years the number of academic publication increased strongly. As this information flood grows, it becomes more difficult for researchers to find relevant literature effectively. To overcome this difficulty, recommendation systems can be used which often utilize text similarity to find related documents. To improve those systems we add scientometrics as a ranking measure for popularity into these algorithms. In this paper we analyse whether and how scientometrics are useful in a recommender system.",
"title": ""
},
{
"docid": "3a549571e281b9b381a347fb49953d2c",
"text": "Social media has been gaining popularity among university students who use social media at higher rates than the general population. Students consequently spend a significant amount of time on social media, which may inevitably have an effect on their academic engagement. Subsequently, scholars have been intrigued to examine the impact of social media on students' academic engagement. Research that has directly explored the use of social media and its impact on students in tertiary institutions has revealed limited and mixed findings, particularly within a South African context; thus leaving a window of opportunity to further investigate the impact that social media has on students' academic engagement. This study therefore aims to investigate the use of social media in tertiary institutions, the impact that the use thereof has on students' academic engagement and to suggest effective ways of using social media in tertiary institutions to improve students' academic engagement from students' perspectives. This study used an interpretivist (inductive) approach in order to determine and comprehend student's perspectives and experiences towards the use of social media and the effects thereof on their academic engagement. A single case study design at Rhodes University was used to determine students' perceptions and data was collected using an online survey. The findings reveal that students use social media for both social and academic purposes. Students further perceived that social media has a positive impact on their academic engagement and suggest that using social media at tertiary level could be advantageous and could enhance students' academic engagement.",
"title": ""
},
{
"docid": "45f2599c6a256b55ee466c258ba93f48",
"text": "Functional turnover of transcription factor binding sites (TFBSs), such as whole-motif loss or gain, are common events during genome evolution. Conventional probabilistic phylogenetic shadowing methods model the evolution of genomes only at nucleotide level, and lack the ability to capture the evolutionary dynamics of functional turnover of aligned sequence entities. As a result, comparative genomic search of non-conserved motifs across evolutionarily related taxa remains a difficult challenge, especially in higher eukaryotes, where the cis-regulatory regions containing motifs can be long and divergent; existing methods rely heavily on specialized pattern-driven heuristic search or sampling algorithms, which can be difficult to generalize and hard to interpret based on phylogenetic principles. We propose a new method: Conditional Shadowing via Multi-resolution Evolutionary Trees, or CSMET, which uses a context-dependent probabilistic graphical model that allows aligned sites from different taxa in a multiple alignment to be modeled by either a background or an appropriate motif phylogeny conditioning on the functional specifications of each taxon. The functional specifications themselves are the output of a phylogeny which models the evolution not of individual nucleotides, but of the overall functionality (e.g., functional retention or loss) of the aligned sequence segments over lineages. Combining this method with a hidden Markov model that autocorrelates evolutionary rates on successive sites in the genome, CSMET offers a principled way to take into consideration lineage-specific evolution of TFBSs during motif detection, and a readily computable analytical form of the posterior distribution of motifs under TFBS turnover. On both simulated and real Drosophila cis-regulatory modules, CSMET outperforms other state-of-the-art comparative genomic motif finders.",
"title": ""
},
{
"docid": "5793cf03753f498a649c417e410c325e",
"text": "The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes.",
"title": ""
},
{
"docid": "7678ef732bf2a4b6a16f44e45b34ebe8",
"text": "big day: Citizens of Bitotia would once and for all establish which byte order was better, big-endian (B) or little-endian (L). Little Bit Timmy was a big supporter of little endian because that would give him the best position in the word. However, the population was split quite evenly between L and B, with a small minority of Bits who still remembered the single-tape Turing machine and preferred unary encoding (U), without any of this endianness business. Nonetheless, about half of the Bits preferred big-endian (B > L > U), and about half were the other way round (L > B > U). The voting rule was simple enough: You gave 2 points to your top choice, 1 point to your second-best, and 0 points to the worst. As Timmy was about to fall asleep, a sudden realization struck him: Why vote L > B > U and give the point to B, when U is not winning anyway? Immediately, Timmy knew: He would vote L > U > B! The next day brought some of the most sensational news in the whole history of Bitotia: Unary system had won! There were 104 votes L > U > B, 98 votes B > U > L, and 7 votes U > B > L. (Bitotia is a surprisingly small country.) U had won with 216 points, while B had 203 and L had 208. Apparently, Timmy was not the only one who found the trick. Naturally, Bitotians wanted to find out if they could avoid such situations in the future, but ... since they have to use unary now, we will have to help them!",
"title": ""
},
{
"docid": "76502e21fbb777a3442928897ef271f0",
"text": "Staphylococcus saprophyticus (S. saprophyticus) is a Gram-positive, coagulase-negative facultative bacterium belongs to Micrococcaceae family. It is a unique uropathogen associated with uncomplicated urinary tract infections (UTIs), especially cystitis in young women. Young women are very susceptible to colonize this organism in the urinary tracts and it is spread through sexual intercourse. S. saprophyticus is the second most common pathogen after Escherichia coli causing 10-20% of all UTIs in sexually active young women [13]. It contains the urease enzymes that hydrolyze the urea to produce ammonia. The urease activity is the main factor for UTIs infection. Apart from urease activity it has numerous transporter systems to adjust against change in pH, osmolarity, and concentration of urea in human urine [2]. After severe infections, it causes various complications such as native valve endocarditis [4], pyelonephritis, septicemia, [5], and nephrolithiasis [6]. About 150 million people are diagnosed with UTIs each year worldwide [7]. Several virulence factors includes due to the adherence to urothelial cells by release of lipoteichoic acid is a surface-associated adhesion amphiphile [8], a hemagglutinin that binds to fibronectin and hemagglutinates sheep erythrocytes [9], a hemolysin; and production of extracellular slime are responsible for resistance properties of S. saprophyticus [1]. Based on literature, S. saprophyticus strains are susceptible to vancomycin, rifampin, gentamicin and amoxicillin-clavulanic, while resistance to other antimicrobials such as erythromycin, clindamycin, fluoroquinolones, chloramphenicol, trimethoprim/sulfamethoxazole, oxacillin, and Abstract",
"title": ""
},
{
"docid": "d263d778738494e26e160d1c46874fff",
"text": "We introduce new online models for two important aspectsof modern financial markets: Volume Weighted Average Pricetrading and limit order books. We provide an extensivestudy of competitive algorithms in these models and relatethem to earlier online algorithms for stock trading.",
"title": ""
},
{
"docid": "93afa2c0b51a9d38e79e033762335df9",
"text": "With explosive growth of data volume and ever-increasing diversity of data modalities, cross-modal similarity search, which conducts nearest neighbor search across different modalities, has been attracting increasing interest. This paper presents a deep compact code learning solution for efficient cross-modal similarity search. Many recent studies have proven that quantization-based approaches perform generally better than hashing-based approaches on single-modal similarity search. In this paper, we propose a deep quantization approach, which is among the early attempts of leveraging deep neural networks into quantization-based cross-modal similarity search. Our approach, dubbed shared predictive deep quantization (SPDQ), explicitly formulates a shared subspace across different modalities and two private subspaces for individual modalities, and representations in the shared subspace and the private subspaces are learned simultaneously by embedding them to a reproducing kernel Hilbert space, where the mean embedding of different modality distributions can be explicitly compared. In addition, in the shared subspace, a quantizer is learned to produce the semantics preserving compact codes with the help of label alignment. Thanks to this novel network architecture in cooperation with supervised quantization training, SPDQ can preserve intramodal and intermodal similarities as much as possible and greatly reduce quantization error. Experiments on two popular benchmarks corroborate that our approach outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "ac156d7b3069ff62264bd704b7b8dfc9",
"text": "Rynes, Colbert, and Brown (2002) presented the following statement to 959 members of the Society for Human Resource Management (SHRM): “Surveys that directly ask employees how important pay is to them are likely to overestimate pay’s true importance in actual decisions” (p. 158). If our interpretation (and that of Rynes et al.) of the research literature is accurate, then the correct true-false answer to the above statement is “false.” In other words, people are more likely to underreport than to overreport the importance of pay as a motivational factor in most situations. Put another way, research suggests that pay is much more important in people’s actual choices and behaviors than it is in their self-reports of what motivates them, much like the cartoon viewers mentioned in the quote above. Yet, only 35% of the respondents in the Rynes et al. study answered in a way consistent with research findings (i.e., chose “false”). Our objective in this article is to show that employee surveys regarding the importance of various factors in motivation generally produce results that are inconsistent with studies of actual employee behavior. In particular, we focus on well-documented findings that employees tend to say that pay THE IMPORTANCE OF PAY IN EMPLOYEE MOTIVATION: DISCREPANCIES BETWEEN WHAT PEOPLE SAY AND WHAT THEY DO",
"title": ""
}
] |
scidocsrr
|
9d4af98fd6cb119ee82a55df751cfdc0
|
Which cultural values matter to business process management?: Results from a global Delphi study
|
[
{
"docid": "8bc221213edc863f8cba6f9f5d9a9be0",
"text": "Introduction The literature on business process re-engineering, benchmarking, continuous improvement and many other approaches of modern management is very abundant. One thing which is noticeable, however, is the growing usage of the word “process” in everyday business language. This suggests that most organizations adopt a process-based approach to managing their operations and that business process management (BPM) is a well-established concept. Is this really what takes place? On examination of the literature which refers to BPM, it soon emerged that the use of this concept is not really pervasive and what in fact has been acknowledged hitherto as prevalent business practice is no more than structural changes, the use of systems such as EN ISO 9000 and the management of individual projects.",
"title": ""
},
{
"docid": "ed832b653c96f18ec4337cdde95b03c9",
"text": "Purpose – Business process management (BPM) is a management approach that developed with a strong focus on the adoption of information technology (IT). However, there is a growing awareness that BPM requires a holistic organizational perspective especially since culture is often considered a key element in BPM practice. Therefore, the purpose of this paper is to provide an overview of existing research on culture in BPM. Design/methodology/approach – This literature review builds on major sources of the BPM community including the BPM Journal, the BPM Conference and central journal/conference databases. Forward and backward searches additionally deepen the analysis. Based on the results, a model of culture’s role in BPM is developed. Findings – The results of the literature review provide evidence that culture is still a widely under-researched topic in BPM. Furthermore, a framework on culture’s role in BPM is developed and areas for future research are revealed. Research limitations/implications – The analysis focuses on the concepts of BPM and culture. Thus, results do not include findings regarding related concepts such as business process reengineering or change management. Practical implications – The framework provides an orientation for managerial practice. It helps identify dimensions of possible conflicts based on cultural aspects. It thus aims at raising awareness regarding potentially neglected cultural factors. Originality/value – Although culture has been recognized in both theory and practice as an important aspect of BPM, researchers have not systematically engaged with the specifics of the culture phenomenon in BPM. This literature review provides a frame of reference that serves as a basis for future research regarding culture’s role in BPM.",
"title": ""
}
] |
[
{
"docid": "e96791f42b6c78e29a9e19610ff6baba",
"text": "Although the fourth industrial revolution is already in pro-gress and advances have been made in automating factories, completely automated facilities are still far in the future. Human work is still an important factor in many factories and warehouses, especially in the field of logistics. Manual processes are, therefore, often subject to optimization efforts. In order to aid these optimization efforts, methods like human activity recognition (HAR) became of increasing interest in industrial settings. In this work a novel deep neural network architecture for HAR is introduced. A convolutional neural network (CNN), which employs temporal convolutions, is applied to the sequential data of multiple intertial measurement units (IMUs). The network is designed to separately handle different sensor values and IMUs, joining the information step-by-step within the architecture. An evaluation is performed using data from the order picking process recorded in two different warehouses. The influence of different design choices in the network architecture, as well as pre- and post-processing, will be evaluated. Crucial steps for learning a good classification network for the task of HAR in a complex industrial setting will be shown. Ultimately, it can be shown that traditional approaches based on statistical features as well as recent CNN architectures are outperformed.",
"title": ""
},
{
"docid": "c04f67fd5cc7f2f95452046bb18c6cfa",
"text": "Bob is a free signal processing and machine learning toolbox originally developed by the Biometrics group at Idiap Research Institute, Switzerland. The toolbox is designed to meet the needs of researchers by reducing development time and efficiently processing data. Firstly, Bob provides a researcher-friendly Python environment for rapid development. Secondly, efficient processing of large amounts of multimedia data is provided by fast C++ implementations of identified bottlenecks. The Python environment is integrated seamlessly with the C++ library, which ensures the library is easy to use and extensible. Thirdly, Bob supports reproducible research through its integrated experimental protocols for several databases. Finally, a strong emphasis is placed on code clarity, documentation, and thorough unit testing. Bob is thus an attractive resource for researchers due to this unique combination of ease of use, efficiency, extensibility and transparency. Bob is an open-source library and an ongoing community effort.",
"title": ""
},
{
"docid": "7f0a2bcd162ce702ea2813a9cbb0b813",
"text": "BACKGROUND\nhCG is a term referring to 4 independent molecules, each produced by separate cells and each having completely separate functions. These are hCG produced by villous syncytiotrophoblast cells, hyperglycosylated hCG produced by cytotrophoblast cells, free beta-subunit made by multiple primary non-trophoblastic malignancies, and pituitary hCG made by the gonadotrope cells of the anterior pituitary.\n\n\nRESULTS AND DISCUSSION\nhCG has numerous functions. hCG promotes progesterone production by corpus luteal cells; promotes angiogenesis in uterine vasculature; promoted the fusion of cytotrophoblast cell and differentiation to make syncytiotrophoblast cells; causes the blockage of any immune or macrophage action by mother on foreign invading placental cells; causes uterine growth parallel to fetal growth; suppresses any myometrial contractions during the course of pregnancy; causes growth and differentiation of the umbilical cord; signals the endometrium about forthcoming implantation; acts on receptor in mother's brain causing hyperemesis gravidarum, and seemingly promotes growth of fetal organs during pregnancy. Hyperglycosylated hCG functions to promote growth of cytotrophoblast cells and invasion by these cells, as occurs in implantation of pregnancy, and growth and invasion by choriocarcinoma cells. hCG free beta-subunit is produced by numerous non-trophoblastic malignancies of different primaries. The detection of free beta-subunit in these malignancies is generally considered a sign of poor prognosis. The free beta-subunit blocks apoptosis in cancer cells and promotes the growth and malignancy of the cancer. Pituitary hCG is a sulfated variant of hCG produced at low levels during the menstrual cycle. Pituitary hCG seems to mimic luteinizing hormone actions during the menstrual cycle.",
"title": ""
},
{
"docid": "cb6e2fd0082e16549e02db6e2d7fbef7",
"text": "E-Health clouds are gaining increasing popularity by facilitating the storage and sharing of big data in healthcare. However, such an adoption also brings about a series of challenges, especially, how to ensure the security and privacy of highly sensitive health data. Among them, one of the major issues is authentication, which ensures that sensitive medical data in the cloud are not available to illegal users. Three-factor authentication combining password, smart card and biometrics perfectly matches this requirement by providing high security strength. Recently, Wu et al. proposed a three-factor authentication protocol based on elliptic curve cryptosystem which attempts to fulfill three-factor security and resist various existing attacks, providing many advantages over existing schemes. However, we first show that their scheme is susceptible to user impersonation attack in the registration phase. In addition, their scheme is also vulnerable to offline password guessing attack in the login and password change phase, under the condition that the mobile device is lost or stolen. Furthermore, it fails to provide user revocation when the mobile device is lost or stolen. To remedy these flaws, we put forward a robust three-factor authentication protocol, which not only guards various known attacks, but also provides more desired security properties. We demonstrate that our scheme provides mutual authentication using the Burrows–Abadi–Needham logic.",
"title": ""
},
{
"docid": "b7a3a7af3495d0a722040201f5fadd55",
"text": "During the last decade, biodegradable metallic stents have been developed and investigated as alternatives for the currently-used permanent cardiovascular stents. Degradable metallic materials could potentially replace corrosion-resistant metals currently used for stent application as it has been shown that the role of stenting is temporary and limited to a period of 6-12 months after implantation during which arterial remodeling and healing occur. Although corrosion is generally considered as a failure in metallurgy, the corrodibility of certain metals can be an advantage for their application as degradable implants. The candidate materials for such application should have mechanical properties ideally close to those of 316L stainless steel which is the gold standard material for stent application in order to provide mechanical support to diseased arteries. Non-toxicity of the metal itself and its degradation products is another requirement as the material is absorbed by blood and cells. Based on the mentioned requirements, iron-based and magnesium-based alloys have been the investigated candidates for biodegradable stents. This article reviews the recent developments in the design and evaluation of metallic materials for biodegradable stents. It also introduces the new metallurgical processes which could be applied for the production of metallic biodegradable stents and their effect on the properties of the produced metals.",
"title": ""
},
{
"docid": "c4bd2667b2e105219e6a117838dd870d",
"text": "Written contracts are a fundamental framework for commercial and cooperative transactions and relationships. Limited research has been published on the application of machine learning and natural language processing (NLP) to contracts. In this paper we report the classification of components of contract texts using machine learning and hand-coded methods. Authors studying a range of domains have found that combining machine learning and rule based approaches increases accuracy of machine learning. We find similar results which suggest the utility of considering leveraging hand coded classification rules for machine learning. We attained an average accuracy of 83.48% on a multiclass labelling task on 20 contracts combining machine learning and rule based approaches, increasing performance over machine learning alone.",
"title": ""
},
{
"docid": "0d27b687287ea23c1eb2bcff307af818",
"text": "To cite: Suchak T, Hussey J, Takhar M, et al. J Fam Plann Reprod Health Care Published Online First: [please include Day Month Year] doi:10.1136/jfprhc-2014101091 BACKGROUND UK figures estimate that in 1998 there were 3170 people over the age of 15 years assigned as male at birth who had presented with gender dysphoria. This figure is comparable to that found in the Netherlands where 2440 have presented; however, far fewer people actually undergo sex reassignment surgery. Recent statistics from the Netherlands indicate that about 1 in 12 000 natal males undergo sex-reassignment and about 1 in 34 000 natal females. Since April 2013, English gender identity services have been among the specialised services commissioned centrally by NHS England and this body is therefore responsible for commissioning transgender surgical services. The growth in the incidence of revealed gender dysphoria amongst both young and adult people has major implications for commissioners and providers of public services. The present annual requirement is 480 genital and gonadal male-to-female reassignment procedures. There are currently three units in the UK offering this surgery for National Health Service (NHS) patients. Prior to surgery trans women will have had extensive evaluation, including blood tests, advice on smoking, alcohol and obesity, and psychological/psychiatric evaluation. They usually begin to take female hormones after 3 months of transition, aiming to encourage development of breast buds and alter muscle and fat distribution. Some patients may elect at this stage to have breast surgery. Before genital surgery can be considered the patient must have demonstrated they have lived for 1 year full-time as a woman. Figure 1 shows a typical post-surgical result. A trans person who has lived exclusively in their identified gender for at least 2 years (as required by the Gender Recognition Act 2004) can apply for a gender recognition certificate (GRC). This is independent of whether gender reassignment surgery has taken place. Once a trans person has a GRC they can then obtain a new birth certificate. The trans person will also have new hospital records in a new name. It is good practice for health providers to take practical steps to ensure that gender reassignment is not casually visible in records or communicated without the informed consent of the user. Consent must always be sought (and documented) for all medical correspondence where the surgery or life before surgery when living as a different gender is mentioned (exceptions include an order of court and prevention or investigation of crime). 5 It is advisable to seek medico-legal advice before disclosing. Not all trans women opt to undergo vaginoplasty. Patients have free choice as to how much surgery they wish to undertake. Trans women often live a considerable distance from where their surgery was performed and as a result many elect to see their own general practitioner or local Sexual Health Clinic if they have postoperative problems. Fortunately reported complications following surgery are rare. Lawrence summarised 15 papers investigating 232 cases of vaginoplasty surgery; 13 reported rectal-vaginal fistula, 39 reported vaginal stenosis and 33 urethral stenosis; however, it is likely that there is significant under-reporting of complications. Here we present some examples of post-vaginoplasty problems presenting to a Sexual Health Service in the North East of England, and how they were managed.",
"title": ""
},
{
"docid": "0dffca7979e72f7bb4b0fd94b031a46f",
"text": "In collaborative filtering approaches, recommendations are inferred from user data. A large volume and a high data quality is essential for an accurate and precise recommender system. As consequence, companies are collecting large amounts of personal user data. Such data is often highly sensitive and ignoring users’ privacy concerns is no option. Companies address these concerns with several risk reduction strategies, but none of them is able to guarantee cryptographic secureness. To close that gap, the present paper proposes a novel recommender system using the advantages of blockchain-supported secure multiparty computation. A potential customer is able to allow a company to apply a recommendation algorithm without disclosing her personal data. Expected benefits are a reduction of fraud and misuse and a higher willingness to share personal data. An outlined experiment will compare users’ privacy-related behavior in the proposed recommender system with existent solutions.",
"title": ""
},
{
"docid": "3f9eb2e91e0adc0a58f5229141f826ee",
"text": "Box-office performance of a movie is mainly determined by the amount the movie collects in the opening weekend and Pre-Release hype is an important factor as far as estimating the openings of the movie are concerned. This can be estimated through user opinions expressed online on sites such as Twitter which is an online micro-blogging site with a user base running into millions. Each user is entitled to his own opinion which he expresses through his tweets. This paper suggests a novel way to mine and analyze the opinions expressed in these tweets with respect to a movie prior to its release, estimate the hype surrounding it and also predict the box-office openings of the movie.",
"title": ""
},
{
"docid": "b1dbdddadf2cfa72a5fb8e8f5d08b701",
"text": "To improve segmentation performance, a novel neural network architecture (termed DFCN-DCRF) is proposed, which combines an RGB-D fully convolutional neural network (DFCN) with a depth-sensitive fully-connected conditional random field (DCRF). First, a DFCN architecture which fuses depth information into the early layers and applies dilated convolution for later contextual reasoning is designed. Then, a depth-sensitive fully-connected conditional random field (DCRF) is proposed and combined with the previous DFCN to refine the preliminary result. Comparative experiments show that the proposed DFCN-DCRF achieves competitive performance compared with state-of-the-art methods.",
"title": ""
},
{
"docid": "bd6ba64d14c8234e5ec2d07762a1165f",
"text": "Since their introduction in the early years of this century, Variable Stiffness Actuators (VSA) witnessed a sustain ed growth of interest in the research community, as shown by the growing number of publications. While many consider VSA very interesting for applications, one of the factors hindering their further diffusion is the relatively new conceptual structure of this technology. In choosing a VSA for his/her application, the educated practitioner, used to choosing robot actuators based on standardized procedures and uniformly presented data, would be confronted with an inhomogeneous and rather disorganized mass of information coming mostly from scientific publications. In this paper, the authors consider how the design procedures and data presentation of a generic VS actuator could be organized so as to minimize the engineer’s effort in choosing the actuator type and size that would best fit the application needs. The reader is led through the list of the most important parameters that will determine the ultimate performance of his/her VSA robot, and influence both the mechanical design and the controller shape. This set of parameters extends the description of a traditional electric actuator with quantities describing the capability of the VSA to change its output stiffness. As an instrument for the end-user, the VSA datasheet is intended to be a compact, self-contained description of an actuator that summarizes all the salient characteristics that the user must be aware of when choosing a device for his/her application. At the end some example of compiled VSA datasheets are reported, as well as a few examples of actuator selection procedures.",
"title": ""
},
{
"docid": "837803a140450d594d5693a06ba3be4b",
"text": "Allocation of very scarce medical interventions such as organs and vaccines is a persistent ethical challenge. We evaluate eight simple allocation principles that can be classified into four categories: treating people equally, favouring the worst-off, maximising total benefits, and promoting and rewarding social usefulness. No single principle is sufficient to incorporate all morally relevant considerations and therefore individual principles must be combined into multiprinciple allocation systems. We evaluate three systems: the United Network for Organ Sharing points systems, quality-adjusted life-years, and disability-adjusted life-years. We recommend an alternative system-the complete lives system-which prioritises younger people who have not yet lived a complete life, and also incorporates prognosis, save the most lives, lottery, and instrumental value principles.",
"title": ""
},
{
"docid": "f84003f63714442d4f4514eaefd5c985",
"text": "Continuously tracking students during a whole semester plays a vital role to enable a teacher to grasp their learning situation, attitude and motivation. It also helps to give correct assessment and useful feedback to them. To this end, we ask students to write their comments just after each lesson, because student comments re ect their learning attitude towards the lesson, understanding of course contents, and di culties of learning. In this paper, we propose a new method to predict nal student grades. The method employs Word2Vec and Arti cial Neural Network (ANN) to predict student grade in each lesson based on their comments freely written just after the lesson. In addition, we apply a window function to the predicted results obtained in consecutive lessons to keep track of each student's learning situation. The experiment results show that the prediction correct rate reached 80% by considering the predicted student grades from six consecutive lessons, and a nal rate became 94% from all 15 lessons. The results illustrate that our proposed method continuously tracked student learning situation and improved prediction performance of nal student grades as the lessons go by.",
"title": ""
},
{
"docid": "ef239b2f40847b9670b3c4b08630535f",
"text": "When a page of a book is scanned or photocopied, textual noise (extraneous symbols from the neighboring page) and/or non-textual noise (black borders, speckles, ...) appear along the border of the document. Existing document analysis methods can handle non-textual noise reasonably well, whereas textual noise still presents a major issue for document analysis systems. Textual noise may result in undesired text in optical character recognition (OCR) output that needs to be removed afterwards. Existing document cleanup methods try to explicitly detect and remove marginal noise. This paper presents a new perspective for document image cleanup by detecting the page frame of the document. The goal of page frame detection is to find the actual page contents area, ignoring marginal noise along the page border. We use a geometric matching algorithm to find the optimal page frame of structured documents (journal articles, books, magazines) by exploiting their text alignment property. We evaluate the algorithm on the UW-III database. The results show that the error rates are below 4% each of the performance measures used. Further tests were run on a dataset of magazine pages and on a set of camera captured document images. To demonstrate the benefits of using page frame detection in practical applications, we choose OCR and layout-based document image retrieval as sample applications. Experiments using a commercial OCR system show that by removing characters outside the computed page frame, the OCR error rate is reduced from 4.3 to 1.7% on the UW-III dataset. The use of page frame detection in layout-based document image retrieval application decreases the retrieval error rates by 30%.",
"title": ""
},
{
"docid": "e72a782ccb76ac8f681a3a0c40c21d61",
"text": "Integer factorization is a well studied topic. Parts of the cryptography we use each day rely on the fact that this problem is di cult. One method one can use for factorizing a large composite number is the Quadratic Sieve algorithm. This method is among the best known today. We present a parallel implementation of the Quadratic Sieve using the Message Passing Interface (MPI). We also discuss the performance of this implementation which shows that this approach is a good one.",
"title": ""
},
{
"docid": "5e946f2a15b5d9c663d85cd12bc3d9fc",
"text": "Individual differences in young children's understanding of others' feelings and in their ability to explain human action in terms of beliefs, and the earlier correlates of these differences, were studied with 50 children observed at home with mother and sibling at 33 months, then tested at 40 months on affective-labeling, perspective-taking, and false-belief tasks. Individual differences in social understanding were marked; a third of the children offered explanations of actions in terms of false belief, though few predicted actions on the basis of beliefs. These differences were associated with participation in family discourse about feelings and causality 7 months earlier, verbal fluency of mother and child, and cooperative interaction with the sibling. Differences in understanding feelings were also associated with the discourse measures, the quality of mother-sibling interaction, SES, and gender, with girls more successful than boys. The results support the view that discourse about the social world may in part mediate the key conceptual advances reflected in the social cognition tasks; interaction between child and sibling and the relationships between other family members are also implicated in the growth of social understanding.",
"title": ""
},
{
"docid": "ceaa0ceb14034ecc2840425a627a3c71",
"text": "In this article, we present a novel class of robots that are able to move by growing and building their own structure. In particular, taking inspiration by the growing abilities of plant roots, we designed and developed a plant root-like robot that creates its body through an additive manufacturing process. Each robotic root includes a tubular body, a growing head, and a sensorized tip that commands the robot behaviors. The growing head is a customized three-dimensional (3D) printer-like system that builds the tubular body of the root in the format of circular layers by fusing and depositing a thermoplastic material (i.e., polylactic acid [PLA] filament) at the tip level, thus obtaining movement by growing. A differential deposition of the material can create an asymmetry that results in curvature of the built structure, providing the possibility of root bending to follow or escape from a stimulus or to reach a desired point in space. Taking advantage of these characteristics, the robotic roots are able to move inside a medium by growing their body. In this article, we describe the design of the growing robot together with the modeling of the deposition process and the description of the implemented growing movement strategy. Experiments were performed in air and in an artificial medium to verify the functionalities and to evaluate the robot performance. The results showed that the robotic root, with a diameter of 50 mm, grows with a speed of up to 4 mm/min, overcoming medium pressure of up to 37 kPa (i.e., it is able to lift up to 6 kg) and bending with a minimum radius of 100 mm.",
"title": ""
},
{
"docid": "e8d0a238b6e39b8b8a57954b0fa0ce2e",
"text": "As a preprocessing step, image segmentation, which can do partition of an image into different regions, plays an important role in computer vision, objects recognition, tracking and image analysis. Till today, there are a large number of methods present that can extract the required foreground from the background. However, most of these methods are solely based on boundary or regional information which has limited the segmentation result to a large extent. Since the graph cut based segmentation method was proposed, it has obtained a lot of attention because this method utilizes both boundary and regional information. Furthermore, graph cut based method is efficient and accepted world-wide since it can achieve globally optimal result for the energy function. It is not only promising to specific image with known information but also effective to the natural image without any pre-known information. For the segmentation of N-dimensional image, graph cut based methods are also applicable. Due to the advantages of graph cut, various methods have been proposed. In this paper, the main aim is to help researcher to easily understand the graph cut based segmentation approach. We also classify this method into three categories. They are speed up-based graph cut, interactive-based graph cut and shape prior-based graph cut. This paper will be helpful to those who want to apply graph cut method into their research.",
"title": ""
},
{
"docid": "11dbf03a7aa6186ea1f64a582d55c03f",
"text": "This paper presents a new unsupervised learning approach with stacked autoencoder (SAE) for Arabic handwritten digits categorization. Recently, Arabic handwritten digits recognition has been an important area due to its applications in several fields. This work is focusing on the recognition part of handwritten Arabic digits recognition that face several challenges, including the unlimited variation in human handwriting and the large public databases. Arabic digits contains ten numbers that were descended from the Indian digits system. Stacked autoencoder (SAE) tested and trained the MADBase database (Arabic handwritten digits images) that contain 10000 testing images and 60000 training images. We show that the use of SAE leads to significant improvements across different machine-learning classification algorithms. SAE is giving an average accuracy of 98.5%.",
"title": ""
},
{
"docid": "25216b9a56bca7f8503aa6b2e5b9d3a9",
"text": "The study at hand is the first of its kind that aimed to provide a comprehensive analysis of the determinants of foreign direct investment (FDI) in Mongolia by analyzing their short-run, long-run, and Granger causal relationships. In doing so, we methodically used a series of econometric methods to ensure reliable and robust estimation results that included the augmented Dickey-Fuller and Phillips-Perron unit root tests, the most recently advanced autoregressive distributed lag (ARDL) bounds testing approach to cointegration, fully modified ordinary least squares, and the Granger causality test within the vector error-correction model (VECM) framework. Our findings revealed domestic market size and human capital to have a U-shaped relationship with FDI inflows, with an initial positive impact on FDI in the short-run, which then turns negative in the long-run. Macroeconomic instability was found to deter FDI inflows in the long-run. In terms of the impact of trade on FDI, imports were found to have a complementary relationship with FDI; while exports and FDI were found to be substitutes in the short-run. Financial development was also found to induce a deterring effect on FDI inflows in both the shortand long-run; thereby also revealing a substitutive relationship between the two. Infrastructure level was not found to have a significant impact on FDI on any conventional level, in either the shortor long-run. Furthermore, the results have exhibited significant Granger causal relationships between the variables; thereby, ultimately stressing the significance of policy choice in not only attracting FDI inflows, but also in translating their positive spill-over benefits into long-run economic growth. © 2017 AESS Publications. All Rights Reserved.",
"title": ""
}
] |
scidocsrr
|
f2be6f6f08cbf168403ebedc0c3a7152
|
Blinkering surveillance: Enabling video privacy through computer vision
|
[
{
"docid": "34627572a319dfdfcea7277d2650d0f5",
"text": "Visual speech information from the speaker’s mouth region has been successfully shown to improve noise robustness of automatic speech recognizers, thus promising to extend their usability in the human computer interface. In this paper, we review the main components of audio-visual automatic speech recognition and present novel contributions in two main areas: First, the visual front end design, based on a cascade of linear image transforms of an appropriate video region-of-interest, and subsequently, audio-visual speech integration. On the latter topic, we discuss new work on feature and decision fusion combination, the modeling of audio-visual speech asynchrony, and incorporating modality reliability estimates to the bimodal recognition process. We also briefly touch upon the issue of audio-visual adaptation. We apply our algorithms to three multi-subject bimodal databases, ranging from smallto largevocabulary recognition tasks, recorded in both visually controlled and challenging environments. Our experiments demonstrate that the visual modality improves automatic speech recognition over all conditions and data considered, though less so for visually challenging environments and large vocabulary tasks.",
"title": ""
}
] |
[
{
"docid": "9af22f6a1bbb4cbb13508b654e5fd7a5",
"text": "We present a 3-D correspondence method to match the geometric extremities of two shapes which are partially isometric. We consider the most general setting of the isometric partial shape correspondence problem, in which shapes to be matched may have multiple common parts at arbitrary scales as well as parts that are not similar. Our rank-and-vote-and-combine algorithm identifies and ranks potentially correct matches by exploring the space of all possible partial maps between coarsely sampled extremities. The qualified top-ranked matchings are then subjected to a more detailed analysis at a denser resolution and assigned with confidence values that accumulate into a vote matrix. A minimum weight perfect matching algorithm is finally iterated to combine the accumulated votes into an optimal (partial) mapping between shape extremities, which can further be extended to a denser map. We test the performance of our method on several data sets and benchmarks in comparison with state of the art.",
"title": ""
},
{
"docid": "ca683d498e690198ca433050c3d91fd0",
"text": "Cross-graph Relational Learning (CGRL) refers to the problem of predicting the strengths or labels of multi-relational tuples of heterogeneous object types, through the joint inference over multiple graphs which specify the internal connections among each type of objects. CGRL is an open challenge in machine learning due to the daunting number of all possible tuples to deal with when the numbers of nodes in multiple graphs are large, and because the labeled training instances are extremely sparse as typical. Existing methods such as tensor factorization or tensor-kernel machines do not work well because of the lack of convex formulation for the optimization of CGRL models, the poor scalability of the algorithms in handling combinatorial numbers of tuples, and/or the non-transductive nature of the learning methods which limits their ability to leverage unlabeled data in training. This paper proposes a novel framework which formulates CGRL as a convex optimization problem, enables transductive learning using both labeled and unlabeled tuples, and offers a scalable algorithm that guarantees the optimal solution and enjoys a linear time complexity with respect to the sizes of input graphs. In our experiments with a subset of DBLP publication records and an Enzyme multi-source dataset, the proposed method successfully scaled to the large cross-graph inference problem, and outperformed other representative approaches significantly.",
"title": ""
},
{
"docid": "e31749775e64d5407a090f5fd0a275cf",
"text": "This paper focuses on presenting a human-in-the-loop reinforcement learning theory framework and foreseeing its application to driving decision making. Currently, the technologies in human-vehicle collaborative driving face great challenges, and do not consider the Human-in-the-loop learning framework and Driving Decision-Maker optimization under the complex road conditions. The main content of this paper aimed at presenting a study framework as follows: (1) the basic theory and model of the hybrid reinforcement learning; (2) hybrid reinforcement learning algorithm for human drivers; (3)hybrid reinforcement learning algorithm for autopilot; (4) Driving decision-maker verification platform. This paper aims at setting up the human-machine hybrid reinforcement learning theory framework and foreseeing its solutions to two kinds of typical difficulties about human-machine collaborative Driving Decision-Maker, which provides the basic theory and key technologies for the future of intelligent driving. The paper serves as a potential guideline for the study of human-in-the-loop reinforcement learning.",
"title": ""
},
{
"docid": "618ef5ddb544548639b80a495897284a",
"text": "UNLABELLED\nCoccydynia is pain in the coccygeal region, and usually treated conservatively. Extracorporeal shock wave therapy (ESWT) was incorporated as non-invasive treatment of many musculoskeletal conditions. However, the effects of ESWT on coccydynia are less discussed. The purpose of this study is to evaluate the effects of ESWT on the outcomes of coccydynia. Patients were allocated to ESWT (n = 20) or physical modality (SIT) group (n = 21) randomly, and received total treatment duration of 4 weeks. The visual analog scale (VAS), Oswestry disability index (ODI), and self-reported satisfaction score were used to assess treatment effects. The VAS and ODI scores were significantly decreased after treatment in both groups, and the decrease in the VAS score was significantly greater in the ESWT group. The mean proportional changes in the ODI scores were greater in the ESWT group than in the SIT group, but the between-group difference was not statistically significant. The patients in the ESWT group had significantly higher subjective satisfaction scores than SIT group. We concluded that ESWT is more effective and satisfactory in reducing discomfort and disability caused by coccydynia than the use of physical modalities. Thus, ESWT is recommended as an alternative treatment option for patients with coccydynia.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02313324.",
"title": ""
},
{
"docid": "331fbc1b16722669ff83321c7e7fe9b8",
"text": "Coupled-inductor interleaved boost converters are under development for high-current, high-power applications ranging from automotive to distributed generation. The operating modes of these coupled-inductor converters can be complex. This paper presents an investigation of the various continuous-current (CCM) and discontinuous-current (DCM) modes of operation of the coupled-inductor interleaved two-phase boost converter. The various CCM and DCM of the converter are identified together with their submodes of operation. The standard discrete-inductor interleaved two-phase boost can be seen as a subset of the coupled-inductor converter family with zero mutual coupling between the phases. The steady-state operating characteristics, equations and waveforms for the many CCM and DCM will be presented for the converter family. Mode maps will be developed to map the converter operation across the modes over the operating range. Experimental validation is presented from a 3.6 kW laboratory prototype. Design considerations and experimental results are presented for a 72 kW prototype.",
"title": ""
},
{
"docid": "70d69b3933393decd4bdb1e4e21fe07e",
"text": "The population living in cities is continuously increasing worldwide. In developing countries, this phenomenon is exacerbated by poverty, leading to tremendous problems of employment, immigration from the rural areas, transportation, food supply and environment protection. Simultaneously with the growth of cities, a new type of agriculture has emerged; namely, urban agriculture. Here, the main functions of urban agriculture are described: its social roles, the economic functions as part of its multi-functionality, the constraints, and the risks for human consumption and the living environment. We highlight the following major points. (1) Agricultural activity will continue to be a strong contributor to urban households. Currently, differences between rural and urban livelihood households appear to be decreasing. (2) Urban agricultural production includes aquaculture, livestock and plants. The commonest crops are perishable leafy vegetables, particularly in South-east Asia and Africa. These vegetable industries have short marketing chains with lower price differentials between farmers and consumers than longer chains. The city food supply function is one of the various roles and objectives of urban agriculture that leads to increasing dialogue between urban dwellers, city authorities and farmers. (3) One of the farmers’ issues is to produce high quality products in highly populated areas and within a polluted environment. Agricultural production in cities faces the following challenges: access to the main agricultural inputs, fertilizers and water; production in a polluted environment; and limitation of its negative impact on the environment. Urban agriculture can reuse city wastes, but this will not be enough to achieve high yields, and there is still a risk of producing unsafe products. These are the main challenges for urban agriculture in keeping its multi-functional activities such as cleansing, opening up the urban space, and producing fresh and nutritious food.",
"title": ""
},
{
"docid": "ae0474dc41871a28cc3b62dfd672ad0a",
"text": "Recent success in deep learning has generated immense interest among practitioners and students, inspiring many to learn about this new technology. While visual and interactive approaches have been successfully developed to help people more easily learn deep learning, most existing tools focus on simpler models. In this work, we present GAN Lab, the first interactive visualization tool designed for non-experts to learn and experiment with Generative Adversarial Networks (GANs), a popular class of complex deep learning models. With GAN Lab, users can interactively train generative models and visualize the dynamic training process's intermediate results. GAN Lab tightly integrates an model overview graph that summarizes GAN's structure, and a layered distributions view that helps users interpret the interplay between submodels. GAN Lab introduces new interactive experimentation features for learning complex deep learning models, such as step-by-step training at multiple levels of abstraction for understanding intricate training dynamics. Implemented using TensorFlow.js, GAN Lab is accessible to anyone via modern web browsers, without the need for installation or specialized hardware, overcoming a major practical challenge in deploying interactive tools for deep learning.",
"title": ""
},
{
"docid": "f91ba4b37a2a9d80e5db5ace34e6e50a",
"text": "Bearing currents and shaft voltages of an induction motor are measured under hardand soft-switching inverter excitation. The objective is to investigate whether the soft-switching technologies can provide solutions for reducing the bearing currents and shaft voltages. Two of the prevailing soft-switching inverters, the resonant dc-link inverter and the quasi-resonant dc-link inverter, are tested. The results are compared with those obtained using the conventional hard-switching inverter. To ensure objective comparisons between the softand hard-switching inverters, all inverters were configured identically and drove the same induction motor under the same operating conditions when the test data were collected. An insightful explanation of the experimental results is also provided to help understand the mechanisms of bearing currents and shaft voltages produced in the inverter drives. Consistency between the bearing current theory and the experimental results has been demonstrated. Conclusions are then drawn regarding the effectiveness of the soft-switching technologies as a solution to the bearing current and shaft voltage problems.",
"title": ""
},
{
"docid": "7a8619e3adf03c8b00a3e830c3f1170b",
"text": "We present a robot-pose-registration algorithm, which is entirely based on large planar-surface patches extracted from point clouds sampled from a three-dimensional (3-D) sensor. This approach offers an alternative to the traditional point-to-point iterative-closest-point (ICP) algorithm, its point-to-plane variant, as well as newer grid-based algorithms, such as the 3-D normal distribution transform (NDT). The simpler case of known plane correspondences is tackled first by deriving expressions for least-squares pose estimation considering plane-parameter uncertainty computed during plane extraction. Closed-form expressions for covariances are also derived. To round-off the solution, we present a new algorithm, which is called minimally uncertain maximal consensus (MUMC), to determine the unknown plane correspondences by maximizing geometric consistency by minimizing the uncertainty volume in configuration space. Experimental results from three 3-D sensors, viz., Swiss-Ranger, University of South Florida Odetics Laser Detection and Ranging, and an actuated SICK S300, are given. The first two have low fields of view (FOV) and moderate ranges, while the third has a much bigger FOV and range. Experimental results show that this approach is not only more robust than point- or grid-based approaches in plane-rich environments, but it is also faster, requires significantly less memory, and offers a less-cluttered planar-patches-based visualization.",
"title": ""
},
{
"docid": "48a75e28154d630da14fd3dba09d0af8",
"text": "Over the years, artificial intelligence (AI) is spreading its roots in different areas by utilizing the concept of making the computers learn and handle complex tasks that previously require substantial laborious tasks by human beings. With better accuracy and speed, AI is helping lawyers to streamline work processing. New legal AI software tools like Catalyst, Ross intelligence, and Matlab along with natural language processing provide effective quarrel resolution, better legal clearness, and superior admittance to justice and fresh challenges to conventional law firms providing legal services using leveraged cohort correlate model. This paper discusses current applications of legal AI and suggests deep learning and machine learning techniques that can be applied in future to simplify the cumbersome legal tasks.",
"title": ""
},
{
"docid": "56255e2f0f1fb76267d0a1002763e573",
"text": "Recent technology surveys identified flash light detection and ranging technology as the best choice for the navigation and landing of spacecrafts in extraplanetary missions, working from single-point altimeter to range-imaging camera mode. Among all available technologies for a 2D array of direct time-of-flight (DTOF) pixels, CMOS single-photon avalanche diodes (SPADs) represent the ideal candidate due to their rugged design and electronics integration. However, state-of-the-art SPAD imagers are not designed for operation over a wide variety of scenarios, including variable background light, very long to short range, or fast relative movement.",
"title": ""
},
{
"docid": "0d7ce42011c48232189c791e71c289f5",
"text": "RECENT WORK in virtue ethics, particularly sustained reflection on specific virtues, makes it possible to argue that the classical list of cardinal virtues (prudence, justice, temperance, and fortitude) is inadequate, and that we need to articulate the cardinal virtues more correctly. With that end in view, the first section of this article describes the challenges of espousing cardinal virtues today, the second considers the inadequacy of the classical listing of cardinal virtues, and the third makes a proposal. Since virtues, no matter how general, should always relate to concrete living, the article is framed by a case.",
"title": ""
},
{
"docid": "29b4a9f3b3da3172e319d11b8f938a7b",
"text": "Since social media have become very popular during the past few years, researchers have been focusing on being able to automatically process and extract sentiments information from large volume of social media data. This paper contributes to the topic, by focusing on sentiment analysis for Chinese social media. In this paper, we propose to rely on Part of Speech (POS) tags in order to extract unigrams and bigrams features. Bigrams are generated according to the grammatical relation between consecutive words. With those features, we have shown that focusing on a specific topic allows to reach higher estimation accuracy.",
"title": ""
},
{
"docid": "8f01d2e70ec5da655418a6864e94b932",
"text": "Cloud storage services allow users to outsource their data to cloud servers to save on local data storage costs. However, unlike using local storage devices, users don't physically own the data stored on cloud servers and can't be certain about the integrity of the cloud-stored data. Many public verification schemes have been proposed to allow a third-party auditor to verify the integrity of outsourced data. However, most of these schemes assume that the auditors are honest and reliable, so are vulnerable to malicious auditors. Moreover, in most of these schemes, an external adversary could modify the outsourced data and tamper with the interaction messages between the cloud server and the auditor, thus invalidating the outsourced data integrity verification. This article proposes an efficient and secure public verification of data integrity scheme that protects against external adversaries and malicious auditors. The proposed scheme adopts a random masking technique to protect against external adversaries, and requires users to audit auditors' behaviors to prevent malicious auditors from fabricating verification results. It uses Bitcoin to construct unbiased challenge messages to thwart collusion between malicious auditors and cloud servers. A performance analysis demonstrates that the proposed scheme is efficient in terms of the user's auditing overhead.",
"title": ""
},
{
"docid": "8b02f168b2021287848b413ffb297636",
"text": "BACKGROUND\nIdentification of patient at risk of subglottic infantile hemangioma (IH) is challenging because subglottic IH can grow fast and cause airway obstruction with a fatal course.\n\n\nOBJECTIVE\nTo refine the cutaneous IH pattern at risk of subglottic IH.\n\n\nMETHODS\nProspective and retrospective review of patients with cutaneous IH involving the beard area. IHs were classified in the bilateral pattern group (BH) or in the unilateral pattern group (UH). Infantile hemangioma topography, subtype (telangiectatic or tuberous), ear, nose and throat (ENT) manifestations and subglottic involvement were recorded.\n\n\nRESULTS\nThirty-one patients (21 BH and 10 UH) were included during a 20-year span. Nineteen patients (16 BH and 3 UH) had subglottic hemangioma. BH and UH group overlap on the median pattern (tongue, gum, lips, chin and neck). Median pattern, particularly the neck area and telangiectatic subtype of IH were significantly associated with subglottic involvement.\n\n\nCONCLUSION\nPatients presenting with telangiectatic beard IH localized on the median area need early ENT exploration. They should be treated before respiratory symptoms occur.",
"title": ""
},
{
"docid": "77e385b7e7305ec0553c980f22bfa3b4",
"text": "Two and three-dimensional simulations of experiments on atmosphere mixing and stratification in a nuclear power plant containment were performed with the code CFX4.4, with the inclusion of simple models for steam condensation. The purpose was to assess the applicability of the approach to simulate the behaviour of light gases in containments at accident conditions. The comparisons of experimental and simulated results show that, despite a tendency to simulate more intensive mixing, the proposed approach may replicate the non-homogeneous structure of the atmosphere reasonably well. Introduction One of the nuclear reactor safety issues that have lately been considered using Computational Fluid Dynamics (CFD) codes is the problem of predicting the eventual non-homogeneous concentration of light flammable gas (hydrogen) in the containment of a nuclear power plant (NPP) at accident conditions. During a hypothetical severe accident in a Pressurized Water Reactor NPP, hydrogen could be generated due to Zircaloy oxidation in the reactor core. Eventual high concentrations of hydrogen in some parts of the containment could cause hydrogen ignition and combustion, which could threaten the containment integrity. The purpose of theoretical investigations is to predict hydrogen behaviour at accident conditions prior to combustion. In the past few years, many investigations about the possible application of CFD codes for this purpose have been started [1-5]. CFD codes solve the transport mass, momentum and energy equations when a fluid system is modelled using local instantaneous description. Some codes, which also use local instantaneous description, have been developed specifically for nuclear applications [68]. Although many CFD codes are multi-purpose, some of them still lack some models, which are necessary for adequate simulations of containment phenomena. In particular, the modelling of steam condensation often has to be incorporated in the codes by the users. These theoretical investigations are complemented by adequate experiments. Recently, the following novel integral experimental facilities have been set up in Europe: TOSQAN [9,10], at the Institut de Radioprotection et de Sureté Nucléaire (IRSN) in Saclay (France), MISTRA [9,11], at the",
"title": ""
},
{
"docid": "edd25b7f6c031161afc81cc6013ba58a",
"text": "This paper presents a method for airport detection from optical satellite images using deep convolutional neural networks (CNN). To achieve fast detection with high accuracy, region proposal by searching adjacent parallel line segments has been applied to select candidate fields with potential runways. These proposals were further classified by a CNN model transfer learned from AlexNet to identify the final airport regions from other confusing classes. The proposed method has been tested on a remote sensing dataset consisting of 120 airports. Experiments showed that the proposed method could recognize airports from a large complex area in seconds with an accuracy of 84.1%.",
"title": ""
},
{
"docid": "9a033f2ba2dc67f7beb2a86c13f91793",
"text": "Plasticity is an intrinsic property of the human brain and represents evolution's invention to enable the nervous system to escape the restrictions of its own genome and thus adapt to environmental pressures, physiologic changes, and experiences. Dynamic shifts in the strength of preexisting connections across distributed neural networks, changes in task-related cortico-cortical and cortico-subcortical coherence and modifications of the mapping between behavior and neural activity take place in response to changes in afferent input or efferent demand. Such rapid, ongoing changes may be followed by the establishment of new connections through dendritic growth and arborization. However, they harbor the danger that the evolving pattern of neural activation may in itself lead to abnormal behavior. Plasticity is the mechanism for development and learning, as much as a cause of pathology. The challenge we face is to learn enough about the mechanisms of plasticity to modulate them to achieve the best behavioral outcome for a given subject.",
"title": ""
},
{
"docid": "935a576ef026c6891f9ba77ac6dc2507",
"text": "This is Part II of two papers evaluating the feasibility of providing all energy for all purposes (electric power, transportation, and heating/cooling), everywhere in the world, from wind, water, and the sun (WWS). In Part I, we described the prominent renewable energy plans that have been proposed and discussed the characteristics of WWS energy systems, the global demand for and availability of WWS energy, quantities and areas required for WWS infrastructure, and supplies of critical materials. Here, we discuss methods of addressing the variability of WWS energy to ensure that power supply reliably matches demand (including interconnecting geographically dispersed resources, using hydroelectricity, using demand-response management, storing electric power on site, over-sizing peak generation capacity and producing hydrogen with the excess, storing electric power in vehicle batteries, and forecasting weather to project energy supplies), the economics of WWS generation and transmission, the economics of WWS use in transportation, and policy measures needed to enhance the viability of a WWS system. We find that the cost of energy in a 100% WWS will be similar to the cost today. We conclude that barriers to a 100% conversion to WWS power worldwide are primarily social and political, not technological or even economic. & 2010 Elsevier Ltd. All rights reserved. 1. Variability and reliability in a 100% WWS energy system in all regions of the world One of the major concerns with the use of energy supplies, such as wind, solar, and wave power, which produce variable output is whether such supplies can provide reliable sources of electric power second-by-second, daily, seasonally, and yearly. A new WWS energy infrastructure must be able to provide energy on demand at least as reliably as does the current infrastructure (e.g., De Carolis and Keith, 2005). In general, any electricity system must be able to respond to changes in demand over seconds, minutes, hours, seasons, and years, and must be able to accommodate unanticipated changes in the availability of generation. With the current system, electricity-system operators use ‘‘automatic generation control’’ (AGC) (or frequency regulation) to respond to variation on the order of seconds to a few minutes; spinning reserves to respond to variation on the order of minutes to an hour; and peak-power generation to respond to hourly variation (De Carolis and Keith, 2005; Kempton and Tomic, 2005a; Electric Power Research Institute, 1997). AGC and spinning reserves have very low ll rights reserved. Delucchi), cost, typically less than 10% of the total cost of electricity (Kempton and Tomic, 2005a), and are likely to remain this inexpensive even with large amounts of wind power (EnerNex, 2010; DeCesaro et al., 2009), but peak-power generation can be very expensive. The main challenge for the current electricity system is that electric power demand varies during the day and during the year, while most supply (coal, nuclear, and geothermal) is constant during the day, which means that there is a difference to be made up by peakand gap-filling resources such as natural gas and hydropower. Another challenge to the current system is that extreme events and unplanned maintenance can shut down plants unexpectedly. 
For example, unplanned maintenance can shut down coal plants, extreme heat waves can cause cooling water to warm sufficiently to shut down nuclear plants, supply disruptions can curtail the availability of natural gas, and droughts can reduce the availability of hydroelectricity. A WWS electricity system offers new challenges but also new opportunities with respect to reliably meeting energy demands. On the positive side, WWS technologies generally suffer less downtime than do current electric power technologies. For example, the average coal plant in the US from 2000 to 2004 was down 6.5% of the year for unscheduled maintenance and 6.0% of the year for scheduled maintenance (North American Electric Reliability Corporation, 2009a), but modern wind turbines have a down time of only 0–2% over land and 0–5% over the ocean (Dong Energy et al.).",
"title": ""
},
{
"docid": "da61b8bd6c1951b109399629f47dad16",
"text": "In this paper, we introduce an approach for distributed nonlinear control of multiple hovercraft-type underactuated vehicles with bounded and unidirectional inputs. First, a bounded nonlinear controller is given for stabilization and tracking of a single vehicle, using a cascade backstepping method. Then, this controller is combined with a distributed gradient-based control for multi-vehicle formation stabilization using formation potential functions previously constructed. The vehicles are used in the Caltech Multi-Vehicle Wireless Testbed (MVWT). We provide simulation and experimental results for stabilization and tracking of a single vehicle, and a simulation of stabilization of a six-vehicle formation, demonstrating that in all cases the control bounds and the control objective are satisfied.",
"title": ""
}
] |
scidocsrr
|
779c3634f393d5491ceae500bad29ff1
|
Text recognition using deep BLSTM networks
|
[
{
"docid": "744d409ba86a8a60fafb5c5602f6d0f0",
"text": "In this paper, we apply a context-sensitive technique for multimodal emotion recognition based on feature-level fusion of acoustic and visual cues. We use bidirectional Long ShortTerm Memory (BLSTM) networks which, unlike most other emotion recognition approaches, exploit long-range contextual information for modeling the evolution of emotion within a conversation. We focus on recognizing dimensional emotional labels, which enables us to classify both prototypical and nonprototypical emotional expressions contained in a large audiovisual database. Subject-independent experiments on various classification tasks reveal that the BLSTM network approach generally prevails over standard classification techniques such as Hidden Markov Models or Support Vector Machines, and achieves F1-measures of the order of 72 %, 65 %, and 55 % for the discrimination of three clusters in emotional space and the distinction between three levels of valence and activation, respectively.",
"title": ""
}
] |
[
{
"docid": "a75a8a6a149adf80f6ec65dea2b0ec0d",
"text": "This research addresses the role of lyrics in the music emotion recognition process. Our approach is based on several state of the art features complemented by novel stylistic, structural and semantic features. To evaluate our approach, we created a ground truth dataset containing 180 song lyrics, according to Russell's emotion model. We conduct four types of experiments: regression and classification by quadrant, arousal and valence categories. Comparing to the state of the art features (ngrams - baseline), adding other features, including novel features, improved the F-measure from 69.9, 82.7 and 85.6 percent to 80.1, 88.3 and 90 percent, respectively for the three classification experiments. To study the relation between features and emotions (quadrants) we performed experiments to identify the best features that allow to describe and discriminate each quadrant. To further validate these experiments, we built a validation set comprising 771 lyrics extracted from the AllMusic platform, having achieved 73.6 percent F-measure in the classification by quadrants. We also conducted experiments to identify interpretable rules that show the relation between features and emotions and the relation among features. Regarding regression, results show that, comparing to similar studies for audio, we achieve a similar performance for arousal and a much better performance for valence.",
"title": ""
},
{
"docid": "6ee0c9832d82d6ada59025d1c7bb540e",
"text": "Advances in computational linguistics and discourse processing have made it possible to automate many language- and text-processing mechanisms. We have developed a computer tool called Coh-Metrix, which analyzes texts on over 200 measures of cohesion, language, and readability. Its modules use lexicons, part-of-speech classifiers, syntactic parsers, templates, corpora, latent semantic analysis, and other components that are widely used in computational linguistics. After the user enters an English text, CohMetrix returns measures requested by the user. In addition, a facility allows the user to store the results of these analyses in data files (such as Text, Excel, and SPSS). Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.",
"title": ""
},
{
"docid": "0801ef431c6e4dab6158029262a3bf82",
"text": "A hallmark of human intelligence is the ability to ask rich, creative, and revealing questions. Here we introduce a cognitive model capable of constructing humanlike questions. Our approach treats questions as formal programs that, when executed on the state of the world, output an answer. The model specifies a probability distribution over a complex, compositional space of programs, favoring concise programs that help the agent learn in the current context. We evaluate our approach by modeling the types of open-ended questions generated by humans who were attempting to learn about an ambiguous situation in a game. We find that our model predicts what questions people will ask, and can creatively produce novel questions that were not present in the training set. In addition, we compare a number of model variants, finding that both question informativeness and complexity are important for producing human-like questions.",
"title": ""
},
{
"docid": "f36826993d5a9f99fc3554b5f542780e",
"text": "In this research, an adaptive timely traffic light is proposed as solution for congestion in typical area in Indonesia. Makassar City, particularly in the most complex junction (fly over, Pettarani, Reformasi highway and Urip S.) is observed for months using static cameras. The condition is mapped into fuzzy logic to have a better time transition of traffic light as opposed to the current conventional traffic light system. In preliminary result, fuzzy logic shows significant number of potential reduced in congestion. Each traffic line has 20-30% less congestion with future implementation of the proposed system.",
"title": ""
},
{
"docid": "a3d1f4a35a8de5278d7295b4ae21451c",
"text": "How can one build a distributed framework that allows efficient deployment of a wide spectrum of modern advanced machine learning (ML) programs for industrial-scale problems using Big Models (100s of billions of parameters) on Big Data (terabytes or petabytes)- Contemporary parallelization strategies employ fine-grained operations and scheduling beyond the classic bulk-synchronous processing paradigm popularized by MapReduce, or even specialized operators relying on graphical representations of ML programs. The variety of approaches tends to pull systems and algorithms design in different directions, and it remains difficult to find a universal platform applicable to a wide range of different ML programs at scale. We propose a general-purpose framework that systematically addresses data- and model-parallel challenges in large-scale ML, by leveraging several fundamental properties underlying ML programs that make them different from conventional operation-centric programs: error tolerance, dynamic structure, and nonuniform convergence; all stem from the optimization-centric nature shared in ML programs' mathematical definitions, and the iterative-convergent behavior of their algorithmic solutions. These properties present unique opportunities for an integrative system design, built on bounded-latency network synchronization and dynamic load-balancing scheduling, which is efficient, programmable, and enjoys provable correctness guarantees. We demonstrate how such a design in light of ML-first principles leads to significant performance improvements versus well-known implementations of several ML programs, allowing them to run in much less time and at considerably larger model sizes, on modestly-sized computer clusters.",
"title": ""
},
{
"docid": "ef5769145c4c1ebe06af0c8b5f67e70e",
"text": "Structures of biological macromolecules determined by transmission cryoelectron microscopy (cryo-TEM) and three-dimensional image reconstruction are often displayed as surface-shaded representations with depth cueing along the viewed direction (Z cueing). Depth cueing to indicate distance from the center of virus particles (radial-depth cueing, or R cueing) has also been used. We have found that a style of R cueing in which color is applied in smooth or discontinuous gradients using the IRIS Explorer software is an informative technique for displaying the structures of virus particles solved by cryo-TEM and image reconstruction. To develop and test these methods, we used existing cryo-TEM reconstructions of mammalian reovirus particles. The newly applied visualization techniques allowed us to discern several new structural features, including sites in the inner capsid through which the viral mRNAs may be extruded after they are synthesized by the reovirus transcriptase complexes. To demonstrate the broad utility of the methods, we also applied them to cryo-TEM reconstructions of human rhinovirus, native and swollen forms of cowpea chlorotic mottle virus, truncated core of pyruvate dehydrogenase complex from Saccharomyces cerevisiae, and flagellar filament of Salmonella typhimurium. We conclude that R cueing with color gradients is a useful tool for displaying virus particles and other macromolecules analyzed by cryo-TEM and image reconstruction.",
"title": ""
},
{
"docid": "dfb83ad16854797137e34a5c7cb110ae",
"text": "The increasing computing requirements for GPUs (Graphics Processing Units) have favoured the design and marketing of commodity devices that nowadays can also be used to accelerate general purpose computing. Therefore, future high performance clusters intended for HPC (High Performance Computing) will likely include such devices. However, high-end GPU-based accelerators used in HPC feature a considerable energy consumption, so that attaching a GPU to every node of a cluster has a strong impact on its overall power consumption. In this paper we detail a framework that enables remote GPU acceleration in HPC clusters, thus allowing a reduction in the number of accelerators installed in the cluster. This leads to energy, acquisition, maintenance, and space savings.",
"title": ""
},
{
"docid": "2c0cc129d7b12c1b61a149e46af23a4b",
"text": "This paper presents our experiences of introducing in a senior level microprocessor course the latest touch sensing technologies, especially programming capacitive touch sensing devices and touchscreen. The emphasis is on the teaching practice details, including the enhanced course contents, outcomes and lecture and lab organization. By utilizing the software package provided by Atmel, students are taught to efficiently build MCU-based embedded applications which control various touch sensing devices. This work makes use of the 32-bit ARM Cortex-M4 microprocessor to control complex touch sensing devices (i.e., touch keys, touch slider and touchscreen). The Atmel SAM 4S-EK2 board is chosen as the main development board employed for practicing the touch devices programming. Multiple capstone projects have been developed, for example adaptive touch-based servo motor control, and calculator and games on the touchscreen. Our primary experiences indicate that the project-based learning approach with the utilization of the selected microcontroller board and software package is efficient and practical for teaching advanced touch sensing techniques. Students have shown the great interest and the capability in adopting touch devices into their senior design projects to improve human machine interface.",
"title": ""
},
{
"docid": "5ccf0b3f871f8362fccd4dbd35a05555",
"text": "Recent evidence suggests a positive impact of bilingualism on cognition, including later onset of dementia. However, monolinguals and bilinguals might have different baseline cognitive ability. We present the first study examining the effect of bilingualism on later-life cognition controlling for childhood intelligence. We studied 853 participants, first tested in 1947 (age = 11 years), and retested in 2008-2010. Bilinguals performed significantly better than predicted from their baseline cognitive abilities, with strongest effects on general intelligence and reading. Our results suggest a positive effect of bilingualism on later-life cognition, including in those who acquired their second language in adulthood.",
"title": ""
},
{
"docid": "c0762517ebbae00ab5ee1291460c164c",
"text": "This paper compares various topologies for 6.6kW on-board charger (OBC) to find out suitable topology. In general, OBC consists of 2-stage; power factor correction (PFC) stage and DC-DC converter stage. Conventional boost PFC, interleaved boost PFC, and semi bridgeless PFC are considered as PFC circuit, and full-bridge converter, phase shift full-bridge converter, and series resonant converter are taken into account for DC-DC converter circuit. The design process of each topology is presented. Then, loss analysis is implemented in order to calculate the efficiency of each topology for PFC circuit and DC-DC converter circuit. In addition, the volume of magnetic components and number of semi-conductor elements are considered. Based on these results, topology selection guideline according to the system specification of 6.6kW OBC is proposed.",
"title": ""
},
{
"docid": "9e933363229c21caccc3842417dd6d60",
"text": "A novel double-layered vertically stacked substrate integrated waveguide leaky-wave antenna (SIW LWA) is presented. An array of vias on the narrow wall produces leakage through excitation of TE10 fast-wave mode of the waveguide. Attenuation and phase constants of the leaky mode are controlled independently to obtain desired pattern in the elevation. In the azimuth, top and bottom layers radiate independently, producing symmetrically located beams on both sides of broadside. A new near-field analysis of single LWA is performed to determine wavenumbers and as a way to anticipate radiation characteristics of the dual layer antenna. In addition to frequency beam steering in the elevation plane, this novel topology also offers flexibility for multispot illumination of the azimuth plane with flat-topped beams at every ${\\varphi }$ -cut through excitation of each layer separately or both antennas simultaneously. It is shown that the proposed antenna solution is a qualified candidate for 5G base station antenna (BSA) applications due to its capability of interference mitigation and latency reduction. Moreover, from the point of view of highly reliable connectivity, users can enjoy seamless mobility through the provided spatial diversity. A 15-GHz prototype has been fabricated and tested. Measured results are in good agreement with those of simulations.",
"title": ""
},
{
"docid": "b610e9bef08ef2c133a02e887b89b196",
"text": "We propose to use question answering (QA) data from Web forums to train chatbots from scratch, i.e., without dialog training data. First, we extract pairs of question and answer sentences from the typically much longer texts of questions and answers in a forum. We then use these shorter texts to train seq2seq models in a more efficient way. We further improve the parameter optimization using a new model selection strategy based on QA measures. Finally, we propose to use extrinsic evaluation with respect to a QA task as an automatic evaluation method for chatbots. The evaluation shows that the model achieves a MAP of 63.5% on the extrinsic task. Moreover, it can answer correctly 49.5% of the questions when they are similar to questions asked in the forum, and 47.3% of the questions when they are more conversational in style.",
"title": ""
},
{
"docid": "fd184f271a487aba70025218fd8c76e4",
"text": "BACKGROUND\nIron deficiency anaemia is common in patients with chronic kidney disease, and intravenous iron is the preferred treatment for those on haemodialysis. The aim of this trial was to compare the efficacy and safety of iron isomaltoside 1000 (Monofer®) with iron sucrose (Venofer®) in haemodialysis patients.\n\n\nMETHODS\nThis was an open-label, randomized, multicentre, non-inferiority trial conducted in 351 haemodialysis subjects randomized 2:1 to either iron isomaltoside 1000 (Group A) or iron sucrose (Group B). Subjects in Group A were equally divided into A1 (500 mg single bolus injection) and A2 (500 mg split dose). Group B were also treated with 500 mg split dose. The primary end point was the proportion of subjects with haemoglobin (Hb) in the target range 9.5-12.5 g/dL at 6 weeks. Secondary outcome measures included haematology parameters and safety parameters.\n\n\nRESULTS\nA total of 351 subjects were enrolled. Both treatments showed similar efficacy with >82% of subjects with Hb in the target range (non-inferiority, P = 0.01). Similar results were found when comparing subgroups A1 and A2 with Group B. No statistical significant change in Hb concentration was found between any of the groups. There was a significant increase in ferritin from baseline to Weeks 1, 2 and 4 in Group A compared with Group B (Weeks 1 and 2: P < 0.001; Week 4: P = 0.002). There was a significant higher increase in reticulocyte count in Group A compared with Group B at Week 1 (P < 0.001). The frequency, type and severity of adverse events were similar.\n\n\nCONCLUSIONS\nIron isomaltoside 1000 and iron sucrose have comparative efficacy in maintaining Hb concentrations in haemodialysis subjects and both preparations were well tolerated with a similar short-term safety profile.",
"title": ""
},
{
"docid": "e433da4c3128a48c4c2fad39ddb55ac1",
"text": "Vector field design on surfaces is necessary for many graphics applications: example-based texture synthesis, nonphotorealistic rendering, and fluid simulation. For these applications, singularities contained in the input vector field often cause visual artifacts. In this article, we present a vector field design system that allows the user to create a wide variety of vector fields with control over vector field topology, such as the number and location of singularities. Our system combines basis vector fields to make an initial vector field that meets user specifications.The initial vector field often contains unwanted singularities. Such singularities cannot always be eliminated due to the Poincaré-Hopf index theorem. To reduce the visual artifacts caused by these singularities, our system allows the user to move a singularity to a more favorable location or to cancel a pair of singularities. These operations offer topological guarantees for the vector field in that they only affect user-specified singularities. We develop efficient implementations of these operations based on Conley index theory. Our system also provides other editing operations so that the user may change the topological and geometric characteristics of the vector field.To create continuous vector fields on curved surfaces represented as meshes, we make use of the ideas of geodesic polar maps and parallel transport to interpolate vector values defined at the vertices of the mesh. We also use geodesic polar maps and parallel transport to create basis vector fields on surfaces that meet the user specifications. These techniques enable our vector field design system to work for both planar domains and curved surfaces.We demonstrate our vector field design system for several applications: example-based texture synthesis, painterly rendering of images, and pencil sketch illustrations of smooth surfaces.",
"title": ""
},
{
"docid": "2f5d428b8da4d5b5009729fc1794e53d",
"text": "The resolution of a synthetic aperture radar (SAR) image, in range and azimuth, is determined by the transmitted bandwidth and the synthetic aperture length, respectively. Various superresolution techniques for improving resolution have been proposed, and we have proposed an algorithm that we call polarimetric bandwidth extrapolation (PBWE). To apply PBWE to a radar image, one needs to first apply PBWE in the range direction and then in the azimuth direction, or vice versa . In this paper, PBWE is further extended to the 2-D case. This extended case (2D-PBWE) utilizes a 2-D polarimetric linear prediction model and expands the spatial frequency bandwidth in range and azimuth directions simultaneously. The performance of the 2D-PBWE is shown through a simulated radar image and a real polarimetric SAR image",
"title": ""
},
{
"docid": "9d0ed62f210d0e09db0cc6735699f5b3",
"text": "The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein-ligand docking, pharmacophore modeling and QSAR techniques are reviewed.",
"title": ""
},
{
"docid": "169258ee8696b481aac76fcee488632c",
"text": "Three parkinsonian patients are described who independently discovered that their gait was facilitated by inverting a walking stick and using the handle, carried a few inches from the ground, as a visual cue or target to step over and initiate walking. It is suggested that the \"inverted\" walking stick have wider application in patients with Parkinson's disease as an aid to walking, particularly if they have difficulty with step initiation and maintenance of stride length.",
"title": ""
},
{
"docid": "4fb6b884b22962c6884bd94f8b76f6f2",
"text": "This paper describes a novel motion estimation algorithm for floating base manipulators that utilizes low-cost inertial measurement units (IMUs) containing a three-axis gyroscope and a three-axis accelerometer. Four strap-down microelectromechanical system (MEMS) IMUs are mounted on each link to form a virtual IMU whose body's fixed frame is located at the center of the joint rotation. An extended Kalman filter (EKF) and a complementary filter are used to develop a virtual IMU by fusing together the output of four IMUs. The novelty of the proposed algorithm is that no forward kinematic model that requires data flow from previous joints is needed. The measured results obtained from the planar motion of a hydraulic arm show that the accuracy of the estimation of the joint angle is within ± 1 degree and that the root mean square error is less than 0.5 degree.",
"title": ""
},
{
"docid": "55f253cfb67ee0ba79b1439cc7e1764b",
"text": "Despite legislative attempts to curtail financial statement fraud, it continues unabated. This study makes a renewed attempt to aid in detecting this misconduct using linguistic analysis with data mining on narrative sections of annual reports/10-K form. Different from the features used in similar research, this paper extracts three distinct sets of features from a newly constructed corpus of narratives (408 annual reports/10-K, 6.5 million words) from fraud and non-fraud firms. Separately each of these three sets of features is put through a suite of classification algorithms, to determine classifier performance in this binary fraud/non-fraud discrimination task. From the results produced, there is a clear indication that the language deployed by management engaged in wilful falsification of firm performance is discernibly different from truth-tellers. For the first time, this new interdisciplinary research extracts features for readability at a much deeper level, attempts to draw out collocations using n-grams and measures tone using appropriate financial dictionaries. This linguistic analysis with machine learning-driven data mining approach to fraud detection could be used by auditors in assessing financial reporting of firms and early detection of possible misdemeanours.",
"title": ""
},
{
"docid": "7f81e1d6a6955cec178c1c811810322b",
"text": "The MATLAB toolbox YALMIP is introduced. It is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory. In this paper, free MATLAB toolbox YALMIP, developed initially to model SDPs and solve these by interfacing eternal solvers. The toolbox makes development of optimization problems in general, and control oriented SDP problems in particular, extremely simple. In fact, learning 3 YALMIP commands is enough for most users to model and solve the optimization problems",
"title": ""
}
] |
scidocsrr
|
d419ec9dc3301a6b5676f41bcc1ecea4
|
Data-Driven News Generation for Automated Journalism
|
[
{
"docid": "85da95f8d04a8c394c320d2cce25a606",
"text": "Improved numerical weather prediction simulations have led weather services to examine how and where human forecasters add value to forecast production. The Forecast Production Assistant (FPA) was developed with that in mind. The authors discuss the Forecast Generator (FOG), the first application developed on the FPA. FOG is a bilingual report generator that produces routine and special purpose forecast directly from the FPA's graphical weather predictions. Using rules and a natural-language generator, FOG converts weather maps into forecast text. The natural-language issues involved are relevant to anyone designing a similar system.<<ETX>>",
"title": ""
},
{
"docid": "cf20d9a0268511b283da42643cd2c845",
"text": "The increasing frequency of use of data and code in journalistic projects drives the need to develop guidelines or frameworks for how to responsibly and accountably employ algorithms and data in acts of journalism. One route to the accountable use of algorithms in journalistic work is to develop standards and expectations for transparency. In this paper we describe steps toward transparency with respect to computational journalism drawing from two case studies. The first case study concerns algorithmic accountability reporting where data collected via the Uber API and government sources were analyzed to understand quality of Uber service across Washington D.C. The second case centers on editorial transparency in the creation of a tool – in this case, a Twitter bot – built as an exploration of automated surfacing of anecdotal comments from news articles. Based on our experiences in these two cases we describe approaches to sharing data and code. The benefits of transparency as well as considerations such as licensing and documentation are discussed.",
"title": ""
},
{
"docid": "0075c4714b8e7bf704381d3a3722ab59",
"text": "This paper surveys the current state of the art in Natural Language Generation (nlg), defined as the task of generating text or speech from non-linguistic input. A survey of nlg is timely in view of the changes that the field has undergone over the past two decades, especially in relation to new (usually data-driven) methods, as well as new applications of nlg technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in nlg and the architectures adopted in which such tasks are organised; (b) highlight a number of recent research topics that have arisen partly as a result of growing synergies between nlg and other areas of artificial intelligence; (c) draw attention to the challenges in nlg evaluation, relating them to similar challenges faced in other areas of nlp, with an emphasis on different evaluation methods and the relationships between them.",
"title": ""
}
] |
[
{
"docid": "307267213b63577ce020cf206d0ea5e0",
"text": "Note. This article has been co-published in the British Journal of Sports Medicine (doi:10.1136/bjsports-2018-099193). Mountjoy is with the Department of Family Medicine, Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Canada. Sundgot-Borgen is with the Department of Sports Medicine, The Norwegian School of Sport Sciences, Oslo, Norway. Burke is with Sports Nutrition, Australian Institute of Sport, Belconnen, Australia, and Centre for Exercise and Nutrition, Mary MacKillop Institute for Health Research, Melbourne, Australia. Ackerman is with the Divisions of Sports Medicine and Endocrinology, Boston Children’s Hospital and the Neuroendocrine Unit, Massachusetts General Hospital; Harvard Medical School, Boston, Massachusetts. Blauwet is with the Department of Physical Medicine and Rehabilitation, Harvard Medical School, Spaulding Rehabilitation Hospital/Brigham and Women’s Hospital, Boston, Massachusetts. Constantini is with the Heidi Rothberg Sport Medicine Center, Shaare Zedek Medical Center, Hebrew University, Jerusalem, Israel. Lebrun is with the Department of Family Medicine, Faculty of Medicine & Dentistry, and Glen Sather Sports Medicine Clinic, University of Alberta, Edmonton, Alberta, Canada. Melin is with the Department of Nutrition, Exercise and Sport, University of Copenhagen, Frederiksberg, Denmark. Meyer is with the Health Sciences Department, University of Colorado, Colorado Springs, Colorado. Sherman is a counselor in Bloomington, Indiana. Tenforde is with the Department of Physical Medicine and Rehabilitation, Harvard Medical School, Spaulding Rehabilitation Hospital, Charlestown, Massachusetts. Klungland Torstveit is with the Faculty of Health and Sport Sciences, University of Agder, Kristiansand, Norway. Budgett is with the IOC Medical and Scientific Department, Lausanne, Switzerland. Address author correspondence to Margo Mountjoy at mmsportdoc@mcmaster.ca. Margo Mountjoy McMaster University",
"title": ""
},
{
"docid": "41a287c7ecc5921aedfa5b733a928178",
"text": "This research presents the inferential statistics for Cronbach's coefficient alpha on the basis of the standard statistical assumption of multivariate normality. The estimation of alpha's standard error (ASE) and confidence intervals are described, and the authors analytically and empirically investigate the effects of the components of these equations. The authors then demonstrate the superiority of this estimate compared with previous derivations of ASE in a separate Monte Carlo simulation. The authors also present a sampling error and test statistic for a test of independent sample alphas. They conclude with a recommendation that all alpha coefficients be reported in conjunction with standard error or confidence interval estimates and offer SAS and SPSS programming codes for easy implementation.",
"title": ""
},
{
"docid": "c366303728d2a8ee47fe4cbfe67dec24",
"text": "Terrestrial Gamma-ray Flashes (TGFs), discovered in 1994 by the Compton Gamma-Ray Observatory, are high-energy photon bursts originating in the Earth’s atmosphere in association with thunderstorms. In this paper, we demonstrate theoretically that, while TGFs pass through the atmosphere, the large quantities of energetic electrons knocked out by collisions between photons and air molecules generate excited species of neutral and ionized molecules, leading to a significant amount of optical emissions. These emissions represent a novel type of transient luminous events in the vicinity of the cloud tops. We show that this predicted phenomenon illuminates a region with a size notably larger than the TGF source and has detectable levels of brightness. Since the spectroscopic, morphological, and temporal features of this luminous event are closely related with TGFs, corresponding measurements would provide a novel perspective for investigation of TGFs, as well as lightning discharges that produce them.",
"title": ""
},
{
"docid": "180dd2107c6a39e466b3d343fa70174f",
"text": "This paper presents simulation and hardware implementation of incremental conductance (IncCond) maximum power point tracking (MPPT) used in solar array power systems with direct control method. The main difference of the proposed system to existing MPPT systems includes elimination of the proportional-integral control loop and investigation of the effect of simplifying the control circuit. Contributions are made in several aspects of the whole system, including converter design, system simulation, controller programming, and experimental setup. The resultant system is capable of tracking MPPs accurately and rapidly without steady-state oscillation, and also, its dynamic performance is satisfactory. The IncCond algorithm is used to track MPPs because it performs precise control under rapidly changing atmospheric conditions. MATLAB and Simulink were employed for simulation studies, and Code Composer Studio v3.1 was used to program a TMS320F2812 digital signal processor. The proposed system was developed and tested successfully on a photovoltaic solar panel in the laboratory. Experimental results indicate the feasibility and improved functionality of the system.",
"title": ""
},
{
"docid": "58efd234d4ca9b10ccfc363db4c501d3",
"text": "In order to understand the role of the medium osmolality on the metabolism of glumate-producing Corynebacterium glutamicum, effects of saline osmotic upshocks from 0.4 osnol. kg−1 to 2 osmol. kg−1 have been investigated on the growth kinetics and the intracellular content of the bacteria. Addition of a high concentration of NaCl after a few hours of batch culture results in a temporary interruption of the cellular growth. Cell growth resumes after about 1 h but at a specific rate that decreases with increasing medium osmolality. Investigation of the intracellular content showed, during the first 30 min following the shock, a rapid but transient influx of sodium ions. This was followed by a strong accumulation of proline, which rose from 5 to 110 mg/g dry weight at the end of the growth phase. A slight accumulation of intracellular glutamate from 60 to 75 mg/g dry weight was also observed. Accordingly, for Corynebacterium glutamicum an increased osmolality in the glutamate and proline synthesis during the growth phase.",
"title": ""
},
{
"docid": "de703c909703b2dcabf7d99a4b5e1493",
"text": "The ultimate goal of this paper is to print radio frequency (RF) and microwave structures using a 3-D platform and to pattern metal films on nonplanar structures. To overcome substrate losses, air core substrates that can readily be printed are utilized. To meet the challenge of patterning conductive layers on complex or nonplanar printed structures, two novel self-aligning patterning processes are demonstrated. One is a simple damascene-like process, and the other is a lift-off process using a 3-D printed lift-off mask layer. A range of microwave and RF circuits are designed and demonstrated between 1 and 8 GHz utilizing these processes. Designs are created and simulated using Keysight Advanced Design System and ANSYS High Frequency Structure Simulator. Circuit designs include a simple microstrip transmission line (T-line), coupled-line bandpass filter, circular ring resonator, T-line resonator, resonant cavity structure, and patch antenna. A commercially available 3-D printer and metal sputtering system are used to realize the designs. Both simulated and measured results of these structures are presented.",
"title": ""
},
{
"docid": "8df91df5b37fa278b9b9096ad32cf266",
"text": "Objective. Many studies find that females benefit from their gender in sentencing decisions. Few researchers, however, address whether the gender-sentencing association might be stronger for some crimes, such as minor nonviolent offending, and weaker for other offenses, such as serious violent crime. Method. Using a large random sample of convicted offenders in Texas drawn from a statewide project on sentencing practices mandated by the 73rd Texas Legislature, logistic regression and OLS regression analyses of likelihood of imprisonment and prison length illustrate the importance of looking at sentencing outcomes not only in terms of gender but also in terms of crime type. Results. Specifically, we find that the effect of gender on sentencing does vary by crime type, but not in a consistent or predicted fashion. For both property and drug offending, females are less likely to be sentenced to prison and also receive shorter sentences if they are sentenced to prison. For violent offending, however, females are no less likely than males to receive prison time, but for those who do, females receive substantially shorter sentences than males. Conclusions. We conclude that such variation in the gender-sentencing association across crime type is largely due to features of Texas’ legal code that channel the level of discretion available to judges depending on crime type and whether incarceration likelihood or sentence length is examined.",
"title": ""
},
{
"docid": "4d040791f63af5e2ff13ff2b705dc376",
"text": "The frequency and severity of forest fires, coupled with changes in spatial and temporal precipitation and temperature patterns, are likely to severely affect the characteristics of forest and permafrost patterns in boreal eco-regions. Forest fires, however, are also an ecological factor in how forest ecosystems form and function, as they affect the rate and characteristics of tree recruitment. A better understanding of fire regimes and forest recovery patterns in different environmental and climatic conditions will improve the management of sustainable forests by facilitating the process of forest resilience. Remote sensing has been identified as an effective tool for preventing and monitoring forest fires, as well as being a potential tool for understanding how forest ecosystems respond to them. However, a number of challenges remain before remote sensing practitioners will be able to better understand the effects of forest fires and how vegetation responds afterward. This article attempts to provide a comprehensive review of current research with respect to remotely sensed data and methods used to model post-fire effects and forest recovery patterns in boreal forest regions. The review reveals that remote sensing-based monitoring of post-fire effects and forest recovery patterns in boreal forest regions is not only limited by the gaps in both field data and remotely sensed data, but also the complexity of far-northern fire regimes, climatic conditions and environmental conditions. We expect that the integration of different remotely sensed data coupled with field campaigns can provide an important data source to support the monitoring of post-fire effects and forest recovery patterns. Additionally, the variation and stratification of preand post-fire vegetation and environmental conditions should be considered to achieve a reasonable, operational model for monitoring post-fire effects and forest patterns in boreal regions. OPEN ACCESS Remote Sens. 2014, 6 471",
"title": ""
},
{
"docid": "9864597d714ba07b9fc502ab6f1baee3",
"text": "Foreground object segmentation is a critical step for many image analysis tasks. While automated methods can produce high-quality results, their failures disappoint users in need of practical solutions. We propose a resource allocation framework for predicting how best to allocate a fixed budget of human annotation effort in order to collect higher quality segmentations for a given batch of images and automated methods. The framework is based on a proposed prediction module that estimates the quality of given algorithm-drawn segmentations. We demonstrate the value of the framework for two novel tasks related to \"pulling the plug\" on computer and human annotators. Specifically, we implement two systems that automatically decide, for a batch of images, when to replace 1) humans with computers to create coarse segmentations required to initialize segmentation tools and 2) computers with humans to create final, fine-grained segmentations. Experiments demonstrate the advantage of relying on a mix of human and computer efforts over relying on either resource alone for segmenting objects in three diverse datasets representing visible, phase contrast microscopy, and fluorescence microscopy images.",
"title": ""
},
{
"docid": "caf9809b7d5da080c7408c8e8d7fdd22",
"text": "Artificial neural networks have become state-of-the-art in the task of language modelling on a small corpora. While feed-forward networks are able to take into account only a fixed context length to predict the next word, recurrent neural networks (RNN) can take advantage of all previous words. Due the difficulties in training of RNN, the way could be in using Long Short Term Memory (LSTM) neural network architecture. In this work, we show an application of LSTM network with extensions on a language modelling task with Czech spontaneous phone calls. Experiments show considerable improvements in perplexity and WER on recognition system over n-gram baseline.",
"title": ""
},
{
"docid": "25a13221c6cda8e1d4adf451701e421a",
"text": "Block-based local binary patterns a.k.a. enhanced local binary patterns (ELBPs) have proven to be a highly discriminative descriptor for face recognition and image retrieval. Since this descriptor is mainly composed by histograms, little work (if any) has been done for selecting its relevant features (either the bins or the blocks). In this paper, we address feature selection for both the classic ELBP representation and the recently proposed color quaternionic LBP (QLBP). We introduce a filter method for the automatic weighting of attributes or blocks using an improved version of the margin-based iterative search Simba algorithm. This new improved version introduces two main modifications: (i) the hypothesis margin of a given instance is computed by taking into account the K-nearest neighboring examples within the same class as well as the K-nearest neighboring examples with a different label; (ii) the distances between samples and their nearest neighbors are computed using the weighted $$\\chi ^2$$ χ 2 distance instead of the Euclidean one. This algorithm has been compared favorably with several competing feature selection algorithms including the Euclidean-based Simba as well as variance and Fisher score algorithms giving higher performances. The proposed method is useful for other descriptors that are formed by histograms. Experimental results show that the QLBP descriptor allows an improvement of the accuracy in discriminating faces compared with the ELBP. They also show that the obtained selection (attributes or blocks) can either improve recognition performance or maintain it with a significant reduction in the descriptor size.",
"title": ""
},
{
"docid": "2c2be931e456761824920fcc9e4666ec",
"text": "The resource description framework (RDF) is a metadata model and language recommended by the W3C. This paper presents a framework to incorporate temporal reasoning into RDF, yielding temporal RDF graphs. We present a semantics for these kinds of graphs which includes the notion of temporal entailment and a syntax to incorporate this framework into standard RDF graphs, using the RDF vocabulary plus temporal labels. We give a characterization of temporal entailment in terms of RDF entailment and show that the former does not yield extra asymptotic complexity with respect to nontemporal RDF graphs. We also discuss temporal RDF graphs with anonymous timestamps, providing a theoretical framework for the study of temporal anonymity. Finally, we sketch a temporal query language for RDF, along with complexity results for query evaluation that show that the time dimension preserves the tractability of answers",
"title": ""
},
{
"docid": "e1fb80117a0925954b444360e227d680",
"text": "Maize is one of the most important food and feed crops in Asia, and is a source of income for several million farmers. Despite impressive progress made in the last few decades through conventional breeding in the “Asia-7” (China, India, Indonesia, Nepal, Philippines, Thailand, and Vietnam), average maize yields remain low and the demand is expected to increasingly exceed the production in the coming years. Molecular marker-assisted breeding is accelerating yield gains in USA and elsewhere, and offers tremendous potential for enhancing the productivity and value of Asian maize germplasm. We discuss the importance of such efforts in meeting the growing demand for maize in Asia, and provide examples of the recent use of molecular markers with respect to (i) DNA fingerprinting and genetic diversity analysis of maize germplasm (inbreds and landraces/OPVs), (ii) QTL analysis of important biotic and abiotic stresses, and (iii) marker-assisted selection (MAS) for maize improvement. We also highlight the constraints faced by research institutions wishing to adopt the available and emerging molecular technologies, and conclude that innovative models for resource-pooling and intellectual-property-respecting partnerships will be required for enhancing the level and scope of molecular marker-assisted breeding for maize improvement in Asia. Scientists must ensure that the tools of molecular marker-assisted breeding are focused on developing commercially viable cultivars, improved to ameliorate the most important constraints to maize production in Asia.",
"title": ""
},
{
"docid": "8626803a7fd8a2190f4d6c4b56b04489",
"text": "Quotes, or quotations, are well known phrases or sentences that we use for various purposes such as emphasis, elaboration, and humor. In this paper, we introduce a task of recommending quotes which are suitable for given dialogue context and we present a deep learning recommender system which combines recurrent neural network and convolutional neural network in order to learn semantic representation of each utterance and construct a sequence model for the dialog thread. We collected a large set of twitter dialogues with quote occurrences in order to evaluate proposed recommender system. Experimental results show that our approach outperforms not only the other state-of-the-art algorithms in quote recommendation task, but also other neural network based methods built for similar tasks.",
"title": ""
},
{
"docid": "97230a49932b3577730d991864cf35d4",
"text": "Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies to understand structure-function relationships. However, the mechanistic origin of resting state is obscured by large-scale models' complexity, and a close structure-function relation is still an open problem. Thus, a realistic but simple enough description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low firing activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of motion equations for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework to explicitly understand the building up of FC through neuronal dynamics underpinned by anatomical connections and to drive hypotheses in task-evoked studies and for clinical applications.",
"title": ""
},
{
"docid": "34641057a037740ec28581a798c96f05",
"text": "Vehicles are becoming complex software systems with many components and services that need to be coordinated. Service oriented architectures can be used in this domain to support intra-vehicle, inter-vehicles, and vehicle-environment services. Such architectures can be deployed on different platforms, using different communication and coordination paradigms. We argue that practical solutions should be hybrid: they should integrate and support interoperability of different paradigms. We demonstrate the concept by integrating Jini, the service-oriented technology we used within the vehicle, and JXTA, the peer to peer infrastructure we used to support interaction with the environment through a gateway service, called J2J. Initial experience with J2J is illustrated.",
"title": ""
},
{
"docid": "0c832dde1c268ec32e7fca64158abb31",
"text": "For many years, the clinical laboratory's focus on analytical quality has resulted in an error rate of 4-5 sigma, which surpasses most other areas in healthcare. However, greater appreciation of the prevalence of errors in the pre- and post-analytical phases and their potential for patient harm has led to increasing requirements for laboratories to take greater responsibility for activities outside their immediate control. Accreditation bodies such as the Joint Commission International (JCI) and the College of American Pathologists (CAP) now require clear and effective procedures for patient/sample identification and communication of critical results. There are a variety of free on-line resources available to aid in managing the extra-analytical phase and the recent publication of quality indicators and proposed performance levels by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) working group on laboratory errors and patient safety provides particularly useful benchmarking data. Managing the extra-laboratory phase of the total testing cycle is the next challenge for laboratory medicine. By building on its existing quality management expertise, quantitative scientific background and familiarity with information technology, the clinical laboratory is well suited to play a greater role in reducing errors and improving patient safety outside the confines of the laboratory.",
"title": ""
},
{
"docid": "be23bfb328ebd884e3afde7996ccdd0b",
"text": "Air Medical Journal 29:3 In 1980, Tim Berners-Lee, a 25-year-old engineer, was doing computer modeling for CERN, the particle-physics laboratory in Geneva, Switzerland, and began the work that was the conception of the World Wide Web. His goal was to find a better way to organize files and link related documents on his hard drive on his computer in a “brain-like way.”1 He wanted a program that could “keep track of all the random associations one comes across in real life”1 and create a system that would facilitate sharing and updating information among researchers. Although the web was conceived as a tool for researchers and academics to share information, the uses and even the focus of the web are a part of everyday life for many people. It is estimated that the web contains at least 25.21 billion pages on an estimated 110 million websites.2 Never before have individuals had so much information at their fingertips, and never have people had such an opportunity to share information that they would never have considered sharing with strangers before. The explosion of social and professional networking sites such as Facebook and LinkedIn; media-sharing sites such as Flickr and YouTube; blogging sites such as Blogger and WordPress; and even forum sites such as Flightweb and JustHelicopters has created an attractive means for both purposeful and accidental dissemination of damaging, harmful, and possibly illegal information on the World Wide Web. These virtual communities allow people to interact over the Internet in a way that is far beyond the static web pages, text messages, and email of the 1990s.3 The social media communities have forged groups on nearly any topic, from cancer care to cooking, dating to depression, and workplace gripes to restaurant reviews. Each community can be reached on multitudes of platforms from PCs to PDAs. A two-sided conundrum exists. First are the issues facing employers—from the failure of policy and appropriate education for employees to keeping pace with the rapid growth of social networking media. Second are the issue of employees recognizing and understanding their heightened responsibility in our digitally driven world. A recent study in the Journal of the American Medical Association (JAMA) assessed the experience of US medical schools regarding online posting of unprofessional content by students.4 The researchers found that 60% (47/78) of US medical schools that responded reported incidents of students posting unprofessional online content. Violations of patient confidentiality were reported by 13% (6/46). Student use of profanity (52%, 22/42), frankly discriminatory language (48%, 19/40), depiction of intoxication (39%, 17/44), and sexually suggestive material (38%, 16/42) was commonly reported. The issue here is not just patient privacy, but also the portrayal of organizations in the public eye. The widespread use of social media raises privacy implications for both employees and employers. Most individuals view their personal pages as private; however, the reality is far different. The online name of “HeliRN” easily identifies an individual and has a connotation for the reader. 
A searcher of career information on becoming a flight nurse may find this name and click on it, leading to a social networking site that has information about a career—and a photo of “HeliRN” wearing a camouflaged jacket over a low-cut T-shirt with a large collection of dog-tags around her neck and a plastic cup of beer in her hand while an unknown male is playfully sticking his tongue into her ear. The posted material can be accessed by people for whom it was never intended, and that is where the harm often begins. Regardless of the privacy settings set by the individual user, personal information and communications posted may be read by nearly anyone and everyone. Although many employers have guidelines and codes of conduct for e-mail and internet use, social networking and media-sharing sites pose different privacy challenges that should be specifically addressed in a similar guidelinesbased manner. The crafting of policies that establish best practices and outline expectations for acceptable use both in and away from the workplace and set out the consequences of misuse are paramount for any employer. Employers should inform employees in plain language why it is important to keep some personal and corporate information confidential—about themselves, their coworkers, patients, and the organization. This seems like a commonsense approach; however, as with all of the issues in the accompanying sidebar, none of the individuals thought they were doing anything wrong. Clarity in expectations could well have kept these people out of trouble. Although the medical community became very “privacy aware” in the age of the Health Insurance Portability and Accountability Act (HIPAA), it has not done a very good job of defining acceptable behaviors relative to personal behavior. Lawyer jokes aside, ethics and accountability are centerpiece to the legal community. The Model Rules of Professional Conduct5 provide a framework for the professional conduct of all lawyers. Specifically, Rule 8.4 (g) states: “It is professional misconduct for a lawyer to: (g) engage in conduct, in a professional capacity, manifesting, by words or conduct, bias or prejudice based upon race, gender, religion, national origin, disability, sexual orientation, age, socioeconomic status, or similar factors.” Recent Legal Matters John R. Clark, JD, MBA, NREMT-P, FP-C, CCP-C, CMTE",
"title": ""
},
{
"docid": "71c067065a5d3ada7f789798e0cf3424",
"text": "Fog computing paradigm extends the storage, networking, and computing facilities of the cloud computing toward the edge of the networks while offloading the cloud data centers and reducing service latency to the end users. However, the characteristics of fog computing arise new security and privacy challenges. The existing security and privacy measurements for cloud computing cannot be directly applied to the fog computing due to its features, such as mobility, heterogeneity, and large-scale geo-distribution. This paper provides an overview of existing security and privacy concerns, particularly for the fog computing. Afterward, this survey highlights ongoing research effort, open challenges, and research trends in privacy and security issues for fog computing.",
"title": ""
},
{
"docid": "0d0c44dd4fd5b89edc29763ad038540b",
"text": "There is at present limited understanding of the neurobiological basis of the different processes underlying emotion perception. We have aimed to identify potential neural correlates of three processes suggested by appraisalist theories as important for emotion perception: 1) the identification of the emotional significance of a stimulus; 2) the production of an affective state in response to 1; and 3) the regulation of the affective state. In a critical review, we have examined findings from recent animal, human lesion, and functional neuroimaging studies. Findings from these studies indicate that these processes may be dependent upon the functioning of two neural systems: a ventral system, including the amygdala, insula, ventral striatum, and ventral regions of the anterior cingulate gyrus and prefrontal cortex, predominantly important for processes 1 and 2 and automatic regulation of emotional responses; and a dorsal system, including the hippocampus and dorsal regions of anterior cingulate gyrus and prefrontal cortex, predominantly important for process 3. We suggest that the extent to which a stimulus is identified as emotive and is associated with the production of an affective state may be dependent upon levels of activity within these two neural systems.",
"title": ""
}
] |
scidocsrr
|
3ffd5414d352e413ea1c8da9db4d7115
|
A Secure Data Deduplication Scheme for Cloud Storage
|
[
{
"docid": "94d40100337b2f6721cdc909513e028d",
"text": "Data deduplication systems detect redundancies between data blocks to either reduce storage needs or to reduce network traffic. A class of deduplication systems splits the data stream into data blocks (chunks) and then finds exact duplicates of these blocks.\n This paper compares the influence of different chunking approaches on multiple levels. On a macroscopic level, we compare the chunking approaches based on real-life user data in a weekly full backup scenario, both at a single point in time as well as over several weeks.\n In addition, we analyze how small changes affect the deduplication ratio for different file types on a microscopic level for chunking approaches and delta encoding. An intuitive assumption is that small semantic changes on documents cause only small modifications in the binary representation of files, which would imply a high ratio of deduplication. We will show that this assumption is not valid for many important file types and that application-specific chunking can help to further decrease storage capacity demands.",
"title": ""
}
] |
[
{
"docid": "ce24b783f2157fdb4365b60aa2e6163a",
"text": "Geosciences is a field of great societal relevance that requires solutions to several urgent problems facing our humanity and the planet. As geosciences enters the era of big data, machine learning (ML)— that has been widely successful in commercial domains—offers immense potential to contribute to problems in geosciences. However, problems in geosciences have several unique challenges that are seldom found in traditional applications, requiring novel problem formulations and methodologies in machine learning. This article introduces researchers in the machine learning (ML) community to these challenges offered by geoscience problems and the opportunities that exist for advancing both machine learning and geosciences. We first highlight typical sources of geoscience data and describe their properties that make it challenging to use traditional machine learning techniques. We then describe some of the common categories of geoscience problems where machine learning can play a role, and discuss some of the existing efforts and promising directions for methodological development in machine learning. We conclude by discussing some of the emerging research themes in machine learning that are applicable across all problems in the geosciences, and the importance of a deep collaboration between machine learning and geosciences for synergistic advancements in both disciplines.",
"title": ""
},
{
"docid": "51f90bbb8519a82983eec915dd643d34",
"text": "The growth of vehicles in Yogyakarta Province, Indonesia is not proportional to the growth of roads. This problem causes severe traffic jam in many main roads. Common traffic anomalies detection using surveillance camera requires manpower and costly, while traffic anomalies detection with crowdsourcing mobile applications are mostly owned by private. This research aims to develop a real-time traffic classification by harnessing the power of social network data, Twitter. In this study, Twitter data are processed to the stages of preprocessing, feature extraction, and tweet classification. This study compares classification performance of three machine learning algorithms, namely Naive Bayes (NB), Support Vector Machine (SVM), and Decision Tree (DT). Experimental results show that SVM algorithm produced the best performance among the other algorithms with 99.77% and 99.87% of classification accuracy in balanced and imbalanced data, respectively. This research implies that social network service may be used as an alternative source for traffic anomalies detection by providing information of traffic flow condition in real-time.",
"title": ""
},
{
"docid": "8fb5a9d2f68601d9e07d4a96ea45e585",
"text": "The solid-state transformer (SST) is a promising power electronics solution that provides voltage regulation, reactive power compensation, dc-sourced renewable integration, and communication capabilities, in addition to the traditional step-up/step-down functionality of a transformer. It is gaining widespread attention for medium-voltage (MV) grid interfacing to enable increases in renewable energy penetration, and, commercially, the SST is of interest for traction applications due to its light weight as a result of medium-frequency isolation. The recent advancements in silicon carbide (SiC) power semiconductor device technology are creating a new paradigm with the development of discrete power semiconductor devices in the range of 10-15 kV and even beyond-up to 22 kV, as recently reported. In contrast to silicon (Si) IGBTs, which are limited to 6.5-kV blocking, these high-voltage (HV) SiC devices are enabling much simpler converter topologies and increased efficiency and reliability, with dramatic reductions of the size and weight of the MV power-conversion systems. This article presents the first-ever demonstration results of a three-phase MV grid-connected 100-kVA SST enabled by 15-kV SiC n-IGBTs, with an emphasis on the system design and control considerations. The 15-kV SiC n-IGBTs were developed by Cree and packaged by Powerex. The low-voltage (LV) side of the SST is built with 1,200-V, 100-A SiC MOSFET modules. The galvanic isolation is provided by three single-phase 22-kV/800-V, 10-kHz, 35-kVA-rated high-frequency (HF) transformers. The three-phase all-SiC SST that interfaces with 13.8-kV and 480-V distribution grids is referred to as a transformerless intelligent power substation (TIPS). The characterization of the 15-kV SiC n-IGBTs, the development of the MV isolated gate driver, and the design, control, and system demonstration of the TIPS were undertaken by North Carolina State University's (NCSU's) Future Renewable Electrical Energy Delivery and Management (FREEDM) Systems Center, sponsored by an Advanced Research Projects Agency-Energy (ARPA-E) project.",
"title": ""
},
{
"docid": "8df196edbb812198ebe1f86e81f38481",
"text": "Ever since the formulation of Rhetorical Structure Theory (RST) by Mann and Thompson, researchers have debated about what is the ‘right’ number of relations. One proposal is based on the discourse markers (connectives) signalling the presence of a particular relationship. In this paper, I discuss the adequacy of such a proposal, in the light of two different corpus studies: a study of conversations, and a study of newspaper articles. The two corpora were analyzed in terms of rhetorical relations, and later coded for external signals of those relations. The conclusion in both studies is that there are a high number of relations (between 60% and 70% of the total, on average) that are not signalled. A comparison between the two corpora suggests that genre-specific factors may affect which relations are signalled, and which are not.",
"title": ""
},
{
"docid": "2ed9db3d174d95e5b97c4fe26ca6c8ac",
"text": "One of the more startling effects of road related accidents is the economic and social burden they cause. Between 750,000 and 880,000 people died globally in road related accidents in 1999 alone, with an estimated cost of US$518 billion [11]. One way of combating this problem is to develop Intelligent Vehicles that are selfaware and act to increase the safety of the transportation system. This paper presents the development and application of a novel multiple-cue visual lane tracking system for research into Intelligent Vehicles (IV). Particle filtering and cue fusion technologies form the basis of the lane tracking system which robustly handles several of the problems faced by previous lane tracking systems such as shadows on the road, unreliable lane markings, dramatic lighting changes and discontinuous changes in road characteristics and types. Experimental results of the lane tracking system running at 15Hz will be discussed, focusing on the particle filter and cue fusion technology used.",
"title": ""
},
{
"docid": "b6b63aa72904f9b7e24e3750c0db12f0",
"text": "The explosion of the learning materials in personal learning environments has caused difficulties to locate appropriate learning materials to learners. Personalized recommendations have been used to support the activities of learners in personal learning environments and this technology can deliver suitable learning materials to learners. In order to improve the quality of recommendations, this research considers the multidimensional attributes of material, rating of learners, and the order and sequential patterns of the learner's accessed material in a unified model. The proposed approach has two modules. In the sequential-based recommendation module, latent patterns of accessing materials are discovered and presented in two formats including the weighted association rules and the compact tree structure (called Pattern-tree). In the attribute-based module, after clustering the learners using latent patterns by K-means algorithm, the learner preference tree (LPT) is introduced to consider the multidimensional attributes of materials, rating of learners, and also order of the accessed materials. The mixed, weighted, and cascade hybrid methods are employed to generate the final combined recommendations. The experiments show that the proposed approach outperforms the previous algorithms in terms of precision, recall, and intra-list similarity measure. The main contributions are improvement of the recommenda-tions' quality and alleviation of the sparsity problem by combining the contextual information, including order and sequential patterns of the accessed material, rating of learners, and the multidimensional attributes of materials. With the explosion of learning materials available on personal learning environments (PLEs), it is difficult for learners to discover the most appropriate materials according to keyword searching method. One way to address this challenge is the use of recom-mender systems [16]. In addition, up to very recent years, several researches have expressed the need for personalization in e-learning environments. In fact, one of the new forms of personalization in e-learning environments is to provide recommendations to learners to support and help them through the e-learning process [19]. According to the strategies applied, recommender systems can be segmented into three major categories: content-based, collabo-rative, and hybrid recommendation [1]. Hybrid recommendation mechanisms attempt to deal with some of the limitations and overcome the drawbacks of pure content-based approach and pure collaborative approach by combining the two approaches. The majority of the traditional recommendation algorithms have been developed for e-commerce applications, which are unable to cover the entire requirements of learning environments. One of these drawbacks is that they do not consider the learning process in their recommendation …",
"title": ""
},
{
"docid": "daebc612abf2b8ae0cb4e235ed50fe6b",
"text": "Authorship Attribution, (AA) is a process of determining a particular document is written by which author among a list of suspected authors. Authorship attribution has been the problem from last six decades; when there were handwritten documents needed to be identified for the genuine author. Due to the technology advancement and increase in cybercrime and unlawful activities, this problem of AA becomes forth most important to trace out the author behind online messages. Over the past, many years research has been conducted to attribute the authorship of an author on the basis of their writing style as all authors possess different distinctiveness while writing a piece of document. This paper presents a comparative study of various machine learning approaches on different feature sets for authorship attribution on short text. The Twitter dataset has been used for comparison with varying sample size of a dataset of 10 prolific authors with various combinations of feature sets. The significance and impact of combinations of features while inferring different stylometric features has been reflected. The results of different approaches are compared based on their accuracy and precision values.",
"title": ""
},
{
"docid": "8b34b86cb1ce892a496740bfbff0f9c5",
"text": "Common subexpression elimination is commonly employed to reduce the number of operations in DSP algorithms after decomposing constant multiplications into shifts and additions. Conventional optimization techniques for finding common subexpressions can optimize constant multiplications with only a single variable at a time, and hence cannot fully optimize the computations with multiple variables found in matrix form of linear systems like DCT, DFT etc. We transform these computations such that all common subexpressions involving any number of variables can be detected. We then present heuristic algorithms to select the best set of common subexpressions. Experimental results show the superiority of our technique over conventional techniques for common subexpression elimination.",
"title": ""
},
{
"docid": "f267c096ffe69c40b5bd987450cdde84",
"text": "Recent breakthroughs in cryptanalysis of standard hash functions like SHA-1 and MD5 raise the need for alternatives. The MD6 hash function is developed by a team led by Professor Ronald L. Rivest in response to the call for proposals for a SHA-3 cryptographic hash algorithm by the National Institute of Standards and Technology. The hardware performance evaluation of hash chip design mainly includes efficiency and flexibility. In this paper, a RAM-based reconfigurable FPGA implantation of the MD6-224/256/384 /512 hash function is presented. The design achieves a throughput ranges from 118 to 227 Mbps at the maximum frequency of 104MHz on low-cost Cyclone III device. The implementation of MD6 core functionality uses mainly embedded Block RAMs and small resources of logic elements in Altera FPGA, which satisfies the needs of most embedded applications, including wireless communication. The implementation results also show that the MD6 hash function has good reconfigurability.",
"title": ""
},
{
"docid": "c9065814777e0815da0ceb6a1a1b624a",
"text": "Axial and radial power peaking factors (Fq, Fah) were estimated in Chashma Nuclear Power Plant Unit-1 (C-1) core using artificial Neural Network Technique (ANNT). Position of T4 control bank, axial offsets in four quadrants and quadrant power tilt ratios were taken as input variables in neural network designing. Power Peaking Factors (PPF) were calculated using computer codes FCXS, TWODFD and 3D-NB-2P for 52 core critical conditions made during C-1 fuel cycle-7. A multilayered Perceptron (MLP) neural network was trained by applying a set of measured input parameters and calculated output data for each core state. Training average relative errors between targets and ANNT estimated peaking factors were ranged from 0.018% to 0.054%, implies that ANNT introduces negligible error during training and exactly map the values. For validation process, PPF were estimated using ANNT for 36 cases devised at the time when power distribution measurement test and in-core/ex-core detectors calibration test were performed during fuel cycle. ANNT Results were compared with C-1 peaking factors measured with in-core flux mapping system and INCOPW computer code. Results showed that ANNT estimated PPF deviated from C-1 measured values within ±3%. The results of this study indicate that ANNT is an alternate technique for PPF measurement using only ex-core detectors signals data and independent of in-core flux mapping system. It might increase the time interval between in-core flux maps to 180 Effective Full Power Days (EFPDs) and reduce usage frequency of in-core flux mapping system during fuel cycle as present in Advanced Countries Nuclear Power Plants.",
"title": ""
},
{
"docid": "a5131904788f3a6aabfc482109f6f71e",
"text": "The global move toward efficient energy consumption and production has led to remarkable advancements in the design of the smart grid infrastructure. Local energy trading is one way forward. It typically refers to the transfer of energy from an entity of the smart grid surplus energy to one with a deficit. In this paper, we present a detailed review of the recent advances in the application of game-theoretic methods to local energy trading scenarios. An extensive description of a complete game theory-based energy trading framework is presented. It includes a taxonomy of the methods and an introduction to the smart grid architecture with a focus on renewable energy generation and energy storage. Finally, we present a critical evaluation of the current shortcomings and identify areas for future research.",
"title": ""
},
{
"docid": "f568c4987b4c318567aa6b6a757d9510",
"text": "Privacy preserving mining of distributed data has numerous applications. Each application poses different constraints: What is meant by privacy, what are the desired results, how is the data distributed, what are the constraints on collaboration and cooperative computing, etc. We suggest that the solution to this is a toolkit of components that can be combined for specific privacy-preserving data mining applications. This paper presents some components of such a toolkit, and shows how they can be used to solve several privacy-preserving data mining problems.",
"title": ""
},
{
"docid": "d4fbd2f212367706cf47b6b25b5e9dcf",
"text": "Web Services are considered an essential services-oriented technology today on networked application architectures due to their language and platform-independence. Their language and platform independence also brings difficulties in testing them especially in an automated manner. In this paper, a comparative evaluation of testing techniques based on, TTCN-3 and SoapUI, in order of contributing towards resolving these difficulties is performed. Aspects of TTCN-3 and SoapUI are highlighted, including test abstraction, performance efficiency and powerful matching mechanisms in TTCN-3 that allow a separation between behaviour and the conditions governing behaviour. Keywords— Web Services Testing, Automated Testing, Web Testing, SoapUI, TTCN-3, Titan TTCN-3, Testing",
"title": ""
},
{
"docid": "d5faccc7187a185f6e287a7cc29f0878",
"text": "The revival of deep neural networks and the availability of ImageNet laid the foundation for recent success in highly complex recognition tasks. However, ImageNet does not cover all visual concepts of all possible application scenarios. Hence, application experts still record new data constantly and expect the data to be used upon its availability. In this paper, we follow this observation and apply the classical concept of fine-tuning deep neural networks to scenarios where data from known or completely new classes is continuously added. Besides a straightforward realization of continuous fine-tuning, we empirically analyze how computational burdens of training can be further reduced. Finally, we visualize how the network’s attention maps evolve over time which allows for visually investigating what the network learned during continuous fine-tuning.",
"title": ""
},
{
"docid": "48921ecc30ccbf09bf0c864c8fe6f0b9",
"text": "Original scientific paper In the present work, duralumin aircraft spar fatigue life is evaluated by extended finite element method (XFEM) under cyclic loading condition. The effect of the crack growth on the fatigue life of aircraft spar is discussed in detail. The values of stress intensity factors (SIFs) are extracted from the XFEM solution. Standard Paris fatigue crack growth law (currently, the only one incorporated in Abaqus) is used for the fatigue life estimation. Obtained results are compared with previously obtained experimental results.",
"title": ""
},
{
"docid": "822a4971bb1e92ddf47fd732a652ebb9",
"text": "The axial-flux permanent-magnet machine (AFPM) topology is suited for direct-drive applications and, due to their enhanced flux-weakening capability, AFPMs having slotted windings are the most promising candidates for use in wheel-motor drives. In consideration of this, this paper deals with an experimental study devoted to investigate a number of technical solutions to be used in AFPMs having slotted windings in order to achieve substantial reduction of both cogging torque and no-load power loss in the machine. To conduct such an experimental study, a laboratory machine was purposely built incorporating facilities that allow easy-to-achieve offline modifications of the overall magnetic arrangement at the machine air gaps, such as magnet skewing, angular shifting between rotor discs, and accommodation of either PVC or Somaloy wedges for closing the slot openings. The paper discusses experimental results and gives guidelines for the design of AFPMs with improved performance.",
"title": ""
},
{
"docid": "3c44f2bf1c8a835fb7b86284c0b597cd",
"text": "This paper explores some of the key electromagnetic design aspects of a synchronous reluctance motor that is equipped with single-tooth windings (i.e., fractional slot concentrated windings). The analyzed machine, a 6-slot 4-pole motor, utilizes a segmented stator core structure for ease of coil winding, pre-assembly, and facilitation of high slot fill factors (~60%). The impact on the motors torque producing capability and its power factor of these inter-segment air gaps between the stator segments is investigated through 2-D finite element analysis (FEA) studies where it is shown that they have a low impact. From previous studies, torque ripple is a known issue with this particular slot–pole combination of synchronous reluctance motor, and the use of two different commercially available semi-magnetic slot wedges is investigated as a method to improve torque quality. An analytical analysis of continuous rotor skewing is also investigated as an attempt to reduce the torque ripple. Finally, it is shown that through a combination of 2-D and 3-D FEA studies in conjunction with experimentally derived results on a prototype machine that axial fringing effects cannot be ignored when predicting the q-axis reactance in such machines. A comparison of measured orthogonal axis flux linkages/reactances with 3-D FEA studies is presented for the first time.",
"title": ""
},
{
"docid": "f1b3831db9900a2f573b76113cd4068c",
"text": "Digital signature has been widely employed in wireless mobile networks to ensure the authenticity of messages and identity of nodes. A paramount concern in signature verification is reducing the verification delay to ensure the network QoS. To address this issue, researchers have proposed the batch cryptography technology. However, most of the existing works focus on designing batch verification algorithms without sufficiently considering the impact of invalid signatures. The performance of batch verification could dramatically drop, if there are verification failures caused by invalid signatures. In this paper, we propose a Game-theory-based Batch Identification Model (GBIM) for wireless mobile networks, enabling nodes to find invalid signatures with the optimal delay under heterogeneous and dynamic attack scenarios. Specifically, we design an incomplete information game model between a verifier and its attackers, and prove the existence of Nash Equilibrium, to select the dominant algorithm for identifying invalid signatures. Moreover, we propose an auto-match protocol to optimize the identification algorithm selection, when the attack strategies can be estimated based on history information. Comprehensive simulation results demonstrate that GBIM can identify invalid signatures more efficiently than existing algorithms.",
"title": ""
},
{
"docid": "42c6eaae2cbdb850f634d987ab7d1cdb",
"text": "The main aim of this paper is to solve a path planning problem for an autonomous mobile robot in static and dynamic environments by determining the collision-free path that satisfies the chosen criteria for shortest distance and path smoothness. The algorithm mimics the real world by adding the actual size of the mobile robot to that of the obstacles and formulating the problem as a moving point in the free-space. The proposed path planning algorithm consists of three modules: in the first module, the path planning algorithm forms an optimised path by conducting a hybridized Particle Swarm Optimization-Modified Frequency Bat (PSO-MFB) algorithm that minimises distance and follows path smoothness criteria; in the second module, any infeasible points generated by the proposed PSO-MFB Algorithm are detected by a novel Local Search (LS) algorithm and integrated with the PSO-MFB algorithm to be converted into feasible solutions; the third module features obstacle detection and avoidance (ODA), which is triggered when the mobile robot detects obstacles within its sensing region, allowing it to avoid collision with obstacles. Simulations have been carried out that indicated that this method generates a feasible path even in complex dynamic environments and thus overcomes the shortcomings of conventional approaches such as grid methods. Comparisons with previous examples in the literature are also included in the results.",
"title": ""
},
{
"docid": "5b9a3e12d2a6550a7291dcb3aa964dd8",
"text": "To apply semiotics to organisational analysis and information systems design, it is essential to unite two basic concepts: the sign and the norm. A sign is anything that stands for something else for some community. A norm is a generalised disposition to the world shared by members of a community. When its condition is met, a norm generates a propositional attitude which may, but not necessarily will, affect the subject's behaviour. Norms reflect regularities in the behaviour of members in an organisation, allowing them to coordinate their actions. Organised behaviour is normgoverned behaviour. Signs trigger the norms leading to more signs being produced. Both signs and norms lend themselves to empirical study. The focus in this paper is on the properties of norms since those for signs are relatively well known. The paper discusses a number of different taxonomies of norms: formal, informal, technical; evaluative, perceptual, behavioural, cognitive; structure, action; substantive, communication and control. A semiotic analysis of information systems is adduced in this paper from the social, pragmatic, semantic, syntactic, empiric and physical perspectives. The paper finally presents a semiotic approach to information systems design, by discussing the method of information modelling and systems architecture. This approach shows advantages over other traditional one in a higher degree of separation of knowledge, and hence system’s consistency, integrity and maintainability.",
"title": ""
}
] |
scidocsrr
|
ef2deabfa26382894c8cb78670f3841a
|
Explaining Explanation For “Explainable Ai”
|
[
{
"docid": "e056192e11fb6430ec1d3e64c2336df3",
"text": "Teleological explanations (TEs) account for the existence or properties of an entity in terms of a function: we have hearts because they pump blood, and telephones for communication. While many teleological explanations seem appropriate, others are clearly not warranted--for example, that rain exists for plants to grow. Five experiments explore the theoretical commitments that underlie teleological explanations. With the analysis of [Wright, L. (1976). Teleological Explanations. Berkeley, CA: University of California Press] from philosophy as a point of departure, we examine in Experiment 1 whether teleological explanations are interpreted causally, and confirm that TEs are only accepted when the function invoked in the explanation played a causal role in bringing about what is being explained. However, we also find that playing a causal role is not sufficient for all participants to accept TEs. Experiment 2 shows that this is not because participants fail to appreciate the causal structure of the scenarios used as stimuli. In Experiments 3-5 we show that the additional requirement for TE acceptance is that the process by which the function played a causal role must be general in the sense of conforming to a predictable pattern. These findings motivate a proposal, Explanation for Export, which suggests that a psychological function of explanation is to highlight information likely to subserve future prediction and intervention. We relate our proposal to normative accounts of explanation from philosophy of science, as well as to claims from psychology and artificial intelligence.",
"title": ""
}
] |
[
{
"docid": "1d50c8598a41ed7953e569116f59ae41",
"text": "Several web-based platforms have emerged to ease the development of interactive or near real-time IoT applications by providing a way to connect things and services together and process the data they emit using a data flow paradigm. While these platforms have been found to be useful on their own, many IoT scenarios require the coordination of computing resources across the network: on servers, gateways and devices themselves. To address this, we explore how to extend existing IoT data flow platforms to create a system suitable for execution on a range of run time environments, toward supporting distributed IoT programs that can be partitioned between servers, gateways and devices. Eventually we aim to automate the distribution of data flows using appropriate distribution mechanism, and optimization heuristics based on participating resource capabilities and constraints imposed by the developer.",
"title": ""
},
{
"docid": "6a51e7a1b32a844160ba6a0e3b329b46",
"text": "We present an overview of the current pharmacological treatment of urinary incontinence (UI) in women, according to the latest evidence available. After a brief description of the lower urinary tract receptors and mediators (detrusor, bladder neck, and urethra), the potential sites of pharmacological manipulation in the treatment of UI are discussed. Each class of drug used to treat UI has been evaluated, taking into account published rate of effectiveness, different doses, and way of administration. The prevalence of the most common adverse effects and overall compliance had also been pointed out, with cost evaluation after 1 month of treatment for each class of drug. Moreover, we describe those newer agents whose efficacy and safety need to be further investigated. We stress the importance of a better understanding of the causes and pathophysiology of UI to ensure newer and safer treatments for such a debilitating condition.",
"title": ""
},
{
"docid": "5c05436c8ec2cce8d13279bd9f926510",
"text": "This paper concerns 18-40 GHz 1times 16 beam shaping and 1times 8 beam steering phased antenna arrays (PAAs) realized on a single low-cost printed circuit board substrate. The system consists of a wideband power divider with amplitude taper for sidelobe suppression, wideband microstrip-to-slotline transition, a low-cost true time piezoelectric transducer (PET)-controlled phase shifter, and wideband Fermi antennas with corrugations along the sides. A coplanar stripline is used under a PET-controlled phase shifter, which can generate 50% more phase shift compared to the perturbation on microstrip lines previously published. The systems are fabricated using electro-fine-forming microfabrication technology. Measured return loss is better than 10 dB from 18 to 40 GHz for both the beam-shaping and beam-steering PAAs. The beam-shaping PAA has a 12deg 3-dB beamwidth broadening range. The sidelobe ratios (SLRs) are 27, 23, and 20 dB at 20, 30, and 40 GHz, respectively, without perturbation. The SLRs are 20, 16, and 15 dB at 20, 30, and 40 GHz with maximum perturbation. The beam-steering PAA has a 36deg (-17deg to +19deg ) beam-scanning range measured at 30 GHz.",
"title": ""
},
{
"docid": "e1b9795030dac51172c20a49113fac23",
"text": "Bin packing problems are a class of optimization problems that have numerous applications in the industrial world, ranging from efficient cutting of material to packing various items in a larger container. We consider here only rectangular items cut off an infinite strip of material as well as off larger sheets of fixed dimensions. This problem has been around for many years and a great number of publications can be found on the subject. Nevertheless, it is often difficult to reconcile a theoretical paper and practical application of it. The present work aims to create simple but, at the same time, fast and efficient algorithms, which would allow one to write high-speed and capable software that can be used in a real-time application.",
"title": ""
},
{
"docid": "a8d9b1db27530c5170f5976dfe880bcd",
"text": "The success of Deep Learning and its potential use in many important safety- critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models to behave as black boxes and the theoretical hardness of proving their properties, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure. Unfortunately, most of these approaches test their algorithms without comparison with other approaches. As a result, the pros and cons of the different algorithms are not well understood. Motivated by the need to accelerate progress in this very important area, we investigate the trade-offs of a number of different approaches based on Mixed Integer Programming, Satisfiability Modulo Theory, as well as a novel method based on the Branch-and-Bound framework. We also propose a new data set of benchmarks, in addition to a collection of pre- viously released testcases that can be used to compare existing methods. Our analysis not only allows a comparison to be made between different strategies, the comparison of results from different solvers also revealed implementation bugs in published methods. We expect that the availability of our benchmark and the analysis of the different approaches will allow researchers to develop and evaluate promising approaches for making progress on this important topic.",
"title": ""
},
{
"docid": "ec788f48207b0a001810e1eabf6b2312",
"text": "Maximum likelihood factor analysis provides an effective method for estimation of factor matrices and a useful test statistic in the likelihood ratio for rejection of overly simple factor models. A reliability coefficient is proposed to indicate quality of representation of interrelations among attributes in a battery by a maximum likelihood factor analysis. Usually, for a large sample of individuals or objects, the likelihood ratio statistic could indicate that an otherwise acceptable factor model does not exactly represent the interrelations among the attributes for a population. The reliability coefficient could indicate a very close representation in this case and be a better indication as to whether to accept or reject the factor solution.",
"title": ""
},
{
"docid": "b85e5b1f819dbff1a91baba87975e17f",
"text": "Super capacitor has the advantage of quick charge, large power density, and long cycle life. The shortage is the lower energy density compared with electrochemistry batteries. These features make it suitable for a short-distance electric bus used in the city. Because of the capacitance difference between the capacitor cells, after a number of deep discharging/charging cycles, the voltage difference between cells will be enlarged. This will accelerate the aging of the weak super capacitors and affect the output power. So, a management system with an equalization function is essential. In this paper, a practical super-capacitor stacks management system with dynamic equalization techniques is proposed. The function of the management system includes: monitoring the current, voltage, and temperature of the stacks, control of charge, and discharge with equalization online. A switched-capacitor equalization approach is adopted showing a low-cost way to meet the accuracy requirements. The dynamic equalized charging and discharging circuit is described. The algorithm to increase the operation speed and the precision is analyzed. By dynamically redistributing the current, the equalization procedure can be quicker and more efficient. This approach has been verified by experiments",
"title": ""
},
{
"docid": "7a3b5ab64e9ef5cd0f0b89391bb8bee2",
"text": "Quality enhancement of humanitarian assistance is far from a technical task. It is interwoven with debates on politics of principles and people are intensely committed to the various outcomes these debates might have. It is a field of strongly competing truths, each with their own rationale and appeal. The last few years have seen a rapid increase in discussions, policy paper and organisational initiatives regarding the quality of humanitarian assistance. This paper takes stock of the present initiatives and of the questions raised with regard to the quality of humanitarian assistance.",
"title": ""
},
{
"docid": "367406644a29b4894df011b95add5985",
"text": "Graphs have long been proposed as a tool to browse and navigate in a collection of documents in order to support exploratory search. Many techniques to automatically extract different types of graphs, showing for example entities or concepts and different relationships between them, have been suggested. While experimental evidence that they are indeed helpful exists for some of them, it is largely unknown which type of graph is most helpful for a specific exploratory task. However, carrying out experimental comparisons with human subjects is challenging and time-consuming. Towards this end, we present the GraphDocExplore framework. It provides an intuitive web interface for graph-based document exploration that is optimized for experimental user studies. Through a generic graph interface, different methods to extract graphs from text can be plugged into the system. Hence, they can be compared at minimal implementation effort in an environment that ensures controlled comparisons. The system is publicly available under an open-source license.1",
"title": ""
},
{
"docid": "7a5fb7d551d412fd8bdbc3183dafc234",
"text": "Presentations have been an effective means of delivering information to groups for ages. Over the past few decades, technological advancements have revolutionized the way humans deliver presentations. Despite that, the quality of presentations can be varied and affected by a variety of reasons. Conventional presentation evaluation usually requires painstaking manual analysis by experts. Although the expert feedback can definitely assist users in improving their presentation skills, manual evaluation suffers from high cost and is often not accessible to most people. In this work, we propose a novel multi-sensor self-quantification framework for presentations. Utilizing conventional ambient sensors (i.e., static cameras, Kinect sensor) and the emerging wearable egocentric sensors (i.e., Google Glass), we first analyze the efficacy of each type of sensor with various nonverbal assessment rubrics, which is followed by our proposed multi-sensor presentation analytics framework. The proposed framework is evaluated on a new presentation dataset, namely NUS Multi-Sensor Presentation (NUSMSP) dataset, which consists of 51 presentations covering a diverse set of topics. The dataset was recorded with ambient static cameras, Kinect sensor, and Google Glass. In addition to multi-sensor analytics, we have conducted a user study with the speakers to verify the effectiveness of our system generated analytics, which has received positive and promising feedback.",
"title": ""
},
{
"docid": "0264a3c21559a1b9c78c42d7c9848783",
"text": "This paper presents the first linear bulk CMOS power amplifier (PA) targeting low-power fifth-generation (5G) mobile user equipment integrated phased array transceivers. The output stage of the PA is first optimized for power-added efficiency (PAE) at a desired error vector magnitude (EVM) and range given a challenging 5G uplink use case scenario. Then, inductive source degeneration in the optimized output stage is shown to enable its embedding into a two-stage transformer-coupled PA; by broadening interstage impedance matching bandwidth and helping to reduce distortion. Designed and fabricated in 1P7M 28 nm bulk CMOS and using a 1 V supply, the PA achieves +4.2 dBm/9% measured Pout/PAE at -25 dBc EVM for a 250 MHz-wide 64-quadrature amplitude modulation orthogonal frequency division multiplexing signal with 9.6 dB peak-to-average power ratio. The PA also achieves 35.5%/10% PAE for continuous wave signals at saturation/9.6 dB back-off from saturation. To the best of the authors' knowledge, these are the highest measured PAE values among published K-and Ka-band CMOS PAs.",
"title": ""
},
{
"docid": "c53e4ab482ff23697d75a4b3872c57b5",
"text": "Climate Change during and after the Roman Empire: Reconstructing the Past from Scientiac and Historical Evidence When this journal pioneered the study of history and climate in 1979, the questions quickly outstripped contemporary science and history. Today climate science uses a formidable and expanding array of new methods to measure pre-modern environments, and to open the way to exploring how Journal of Interdisciplinary History, xliii:2 (Autumn, 2012), 169–220.",
"title": ""
},
{
"docid": "eef1fd772f50fb38e882832ed082efbd",
"text": "In this paper, fingerprint images compressed with WSQ, CAWDR and JPEG2000 are evaluated for fingerprint recognition performance. With high compression ratio between 40:1 to 160:1, fingerprint images which lost a lot of their details are used to find fingercodes, or core point. Euclidean Distance is the method used to find the matched fingerprint. We also proposed the subband-reduced CAWDR method which results in comparative recognition performance to the conventional CAWDR. The results of recognition performance of all coders are summarized in the paper.",
"title": ""
},
{
"docid": "d02f4c07881b467b619b3d4a03bcade2",
"text": "As more users are connected to the Internet and conduct their daily activities electronically, computer users have become the target of an underground economy that infects hosts with malware or adware for financial gain. Unfortunately, even a single visit to an infected web site enables the attacker to detect vulnerabilities in the user’s applications and force the download a multitude of malware binaries. Frequently, this malware allows the adversary to gain full control of the compromised systems leading to the ex-filtration of sensitive information or installation of utilities that facilitate remote control of the host. We believe that such behavior is similar to our traditional understanding of botnets. However, the main difference is that web-based malware infections are pull-based and that the resulting command feedback loop is looser. To characterize the nature of this rising thread, we identify the four prevalent mechanisms used to inject malicious content on popular web sites: web server security, user contributed content, advertising and third-party widgets. For each of these areas, we present examples of abuse found on the Internet. Our aim is to present the state of malware on the Web and emphasize the importance of this rising threat.",
"title": ""
},
{
"docid": "3e4bdf640db171b95b5b27a3f0b4621a",
"text": "It is necessary to have a realistic estimation of its performance prior to the large scale deployment of any biometric system. In the domain of fingerprint biometric, 3D fingerprint scan technology has been developing very fast. However, there is no 3D fingerprint database publicly available for research purpose. To evaluate the matching performance of 3D fingerprints and the compatibility of 2D and 3D fingerprints comprehensively, we have established a large fingerprint database using two commercial fingerprint sensors. The database consists of both 3D fingerprints and their corresponding 2D fingerprints. We have carried out several verification experiments using a commercial fingerprint identification software. The results can be served as performance criterion of the database, which will be released publicly together with the database in late 2014.",
"title": ""
},
{
"docid": "e4319431eb83ed67ba03b66957de6f9e",
"text": "An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well. This paper gives overview of Artificial Neural Network, working & training of ANN. It also explain the application and advantages of ANN.",
"title": ""
},
{
"docid": "a5f557ddac63cd24a11c1490e0b4f6d4",
"text": "Continuous opinion dynamics optimizer (CODO) is an algorithm based on human collective opinion formation process for solving continuous optimization problems. In this paper, we have studied the impact of topology and introduction of leaders in the society on the optimization performance of CODO. We have introduced three new variants of CODO and studied the efficacy of algorithms on several benchmark functions. Experimentation demonstrates that scale free CODO performs significantly better than all algorithms. Also, the role played by individuals with different degrees during the optimization process is studied.",
"title": ""
},
{
"docid": "f141bd66dc2a842c21f905e3e01fa93c",
"text": "In this paper, we develop the nonsubsampled contourlet transform (NSCT) and study its applications. The construction proposed in this paper is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks. The result is a flexible multiscale, multidirection, and shift-invariant image decomposition that can be efficiently implemented via the a trous algorithm. At the core of the proposed scheme is the nonseparable two-channel nonsubsampled filter bank (NSFB). We exploit the less stringent design condition of the NSFB to design filters that lead to a NSCT with better frequency selectivity and regularity when compared to the contourlet transform. We propose a design framework based on the mapping approach, that allows for a fast implementation based on a lifting or ladder structure, and only uses one-dimensional filtering in some cases. In addition, our design ensures that the corresponding frame elements are regular, symmetric, and the frame is close to a tight one. We assess the performance of the NSCT in image denoising and enhancement applications. In both applications the NSCT compares favorably to other existing methods in the literature",
"title": ""
},
{
"docid": "fbac8859e581fd1622bad0b50ac0a3f5",
"text": "OBJECTIVE\nThis preliminary study sought to determine whether the imagery perspective used during mental practice (MP) differentially influenced performance outcomes after stroke.\n\n\nMETHOD\nNineteen participants with unilateral subacute stroke (9 men and 10 women, ages 28-77) were randomly allocated to one of three groups. All groups received 30-min occupational therapy sessions 2×/wk for 6 wk. Experimental groups received MP training in functional tasks using either an internal or an external perspective; the control group received relaxation imagery training. Participants were pre- and posttested using the Fugl-Meyer Motor Assessment (FMA), the Jebsen-Taylor Test of Hand Function (JTTHF), and the Canadian Occupational Performance Measure (COPM).\n\n\nRESULTS\nAt posttest, the internal and external experimental groups showed statistically similar improvements on the FMA and JTTHF (p < .05). All groups improved on the COPM (p < .05).\n\n\nCONCLUSION\nMP combined with occupational therapy improves upper-extremity recovery after stroke. MP does not appear to enhance self-perception of performance. This preliminary study suggests that imagery perspective may not be an important variable in MP interventions.",
"title": ""
},
{
"docid": "69fd3e6e9a1fc407d20b0fb19fc536e3",
"text": "In the last decade, the research topic of automatic analysis of facial expressions has become a central topic in machine vision research. Nonetheless, there is a glaring lack of a comprehensive, readily accessible reference set of face images that could be used as a basis for benchmarks for efforts in the field. This lack of easily accessible, suitable, common testing resource forms the major impediment to comparing and extending the issues concerned with automatic facial expression analysis. In this paper, we discuss a number of issues that make the problem of creating a benchmark facial expression database difficult. We then present the MMI facial expression database, which includes more than 1500 samples of both static images and image sequences of faces in frontal and in profile view displaying various expressions of emotion, single and multiple facial muscle activation. It has been built as a Web-based direct-manipulation application, allowing easy access and easy search of the available images. This database represents the most comprehensive reference set of images for studies on facial expression analysis to date.",
"title": ""
}
] |
scidocsrr
|
26a8f06753605e82112059ac7153c8d4
|
Hybrid Beamforming for Massive MIMO: A Survey
|
[
{
"docid": "b7d1428434a7274b55a00bce2cc0cf4f",
"text": "This paper studies wideband hybrid precoder for downlink space-division multiple-access and orthogonal frequency-division multiple-access (SDMA-OFDMA) massive multi-input multi-output (MIMO) systems. We first derive an iterative algorithm to alternatingly optimize the phase-shifter based wideband analog precoder and low-dimensional digital precoders, then an efficient low-complexity non-iterative hybrid precoder proposes. Simulation results show that in wideband systems the performance of hybrid precoder is affected by the employed frequency-domain scheduling method and the number of available radio frequency (RF) chains, which can perform as well as narrowband hybrid precoder when greedy scheduling is employed and the number of RF chains is large.",
"title": ""
}
] |
[
{
"docid": "20ebefc5be0e91e15e4773c633624224",
"text": "Effects of different levels of Biomin® IMBO synbiotic, including Enterococcus faecium (as probiotic), and fructooligosaccharides (as prebiotic) on survival, growth performance, and digestive enzyme activities of common carp fingerlings (Cyprinus carpio) were evaluated. The experiment was carried out in four treatments (each with 3 replicates), including T1 = control with non-synbiotic diet, T2 = 0.5 g/kg synbiotic diet, T3 = 1 g/kg synbiotic diet, and T4 = 1.5 g/kg synbiotic diet. In total 300 fish with an average weight of 10 ± 1 g were distributed in 12 tanks (25 animals per 300 l) and were fed experimental diets over a period of 60 days. The results showed that synbiotic could significantly enhance growth parameters (weight gain, length gain, specific growth rate, percentage weight gain) (P < 0.05), but did not exhibit any effect on survival rate (P > 0.05) compared with the control. An assay of the digestive enzyme activities demonstrated that the trypsin and chymotrypsin activities of synbiotic groups were considerably increased than those in the control (P < 0.05), but there was no significant difference in the levels of α-amylase, lipase, or alkaline phosphatase (P > 0.05). This study indicated that different levels of synbiotic have the capability to enhance probiotic substitution, to improve digestive enzyme activity which leads to digestive system efficiency, and finally to increase growth. It seems that the studied synbiotic could serve as a good diet supplement for common carp cultures.",
"title": ""
},
{
"docid": "bc1d4ce838971d6a04d5bf61f6c3f2d8",
"text": "This paper presents a novel network slicing management and orchestration architectural framework. A brief description of business scenarios and potential customers of network slicing is provided, illustrating the need for ordering network services with very different requirements. Based on specific customer goals (of ordering and building an end-to-end network slice instance) and other requirements gathered from industry and standardization associations, a solution is proposed enabling the automation of end-to-end network slice management and orchestration in multiple resource domains. This architecture distinguishes between two main design time and runtime components: Network Slice Design and Multi-Domain Orchestrator, belonging to different competence service areas with different players in these domains, and proposes the required interfaces and data structures between these components.",
"title": ""
},
{
"docid": "231732058c9eb87d953eb457b7298fb8",
"text": "The iris is regarded as one of the most useful traits for biometric recognition and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near infrared images with enough quality. Also, all of the publicly available iris image databases contain data correspondent to such imaging constraints and therefore are exclusively suitable to evaluate methods thought to operate on these type of environments. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multisession iris images database which singularly contains data captured in the visible wavelength, at-a-distance (between four and eight meters) and on on-the-move. This database is freely available for researchers concerned about visible wavelength iris recognition and will be useful in accessing the feasibility and specifying the constraints of this type of biometric recognition.",
"title": ""
},
{
"docid": "74dead8ad89ae4a55105fb7ae95d3e20",
"text": "Improved health is one of the many reasons people choose to adopt a vegetarian diet, and there is now a wealth of evidence to support the health benefi ts of a vegetarian diet. Abstract: There is now a significant amount of research that demonstrates the health benefits of vegetarian and plant-based diets, which have been associated with a reduced risk of obesity, diabetes, heart disease, and some types of cancer as well as increased longevity. Vegetarian diets are typically lower in fat, particularly saturated fat, and higher in dietary fiber. They are also likely to include more whole grains, legumes, nuts, and soy protein, and together with the absence of red meat, this type of eating plan may provide many benefits for the prevention and treatment of obesity and chronic health problems, including diabetes and cardiovascular disease. Although a well-planned vegetarian or vegan diet can meet all the nutritional needs of an individual, it may be necessary to pay particular attention to some nutrients to ensure an adequate intake, particularly if the person is on a vegan diet. This article will review the evidence for the health benefits of a vegetarian diet and also discuss strategies for meeting the nutritional needs of those following a vegetarian or plant-based eating pattern.",
"title": ""
},
{
"docid": "9f9dcb320149d4a84bec8b1587b73aa2",
"text": "The sheer volume of multimedia contents generated by today's Internet services are stored in the cloud. The traditional indexing method associating the user-generated metadata with the content is vulnerable to the inaccuracy caused by the low quality of the metadata. While the content-based indexing does not depend on the error-prone metadata. However, the state-of-the-art research focuses on developing descriptive features and miss the system-oriented considerations when incorporating these features into the practical cloud computing systems. We propose an Update-Efficient and Parallel-Friendly content-based multimedia indexing system, called Partitioned Hash Forest (PHF). The PHF system incorporates the state-of-the-art content-based indexing models and multiple system-oriented optimizations. PHF contains an approximate content-based index and leverages the hierarchical memory system to support the high volume of updates. Additionally, the content-aware data partitioning and lock-free concurrency management module enable the parallel processing of the concurrent user requests. We evaluate PHF in terms of indexing accuracy and system efficiency by comparing it with the state-of-the-art content-based indexing algorithm and its variances. We achieve the significantly better accuracy with less resource consumption, around 37% faster in update processing and up to 2.5X throughput speedup in a multi-core platform comparing to other parallel-friendly designs.",
"title": ""
},
{
"docid": "76cef1b6d0703127c3ae33bcf71cdef8",
"text": "Risks have a significant impact on a construction project’s performance in terms of cost, time and quality. As the size and complexity of the projects have increased, an ability to manage risks throughout the construction process has become a central element preventing unwanted consequences. How risks are shared between the project actors is to a large extent governed by the procurement option and the content of the related contract documents. Therefore, selecting an appropriate project procurement option is a key issue for project actors. The overall aim of this research is to increase the understanding of risk management in the different procurement options: design-bid-build contracts, designbuild contracts and collaborative form of partnering. Deeper understanding is expected to contribute to a more effective risk management and, therefore, a better project output and better value for both clients and contractors. The study involves nine construction projects recently performed in Sweden and comprises a questionnaire survey and a series of interviews with clients, contractors and consultants involved in these construction projects. The findings of this work show a lack of an iterative approach to risk management, which is a weakness in current procurement practices. This aspect must be addressed if the risk management process is to serve projects and, thus, their clients. The absence of systematic risk management is especially noted in the programme phase, where it arguably has the greatest potential impact. The production phase is where most interest and activity are to be found. As a matter of practice, the communication of risks between the actors simply does not work to the extent that it must if projects are to be delivered with certainty, irrespective of the form of procurement. A clear connection between the procurement option and risk management in construction projects has been found. Traditional design-bid-build contracts do not create opportunities for open discussion of project risks and joint risk management. A number of drivers of and obstacles to effective risk management have been explored in the study. Every actor’s involvement in dialogue, effective communication and information exchange, open attitudes and trustful relationship are the factors that support open discussion of project risks and, therefore, contribute to successful risk management. Based on the findings, a number of recommendations facilitating more effective risk management have been developed for the industry practitioners. Keywords--Risk Management, Risk Allocation, Construction Project, Construction Contract, Design-BidBuild, Design-Build, Partnering",
"title": ""
},
{
"docid": "dbc1fdbed86631ef12894ecd20e26ada",
"text": "With the popularity of mobile devices, mobile social networks (MSNs) have become an important platform for information dissemination. However, the spread of rumors in MSNs present a massive social threat. Currently, there are two kinds of methods to address this: blocking rumors at influential users and spreading truth to clarify rumors. However, most existing works either overlook the cost of various methods or only consider different methods individually. This paper proposes a heterogeneous-network-based epidemic model that incorporates the two kinds of methods to describe rumor spreading in MSNs. Moreover, two cost-efficient strategies are designed to restrain rumors. The first strategy is the real-time optimization strategy that minimizes the rumor-restraining cost by optimally combining various rumor-restraining methods such that a rumor can be extinct within an expected time period. The second strategy is the pulse spreading truth and continuous blocking rumor strategy that restrains rumor spreading through spreading truth periodically. The two strategies can restrain rumors in a continuous or periodical manner and guarantee cost efficiency. The experiments toward the Digg2009 data set demonstrate the effectiveness of the proposed model and the efficiency of the two strategies.",
"title": ""
},
{
"docid": "23b4f5576e2b279c17ce03410bd66162",
"text": "The aim of this review is to investigate barriers and challenges of wearable patient monitoring (WPM) solutions adopted by clinicians in acute, as well as in community, care settings. Currently, healthcare providers are coping with ever-growing healthcare challenges including an ageing population, chronic diseases, the cost of hospitalization, and the risk of medical errors. WPM systems are a potential solution for addressing some of these challenges by enabling advanced sensors, wearable technology, and secure and effective communication platforms between the clinicians and patients. A total of 791 articles were screened and 20 were selected for this review. The most common publication venue was conference proceedings (13, 54%). This review only considered recent studies published between 2015 and 2017. The identified studies involved chronic conditions (6, 30%), rehabilitation (7, 35%), cardiovascular diseases (4, 20%), falls (2, 10%) and mental health (1, 5%). Most studies focussed on the system aspects of WPM solutions including advanced sensors, wireless data collection, communication platform and clinical usability based on a specific area or disease. The current studies are progressing with localized sensor-software integration to solve a specific use-case/health area using non-scalable and ‘silo’ solutions. There is further work required regarding interoperability and clinical acceptance challenges. The advancement of wearable technology and possibilities of using machine learning and artificial intelligence in healthcare is a concept that has been investigated by many studies. We believe future patient monitoring and medical treatments will build upon efficient and affordable solutions of wearable technology.",
"title": ""
},
{
"docid": "b62b8862d26e5ce5bcbd2b434aff5d0e",
"text": "In this demo paper we present Docear's research paper recommender system. Docear is an academic literature suite to search, organize, and create research articles. The users' data (papers, references, annotations, etc.) is managed in mind maps and these mind maps are utilized for the recommendations. Using content-based filtering methods, Docear's recommender achieves click-through rates around 6%, in some scenarios even over 10%.",
"title": ""
},
{
"docid": "732eb96d39d250e6b1355f7f4d53feed",
"text": "Determine blood type is essential before administering a blood transfusion, including in emergency situation. Currently, these tests are performed manually by technicians, which can lead to human errors. Various systems have been developed to automate these tests, but none is able to perform the analysis in time for emergency situations. This work aims to develop an automatic system to perform these tests in a short period of time, adapting to emergency situations. To do so, it uses the slide test and image processing techniques using the IMAQ Vision from National Instruments. The image captured after the slide test is processed and detects the occurrence of agglutination. Next the classification algorithm determines the blood type in analysis. Finally, all the information is stored in a database. Thus, the system allows determining the blood type in an emergency, eliminating transfusions based on the principle of universal donor and reducing transfusion reactions risks.",
"title": ""
},
{
"docid": "1b59c72119b13e5860cab40e18264aa6",
"text": "A taxonomy is a semantic hierarchy, consisting of concepts linked by is-a relations. While a large number of taxonomies have been constructed from human-compiled resources (e.g., Wikipedia), learning taxonomies from text corpora has received a growing interest and is essential for longtailed and domain-specific knowledge acquisition. In this paper, we overview recent advances on taxonomy construction from free texts, reorganizing relevant subtasks into a complete framework. We also overview resources for evaluation and discuss challenges for future research.",
"title": ""
},
{
"docid": "aa72af5867ec5862706fc66bacfd622a",
"text": "This paper presents a sensor fusion strategy applied for Simultaneous Localization and Mapping (SLAM) in dynamic environments. The designed approach consists of two features: (i) the first one is a fusion module which synthesizes line segments obtained from laser rangefinder and line features extracted from monocular camera. This policy eliminates any pseudo segments that appear from any momentary pause of dynamic objects in laser data. (ii) The second characteristic is a modified multi-sensor point estimation fusion SLAM (MPEF-SLAM) that incorporates two individual Extended Kalman Filter (EKF) based SLAM algorithms: monocular and laser SLAM. The error of the localization in fused SLAM is reduced compared with those of individual SLAM. Additionally, a new data association technique based on the homography transformation matrix is developed for monocular SLAM. This data association method relaxes the pleonastic computation. The experimental results validate the performance of the proposed sensor fusion and data association method.",
"title": ""
},
{
"docid": "906c92a4e913d2b7e478155492a69013",
"text": "Most investigations into near-memory hardware accelerators for deep neural networks have primarily focused on inference, while the potential of accelerating training has received relatively little attention so far. Based on an in-depth analysis of the key computational patterns in state-of-the-art gradient-based training methods, we propose an efficient near-memory acceleration engine called NTX that can be used to train state-of-the-art deep convolutional neural networks at scale. Our main contributions are: (i) a loose coupling of RISC-V cores and NTX co-processors reducing offloading overhead by <inline-formula><tex-math notation=\"LaTeX\">$7\\times$</tex-math><alternatives><mml:math><mml:mrow><mml:mn>7</mml:mn><mml:mo>×</mml:mo></mml:mrow></mml:math><inline-graphic xlink:href=\"schuiki-ieq1-2876312.gif\"/></alternatives></inline-formula> over previously published results; (ii) an optimized IEEE 754 compliant data path for fast high-precision convolutions and gradient propagation; (iii) evaluation of near-memory computing with NTX embedded into residual area on the Logic Base die of a Hybrid Memory Cube; and (iv) a scaling analysis to meshes of HMCs in a data center scenario. We demonstrate a <inline-formula><tex-math notation=\"LaTeX\">$2.7\\times$</tex-math><alternatives><mml:math><mml:mrow><mml:mn>2</mml:mn><mml:mo>.</mml:mo><mml:mn>7</mml:mn><mml:mo>×</mml:mo></mml:mrow></mml:math><inline-graphic xlink:href=\"schuiki-ieq2-2876312.gif\"/></alternatives></inline-formula> energy efficiency improvement of NTX over contemporary GPUs at <inline-formula><tex-math notation=\"LaTeX\">$4.4\\times$</tex-math><alternatives><mml:math><mml:mrow><mml:mn>4</mml:mn><mml:mo>.</mml:mo><mml:mn>4</mml:mn><mml:mo>×</mml:mo></mml:mrow></mml:math><inline-graphic xlink:href=\"schuiki-ieq3-2876312.gif\"/></alternatives></inline-formula> less silicon area, and a compute performance of 1.2 Tflop/s for training large state-of-the-art networks with full floating-point precision. At the data center scale, a mesh of NTX achieves above 95 percent parallel and energy efficiency, while providing <inline-formula><tex-math notation=\"LaTeX\">$2.1\\times$</tex-math><alternatives><mml:math><mml:mrow><mml:mn>2</mml:mn><mml:mo>.</mml:mo><mml:mn>1</mml:mn><mml:mo>×</mml:mo></mml:mrow></mml:math><inline-graphic xlink:href=\"schuiki-ieq4-2876312.gif\"/></alternatives></inline-formula> energy savings or <inline-formula><tex-math notation=\"LaTeX\">$3.1\\times$</tex-math><alternatives><mml:math><mml:mrow><mml:mn>3</mml:mn><mml:mo>.</mml:mo><mml:mn>1</mml:mn><mml:mo>×</mml:mo></mml:mrow></mml:math><inline-graphic xlink:href=\"schuiki-ieq5-2876312.gif\"/></alternatives></inline-formula> performance improvement over a GPU-based system.",
"title": ""
},
{
"docid": "ef8be5104f9bc4a0f4353ed236b6afb8",
"text": "State-of-the-art human pose estimation methods are based on heat map representation. In spite of the good performance, the representation has a few issues in nature, such as non-differentiable postprocessing and quantization error. This work shows that a simple integral operation relates and unifies the heat map representation and joint regression, thus avoiding the above issues. It is differentiable, efficient, and compatible with any heat map based methods. Its effectiveness is convincingly validated via comprehensive ablation experiments under various settings, specifically on 3D pose estimation, for the first time.",
"title": ""
},
{
"docid": "5d3977c0a7e3e1a4129693342c6be3d3",
"text": "With the fast advances in nextgen sequencing technology, high-throughput RNA sequencing has emerged as a powerful and cost-effective way for transcriptome study. De novo assembly of transcripts provides an important solution to transcriptome analysis for organisms with no reference genome. However, there lacked understanding on how the different variables affected assembly outcomes, and there was no consensus on how to approach an optimal solution by selecting software tool and suitable strategy based on the properties of RNA-Seq data. To reveal the performance of different programs for transcriptome assembly, this work analyzed some important factors, including k-mer values, genome complexity, coverage depth, directional reads, etc. Seven program conditions, four single k-mer assemblers (SK: SOAPdenovo, ABySS, Oases and Trinity) and three multiple k-mer methods (MK: SOAPdenovo-MK, trans-ABySS and Oases-MK) were tested. While small and large k-mer values performed better for reconstructing lowly and highly expressed transcripts, respectively, MK strategy worked well for almost all ranges of expression quintiles. Among SK tools, Trinity performed well across various conditions but took the longest running time. Oases consumed the most memory whereas SOAPdenovo required the shortest runtime but worked poorly to reconstruct full-length CDS. ABySS showed some good balance between resource usage and quality of assemblies. Our work compared the performance of publicly available transcriptome assemblers, and analyzed important factors affecting de novo assembly. Some practical guidelines for transcript reconstruction from short-read RNA-Seq data were proposed. De novo assembly of C. sinensis transcriptome was greatly improved using some optimized methods.",
"title": ""
},
{
"docid": "3f03e4cf6c360462c43f2583386db735",
"text": "Owing to the limited resources of the sensor nodes, designing energy-efficient routing mechanism to prolong the overall network lifetime becomes one of the most important technologies in wireless sensor networks (WSNs). As an active branch of routing technology, cluster-based routing protocols have proven to be effective in network topology management, energy minimization, data aggregation and so on. In this paper, we present a survey of state-of-the-art routing techniques in WSNs. We first outline the clustering architecture in WSNs, and classify the proposed approaches based on their objectives and design principles. Furthermore, we highlight the challenges in clustering WSNs, including rotating the role of cluster heads, optimization of cluster size and communication mode, followed by a comprehensive survey of routing techniques. Finally, the paper concludes with possible future research areas.",
"title": ""
},
{
"docid": "7272ebab22d3efec95792acece86b4dd",
"text": "Many of today's machine learning (ML) systems are built by reusing an array of, often pre-trained, primitive models, each fulfilling distinct functionality (e.g., feature extraction). The increasing use of primitive models significantly simplifies and expedites the development cycles of ML systems. Yet, because most of such models are contributed and maintained by untrusted sources, their lack of standardization or regulation entails profound security implications, about which little is known thus far. In this paper, we demonstrate that malicious primitive models pose immense threats to the security of ML systems. We present a broad class of model-reuse attacks wherein maliciously crafted models trigger host ML systems to misbehave on targeted inputs in a highly predictable manner. By empirically studying four deep learning systems (including both individual and ensemble systems) used in skin cancer screening, speech recognition, face verification, and autonomous steering, we show that such attacks are (i) effective - the host systems misbehave on the targeted inputs as desired by the adversary with high probability, (ii) evasive - the malicious models function indistinguishably from their benign counterparts on non-targeted inputs, (iii) elastic - the malicious models remain effective regardless of various system design choices and tuning strategies, and (iv) easy - the adversary needs little prior knowledge about the data used for system tuning or inference. We provide analytical justification for the effectiveness of model-reuse attacks, which points to the unprecedented complexity of today's primitive models. This issue thus seems fundamental to many ML systems. We further discuss potential countermeasures and their challenges, which lead to several promising research directions.",
"title": ""
},
{
"docid": "ed95c3c25fe1dd3097b5ca84e0569b03",
"text": "The state-of-the-art in semantic segmentation is currently represented by fully convolutional networks (FCNs). However, FCNs use large receptive fields and many pooling layers, both of which cause blurring and low spatial resolution in the deep layers. As a result FCNs tend to produce segmentations that are poorly localized around object boundaries. Prior work has attempted to address this issue in post-processing steps, for example using a color-based CRF on top of the FCN predictions. However, these approaches require additional parameters and low-level features that are difficult to tune and integrate into the original network architecture. Additionally, most CRFs use colorbased pixel affinities, which are not well suited for semantic segmentation and lead to spatially disjoint predictions. To overcome these problems, we introduce a Boundary Neural Field (BNF), which is a global energy model integrating FCN predictions with boundary cues. The boundary information is used to enhance semantic segment coherence and to improve object localization. Specifically, we first show that the convolutional filters of semantic FCNs provide good features for boundary detection. We then employ the predicted boundaries to define pairwise potentials in our energy. Finally, we show that our energy decomposes semantic segmentation into multiple binary problems, which can be relaxed for efficient global optimization. We report extensive experiments demonstrating that minimization of our global boundary-based energy yields results superior to prior globalization methods, both quantitatively as well as qualitatively.",
"title": ""
},
{
"docid": "cf20ffac349478b3fc5753624eb17c7f",
"text": "Knowledge stickiness often impedes knowledge transfer. When knowledge is complex and the knowledge seeker lacks intimacy with the knowledge source, knowledge sticks in its point of origin because the knowledge seeker faces ambiguity about the best way to acquire the needed knowledge. We theorize that, given the extent of that ambiguity, knowledge seekers will make a choice to either ask for needed knowledge immediately after deciding it is needed, or wait and ask for it at a later date. We hypothesize that when knowledge is sticky, knowledge seekers will delay asking for knowledge and, in the interim period, use an enterprise social networking site to gather information that can lubricate stuck knowledge, such as how, when, and in what way to ask for the desired knowledge. We propose that by doing this, knowledge seekers can increase their ultimate satisfaction with the knowledge once they ask for it. Data describing specific instances of knowledge transfer occurring in a large telecommunications firm supported these hypotheses, showing that knowledge transfer is made easier by the fact that enterprise social networking sites make other peoples’ communications visible to casual observers such that knowledge seekers can gather information about the knowledge and its source simply by watching his or her actions through the technology, even if they never interacted with the source directly themselves. The findings show that simple awareness of others’ communications (what we call ambient awareness) played a pivotal role in helping knowledge seekers to obtain interpersonal and knowledge-related material with which to lubricate their interactions with knowledge sources. 1University of California, Santa Barbara, CA, USA 2Northwestern University, Evanston, IL, USA Corresponding Author: Paul M. Leonardi, Phelps Hall, University of California, Santa Barbara, CA, USA, 93106-5129. Email: Leonardi@tmp.ucsb.edu 540509 ABSXXX10.1177/0002764214540509American Behavioral ScientistLeonardi and Meyer research-article2014 at UNIV CALIFORNIA SANTA BARBARA on December 9, 2014 abs.sagepub.com Downloaded from Leonardi and Meyer 11",
"title": ""
},
{
"docid": "bb2b3944f72c0d1a530f971ddf6dc6fb",
"text": "UNLABELLED\nAny suture material, absorbable or nonabsorbable, elicits a kind of inflammatory reaction within the tissue. Nonabsorbable black silk suture and absorbable polyglycolic acid suture were compared clinically and histologically on various parameters.\n\n\nMATERIALS AND METHODS\nThis study consisted of 50 patients requiring minor surgical procedure, who were referred to the Department of Oral and Maxillofacial Surgery. Patients were selected randomly and sutures were placed in the oral cavity 7 days preoperatively. Polyglycolic acid was placed on one side and black silk suture material on the other. Seven days later, prior to surgical procedure the sutures will be assessed. After the surgical procedure the sutures will be placed postoperatively in the same way for 7 days, after which the sutures will be assessed clinically and histologically.\n\n\nRESULTS\nThe results of this study showed that all the sutures were retained in case of polyglycolic acid suture whereas four cases were not retained in case of black silk suture. As far as polyglycolic acid suture is concerned 25 cases were mild, 18 cases moderate and seven cases were severe. Black silk showed 20 mild cases, 21 moderate cases and six severe cases. The histological results showed that 33 cases showed mild, 14 cases moderate and three cases severe in case of polyglycolic acid suture. Whereas in case of black silk suture 41 cases were mild. Seven cases were moderate and two cases were severe. Black silk showed milder response than polyglycolic acid suture histologically.\n\n\nCONCLUSION\nThe polyglycolic acid suture was more superior because in all 50 patients the suture was retained. It had less tissue reaction, better handling characteristics and knotting capacity.",
"title": ""
}
] |
scidocsrr
|
37d190ba97fe95a7e8e43e0ad80091ba
|
pSnakes: A new radial active contour model and its application in the segmentation of the left ventricle from echocardiographic images
|
[
{
"docid": "f3c2663cb0341576d754bb6cd5f2c0f5",
"text": "This article surveys deformable models, a promising and vigorously researched computer-assisted medical image analysis technique. Among model-based techniques, deformable models offer a unique and powerful approach to image analysis that combines geometry, physics and approximation theory. They have proven to be effective in segmenting, matching and tracking anatomic structures by exploiting (bottom-up) constraints derived from the image data together with (top-down) a priori knowledge about the location, size and shape of these structures. Deformable models are capable of accommodating the significant variability of biological structures over time and across different individuals. Furthermore, they support highly intuitive interaction mechanisms that, when necessary, allow medical scientists and practitioners to bring their expertise to bear on the model-based image interpretation task. This article reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching and motion tracking.",
"title": ""
}
] |
[
{
"docid": "18833ea97a4562b2be5f46d4cc3af354",
"text": "Cloud computing is the trend of information resources sharing and also an important tool for enterprises or organizations to enhance competitiveness. Users have the flexibility to adjust request and configuration. However, network security and resource sharing issues have continued to threaten the operation of cloud computing, making cloud computing security encounter serious test. How to evaluate cloud computing security has become a topic worthy of further exploration. Combining system, management and technique three levels security factors, this paper proposes a Security Threats Measurement Model (STMM). Applying the STMM, security of cloud computing system environment can be effectively evaluated and cloud computing security defects and problems can be concretely identified. Based on the evaluation results, the user can choice the higher security service provider or request the service provider security improvement actions.",
"title": ""
},
{
"docid": "4f3066f6d45bc48cfe655642f384e09a",
"text": "There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of categorical perception. In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, surprise expressions lie between happiness and fear expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the tasks' implementations in the brain.",
"title": ""
},
{
"docid": "f78e430994e9eeccd034df76d2b5316a",
"text": "An externally leveraged circular resonant piezoelectric actuator with haptic natural frequency and fast response time was developed within the volume of 10 mm diameter and 3.4 mm thickness for application in mobile phones. An efficient displacement-amplifying mechanism was developed using a piezoelectric bimorph, a lever system, and a mass-spring system. The proposed displacement-amplifying mechanism utilizes both internally and externally leveraged structures. The former generates bending by means of bending deformation of the piezoelectric bimorph, and the latter transforms the bending to radial displacement of the lever system, which is transformed to a large axial displacement of the spring. The piezoelectric bimorph, lever system, and spring were designed to maximize static displacement and the mass-spring system was designed to have a haptic natural frequency. The static displacement, natural frequency, maximum output displacement, and response time of the resonant piezoelectric actuator were calculated by means of finite-element analyses. The proposed resonant piezoelectric actuator was prototyped and the simulated results were verified experimentally. The prototyped piezoelectric actuator generated the maximum output displacement of 290 μm at the haptic natural frequency of 242 Hz. Owing to the proposed efficient displacement-amplifying mechanism, the proposed resonant piezoelectric actuator had the fast response time of 14 ms, approximately one-fifth of a conventional resonant piezoelectric actuator of the same size.",
"title": ""
},
{
"docid": "49002be42dfa6e6998e6975203357e3b",
"text": "In this paper, we present a new tone mapping algorithm for the display of high dynamic range images, inspired by adaptive process of the human visual system. The proposed algorithm is based on the center-surround Retinex processing. In our method, the local details are enhanced according to a non-linear adaptive spatial filter (Gaussian filter), whose shape (filter variance) is adapted to high-contrast edges of the image. Thus our method does not generate halo artifacts meanwhile preserves visibility and contrast impression of high dynamic range scenes in the common display devices. The proposed method is tested on a variety of HDR images and the results show the good performance of our method in terms of visual quality.",
"title": ""
},
{
"docid": "5f1be582b0afdea606e388fad9ca477b",
"text": "Aims: This article reports on construct validity and reliability of 30 items of the Practice Environment Scale of the Nursing Work Index (PES-NWI). Background: Australia, like other countries is experiencing a shortage of nurses and a multifactor approach to retention of nurses is required. One significant factor that has received increasing attention in the last decade, particularly in the United States is the nursing practice environment. Design: The reliability of the 30 items of the PES-NWI was assessed by Cronbach’s alpha and factor analysis was performed using principal component analysis. Setting: The PES-NWI was completed by nurses working in the aged care, private and public sectors in Queensland, Australia. Participants: A total of 3,000 were distributed to a random sample of members of the Queensland Nurses Union. Of these 1192 surveys were returned, a response rate of 40%. Results: The PES-NWI was shown to be reliable demonstrating internal consistency with a Cronbach’s alpha of the total scale of 0.948. The 30 items loaded onto 5 factors explaining 57.7% of the variance. The items across the factors differed slightly from those reported by the original author of the PES-NWI. Conclusion: This study indicates that the PES-NWI has construct validity and reliability in the Australian setting for nurses.",
"title": ""
},
{
"docid": "81b2a039a391b5f2c1a9a15c94f1f67e",
"text": "Evolution of resistance in pests can reduce the effectiveness of insecticidal proteins from Bacillus thuringiensis (Bt) produced by transgenic crops. We analyzed results of 77 studies from five continents reporting field monitoring data for resistance to Bt crops, empirical evaluation of factors affecting resistance or both. Although most pest populations remained susceptible, reduced efficacy of Bt crops caused by field-evolved resistance has been reported now for some populations of 5 of 13 major pest species examined, compared with resistant populations of only one pest species in 2005. Field outcomes support theoretical predictions that factors delaying resistance include recessive inheritance of resistance, low initial frequency of resistance alleles, abundant refuges of non-Bt host plants and two-toxin Bt crops deployed separately from one-toxin Bt crops. The results imply that proactive evaluation of the inheritance and initial frequency of resistance are useful for predicting the risk of resistance and improving strategies to sustain the effectiveness of Bt crops.",
"title": ""
},
{
"docid": "033792460507de261ee77c96dae3a6f7",
"text": "Being happy and finding life meaningful overlap, but there are important differences. A large survey revealed multiple differing predictors of happiness (controlling for meaning) and meaningfulness (controlling for happiness). Satisfying one’s needs and wants increased happiness but was largely irrelevant to meaningfulness. Happiness was largely present oriented, whereas meaningfulness involves integrating past, present, and future. For example, thinking about future and past was associated with high meaningfulness but low happiness. Happiness was linked to being a taker rather than a giver, whereas meaningfulness went with being a giver rather than a taker. Higher levels of worry, stress, and anxiety were linked to higher meaningfulness but lower happiness. Concerns with personal identity and expressing the self contributed to meaning but not happiness. We offer brief composite sketches of the unhappy but meaningful life and of the happy but meaningless life.",
"title": ""
},
{
"docid": "28fcee5c28c2b3aae6f4761afb00ebc2",
"text": "The presence of sarcasm in text can hamper the performance of sentiment analysis. The challenge is to detect the existence of sarcasm in texts. This challenge is compounded when bilingual texts are considered, for example using Malay social media data. In this paper a feature extraction process is proposed to detect sarcasm using bilingual texts; more specifically public comments on economic related posts on Facebook. Four categories of feature that can be extracted using natural language processing are considered; lexical, pragmatic, prosodic and syntactic. We also investigated the use of idiosyncratic feature to capture the peculiar and odd comments found in a text. To determine the effectiveness of the proposed process, a non-linear Support Vector Machine was used to classify texts, in terms of the identified features, according to whether they included sarcastic content or not. The results obtained demonstrate that a combination of syntactic, pragmatic and prosodic features produced the best performance with an F-measure score of 0.852.",
"title": ""
},
{
"docid": "62de6de8b92e4bba6ee947cd475363ee",
"text": "In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from vast amount of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture along with the homogenous neuro-synaptic dynamics implemented with nanoscale phase-change memristors represent a significant step towards the development of ultrahigh-density neuromorphic co-processors.",
"title": ""
},
{
"docid": "f35da2c06c3468fdf06884f85f8b2632",
"text": "Volunteer-based crowdsourcing depend critically on maintaining the engagement of participants. We explore a methodology for extending engagement in citizen science by combining machine learning with intervention design. We first present a platform for using real-time predictions about forthcoming disengagement to guide interventions. Then we discuss a set of experiments with delivering different messages to users based on the proximity to the predicted time of disengagement. The messages address motivational factors that were found in prior studies to influence users’ engagements. We evaluate this approach on Galaxy Zoo, one of the largest citizen science application on the web, where we traced the behavior and contributions of thousands of users who received intervention messages over a period of a few months. We found sensitivity of the amount of user contributions to both the timing and nature of the message. Specifically, we found that a message emphasizing the helpfulness of individual users significantly increased users’ contributions when delivered according to predicted times of disengagement, but not when delivered at random times. The influence of the message on users’ contributions was more pronounced as additional user data was collected and made available to the classifier.",
"title": ""
},
{
"docid": "a52673140d86780db6c73787e5f53139",
"text": "Human papillomavirus (HPV) is the most important etiological factor for cervical cancer. A recent study demonstrated that more than 20 HPV types were thought to be oncogenic for uterine cervical cancer. Notably, more than one-half of women show cervical HPV infections soon after their sexual debut, and about 90 % of such infections are cleared within 3 years. Immunity against HPV might be important for elimination of the virus. The innate immune responses involving macrophages, natural killer cells, and natural killer T cells may play a role in the first line of defense against HPV infection. In the second line of defense, adaptive immunity via cytotoxic T lymphocytes (CTLs) targeting HPV16 E2 and E6 proteins appears to eliminate cells infected with HPV16. However, HPV can evade host immune responses. First, HPV does not kill host cells during viral replication and therefore neither presents viral antigen nor induces inflammation. HPV16 E6 and E7 proteins downregulate the expression of type-1 interferons (IFNs) in host cells. The lack of co-stimulatory signals by inflammatory cytokines including IFNs during antigen recognition may induce immune tolerance rather than the appropriate responses. Moreover, HPV16 E5 protein downregulates the expression of HLA-class 1, and it facilitates evasion of CTL attack. These mechanisms of immune evasion may eventually support the establishment of persistent HPV infection, leading to the induction of cervical cancer. Considering such immunological events, prophylactic HPV16 and 18 vaccine appears to be the best way to prevent cervical cancer in women who are immunized in adolescence.",
"title": ""
},
{
"docid": "f6463026a75a981c22e00a98990a095a",
"text": "Thanks to their anonymity (pseudonymity) and elimination of trusted intermediaries, cryptocurrencies such as Bitcoin have created or stimulated growth in many businesses and communities. Unfortunately, some of these are criminal, e.g., money laundering, illicit marketplaces, and ransomware. Next-generation cryptocurrencies such as Ethereum will include rich scripting languages in support of smart contracts, programs that autonomously intermediate transactions. In this paper, we explore the risk of smart contracts fueling new criminal ecosystems. Specifically, we show how what we call criminal smart contracts (CSCs) can facilitate leakage of confidential information, theft of cryptographic keys, and various real-world crimes (murder, arson, terrorism).\n We show that CSCs for leakage of secrets (a la Wikileaks) are efficiently realizable in existing scripting languages such as that in Ethereum. We show that CSCs for theft of cryptographic keys can be achieved using primitives, such as Succinct Non-interactive ARguments of Knowledge (SNARKs), that are already expressible in these languages and for which efficient supporting language extensions are anticipated. We show similarly that authenticated data feeds, an emerging feature of smart contract systems, can facilitate CSCs for real-world crimes (e.g., property crimes).\n Our results highlight the urgency of creating policy and technical safeguards against CSCs in order to realize the promise of smart contracts for beneficial goals.",
"title": ""
},
{
"docid": "00f106ff157e515ed8fde53fdaf1491e",
"text": "In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules and combine pre-trained modules with untrained modules, to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.",
"title": ""
},
{
"docid": "7cc362ec57b9b4a8f0e5d9beaf0ed02f",
"text": "Conclusions Trading Framework Deep Learning has become a robust machine learning tool in recent years, and models based on deep learning has been applied to various fields. However, applications of deep learning in the field of computational finance are still limited[1]. In our project, Long Short Term Memory (LSTM) Networks, a time series version of Deep Neural Networks model, is trained on the stock data in order to forecast the next day‘s stock price of Intel Corporation (NASDAQ: INTC): our model predicts next day’s adjusted closing price based on information/features available until the present day. Based on the predicted price, we trade the Intel stock according to the strategy that we developed, which is described below. Locally Weighted Regression has also been performed in lieu of the unsupervised learning model for comparison.",
"title": ""
},
{
"docid": "50bd1cd30682a2a7bbb7570b636c504a",
"text": "A 3-stage wideband power amplifier (PA) using a 0.15 pm gallium nitride (GaN) monolithic microwave integrated circuit (MMIC) process from NRC is designed, fabricated, and measured. After characterization of the high electron mobility transistor (HEMT), a non-linear model was created from the measured data for use in the design. The reactively matched 3.8 mm × 1.8 mm PA also uses resistive elements for gain compensation and circuit stability. Measurements at 20 dBm source power show 35–38 dBm output power and 10–18% PAE over a 6 to 17 GHz bandwidth. These results demonstrate the highest output power per die area for a 3-stage GaN MMIC PA of this bandwidth in this power range.",
"title": ""
},
{
"docid": "d72652b6ad54422e6864baccc88786a8",
"text": "Neisseria meningitidis is a major global pathogen that continues to cause endemic and epidemic human disease. Initial exposure typically occurs within the nasopharynx, where the bacteria can invade the mucosal epithelium, cause fulminant sepsis, and disseminate to the central nervous system, causing bacterial meningitis. Recently, Chamot-Rooke and colleagues1 described a unique virulence property of N. meningitidis in which the bacterial surface pili, after contact with host cells, undergo a modification that facilitates both systemic invasion and the spread of colonization to close contacts. Person-to-person spread of N. meningitidis can result in community epidemics of bacterial meningitis, with major consequences for public health. In resource-poor nations, cyclical outbreaks continue to result in high mortality and long-term disability, particularly in sub-Saharan Africa, where access to early diagnosis, antibiotic therapy, and vaccination is limited.2,3 An exclusively human pathogen, N. meningitidis uses several virulence factors to cause disease. Highly charged and hydrophilic capsular polysaccharides protect N. meningitidis from phagocytosis and complement-mediated bactericidal activity of the innate immune system. A family of proteins (called opacity proteins) on the bacterial outer membrane facilitate interactions with both epithelial and endothelial cells. These proteins are phase-variable — that is, the genome of the bacterium encodes related opacity proteins that are variably expressed, depending on environment, allowing the bacterium to adjust to rapidly changing environmental conditions. Lipooligosaccharide, analogous to the lipopolysaccharide of enteric gram-negative bacteria, contains a lipid A moiety with endotoxin activity that promotes the systemic sepsis encountered clinically. However, initial attachment to host cells is primarily mediated by filamentous organelles referred to as type IV pili, which are common to many bacterial pathogens and unique in their ability to undergo both antigenic and phase variation. Within hours of attachment to the host endothelial cell, N. meningitidis induces the formation of protrusions in the plasma membrane of host cells that aggregate the bacteria into microcolonies and facilitate pili-mediated contacts between bacteria and between bacteria and host cells. After attachment and aggregation, N. meningitidis detaches from the aggregates to systemically invade the host, by means of a transcellular pathway that crosses the respiratory epithelium,4 or becomes aerosolized and spreads the colonization of new hosts (Fig. 1). Chamot-Rooke et al. dissected the molecular mechanism underlying this critical step of systemic invasion and person-to-person spread and reported that pathogenesis depends on a unique post-translational modification of the type IV pili. Using whole-protein mass spectroscopy, electron microscopy, and molecular modeling, they showed that the major component of N. meningitidis type IV pili (called PilE or pilin) undergoes an unusual post-translational modification by phosphoglycerol. Expression of pilin phosphotransferase, the enzyme that transfers phosphoglycerol onto pilin, is increased within 4 hours of meningococcus contact with host cells and modifies the serine residue at amino acid position 93 of pilin, altering the charge of the pilin structure and thereby destabilizing the pili bundles, reducing bacterial aggregation, and promoting detachment from the cell surface. Strains of N. 
meningitidis in which phosphoglycerol modification of pilin occurred had a greatly enhanced ability to cross epithelial monolayers, a finding that supports the view that this virulence property, which causes deaggregation, promotes both transmission to new hosts and systemic invasion. Although this new molecular understanding of N. meningitidis virulence in humans is provocative,",
"title": ""
},
{
"docid": "11a3ee5afc835a47a6a9529940d237f1",
"text": "BACKGROUND\nAntibiotic therapy is commonly used to treat hidradenitis suppurativa (HS). Although concern for antibiotic resistance exists, data examining the association between antibiotics and antimicrobial resistance in HS lesions are limited.\n\n\nOBJECTIVE\nWe sought to determine the frequency of antimicrobial resistance in HS lesions from patients on antibiotic therapy.\n\n\nMETHODOLOGY\nA cross-sectional analysis was conducted on 239 patients with HS seen at the Johns Hopkins Medical Institutions from 2010 through 2015.\n\n\nRESULTS\nPatients using topical clindamycin were more likely to grow clindamycin-resistant Staphylococcus aureus compared with patients using no antibiotics (63% vs 17%; P = .03). Patients taking ciprofloxacin were more likely to grow ciprofloxacin-resistant methicillin-resistant S aureus compared with patients using no antibiotics (100% vs 10%; P = .045). Patients taking trimethoprim/sulfamethoxazole were more likely to grow trimethoprim/sulfamethoxazole-resistant Proteus species compared with patients using no antibiotics (88% vs 0%; P < .001). No significant antimicrobial resistance was observed with tetracyclines or oral clindamycin.\n\n\nLIMITATIONS\nData on disease characteristics and antimicrobial susceptibilities for certain bacteria were limited.\n\n\nCONCLUSIONS\nAntibiotic therapy for HS treatment may be inducing antibiotic resistance. These findings highlight the importance of stewardship in antibiotic therapy for HS and raise questions regarding the balance of antibiotic use versus potential harms associated with antibiotic resistance.",
"title": ""
},
{
"docid": "0efc627b13cf773490f2e70b00c8e493",
"text": "This paper investigates the modeling, simulation and implementation of sensorless maximum power point tracking (MPPT) of permanent magnet synchronous generator wind power system. A comprehensive portfolio of control schemes are discussed and verified by simulations and experiments. Particularly, a PMSG-based wind power emulation system has been developed based on two machine drive setups — one is controlled as wind energy source and operated in torque control mode while the other is controlled as a wind generator and operated in speed control mode to attain MPPT. Both simulation and experimental results demonstrate a robust sensorless MPPT operation in the customized PMSG wind power system.",
"title": ""
},
{
"docid": "a22a319fedc1392ff21dcfa4ad92b82e",
"text": "This paper investigates the possible causes for high attrition rates for Computer Science students. It is a serious problem in universities that must be addressed if the need for technologically competent professionals is to be met.",
"title": ""
},
{
"docid": "db7a27dfe392005139fc44677a862bc7",
"text": "LPWAN is a type of wireless telecommunication network designed to allow long range communications with relaxed requirements on data rate and latency between the core network and a high-volume of battery-operated devices. This article first reviews the leading LPWAN technologies on both unlicensed spectrum (SIGFOX, and LoRa) and licensed spectrum (LTE-M and NB-IoT). Although these technologies differ in many aspects, they do have one thing in common: they all utilize the narrow-band transmission mechanism as a leverage to achieve three fundamental goals, that is, high system capacity, long battery life, and wide coverage. This article introduces an effective bandwidth concept that ties these goals together with the transmission bandwidth, such that these contradicting goals are balanced for best overall system performance.",
"title": ""
}
] |
scidocsrr
|
86b95e504304e34daa7cbd4ee0ea2b30
|
Selecting the best VM across multiple public clouds: a data-driven performance modeling approach
|
[
{
"docid": "70a07b1aedcb26f7f03ffc636b1d84a8",
"text": "This paper addresses the problem of scheduling concurrent jobs on clusters where application data is stored on the computing nodes. This setting, in which scheduling computations close to their data is crucial for performance, is increasingly common and arises in systems such as MapReduce, Hadoop, and Dryad as well as many grid-computing environments. We argue that data-intensive computation benefits from a fine-grain resource sharing model that differs from the coarser semi-static resource allocations implemented by most existing cluster computing architectures. The problem of scheduling with locality and fairness constraints has not previously been extensively studied under this resource-sharing model.\n We introduce a powerful and flexible new framework for scheduling concurrent distributed jobs with fine-grain resource sharing. The scheduling problem is mapped to a graph datastructure, where edge weights and capacities encode the competing demands of data locality, fairness, and starvation-freedom, and a standard solver computes the optimal online schedule according to a global cost model. We evaluate our implementation of this framework, which we call Quincy, on a cluster of a few hundred computers using a varied workload of data-and CPU-intensive jobs. We evaluate Quincy against an existing queue-based algorithm and implement several policies for each scheduler, with and without fairness constraints. Quincy gets better fairness when fairness is requested, while substantially improving data locality. The volume of data transferred across the cluster is reduced by up to a factor of 3.9 in our experiments, leading to a throughput increase of up to 40%.",
"title": ""
},
{
"docid": "5ebefc9d5889cb9c7e3f83a8b38c4cb4",
"text": "As organizations start to use data-intensive cluster computing systems like Hadoop and Dryad for more applications, there is a growing need to share clusters between users. However, there is a conflict between fairness in scheduling and data locality (placing tasks on nodes that contain their input data). We illustrate this problem through our experience designing a fair scheduler for a 600-node Hadoop cluster at Facebook. To address the conflict between locality and fairness, we propose a simple algorithm called delay scheduling: when the job that should be scheduled next according to fairness cannot launch a local task, it waits for a small amount of time, letting other jobs launch tasks instead. We find that delay scheduling achieves nearly optimal data locality in a variety of workloads and can increase throughput by up to 2x while preserving fairness. In addition, the simplicity of delay scheduling makes it applicable under a wide variety of scheduling policies beyond fair sharing.",
"title": ""
},
{
"docid": "cd35602ecb9546eb0f9a0da5f6ae2fdf",
"text": "The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop [3] is a popular open-source map-reduce implementation which is being used as an alternative to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse. In this paper, we present Hive, an open-source data warehousing solution built on top of Hadoop. Hive supports queries expressed in a SQL-like declarative language HiveQL, which are compiled into map-reduce jobs executed on Hadoop. In addition, HiveQL supports custom map-reduce scripts to be plugged into queries. The language includes a type system with support for tables containing primitive types, collections like arrays and maps, and nested compositions of the same. The underlying IO libraries can be extended to query data in custom formats. Hive also includes a system catalog, Hive-Metastore, containing schemas and statistics, which is useful in data exploration and query optimization. In Facebook, the Hive warehouse contains several thousand tables with over 700 terabytes of data and is being used extensively for both reporting and ad-hoc analyses by more than 100 users. The rest of the paper is organized as follows. Section 2 describes the Hive data model and the HiveQL language with an example. Section 3 describes the Hive system architecture and an overview of the query life cycle. Section 4 provides a walk-through of the demonstration. We conclude with future work in Section 5.",
"title": ""
}
] |
[
{
"docid": "7f5e6c0061351ab064aa7fd25d076a1b",
"text": "Guadua angustifolia Kunth was successfully propagated in vitro from axillary buds. Culture initiation, bud sprouting, shoot and plant multiplication, rooting and acclimatization, were evaluated. Best results were obtained using explants from greenhouse-cultivated plants, following a disinfection procedure that comprised the sequential use of an alkaline detergent, a mixture of the fungicide Benomyl and the bactericide Agri-mycin, followed by immersion in sodium hypochlorite (1.5% w/v) for 10 min, and culturing on Murashige and Skoog medium containing 2 ml l−1 of Plant Preservative Mixture®. Highest bud sprouting in original explants was observed when 3 mg l−1 N6-benzylaminopurine (BAP) was incorporated into the culture medium. Production of lateral shoots in in vitro growing plants increased with BAP concentration in culture medium, up to 5 mg l−1, the highest concentration assessed. After six subcultures, clumps of 8–12 axes were obtained, and their division in groups of 3–5 axes allowed multiplication of the plants. Rooting occurred in vitro spontaneously in 100% of the explants that produced lateral shoots. Successful acclimatization of well-rooted clumps of 5–6 axes was achieved in the greenhouse under mist watering in a mixture of soil, sand and rice hulls (1:1:1).",
"title": ""
},
{
"docid": "4abceedb1f6c735a8bc91bc811ce4438",
"text": "The study of school bullying has recently assumed an international dimension, but is faced with difficulties in finding terms in different languages to correspond to the English word bullying. To investigate the meanings given to various terms, a set of 25 stick-figure cartoons was devised, covering a range of social situations between peers. These cartoons were shown to samples of 8- and 14-year-old pupils (N = 1,245; n = 604 at 8 years, n = 641 at 14 years) in schools in 14 different countries, who judged whether various native terms cognate to bullying, applied to them. Terms from 10 Indo-European languages and three Asian languages were sampled. Multidimensional scaling showed that 8-year-olds primarily discriminated nonaggressive and aggressive cartoon situations; however, 14-year-olds discriminated fighting from physical bullying, and also discriminated verbal bullying and social exclusion. Gender differences were less appreciable than age differences. Based on the 14-year-old data, profiles of 67 words were then constructed across the five major cartoon clusters. The main types of terms used fell into six groups: bullying (of all kinds), verbal plus physical bullying, solely verbal bullying, social exclusion, solely physical aggression, and mainly physical aggression. The findings are discussed in relation to developmental trends in how children understand bullying, the inferences that can be made from cross-national studies, and the design of such studies.",
"title": ""
},
{
"docid": "1384bc0c18a47630707dfebc036d8ac0",
"text": "Recent research has demonstrated the important of ontology and its applications. For example, while designing adaptive learning materials, designers need to refer to the ontology of a subject domain. Moreover, ontology can show the whole picture and core knowledge about a subject domain. Research from literature also suggested that graphical representation of ontology can reduce the problems of information overload and learning disorientation for learners. However, ontology constructions used to rely on domain experts in the past; it is a time consuming and high cost task. Ontology creation for emerging new domains like e-learning is even more challenging. The aim of this paper is to construct e-learning domain concept maps, an alternative form of ontology, from academic articles. We adopt some relevant journal articles and conferences papers in e-learning domain as data sources, and apply text-mining techniques to automatically construct concept maps for e-learning domain. The constructed concept maps can provide a useful reference for researchers, who are new to e-leaning field, to study related issues, for teachers to design adaptive courses, and for learners to understand the whole picture of e-learning domain knowledge",
"title": ""
},
{
"docid": "f2e62e761c357c8490f1b53f125f8f28",
"text": "The credit crisis and the ongoing European sovereign debt crisis have highlighted the native form of credit risk, namely the counterparty risk. The related Credit Valuation Adjustment (CVA), Debt Valuation Adjustment (DVA), Liquidity Valuation Adjustment (LVA) and Replacement Cost (RC) issues, jointly referred to in this paper as Total Valuation Adjustment (TVA), have been thoroughly investigated in the theoretical papers Crépey (2012a, 2012b). The present work provides an executive summary and numerical companion to these papers, through which the TVA pricing problem can be reduced to Markovian pre-default TVA BSDEs. The first step consists in the counterparty clean valuation of a portfolio of contracts, which is the valuation in a hypothetical situation where the two parties would be risk-free and funded at a risk-free rate. In the second step, the TVA is obtained as the value of an option on the counterparty clean value process called Contingent Credit Default Swap (CCDS). Numerical results are presented for interest rate swaps in the Vasicek, as well as in the inverse Gaussian Hull-White short rate model, also allowing one to assess the related model risk issue.",
"title": ""
},
{
"docid": "d71e9063c8ac026f1592d8db4d927edc",
"text": "With the advancement of power electronics, new materials and novel bearing technologies, there has been an active development of high speed machines in recent years. The simple rotor structure makes switched reluctance machines (SRM) candidates for high speed operation. This paper has presents the design of a low power, 50,000 RPM 6/4 SRM having a toroidally wound stator. Finite element analysis (FEA) shows an equivalence to conventionally wound SRMs in terms of torque capability. With the conventional asymmetric converter and classic angular control, this toroidal-winding SRM (TSRM) is able to produce 233.20 W mechanical power with an efficiency of 75% at the FEA stage. Considering the enhanced cooling capability as the winding is directly exposed to air, the toroidal-winding is a good option for high-speed SRM.",
"title": ""
},
{
"docid": "875548b7dc303bef8efa8284216e010d",
"text": "BACKGROUND\nGigantomastia is a breast disorder marked by exaggerated rapid growth of the breasts, generally bilaterally. Since this disorder is very rare and has been reported only in sparse case reports its etiology has yet to be fully established. Treatment is aimed at improving the clinical and psychological symptoms and reducing the treatment side effects; however, the best therapeutic option varies from case to case.\n\n\nCASE PRESENTATION\nThe present report described a case of gestational gigantomastia in a 30-year-old woman, gravida 2, parity 1, 17 week pregnant admitted to Pars Hospital, Tehran, Iran, on May 2014. The patient was admitted to hospital at week 17 of pregnancy, although her breasts initially had begun to enlarge from the first trimester. The patient developed hypercalcemia in her 32nd week of pregnancy. The present report followed this patient from diagnosis until the completion of treatment.\n\n\nCONCLUSION\nAlthough gestational gigantomastia is a rare condition, its timely prognosis and careful examination of some conditions like hyperprolactinemia and hypercalcemia is essential in successful management of this condition.",
"title": ""
},
{
"docid": "2b1a9f7131b464d9587137baf828cd3a",
"text": "The description of the spatial characteristics of twoand three-dimensional objects, in the framework of MPEG-7, is considered. The shape of an object is one of its fundamental properties, and this paper describes an e$cient way to represent the coarse shape, scale and composition properties of an object. This representation is invariant to resolution, translation and rotation, and may be used for both two-dimensional (2-D) and three-dimensional (3-D) objects. This coarse shape descriptor will be included in the eXperimentation Model (XM) of MPEG-7. Applications of such a description to search object databases, in particular the CAESAR anthropometric database are discussed. ( 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "439540480944799e93717d78fc298e68",
"text": "Group equivariant and steerable convolutional neural networks (regular and steerable G-CNNs) have recently emerged as a very effective model class for learning from signal data such as 2D and 3D images, video, and other data where symmetries are present. In geometrical terms, regular G-CNNs represent data in terms of scalar fields (“feature channels”), whereas the steerable G-CNN can also use vector or tensor fields (“capsules”) to represent data. In algebraic terms, the feature spaces in regular G-CNNs transform according to a regular representation of the group G, whereas the feature spaces in Steerable G-CNNs transform according to the more general induced representations of G. In order to make the network equivariant, each layer in a G-CNN is required to intertwine between the induced representations associated with its input and output space. In this paper we present a general mathematical framework for G-CNNs on homogeneous spaces like Euclidean space or the sphere. We show, using elementary methods, that the layers of an equivariant network are convolutional if and only if the input and output feature spaces transform according to an induced representation. This result, which follows from G.W. Mackey’s abstract theory on induced representations, establishes G-CNNs as a universal class of equivariant network architectures, and generalizes the important recent work of Kondor & Trivedi on the intertwiners between regular representations. In order for a convolution layer to be equivariant, the filter kernel needs to satisfy certain linear equivariance constraints. The space of equivariant kernels has a rich and interesting structure, which we expose using direct calculations. Additionally, we show how this general understanding can be used to compute a basis for the space of equivariant filter kernels, thereby providing a straightforward path to the implementation of G-CNNs for a wide range of groups and manifolds. 1 ar X iv :1 80 3. 10 74 3v 2 [ cs .L G ] 3 0 M ar 2 01 8",
"title": ""
},
{
"docid": "6620aa5b1ecaac765112f0f1f15ef920",
"text": "In this paper we present the tangible 3D tabletop and discuss the design potential of this novel interface. The tangible 3D tabletop combines tangible tabletop interaction with 3D projection in such a way that the tangible objects may be augmented with visual material corresponding to their physical shapes, positions, and orientation on the tabletop. In practice, this means that both the tabletop and the tangibles can serve as displays. We present the basic design principles for this interface, particularly concerning the interplay between 2D on the tabletop and 3D for the tangibles, and present examples of how this kind of interface might be used in the domain of maps and geolocalized data. We then discuss three central design considerations concerning 1) the combination and connection of content and functions of the tangibles and tabletop surface, 2) the use of tangibles as dynamic displays and input devices, and 3) the visual effects facilitated by the combination of the 2D tabletop surface and the 3D tangibles.",
"title": ""
},
{
"docid": "d98b97dae367d57baae6b0211c781d66",
"text": "In this paper we describe a technology for protecting privacy in video systems. The paper presents a review of privacy in video surveillance and describes how a computer vision approach to understanding the video can be used to represent “just enough” of the information contained in a video stream to allow video-based tasks (including both surveillance and other “person aware” applications) to be accomplished, while hiding superfluous details, particularly identity, that can contain privacyintrusive information. The technology has been implemented in the form of a privacy console that manages operator access to different versions of the video-derived data according to access control lists. We have also built PrivacyCam—a smart camera that produces a video stream with the privacy-intrusive information already removed.",
"title": ""
},
{
"docid": "e70f261ba4bfa47b476d2bbd4abd4982",
"text": "A geometric program (GP) is a type of mathematical optimization problem characterized by objective and constraint functions that have a special form. Recently developed solution methods can solve even large-scale GPs extremely efficiently and reliably; at the same time a number of practical problems, particularly in circuit design, have been found to be equivalent to (or well approximated by) GPs. Putting these two together, we get effective solutions for the practical problems. The basic approach in GP modeling is to attempt to express a practical problem, such as an engineering analysis or design problem, in GP format. In the best case, this formulation is exact; when this isn’t possible, we settle for an approximate formulation. This tutorial paper collects together in one place the basic background material needed to do GP modeling. We start with the basic definitions and facts, and some methods used to transform problems into GP format. We show how to recognize functions and problems compatible with GP, and how to approximate functions or data in a form compatible with GP (when this is possible). We give some simple and representative examples, and also describe some common extensions of GP, along with methods for solving (or approximately solving) them. Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford CA 94305 (boyd@stanford.edu) Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford,CA 94305 (sjkim@stanford.edu) Department of Electrical Engineering, University of California, Los Angeles, CA 90095 (vandenbe@ucla.edu) Clear Shape Technologies, Inc., Sunnyvale, CA 94086 (arash@clearshape.com)",
"title": ""
},
{
"docid": "d3324f45ec730b5dc088cdd49bed7a8e",
"text": "Social media use is a global phenomenon, with almost two billion people worldwide regularly using these websites. As Internet access around the world increases, so will the number of social media users. Neuroscientists can capitalize on the ubiquity of social media use to gain novel insights about social cognitive processes and the neural systems that support them. This review outlines social motives that drive people to use social media, proposes neural systems supporting social media use, and describes approaches neuroscientists can use to conduct research with social media. We close by noting important directions and ethical considerations of future research with social media.",
"title": ""
},
{
"docid": "8ab51537f15c61f5b34a94461b9e0951",
"text": "An approach to the problem of estimating the size of inhomogeneous crowds, which are composed of pedestrians that travel in different directions, without using explicit object segmentation or tracking is proposed. Instead, the crowd is segmented into components of homogeneous motion, using the mixture of dynamic-texture motion model. A set of holistic low-level features is extracted from each segmented region, and a function that maps features into estimates of the number of people per segment is learned with Bayesian regression. Two Bayesian regression models are examined. The first is a combination of Gaussian process regression with a compound kernel, which accounts for both the global and local trends of the count mapping but is limited by the real-valued outputs that do not match the discrete counts. We address this limitation with a second model, which is based on a Bayesian treatment of Poisson regression that introduces a prior distribution on the linear weights of the model. Since exact inference is analytically intractable, a closed-form approximation is derived that is computationally efficient and kernelizable, enabling the representation of nonlinear functions. An approximate marginal likelihood is also derived for kernel hyperparameter learning. The two regression-based crowd counting methods are evaluated on a large pedestrian data set, containing very distinct camera views, pedestrian traffic, and outliers, such as bikes or skateboarders. Experimental results show that regression-based counts are accurate regardless of the crowd size, outperforming the count estimates produced by state-of-the-art pedestrian detectors. Results on 2 h of video demonstrate the efficiency and robustness of the regression-based crowd size estimation over long periods of time.",
"title": ""
},
{
"docid": "968965ddb9aa26b041ea688413935e86",
"text": "Lightweight photo sharing, particularly via mobile devices, is fast becoming a common communication medium used for maintaining a presence in the lives of friends and family. How should such systems be designed to maximize this social presence while maintaining simplicity? An experimental photo sharing system was developed and tested that, compared to current systems, offers highly simplified, group-centric sharing, automatic and persistent people-centric organization, and tightly integrated desktop and mobile sharing and viewing. In an experimental field study, the photo sharing behaviors of groups of family or friends were studied using their normal photo sharing methods and with the prototype sharing system. Results showed that users found photo sharing easier and more fun, shared more photos, and had an enhanced sense of social presence when sharing with the experimental system. Results are discussed in the context of design principles for the rapidly increasing number of lightweight photo sharing systems.",
"title": ""
},
{
"docid": "f7f1deeda9730056876db39b4fe51649",
"text": "Fracture in bone occurs when an external force exercised upon the bone is more than what the bone can tolerate or bear. As, its consequence structure and muscular power of the bone is disturbed and bone becomes frail, which causes tormenting pain on the bone and ends up in the loss of functioning of bone. Accurate bone structure and fracture detection is achieved using various algorithms which removes noise, enhances image details and highlights the fracture region. Automatic detection of fractures from x-ray images is considered as an important process in medical image analysis by both orthopaedic and radiologic aspect. Manual examination of x-rays has multitude drawbacks. The process is time consuming and subjective. In this paper we discuss several digital image processing techniques applied in fracture detection of bone. This led us to study techniques that have been applied to images obtained from different modalities like x-ray, CT, MRI and ultrasound. Keywords— Fracture detection, Medical Imaging, Morphology, Tibia, X-ray image",
"title": ""
},
{
"docid": "f93ebf9beefe35985b6e31445044e6d1",
"text": "Recent genetic studies have suggested that the colonization of East Asia by modern humans was more complex than a single origin from the South, and that a genetic contribution via a Northern route was probably quite substantial. Here we use a spatially-explicit computer simulation approach to investigate the human migration hypotheses of this region based on one-route or two-route models. We test the likelihood of each scenario by using Human Leukocyte Antigen (HLA) − A, −B, and − DRB1 genetic data of East Asian populations, with both selective and demographic parameters considered. The posterior distribution of each parameter is estimated by an Approximate Bayesian Computation (ABC) approach. Our results strongly support a model with two main routes of colonization of East Asia on both sides of the Himalayas, with distinct demographic histories in Northern and Southern populations, characterized by more isolation in the South. In East Asia, gene flow between populations originating from the two routes probably existed until a remote prehistoric period, explaining the continuous pattern of genetic variation currently observed along the latitude. A significant although dissimilar level of balancing selection acting on the three HLA loci is detected, but its effect on the local genetic patterns appears to be minor compared to those of past demographic events.",
"title": ""
},
{
"docid": "fe947a8e35bce2b3ebd479f1eab2eb99",
"text": "Deep networks often perform well on the data manifold on which they are trained, yet give incorrect (and often very confident) answers when evaluated on points from off of the training distribution. This is exemplified by the adversarial examples phenomenon but can also be seen in terms of model generalization and domain shift. We propose Manifold Mixup which encourages the network to produce more reasonable and less confident predictions at points with combinations of attributes not seen in the training set. This is accomplished by training on convex combinations of the hidden state representations of data samples. Using this method, we demonstrate improved semi-supervised learning, learning with limited labeled data, and robustness to novel transformations of the data not seen during training. Manifold Mixup requires no (significant) additional computation. We also discover intriguing properties related to adversarial examples and generative adversarial networks. Analytical experiments on both real data and synthetic data directly support our hypothesis for why the Manifold Mixup method improves results.",
"title": ""
},
{
"docid": "71e640caa999167a3df19eca5df2bf7f",
"text": "Grid-tie inverters are used to convert DC power into AC power for connection to an existing electrical grid and are key components in a microgrid system. This paper discusses the design and implementation of a grid-tie inverter for connecting renewable resources such as solar arrays, wind turbines, and energy storage to the AC grid, in a laboratory microgrid system while also controlling real and reactive power flows. The Atmel EVK1100 with an AVR32UC3A0512 microcontroller, will be used to coordinate all of the different functions of this grid-tie inverter. The EVK1100 will communicate with Rockwell PLCs via Ethernet. The PLCs are part of the communication, control and sensing network of the microgrid system.",
"title": ""
},
{
"docid": "ac808ecd75ccee74fff89d03e3396f26",
"text": "This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row, and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real world difficulties in a greenhouse which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor in the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on the hue, saturation and volume is performed accordingly, and classification is done according to size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB Cameras – an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than any other angle. Acquiring images in the afternoon resulted with the best precision and recall results. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall average for all the images when using these values was 74% and 75% respectively with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint. Keywords—Agricultural engineering, computer vision, image processing, flower detection.",
"title": ""
},
{
"docid": "049c1597f063f9c5fcc098cab8885289",
"text": "When one captures images in low-light conditions, the images often suffer from low visibility. This poor quality may significantly degrade the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a very simple and effective method, named as LIME, to enhance low-light images. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in R, G and B channels. Further, we refine the initial illumination map by imposing a structure prior on it, as the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging real-world low-light images are present to reveal the efficacy of our LIME and show its superiority over several state-of-the-arts.",
"title": ""
}
] |
scidocsrr
|
dde440ce088eb678010f567d3d285f68
|
Multi-hop communication routing (MCR) protocol for heterogeneous wireless sensor networks
|
[
{
"docid": "aed264522ed7ee1d3559fe4863760986",
"text": "A wireless network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The data collected by each sensor is communicated through the network to a single processing center that uses all reported data to determine characteristics of the environment or detect an event. The communication or message passing process must be designed to conserve the limited energy resources of the sensors. Clustering sensors into groups, so that sensors communicate information only to clusterheads and then the clusterheads communicate the aggregated information to the processing center, may save energy. In this paper, we propose a distributed, randomized clustering algorithm to organize the sensors in a wireless sensor network into clusters. We then extend this algorithm to generate a hierarchy of clusterheads and observe that the energy savings increase with the number of levels in the hierarchy. Results in stochastic geometry are used to derive solutions for the values of parameters of our algorithm that minimize the total energy spent in the network when all sensors report data through the clusterheads to the processing center. KeywordsSensor Networks; Clustering Methods; Voronoi Tessellations; Algorithms.",
"title": ""
}
] |
[
{
"docid": "2b3e78940de9d9a924139e7ce3241e8c",
"text": "In today’s world people are extensively using internet and thus are also vulnerable to its flaws. Cyber security is the main area where these flaws are exploited. Intrusion is one way to exploit the internet for search of valuable information that may cause devastating damage, which can be personal or on a large scale. Thus Intrusion detection systems are placed for timely detection of such intrusion and alert the user about the same. Intrusion Detection using hybrid classification technique consist of a hybrid model i.e. misuse detection model (AdTree based) and Anomaly model (svm based).NSL-KDD intrusion detection dataset plays a vital role in calibrating intrusion detection system and is extensively used by the researchers working in the field of intrusion detection. This paper presents Association rule mining technique for IDS.",
"title": ""
},
{
"docid": "9aa458acf63b94e40afbc8bb68049082",
"text": "We tested the accuracy of thermal imaging as a lie detection tool in airport screening. Fifty-one passengers in an international airport departure hall told the truth or lied about their forthcoming trip in an interview. Their skin temperature was recorded via a thermal imaging camera. Liars' skin temperature rose significantly during the interview, whereas truth tellers' skin temperature remained constant. On the basis of these different patterns, 64% of truth tellers and 69% of liars were classified correctly. The interviewers made veracity judgements independently from the thermal recordings. The interviewers outperformed the thermal recordings and classified 72% of truth tellers and 77% of liars correctly. Accuracy rates based on the combination of thermal imaging scores and interviewers' judgements were the same as accuracy rates based on interviewers' judgements alone. Implications of the findings for the suitability of thermal imaging as a lie detection tool in airports are discussed.",
"title": ""
},
{
"docid": "6bdcac1d424162a89adac7fa2a6221ae",
"text": "The growing popularity of online product review forums invites people to express opinions and sentiments toward the products .It gives the knowledge about the product as well as sentiment of people towards the product. These online reviews are very important for forecasting the sales performance of product. In this paper, we discuss the online review mining techniques in movie domain. Sentiment PLSA which is responsible for finding hidden sentiment factors in the reviews and ARSA model used to predict sales performance. An Autoregressive Sentiment and Quality Aware model (ARSQA) also in consideration for to build the quality for predicting sales performance. We propose clustering and classification based algorithm for sentiment analysis.",
"title": ""
},
{
"docid": "3e9d7fed78af293ad6bce35ff34e1ddf",
"text": "Ontology researches have been carried out in many diverse research areas in the past decade for numerous purposes especially in the eRecruitment domain. In this article, we would like to take a closer look on the current work of such domain of ontologies such as eRecruitment. Ontology application for e-Recruitment is becoming an important task for matching job postings and applicants semantically in a Semantic web technology using ontology and ontology matching techniques. Most of the reviewed papers used currently (existing) available widespread standards and classifications to build human resource ontology that provide a way of semantic representation for positions offered and candidates to fulfil, some of other researches have been done created their own HR ontologies to build recruitment prototype. We have reviewed number of articles and identified few purposes for which ontology matching",
"title": ""
},
{
"docid": "d3d481807a16b19a066dc793c252dfda",
"text": "Asthma is a T helper 2 (Th2)-cell-mediated disease; however, recent findings implicate Th17 and innate lymphoid cells also in regulating airway inflammation. Herein, we have demonstrated profound interleukin-21 (IL-21) production after house dust mite (HDM)-driven asthma by using T cell receptor (TCR) transgenic mice reactive to Dermatophagoides pteronyssinus 1 and an IL-21GFP reporter mouse. IL-21-producing cells in the mediastinal lymph node (mLN) bore characteristics of T follicular helper (Tfh) cells, whereas IL-21(+) cells in the lung did not express CXCR5 (a chemokine receptor expressed by Tfh cells) and were distinct from effector Th2 or Th17 cells. Il21r(-/-) mice developed reduced type 2 responses and the IL-21 receptor (IL-21R) enhanced Th2 cell function in a cell-intrinsic manner. Finally, administration of recombinant IL-21 and IL-25 synergistically promoted airway eosinophilia primarily via effects on CD4(+) lymphocytes. This highlights an important Th2-cell-amplifying function of IL-21-producing CD4(+) T cells in allergic airway inflammation.",
"title": ""
},
{
"docid": "653bbea24044bd53e4e9e180593d2321",
"text": "In this paper, we present an integrated model of the two central tasks of dialog management: interpreting user actions and generating system actions. We model the interpretation task as a classication problem and the generation task as a prediction problem. These two tasks are interleaved in an incremental parsing-based dialog model. We compare three alternative parsing methods for this dialog model using a corpus of human-human spoken dialog from a catalog ordering domain that has been annotated for dialog acts and task/subtask information. We contrast the amount of context provided by each method and its impact on performance.",
"title": ""
},
{
"docid": "f83a16d393c78d6ba0e65a4659446e7e",
"text": "Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. Source code and trained models are available online at https://bitbucket.org/columbiadvmm/cdc.",
"title": ""
},
{
"docid": "5a392f4c9779c06f700e2ff004197de9",
"text": "Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classiier learning systems. Both form a set of classiiers that are combined by v oting, bagging by generating replicated boot-strap samples of the data, and boosting by adjusting the weights of training instances. This paper reports results of applying both techniques to a system that learns decision trees and testing on a representative collection of datasets. While both approaches substantially improve predictive accuracy, boosting shows the greater beneet. On the other hand, boosting also produces severe degradation on some datasets. A small change to the way that boosting combines the votes of learned classiiers reduces this downside and also leads to slightly better results on most of the datasets considered.",
"title": ""
},
{
"docid": "b26882cddec1690e3099757e835275d2",
"text": "Accumulating evidence suggests that, independent of physical activity levels, sedentary behaviours are associated with increased risk of cardio-metabolic disease, all-cause mortality, and a variety of physiological and psychological problems. Therefore, the purpose of this systematic review is to determine the relationship between sedentary behaviour and health indicators in school-aged children and youth aged 5-17 years. Online databases (MEDLINE, EMBASE and PsycINFO), personal libraries and government documents were searched for relevant studies examining time spent engaging in sedentary behaviours and six specific health indicators (body composition, fitness, metabolic syndrome and cardiovascular disease, self-esteem, pro-social behaviour and academic achievement). 232 studies including 983,840 participants met inclusion criteria and were included in the review. Television (TV) watching was the most common measure of sedentary behaviour and body composition was the most common outcome measure. Qualitative analysis of all studies revealed a dose-response relation between increased sedentary behaviour and unfavourable health outcomes. Watching TV for more than 2 hours per day was associated with unfavourable body composition, decreased fitness, lowered scores for self-esteem and pro-social behaviour and decreased academic achievement. Meta-analysis was completed for randomized controlled studies that aimed to reduce sedentary time and reported change in body mass index (BMI) as their primary outcome. In this regard, a meta-analysis revealed an overall significant effect of -0.81 (95% CI of -1.44 to -0.17, p = 0.01) indicating an overall decrease in mean BMI associated with the interventions. There is a large body of evidence from all study designs which suggests that decreasing any type of sedentary time is associated with lower health risk in youth aged 5-17 years. In particular, the evidence suggests that daily TV viewing in excess of 2 hours is associated with reduced physical and psychosocial health, and that lowering sedentary time leads to reductions in BMI.",
"title": ""
},
{
"docid": "16c3b19c2bc9ce1f528dbb4a51280dbc",
"text": "This paper presents a time-domain analysis of the intermodulation distortion (IMD) of a closed-loop Class-D amplifier (amp) with either first- or second-order loop filter. The derived expression for the IMD indicates that there exist significant third-order intermodulation products (3rd-IMPs) within the output spectrum, which may lead to even greater distortion than the intrinsic harmonic components. In addition, the output expressions are compact, precise, and suitable for hand calculation so that the parametric relationships between the IMD and the magnitude and frequency of the input signals, as well as the effect of the loop filter design are straightforwardly investigated. In order to accurately represent the IMD performance of class-D amp, a modified testing setup is introduced to account for the dominantly large 3rd-IMPs when the International Telecommunication Union Radiocommunication Sector (ITU-R) standard is applied.",
"title": ""
},
{
"docid": "d9f0f36e75c08d2c3097e85d8c2dec36",
"text": "Social software solutions in enterprises such as IBM Connections are said to have the potential to support communication and collaboration among employees. However, companies are faced to manage the adoption of such collaborative tools and therefore need to raise the employees’ acceptance and motivation. To solve these problems, developers started to implement Gamification elements in social software tools, which aim to increase users’ motivation. In this research-in-progress paper, we give first insights and critically examine the current market of leading social software solutions to find out which Gamification approaches are implementated in such collaborative tools. Our findings show, that most of the major social collaboration solutions do not offer Gamification features by default, but leave the integration to a various number of third party plug-in vendors. Furthermore we identify a trend in which Gamification solutions majorly focus on rewarding quantitative improvement of work activities, neglecting qualitative performance. Subsequently, current solutions do not match recent findings in research and ignore risks that can lower the employees’ motivation and work performance in the long run.",
"title": ""
},
{
"docid": "aafae4864d274540d0f80842970c7eac",
"text": "Fraud is increasing with the extensive use of internet and the increase of online transactions. More advanced solutions are desired to protect financial service companies and credit card holders from constantly evolving online fraud attacks. The main objective of this paper is to construct an efficient fraud detection system which is adaptive to the behavior changes by combining classification and clustering techniques. This is a two stage fraud detection system which compares the incoming transaction against the transaction history to identify the anomaly using BOAT algorithm in the first stage. In second stage to reduce the false alarm rate suspected anomalies are checked with the fraud history database and make sure that the detected anomalies are due to fraudulent transaction or any short term change in spending profile. In this work BOAT supports incremental update of transactional database and it handles maximum fraud coverage with high speed and less cost. Proposed model is evaluated on both synthetically generated and real life data and shows very good accuracy in detecting fraud transaction.",
"title": ""
},
{
"docid": "cdfcc894d32c9a6a3a076d3e978d400f",
"text": "The earliest Convolution Neural Network (CNN) model is leNet-5 model proposed by LeCun in 1998. However, in the next few years, the development of CNN had been almost stopped until the article ‘Reducing the dimensionality of data with neural networks’ presented by Hinton in 2006. CNN started entering a period of rapid development. AlexNet won the championship in the image classification contest of ImageNet with the huge superiority of 11% beyond the second place in 2012, and the proposal of DeepFace and DeepID, as two relatively successful models for high-performance face recognition and authentication in 2014, marking the important position of CNN. Convolution Neural Network (CNN) is an efficient recognition algorithm widely used in image recognition and other fields in recent years. That the core features of CNN include local field, shared weights and pooling greatly reducing the parameters, as well as simple structure, make CNN become an academic focus. In this paper, the Convolution Neural Network’s history and structure are summarized. And then several areas of Convolutional Neural Network applications are enumerated. At last, some new insights for the future research of CNN are presented.",
"title": ""
},
{
"docid": "7b4a66d354443dbe560a933c9c8dd8d4",
"text": "Skin color is a well-recognized adaptive trait and has been studied extensively in humans. Understanding the genetic basis of adaptation of skin color in various populations has many implications in human evolution and medicine. Impressive progress has been made recently to identify genes associated with skin color variation in a wide range of geographical and temporal populations. In this review, we discuss what is currently known about the genetics of skin color variation. We enumerated several cases of skin color adaptation in global modern humans and archaic hominins, and illustrated why, when, and how skin color adaptation occurred in different populations. Finally, we provided a summary of the candidate loci associated with pigmentation, which could be a valuable reference for further evolutionary and medical studies. Previous studies generally indicated a complex genetic mechanism underlying the skin color variation, expanding our understanding of the role of population demographic history and natural selection in shaping genetic and phenotypic diversity in humans. Future work is needed to dissect the genetic architecture of skin color adaptation in numerous ethnic minority groups around the world, which remains relatively obscure compared with that of major continental groups, and to unravel the exact genetic basis of skin color adaptation.",
"title": ""
},
{
"docid": "40e7ea2295994e1b822b3e4ab968d9f9",
"text": "This paper presents the use of a new meta-heuristic technique namely gray wolf optimizer (GWO) which is inspired from gray wolves’ leadership and hunting behaviors to solve optimal reactive power dispatch (ORPD) problem. ORPD problem is a well-known nonlinear optimization problem in power system. GWO is utilized to find the best combination of control variables such as generator voltages, tap changing transformers’ ratios as well as the amount of reactive compensation devices so that the loss and voltage deviation minimizations can be achieved. In this paper, two case studies of IEEE 30bus system and IEEE 118-bus system are used to show the effectiveness of GWO technique compared to other techniques available in literature. The results of this research show that GWO is able to achieve less power loss and voltage deviation than those determined by other techniques.",
"title": ""
},
{
"docid": "6a3638b7760f5a8d4d1a0a1842904b27",
"text": "We perform a sentiment analysis of all tweets published on the microblogging platform Twitter in the second half of 2008. We use a psychometric instrument to extract six mood states (tension, depression, anger, vigor, fatigue, confusion) from the aggregated Twitter content and compute a six-dimensional mood vector for each day in the timeline. We compare our results to a record of popular events gathered from media and sources. We find that events in the social, political, cultural and economic sphere do have a significant, immediate and highly specific effect on the various dimensions of public mood. We speculate that large scale analyses of mood can provide a solid platform to model collective emotive trends in terms of their predictive value with regards to existing social as well as economic indicators.",
"title": ""
},
{
"docid": "9a27c676b5d356d5feb91850e975a336",
"text": "Joseph Goldstein has written in this journal that creation (through invention) and revelation (through discovery) are two different routes to advancement in the biomedical sciences1. In my work as a phytochemist, particularly during the period from the late 1960s to the 1980s, I have been fortunate enough to travel both routes. I graduated from the Beijing Medical University School of Pharmacy in 1955. Since then, I have been involved in research on Chinese herbal medicine in the China Academy of Chinese Medical Sciences (previously known as the Academy of Traditional Chinese Medicine). From 1959 to 1962, I was released from work to participate in a training course in Chinese medicine that was especially designed for professionals with backgrounds in Western medicine. The 2.5-year training guided me to the wonderful treasure to be found in Chinese medicine and toward understanding the beauty in the philosophical thinking that underlies a holistic view of human beings and the universe.",
"title": ""
},
{
"docid": "6384c31adaf8b28ca7a6dd97d3eb571a",
"text": ".....................................................................................................3 Introduction...................................................................................................4 Chapter 1. History of Origami............................................................................. 5 Chapter 2. Evolution of Origami tessellations in 20-th century architecture........................7 Chapter 3. Kinetic system and Origami...................................................................9 3.1. Kinetic system................................................................................. 9 3.2. Geometric Origami............................................................................ 9 Chapter 4. Folding patterns................................................................................ 10 4.1. Yoshimura pattern (diamond pattern)........................................................ 11 4.2. Diagonal pattern..............................................................................11 4.3. Miura Ori pattern (herringbone pattern)...................................................11 Chapter 5. The origami house and impact on the furniture design.................................... 13 Conclusion.................................................................................................... 16 References...................................................................................................17 Annex 1....................................................................................................... 18 Annex 2...................................................................................................... 19",
"title": ""
},
{
"docid": "f05b001f03e00bf2d0807eb62d9e2369",
"text": "Since the hydraulic actuating suspension system has nonlinear and time-varying behavior, it is difficult to establish an accurate model for designing a model-based controller. Here, an adaptive fuzzy sliding mode controller is proposed to suppress the sprung mass position oscillation due to road surface variation. This intelligent control strategy combines an adaptive rule with fuzzy and sliding mode control algorithms. It has online learning ability to deal with the system time-varying and nonlinear uncertainty behaviors, and adjust the control rules parameters. Only eleven fuzzy rules are required for this active suspension system and these fuzzy control rules can be established and modified continuously by online learning. The experimental results show that this intelligent control algorithm effectively suppresses the oscillation amplitude of the sprung mass with respect to various road surface disturbances.",
"title": ""
}
] |
scidocsrr
|
cca63f40a40fbd4b1fe18f6023e40ee9
|
Bayesian Computation via Markov chain Monte Carlo
|
[
{
"docid": "bfe762fc6e174778458b005be75d8285",
"text": "The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed istribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a randomeffects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.",
"title": ""
}
] |
[
{
"docid": "b07cbf3da9e3ff9691dcb49040c7e6a5",
"text": "Few years ago, the information flow in library was relatively simple and the application of technology was limited. However, as we progress into a more integrated world where technology has become an integral part of the business processes, the process of transfer of information has become more complicated. Today, one of the biggest challenges that libraries face is the explosive growth of library data and to use this data to improve the quality of managerial decisions. Data mining techniques are analytical tools that can be used to extract meaningful knowledge from large data sets. This paper addresses the applications of data mining in library to extract useful information from the huge data sets and providing analytical tool to view and use this information for decision making processes by taking real life examples.",
"title": ""
},
{
"docid": "2a1b24d5737ac0a6aae9cd005fcf0984",
"text": "JavaScript is an object-based scripting language that can be interpreted by most commonly used Web browsers, including Netscape® Navigator® and Internet Explorer®. In conjunction with HTML form elements, JavaScript can be used to make flexible and easy-to-use applications that can be accessed by anyone connected to the Internet (3). The Sequence Manipulation Suite (http://www.ualberta.ca/~stothard/javascript/) is a collection of freely available JavaScript applications for molecular biologists. It consists of over 30 utilities for analyzing and manipulating sequence data, including the following:",
"title": ""
},
{
"docid": "170f14fbf337186c8bd9f36390916d2e",
"text": "In this paper, we draw upon two sets of theoretical resources to develop a comprehensive theory of sexual offender rehabilitation named the Good Lives Model-Comprehensive (GLM-C). The original Good Lives Model (GLM-O) forms the overarching values and principles guiding clinical practice in the GLM-C. In addition, the latest sexual offender theory (i.e., the Integrated Theory of Sexual Offending; ITSO) provides a clear etiological grounding for these principles. The result is a more substantial and improved rehabilitation model that is able to conceptually link latest etiological theory with clinical practice. Analysis of the GLM-C reveals that it also has the theoretical resources to secure currently used self-regulatory treatment practice within a meaningful structure. D 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "81e88dbd2f01ddddb2b8245e9d9626c9",
"text": "The remarkable properties of some recent computer algorithms for neural networks seemed to promise a fresh approach to understanding the computational properties of the brain. Unfortunately most of these neural nets are unrealistic in important respects.",
"title": ""
},
{
"docid": "fc7c7828428a4018a8aaddaff4eb5b3f",
"text": "Data mining is comprised of many data analysis techniques. Its basic objective is to discover the hidden and useful data pattern from very large set of data. Graph mining, which has gained much attention in the last few decades, is one of the novel approaches for mining the dataset represented by graph structure. Graph mining finds its applications in various problem domains, including: bioinformatics, chemical reactions, Program flow structures, computer networks, social networks etc. Different data mining approaches are used for mining the graph-based data and performing useful analysis on these mined data. In literature various graph mining approaches have been proposed. Each of these approaches is based on either classification; clustering or decision trees data mining techniques. In this study, we present a comprehensive review of various graph mining techniques. These different graph mining techniques have been critically evaluated in this study. This evaluation is based on different parameters. In our future work, we will provide our own classification based graph mining technique which will efficiently and accurately perform mining on the graph structured data.",
"title": ""
},
{
"docid": "5ce82b8c2cc87ae84026d230f3a97e06",
"text": "This paper presents a new physically-based method for predicting natural hairstyles in the presence of gravity and collisions. The method is based upon a mechanically accurate model for static elastic rods (Kirchhoff model), which accounts for the natural curliness of hair, as well as for hair ellipticity. The equilibrium shape is computed in a stable and easy way by energy minimization. This yields various typical hair configurations that can be observed in the real world, such as ringlets. As our results show, the method can generate different hair types with a very few input parameters, and perform virtual hairdressing operations such as wetting, cutting and drying hair.",
"title": ""
},
{
"docid": "f921eccfa5df6b8479489c8851653b14",
"text": "Restricted Boltzmann Machines (RBMs) are general unsupervised learning devices to ascertain generative models of data distributions. RBMs are often trained using the Contrastive Divergence learning algorithm (CD), an approximation to the gradient of the data log-likelihood. A simple reconstruction error is often used to decide whether the approximation provided by the CD algorithm is good enough, though several authors (Schulz et al., 2010; Fischer & Igel, 2010) have raised doubts concerning the feasibility of this procedure. However, not many alternatives to the reconstruction error have been used in the literature. In this manuscript we investigate simple alternatives to the reconstruction error in order to detect as soon as possible the decrease in the log-likelihood during learning. Proceedings of the 2 International Conference on Learning Representations, Banff, Canada, 2014. Copyright 2014 by the author(s).",
"title": ""
},
{
"docid": "3db4ed6fb68bd1c6249e747fdb8067db",
"text": "National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because of almost overwhelming difficulties in identifying the true author of each publication. We will address this problem by presenting a heuristic approach to author name disambiguation in bibliometric datasets for large-scale research assessments. The application proposed concerns the Italian university system, consisting of 80 universities and a research staff of over 60,000 scientists. The key advantage of the proposed approach is the ease of implementation. The algorithms are of practical application and have considerably better scalability and expandability properties than state-of-the-art unsupervised approaches. Moreover, the performance in terms of precision and recall, which can be further improved, seems thoroughly adequate for the typical needs of large-scale bibliometric research assessments.",
"title": ""
},
{
"docid": "dbbd9f6440ee0c137ee0fb6a4aadba38",
"text": "In local differential privacy (LDP), each user perturbs her data locally before sending the noisy data to a data collector. The latter then analyzes the data to obtain useful statistics. Unlike the setting of centralized differential privacy, in LDP the data collector never gains access to the exact values of sensitive data, which protects not only the privacy of data contributors but also the collector itself against the risk of potential data leakage. Existing LDP solutions in the literature are mostly limited to the case that each user possesses a tuple of numeric or categorical values, and the data collector computes basic statistics such as counts or mean values. To the best of our knowledge, no existing work tackles more complex data mining tasks such as heavy hitter discovery over set-valued data. In this paper, we present a systematic study of heavy hitter mining under LDP. We first review existing solutions, extend them to the heavy hitter estimation, and explain why their effectiveness is limited. We then propose LDPMiner, a two-phase mechanism for obtaining accurate heavy hitters with LDP. The main idea is to first gather a candidate set of heavy hitters using a portion of the privacy budget, and focus the remaining budget on refining the candidate set in a second phase, which is much more efficient budget-wise than obtaining the heavy hitters directly from the whole dataset. We provide both in-depth theoretical analysis and extensive experiments to compare LDPMiner against adaptations of previous solutions. The results show that LDPMiner significantly improves over existing methods. More importantly, LDPMiner successfully identifies the majority true heavy hitters in practical settings.",
"title": ""
},
{
"docid": "ffc05e53e847cda384b3b83269abcc9c",
"text": "We propose a machine learning framework based on sliding windows for glaucoma diagnosis. In digital fundus photographs, our method automatically localizes the optic cup, which is the primary structural image cue for clinically identifying glaucoma. This localization uses a bundle of sliding windows of different sizes to obtain cup candidates in each disc image, then extracts from each sliding window a new histogram based feature that is learned using a group sparsity constraint. An epsilon-SVR (support vector regression) model based on non-linear radial basis function (RBF) kernels is used to rank each candidate, and final decisions are made with a non-maximal suppression (NMS) method. Tested on the large ORIGA(-light) clinical dataset, the proposed method achieves a 73.2% overlap ratio with manually-labeled ground-truth and a 0.091 absolute cup-to-disc ratio (CDR) error, a simple yet widely used diagnostic measure. The high accuracy of this framework on images from low-cost and widespread digital fundus cameras indicates much promise for developing practical automated/assisted glaucoma diagnosis systems.",
"title": ""
},
{
"docid": "367ba3305217805d6068d6117a693a11",
"text": "Many efforts have been devoted to training generative latent variable models with autoregressive decoders, such as recurrent neural networks (RNN). Stochastic recurrent models have been successful in capturing the variability observed in natural sequential data such as speech. We unify successful ideas from recently proposed architectures into a stochastic recurrent model: each step in the sequence is associated with a latent variable that is used to condition the recurrent dynamics for future steps. Training is performed with amortized variational inference where the approximate posterior is augmented with a RNN that runs backward through the sequence. In addition to maximizing the variational lower bound, we ease training of the latent variables by adding an auxiliary cost which forces them to reconstruct the state of the backward recurrent network. This provides the latent variables with a task-independent objective that enhances the performance of the overall model. We found this strategy to perform better than alternative approaches such as KL annealing. Although being conceptually simple, our model achieves state-of-the-art results on standard speech benchmarks such as TIMIT and Blizzard and competitive performance on sequential MNIST. Finally, we apply our model to language modeling on the IMDB dataset where the auxiliary cost helps in learning interpretable latent variables.",
"title": ""
},
{
"docid": "301e061163b115126b8f0b9851ed265c",
"text": "Pressure ulcers are a common problem among older adults in all health care settings. Prevalence and incidence estimates vary by setting, ulcer stage, and length of follow-up. Risk factors associated with increased pressure ulcer incidence have been identified. Activity or mobility limitation, incontinence, abnormalities in nutritional status, and altered consciousness are the most consistently reported risk factors for pressure ulcers. Pain, infectious complications, prolonged and expensive hospitalizations, persistent open ulcers, and increased risk of death are all associated with the development of pressure ulcers. The tremendous variability in pressure ulcer prevalence and incidence in health care settings suggests that opportunities exist to improve outcomes for persons at risk for and with pressure ulcers.",
"title": ""
},
{
"docid": "5c76caebe05acd7d09e6cace0cac9fe1",
"text": "A program that detects people in images has a multitude of potential applications, including tracking for biomedical applications or surveillance, activity recognition for person-device interfaces (device control, video games), organizing personal picture collections, and much more. However, detecting people is difficult, as the appearance of a person can vary enormously because of changes in viewpoint or lighting, clothing style, body pose, individual traits, occlusion, and more. It then makes sense that the first people detectors were really detectors of pedestrians, that is, people walking at a measured pace on a sidewalk, and viewed from a fixed camera. Pedestrians are nearly always upright, their arms are mostly held along the body, and proper camera placement relative to pedestrian traffic can virtually ensure a view from the front or from behind (Figure 1). These factors reduce variation of appearance, although clothing, illumination, background, occlusions, and somewhat limited variations of pose still present very significant challenges.",
"title": ""
},
{
"docid": "39d3f1a5d40325bdc4bca9ee50241c9e",
"text": "This paper reviews the recent progress of quantum-dot semiconductor optical amplifiers developed as ultrawideband polarization-insensitive high-power amplifiers, high-speed signal regenerators, and wideband wavelength converters. A semiconductor optical amplifier having a gain of > 25 dB, noise figure of < 5 dB, and 3-dB saturation output power of > 20 dBm, over the record widest bandwidth of 90 nm among all kinds of optical amplifiers, and also having a penalty-free output power of 23 dBm, the record highest among all the semiconductor optical amplifiers, was realized by using quantum dots. By utilizing isotropically shaped quantum dots, the TM gain, which is absent in the standard Stranski-Krastanow QDs, has been drastically enhanced, and nearly polarization-insensitive SOAs have been realized for the first time. With an ultrafast gain response unique to quantum dots, an optical regenerator having receiver-sensitivity improving capability of 4 dB at a BER of 10-9 and operating speed of > 40 Gb/s has been successfully realized with an SOA chip. This performance achieved together with simplicity of structure suggests a potential for low-cost realization of regenerative transmission systems.",
"title": ""
},
{
"docid": "c8245d1c57ce52020743043d88be0942",
"text": "P2P streaming applications are very popular on the Internet today. However, a mobile device in P2P streaming not only needs to continuously receive streaming data from other peers for its playback, but also needs to continuously exchange control information (e.g., buffermaps and file chunk requests) with neighboring peers and upload the downloaded streaming data to them. These lead to excessive battery power consumption on the mobile device.\n In this paper, we first conduct Internet experiments to study in-depth the impact of control traffic and uploading traffic on battery power consumption with several popular Internet P2P streaming applications. Motivated by measurement results, we design and implement a system called BlueStreaming that effectively utilizes the commonly existing Bluetooth interface on mobile devices. Instead of activating WiFi and Bluetooth interfaces alternatively, BlueStreaming keeps Bluetooth active all the time to transmit delay-sensitive control traffic while using WiFi for streaming data traffic. BlueStreaming trades Bluetooth's power consumption for much more significant energy saving from shaped WiFi traffic. To evaluate the performance of BlueStreaming, we have implemented prototypes on both Windows and Mac to access existing popular Internet P2P streaming services. The experimental results show that BlueStreaming can save up to 46% battery power compared to the commodity PSM scheme.",
"title": ""
},
{
"docid": "d76a65397b62b511c2ee20b10edc7b00",
"text": "In this paper we introduce the Pivoting M-tree (PM-tree), a metric access method combining M-tree with the pivot-based approach. While in M-tree a metric region is represented by a hyper-sphere, in PM-tree the shape of a metric region is determined by intersection of the hyper-sphere and a set of hyper-rings. The set of hyper-rings for each metric region is related to a fixed set of pivot objects. As a consequence, the shape of a metric region bounds the indexed objects more tightly which, in turn, significantly improves the overall efficiency of similarity search. We present basic algorithms on PM-tree and two cost models for range query processing. Finally, the PM-tree efficiency is experimentally evaluated on large synthetic as well as real-world datasets.",
"title": ""
},
{
"docid": "c2daec5b85a4e8eea614d855c6549ef0",
"text": "An audio-visual corpus has been collected to support the use of common material in speech perception and automatic speech recognition studies. The corpus consists of high-quality audio and video recordings of 1000 sentences spoken by each of 34 talkers. Sentences are simple, syntactically identical phrases such as \"place green at B 4 now\". Intelligibility tests using the audio signals suggest that the material is easily identifiable in quiet and low levels of stationary noise. The annotated corpus is available on the web for research use.",
"title": ""
},
{
"docid": "d3f6906d1cfacf98f2f465c0e14461da",
"text": "This article provides an improved automated skin lesion segmentation method for dermoscopic images. There are several stages for this method. These include the pre-processing steps such as resizing the images and eliminating noise. Hair was removed and reflective light was reduced using morphological operations and a median filter. The single green channel was rescaled into new intensities, as it provided the highest segmentation accuracy. The threshold value was calculated to separate the skin lesion region from healthy skin. Morphological operations were implemented to merge the small lesion areas around the bigger lesion areas with similar features and trace the boundary of the melanoma. The accuracy of the segmentation was evaluated by comparing the automatic boundary and manual boundary. Compared to other studies, our proposed method achieved the highest average accuracy of 97%.",
"title": ""
},
{
"docid": "3f2c0a1fb27c4df6ff02bc7d0a885dfd",
"text": "Advances in semiconductor manufacturing processes and large scale integration keep pushing demanding applications further away from centralized processing, and closer to the edges of the network (i.e. Edge Computing). It has become possible to perform complex in-network image processing using low-power embedded smart cameras, enabling a multitude of new collaborative image processing applications. This paper introduces OpenMV, a new low-power smart camera that lends itself naturally to wireless sensor networks and machine vision applications. The uniqueness of this platform lies in running an embedded Python3 interpreter, allowing its peripherals and machine vision library to be scripted in Python. In addition, its hardware is extensible via modules that augment the platform with new capabilities, such as thermal imaging and networking modules.",
"title": ""
}
] |
scidocsrr
|
47bbfe32e19f340546c875d135f38ecf
|
Bodily Influences on Emotional Feelings: Accumulating Evidence and Extensions of William James' Theory of Emotion
|
[
{
"docid": "9ed61c312a5b4055dbf0b905eb63ca84",
"text": "Four experiments were conducted to determine whether voluntarily produced emotional facial configurations are associated with differentiated patterns of autonomic activity, and if so, how this might be mediated. Subjects received muscle-by-muscle instructions and coaching to produce facial configurations for anger, disgust, fear, happiness, sadness, and surprise while heart rate, skin conductance, finger temperature, and somatic activity were monitored. Results indicated that voluntary facial activity produced significant levels of subjective experience of the associated emotion, and that autonomic distinctions among emotions: (a) were found both between negative and positive emotions and among negative emotions, (b) were consistent between group and individual subjects' data, (c) were found in both male and female subjects, (d) were found in both specialized (actors, scientists) and nonspecialized populations, (e) were stronger when the voluntary facial configurations most closely resembled actual emotional expressions, and (f) were stronger when experience of the associated emotion was reported. The capacity of voluntary facial activity to generate emotion-specific autonomic activity: (a) did not require subjects to see facial expressions (either in a mirror or on an experimenter's face), and (b) could not be explained by differences in the difficulty of making the expressions or by differences in concomitant somatic activity.",
"title": ""
}
] |
[
{
"docid": "f296b374b635de4f4c6fc9c6f415bf3e",
"text": "People increasingly use the Internet for obtaining information regarding diseases, diagnoses and available treatments. Currently, many online health portals already provide non-personalized health information in the form of articles. However, it can be challenging to find information relevant to one's condition, interpret this in context, and understand the medical terms and relationships. Recommender Systems (RS) already help these systems perform precise information filtering. In this short paper, we look one step ahead and show the progress made towards RS helping users find personalized, complex medical interventions or support them with preventive healthcare measures. We identify key challenges that need to be addressed for RS to offer the kind of decision support needed in high-risk domains like healthcare.",
"title": ""
},
{
"docid": "56c30ddf0aedfb0f13885d90e22e6537",
"text": "A single-pole double-throw novel switch device in0.18¹m SOI complementary metal-oxide semiconductor(CMOS) process is developed for 0.9 Ghz wireless GSMsystems. The layout of the device is optimized keeping inmind the parameters of interest for the RF switch. A subcircuitmodel, with the standard surface potential (PSP) modelas the intrinsic FET model along with the parasitic elementsis built to predict the Ron and Coff of the switch. Themeasured data agrees well with the model. The eight FETstacked switch achieved an Ron of 2.5 ohms and an Coff of180 fF.",
"title": ""
},
{
"docid": "95745bf35bb1d63fc0af015f345d1da1",
"text": "Gaining a better understanding of how and what machine learning systems learn is important to increase confidence in their decisions and catalyze further research. In this paper, we analyze the predictions made by a specific type of recurrent neural network, mixture density RNNs (MD-RNNs). These networks learn to model predictions as a combination of multiple Gaussian distributions, making them particularly interesting for problems where a sequence of inputs may lead to several distinct future possibilities. An example is learning internal models of an environment, where different events may or may not occur, but where the average over different events is not meaningful. By analyzing the predictions made by trained MD-RNNs, we find that their different Gaussian components have two complementary roles: 1) Separately modeling different stochastic events and 2) Separately modeling scenarios governed by different rules. These findings increase our understanding of what is learned by predictive MD-RNNs, and open up new research directions for further understanding how we can benefit from their self-organizing model decomposition.",
"title": ""
},
{
"docid": "bf1dd3cf77750fe5e994fd6c192ba1be",
"text": "Increasingly manufacturers of smartphone devices are utilising a diverse range of sensors. This innovation has enabled developers to accurately determine a user's current context. In recent years there has also been a renewed requirement to use more types of context and reduce the current over-reliance on location as a context. Location based systems have enjoyed great success and this context is very important for mobile devices. However, using additional context data such as weather, time, social media sentiment and user preferences can provide a more accurate model of the user's current context. One area that has been significantly improved by the increased use of context in mobile applications is tourism. Traditionally tour guide applications rely heavily on location and essentially ignore other types of context. This has led to problems of inappropriate suggestions, due to inadequate content filtering and tourists experiencing information overload. These problems can be mitigated if appropriate personalisation and content filtering is performed. The intelligent decision making that this paper proposes with regard to the development of the VISIT [17] system, is a hybrid based recommendation approach made up of collaborative filtering, content based recommendation and demographic profiling. Intelligent reasoning will then be performed as part of this hybrid system to determine the weight/importance of each different context type.",
"title": ""
},
{
"docid": "83c184b9a9b533835c74bbe844f54a70",
"text": "This work addresses issues related to the design and implementation of focused crawlers. Several variants of state-of-the-art crawlers relying on web page content and link information for estimating the relevance of web pages to a given topic are proposed. Particular emphasis is given to crawlers capable of learning not only the content of relevant pages (as classic crawlers do) but also paths leading to relevant pages. A novel learning crawler inspired by a previously proposed Hidden Markov Model (HMM) crawler is described as well. The crawlers have been implemented using the same baseline implementation (only the priority assignment function differs in each crawler) providing an unbiased evaluation framework for a comparative analysis of their performance. All crawlers achieve their maximum performance when a combination of web page content and (link) anchor text is used for assigning download priorities to web pages. Furthermore, the new HMM crawler improved the performance of the original HMM crawler and also outperforms classic focused crawlers in searching for specialized topics.",
"title": ""
},
{
"docid": "ce0d21bfdd22ed6275911d3171bcb3a7",
"text": "Automatic identity recognition from ear images represents an active field of research within the biometric community. The ability to capture ear images from a distance and in a covert manner makes the technology an appealing choice for surveillance and security applications as well as other application domains. Significant contributions have been made in the field over recent years, but open research problems still remain and hinder a wider (commercial) deployment of the technology. This paper presents an overview of the field of automatic ear recognition (from 2D images) and focuses specifically on the most recent, descriptor-based methods proposed in this area. Open challenges are discussed and potential research directions are outlined with the goal of providing the reader with a point of reference for issues worth examining in the future. In addition to a comprehensive review on ear recognition technology, the paper also introduces a new, fully unconstrained dataset of ear images gathered from the web and a toolbox implementing several state-of-the-art techniques for ear recognition. The dataset and toolbox are meant to address some of the open issues in the field and are made publicly available to the research commu-",
"title": ""
},
{
"docid": "c132272c8caa7158c0549bd5f2d626aa",
"text": "This study investigates alternative material compositions for flexible silicone-based dry electroencephalography (EEG) electrodes to improve the performance lifespan while maintaining high-fidelity transmission of EEG signals. Electrode materials were fabricated with varying concentrations of silver-coated silica and silver flakes to evaluate their electrical, mechanical, and EEG transmission performance. Scanning electron microscope (SEM) analysis of the initial electrode development identified some weak points in the sensors' construction, including particle pull-out and ablation of the silver coating on the silica filler. The newly-developed sensor materials achieved significant improvement in EEG measurements while maintaining the advantages of previous silicone-based electrodes, including flexibility and non-toxicity. The experimental results indicated that the proposed electrodes maintained suitable performance even after exposure to temperature fluctuations, 85% relative humidity, and enhanced corrosion conditions demonstrating improvements in the environmental stability. Fabricated flat (forehead) and acicular (hairy sites) electrodes composed of the optimum identified formulation exhibited low impedance and reliable EEG measurement; some initial human experiments demonstrate the feasibility of using these silicone-based electrodes for typical lab data collection applications.",
"title": ""
},
{
"docid": "7f47434e413230faf04849cf43a845fa",
"text": "Although surgical resection remains the gold standard for treatment of liver cancer, there is a growing need for alternative therapies. Microwave ablation (MWA) is an experimental procedure that has shown great promise for the treatment of unresectable tumors and exhibits many advantages over other alternatives to resection, such as radiofrequency ablation and cryoablation. However, the antennas used to deliver microwave power largely govern the effectiveness of MWA. Research has focused on coaxial-based interstitial antennas that can be classified as one of three types (dipole, slot, or monopole). Choked versions of these antennas have also been developed, which can produce localized power deposition in tissue and are ideal for the treatment of deepseated hepatic tumors.",
"title": ""
},
{
"docid": "0b0465490e6263cef6033e5bb1cdf78f",
"text": "Lee Cronk’s book That complex whole is about a variety of different kinds of culture wars, some restricted to an academic milieu and others well-known fixtures of public discourse in the United States and beyond. Most directly, it addresses a perennial debate in cultural anthropology: how should anthropologists define human culture, its boundaries and roles in human existence? Beyond that, it looks at the disciplinary split that runs through the different sub-fields of North American anthropology, one that distinguishes researchers who define themselves as scientists from those who take a more humanistic view of anthropological goals and procedures. Finally, and most indirectly, the book offers a perspective on the arguments over cultural practises and values that periodically – or perhaps constantly – ring across Western societies. The book raises a set of important questions about the relations between evolutionary theory and cultural anthropology and is well written and accessible, so that one would expect it to be a useful text for undergraduates and the general public. Unfortunately, its treatment of anthropological theorizing about culture is weak, and creates a distorted view of the history and state of the art of this work. Such difficulties might perhaps be expected in a text written by someone outside the discipline (see for example Pinker 1997, 2002), but are less understandable when they come from the pen of an anthropologist. Cronk begins the book with an observation, and a claim. The observation is one instance of an ethnographic commonplace: people say one thing, but actually and systematically do another. The Mukogodo pastoralists in whose Kenyan communities Cronk did his fieldwork express a preference for male children over female children, but treat their daughters somewhat better than they do their sons. Examples of such contradictions can be multiplied, and Cronk cites a number of such examples, from other parts of Africa, from Asia and from the United States. Based on his research, he posits that in the Mukogodo case the favoritism shown toward daughters is an example of an evolved human tendency to favour children with the best prospects, especially in marriage, in later life. The hypothesis is an interesting and useful one. It could be – and probably is being – extended by fieldwork in other societies where similarly gender-differentiated prospects exist.",
"title": ""
},
{
"docid": "9f0e7fbe10ce2998dac649b6a71e58a6",
"text": "A method of workspace modelling for spherical parallel manipulators (SPMs) of symmetrical architecture is developed by virtue of Euler parameters in the paper. The adoption of Euler parameters in the expression of spatial rotations of SPMs helps not only to eliminate the possible singularity in the rotation matrix, but also to formulate all equations in polynomials, which are more easily manipulated. Moreover, a homogeneous workspace can be obtained with Euler parameters for the SPMs, which facilitates the evaluation of dexterity. In this work, the problem of workspace modelling and analysis is formulated in terms of Euler parameters. An equation dealing with boundary surfaces is derived and branches of boundary surface are identified. Evaluation of dexterity is explored to quantitatively describe the capability of a manipulator to attain orientations. The singularity identification is also addressed. Examples are included to demonstrate the application of the proposed method.",
"title": ""
},
{
"docid": "cd3d046fc4aa9af3730e76163fb2ae0a",
"text": "Blockchain has emerged as one of the most promising and revolutionary technologies in the past years. Companies are exploring implementation of use cases in hope of significant gains in efficiencies. However, to achieve the impact hoped for, it is not sufficient to merely replace existing technologies. The current business processes must also be redesigned and innovated to enable realization of hoped for benefits. This conceptual paper provides a theoretical contribution on how blockchain technology and smart contracts potentially can, within the framework of the seven principles of business process re-engineering (BPR), enable process innovations. In this paper, we analyze the BPR principles in light of their applicability to blockchain-based solutions. We find these principles to be applicable and helpful in understanding how blockchain technology could enable transformational redesign of current processes. However, the viewpoint taken, should be expanded from intrato inter-organizational processes operating within an ecosystem of separate organizational entities. In such a blockchain powered ecosystem, smart contracts take on a pivotal role, both as repositories of data and executioner of activities.",
"title": ""
},
{
"docid": "e914a66fc4c5b35e3fd24427ffdcbd96",
"text": "This paper proposes two control algorithms for a sensorless speed control of a PMSM. One is a new low pass filter. This filter is designed to have the variable cutoff frequency according to the rotor speed. And the phase delay angle is so small as to be ignored not only in the low speed region but also in the high speed region including the field weakening region. Sensorless control of a PMSM can be guaranteed without any delay angle by using the proposed low pass filter. The other is a new iterative sliding mode observer (I-SMO). Generally the sliding mode observer (SMO) has the attractive features of the robustness to disturbances, and parameter variations. In the high speed region the switching gain of SMO must be large enough to operate the sliding mode stably. But the estimated currents and back EMF can not help having much ripple or chattering components especially in the high speed region including the flux weakening region. Using I-SMO can reduce chattering components of the estimated currents and back EMF in all speed regions without any help of the expensive hardware such as the high performance DSP and A/D converter. Experimental results show the usefulness of the proposed two algorithms for the sensorless drive system of a PMSM.",
"title": ""
},
{
"docid": "1f4ca34b4032902a27ed55e505e2b8ba",
"text": "Monitoring the structural health of railcars is important to ensure safe and efficient railroad operation. The structural integrity of freight cars depends on the health of certain structural components within their underframes. These components serve two principal functions: supporting the car body and lading and transmitting longitudinal buff and draft forces. Although railcars are engineered to withstand large static, dynamic and cyclical loads, they can still develop a variety of structural defects. As a result, Federal Railroad Administration (FRA) regulations and individual railroad mechanical department practices require periodic inspection of railcars to detect mechanical and structural damage or defects. These inspections are primarily a manual process that relies on the acuity, knowledge and endurance of qualified inspection personnel. Enhancements to the process are possible through machine-vision technology, which uses computer algorithms to convert digital image data of railcar underframes into useful information. This paper describes research investigating the feasibility of an automated inspection system capable of detecting structural defects in freight car underframes and presents an inspection approach using machine-vision techniques including multi-scale image segmentation. A preliminary image collection system has been developed, field trials conducted and algorithms developed that can analyze the images and identify certain underframe components, assessing aspects of their condition. The development of this technology, in conjunction with additional preventive maintenance systems, has the potential to provide more objective information on railcar condition, improved utilization of railcar inspection and repair resources, increased train and employee safety, and improvements to overall railroad network efficiency. Schlake et al. 09-2863 4 INTRODUCTION In the United States, railcars undergo regular mechanical inspections as required by Federal Railroad Administration (FRA) regulations and as dictated by railroad mechanical department practices. These mechanical inspections address numerous components on the railcar including several underbody components that are critically important to the structural integrity of the railcar. The primary structural component, the center sill, runs longitudinally along the center of the car, forming the backbone of the underframe and transmitting buff and draft forces through the car (1). In addition to the center sill, several other structural components are critical to load transfer, including the side sills, body bolsters, and crossbearers. The side sills are longitudinal members similar to the center sill but run along either side of the car. Body bolsters are transverse members near each end of the car that transfer the car’s load from the car body to the trucks. Crossbearers are transverse members that connect the side sills to the center sill and help distribute the load between the longitudinal members of the car. These components work together as a system to help maintain the camber and structural integrity of the car. Mechanical Regulations and Inspection Procedures FRA Mechanical Regulations require the inspection of center sills for breaks, cracks, and buckling, and the inspection of sidesills, crossbearers, and body bolsters for breaks, as well as other selected inspection items (2). 
Every time a car departs a yard or industrial facility it is required under the FRA regulations to be visually inspected by either a carman or train crew member for possible defects that would adversely affect the safe operation of the train. The current railcar inspection process is tedious, labor intensive, and in general lacks the level of objectivity that may be achievable through the use of technology. In order to effectively detect structural defects, car inspectors would need to walk around the entire car and crawl underneath with a flashlight to view each structural component. Due to time constraints associated with typical pre-departure mechanical inspections, cars are only inspected with this level of scrutiny in car repair shops before undergoing major repairs. In addition to the inherent challenges of manual inspections, records of these inspections are generally not retained unless a billable repair is required, making it difficult to track the health of a car over time or to perform a trend analysis. As a result, the maintenance of railcar structural components is almost entirely reactive rather than predictive, making repairs and maintenance less efficient. Technology Driven Train Inspection (TDTI) The Association of American Railroads (AAR) along with the Transportation Technology Center, Inc. (TTCI) has initiated a program intended to provide safer, more efficient, and traceable means of rolling stock inspection (3). The object of the Technology Driven Train Inspection (TDTI) program is to identify, develop, and apply new technologies to enhance the efficiency and effectiveness of the railcar inspection, maintenance, and repair process. Examples of these new technologies include the automated inspection of railcar trucks, safety appliances and passenger car undercarriages (4, 5, 6). The ultimate objective of TDTI is to implement a network of automatic wayside inspection systems capable of inspecting and monitoring the North American Schlake et al. 09-2863 5 freight car fleet in order to maintain compliance with FRA regulations and railroadspecific maintenance and operational standards. Automated Structural Component Inspection System (ASCIS) One aspect of the TDTI initiative is the development of the Automated Structural Component Inspection System (ASCIS), which is currently underway at the University of Illinois at Urbana-Champaign (UIUC). ASCIS focuses on developing technology to aid in the inspection of freight car bodies for defective structural components through the use of machine vision. A machine-vision system collects data using digital cameras, organizes and analyzes the images using computer algorithms, and outputs useful information, such as the type and location of defects, to the appropriate repair personnel. The computer algorithms use visual cues to locate areas of interest on the freight car and then analyze each component to determine its variance from the baseline case. While manual inspections are subject to inaccuracies and delays due to time constraints and human fatigue, ASCIS will work collectively with other automated inspection systems (e.g. machine vision systems for inspecting safety appliances, truck components, brake shoes, etc.) to inspect freight cars efficiently and objectively and will not suffer from monotony or fatigue. ASCIS will also maintain health records of every car that undergoes inspection, allowing potential structural defects to be monitored so that components are repaired prior to failure. 
Additionally, applying these new technologies to the inspection process has the potential to enhance safety and efficiency for both train crew members and mechanical personnel. A primary benefit of ASCIS and other automated inspection systems is the facilitation of preventive, or condition-based, maintenance. Condition-based maintenance involves the monitoring of certain parameters related to component health or degradation and the subsequent corrective actions taken prior to component failure (7). Despite the advantages of condition-based maintenance, current structural component repair and billing practices engender corrective maintenance, which does not occur until after a critical defect is detected. Due to the reactive nature of corrective maintenance, repairs cannot be planned as effectively, resulting in higher expenses and less efficient repairs. For example, it is more economical to patch a cracked crossbearer before it breaks than to replace a fully broken crossbearer. Having recognized the need for preventative maintenance, railroads have begun implementing other technologies similar to ASCIS that monitor subtle indicators of railcar component health (e.g. Truck Performance Detectors and the AAR’s Fully Automated Car Train Inspection System FactISTM) (8). REGULATORY COMPLIANCE The FRA regulations for freight car bodies form the basis for which components will be inspected by ASCIS. Section 215.121 of Title 49 in the U.S. Code of Federal Regulations (CFR) governs the inspection of freight car bodies and two of the six parts in this section pertain to the inspection of structural components (2). According to the regulations, the center sill may not be broken, cracked more than 6 inches, or bent/buckled more than 2.5 inches in any 6 foot length. Specific parameters are established for the allowable magnitude of cracks or buckling because these defects may undermine the integrity of the sill, resulting in a center sill failure (9). Therefore, these regulations are intended to Schlake et al. 09-2863 6 identify potentially hazardous cars so that they will be repaired before an in-service failure. FRA structural component inspection data from the last eight years shows that on average 59% of the structural component defects are comprised of broken, cracked, bent, or buckled center sills, while the remaining 41% represent defective side sills, body bolsters, or crossbearers (Figure 1). FIGURE 1 Average number of yearly structural defects recorded by FRA inspectors as a percentage of all cars inspected in a year. Based on these data and guidance from the AAR, the primary focus of ASCIS will be on the inspection of center sills and the secondary focus will be on the inspection of the other structural components. The final goal of ASCIS is to provide data and trending information for the implementation of condition-based maintenance on all freight car structural components.",
"title": ""
},
{
"docid": "10e88f0d1a339c424f7e0b8fa5b43c1e",
"text": "Hash functions play an important role in modern cryptography. This paper investigates optimisation techniques that have recently been proposed in the literature. A new VLSI architecture for the SHA-256 and SHA-512 hash functions is presented, which combines two popular hardware optimisation techniques, namely pipelining and unrolling. The SHA processors are developed for implementation on FPGAs, thereby allowing rapid prototyping of several designs. Speed/area results from these processors are analysed and are shown to compare favourably with other FPGA-based implementations, achieving the fastest data throughputs in the literature to date",
"title": ""
},
{
"docid": "7fe6505453be76030d8580e7be5fa8c7",
"text": "Based on experiences with different organizations having insider threat programs, the components needed for an insider threat auditing and mitigation program and methods of program validation that agencies can use when both initiating a program and reviewing an existing program has been described. This paper concludes with descriptions of each of the best practices derived from the model program. This final section is meant to be a standalone section that readers can detach and incorporate into their insider threat mitigation program guidance.",
"title": ""
},
{
"docid": "98e557f291de3b305a91e47f59a9ed34",
"text": "We propose SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frameto-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates. The model can be trained with various degrees of supervision: 1) self-supervised by the reprojection photometric error (completely unsupervised), 2) supervised by ego-motion (camera motion), or 3) supervised by depth (e.g., as provided by RGBD sensors). SfMNet extracts meaningful depth estimates and successfully estimates frame-to-frame camera rotations and translations. It often successfully segments the moving objects in the scene, even though such supervision is never provided.",
"title": ""
},
{
"docid": "971692db73441f7c68a0cc32927ae0b2",
"text": "This letter presents a new lattice-form complex adaptive IIR notch filter to estimate and track the frequency of a complex sinusoid signal. The IIR filter is a cascade of a direct-form all-pole prefilter and an adaptive lattice-form all-zero filter. A complex domain exponentially weighted recursive least square algorithm is adopted instead of the widely used least mean square algorithm to increase the convergence rate. The convergence property of this algorithm is investigated, and an expression for the steady-state asymptotic bias is derived. Analysis results indicate that the frequency estimate for a single complex sinusoid is unbiased. Simulation results demonstrate that the proposed method achieves faster convergence and better tracking performance than all traditional algorithms.",
"title": ""
},
{
"docid": "eb218a1d8b7cbcd895dd0cd8cfcf9d80",
"text": "Caring is considered as the essence of nursing and is the basic factor that distinguishes between nurses and other health professions. The literature is rich of previous studies that focused on perceptions of nurses toward nurse caring behaviors, but less studywas applied in pediatric nurses in different settings. Aim of the study:evaluate the effect of application of Watson caring theory for nurses in pediatric critical care unit. Method(s): A convenience sample of 70 nurses of Pediatric Critical Care Unit in El-Menoufya University Hospital and educational hospital in ShebenElkom.were completed the demographics questionnaire, and the Caring Behavior Assessment (CBA) questionnaire,medical record to collect medical data regarding children characteristics such as age and diagnosis, Interviewing questionnaire for nurses regarding their barrier to less interest of comfort behavior such as doing doctor order, Shortage of nursing staff, Large number of patients, Heavy workloads, Secretarial jobs for nurses and Emotional stress. Results: more thantwothirds of nurses in study group and majority of control group had age less than 30 years, there were highly statistically significant difference related to mean scores for Caring Behavior Assessment (CBA) as rated by nurses in pretest (1.4750 to 2.0750) than in posttest (3.5 to 4.55). Also, near to two-thirds (64.3%) of the nurses stated that doing doctor order act as a barrier to apply this theory. In addition, there were a statistical significance difference between educational qualifications of nurses and a Supportive\\ protective\\corrective environment subscale with mean score for master degree 57.0000, also between years of experiences and human needs assistance. Conclusion: Program instructions for all nurses to apply Watson Caring theory for children in pediatric critical care unit were successful and effective and this study provided evidence for application of this theory for different departments in all settings. Recommendations: It was recommended that In-service training programs for nurses about caring behavior and its different areas, with special emphasis on communication are needed to improve their own behaviors in all aspects of the caring behaviors for all health care settings. Motivating hospital authorities to recruit more nurses, then, the nurses would be able to have more care that is direct. Consequently, the amount and the quality of nurse-child communication and opportunities for patient education would increase, this in turn improve child's outcome.",
"title": ""
},
{
"docid": "1a3c01a10c296ca067452d98847240d6",
"text": "The second edition of Creswell's book has been significantly revised and updated. The author clearly sets out three approaches to research: quantitative, qualitative and mixed methods. As someone who has used mixed methods in my research, it is refreshing to read a textbook that addresses this. The differences between the approaches are clearly identified and a rationale for using each methodological stance provided.",
"title": ""
},
{
"docid": "fb89fd2d9bf526b8bc7f1433274859a6",
"text": "In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. The main goals of segmentation research for such situations ought to be (i) to provide ffective controlto the user on the segmentation process while it is being executed, and (ii) to minimize the total user’s time required in the process. With these goals in mind, we present in this paper two paradigms, referred to aslive wireandlive lane, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its “boundariness,” and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (livewire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes",
"title": ""
}
] |
scidocsrr
|
c2f3f1a2926890c52c461cddfd20f7d9
|
Fake News Detection Through Multi-Perspective Speaker Profiles
|
[
{
"docid": "a08fa88123a62987c6613f89741b5abc",
"text": "Predicting users political party in social media has important impacts on many real world applications such as targeted advertising, recommendation and personalization. Several political research studies on it indicate that political parties’ ideological beliefs on sociopolitical issues may influence the users political leaning. In our work, we exploit users’ ideological stances on controversial issues to predict political party of online users. We propose a collaborative filtering approach to solve the data sparsity problem of users stances on ideological topics and apply clustering method to group the users with the same party. We evaluated several state-of-the-art methods for party prediction task on debate.org dataset. The experiments show that using ideological stances with Probabilistic Matrix Factorization (PMF) technique achieves a high accuracy of 88.9% at 22.9% data sparsity rate and 80.5% at 70% data sparsity rate on users’ party prediction task.",
"title": ""
},
{
"docid": "517916f4c62bc7b5766efa537359349d",
"text": "Document-level sentiment classification aims to predict user’s overall sentiment in a document about a product. However, most of existing methods only focus on local text information and ignore the global user preference and product characteristics. Even though some works take such information into account, they usually suffer from high model complexity and only consider wordlevel preference rather than semantic levels. To address this issue, we propose a hierarchical neural network to incorporate global user and product information into sentiment classification. Our model first builds a hierarchical LSTM model to generate sentence and document representations. Afterwards, user and product information is considered via attentions over different semantic levels due to its ability of capturing crucial semantic components. The experimental results show that our model achieves significant and consistent improvements compared to all state-of-theart methods. The source code of this paper can be obtained from https://github. com/thunlp/NSC.",
"title": ""
},
{
"docid": "218c93b9e7be1ddbf86cd7dca9065fde",
"text": "Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present LIAR: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from POLITIFACT.COM, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate metadata with text. We show that this hybrid approach can improve a text-only deep learning model.",
"title": ""
}
] |
[
{
"docid": "470093535d4128efa9839905ab2904a5",
"text": "Photovolatic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when variations in the insolation and temperature occur. It overcomes the problem of mismatch between the solar arrays and the given load. A simple method of tracking the maximum power points (MPP’s) and forcing the system to operate close to these points is presented. The principle of energy conservation is used to derive the largeand small-signal model and transfer function. By using the proposed model, the drawbacks of the state-space-averaging method can be overcome. The TI320C25 digital signal processor (DSP) was used to implement the proposed MPPT controller, which controls the dc/dc converter in the photovoltaic system. Simulations and experimental results show excellent performance.",
"title": ""
},
{
"docid": "d2a89459ca4a0e003956d6fe4871bb34",
"text": "In this paper, a high-efficiency high power density LLC resonant converter with a matrix transformer is proposed. A matrix transformer can help reduce leakage inductance and the ac resistance of windings so that the flux cancellation method can then be utilized to reduce core size and loss. Synchronous rectifier (SR) devices and output capacitors are integrated into the secondary windings to eliminate termination-related winding losses, via loss and reduce leakage inductance. A 1 MHz 390 V/12 V 1 kW LLC resonant converter prototype is built to verify the proposed structure. The efficiency can reach as high as 95.4%, and the power density of the power stage is around 830 W/in3.",
"title": ""
},
{
"docid": "51ac5dde554fd8363fcf95e6d3caf439",
"text": "Swarm intelligence is a relatively novel field. It addresses the study of the collective behaviors of systems made by many components that coordinate using decentralized controls and self-organization. A large part of the research in swarm intelligence has focused on the reverse engineering and the adaptation of collective behaviors observed in natural systems with the aim of designing effective algorithms for distributed optimization. These algorithms, like their natural systems of inspiration, show the desirable properties of being adaptive, scalable, and robust. These are key properties in the context of network routing, and in particular of routing in wireless sensor networks. Therefore, in the last decade, a number of routing protocols for wireless sensor networks have been developed according to the principles of swarm intelligence, and, in particular, taking inspiration from the foraging behaviors of ant and bee colonies. In this paper, we provide an extensive survey of these protocols. We discuss the general principles of swarm intelligence and of its application to routing. We also introduce a novel taxonomy for routing protocols in wireless sensor networks and use it to classify the surveyed protocols. We conclude the paper with a critical analysis of the status of the field, pointing out a number of fundamental issues related to the (mis) use of scientific methodology and evaluation procedures, and we identify some future research directions. 2010 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "1fcca6b7c1755da04ec01235619de098",
"text": "■ HBM is a breakthrough memory solution for performance, power and form-factor constrained systems by delivering high bandwidth, Low effective power & Small form factor ■ HBM device provide various mechanisms to ensure quality/reliability at pre and post SiP assembly ■ HBM is an industry standard solution with multiple supply sources",
"title": ""
},
{
"docid": "79cffed53f36d87b89577e96a2b2e713",
"text": "Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.",
"title": ""
},
{
"docid": "275aa27bbb1bb84a82c57025dd299f2c",
"text": "BACKGROUND\nChildbirth medicalization has reduced the parturient's opportunity to labour and deliver in a spontaneous position, constricting her to assume the recumbent one. The aim of the study was to compare recumbent and alternative positions in terms of labour process, type of delivery, neonatal wellbeing, and intrapartum fetal head rotation.\n\n\nMETHODS\nWe conducted an observational cohort study on women at pregnancy term. Primiparous women with physiological pregnancies and single cephalic fetuses were eligible for the study. We considered data about maternal-general characteristics, labour process, type of delivery, and neonatal wellbeing at birth. Patients were divided into two groups: Group-A if they spent more than 50% of labour in a recumbent position and Group-B when in alternative ones.\n\n\nRESULTS\n225 women were recruited (69 in Group-A and 156 in Group-B). We found significant differences between the groups in terms of labour length, Numeric Rating Scale score and analgesia request rate, type of delivery, need of episiotomy, and fetal occiput rotation. No differences were found in terms of neonatal outcomes.\n\n\nCONCLUSION\nAlternative maternal positioning may positively influence labour process reducing maternal pain, operative vaginal delivery, caesarean section, and episiotomy rate. Women should be encouraged to move and deliver in the most comfortable position.",
"title": ""
},
{
"docid": "cc1d80b5428227517d654181cfdcc3f6",
"text": "Proof Number search (PNS) is an effective algorithm for searching theoretical values on games with non-uniform branching factors. Focused depth-first proof number search (FDFPN) with dynamic widening was proposed for Hex where the branching factor is nearly uniform. However, FDFPN is fragile to its heuristic move ordering function. The recent advances of Convolutional Neural Networks (CNNs) have led to considerable progress in game playing. We investigate how to incorporate the strength of CNNs into solving, with application to the game of Hex. We describe FDFPN-CNN, a new focused DFPN search that uses convolutional neural networks. FDFPN-CNN integrates two CNNs trained from games played by expert players. The value approximation CNN provides reliable information for defining the widening size by estimating the value of the node to expand, while the policy CNN selects promising children nodes to the search. On 8x8 Hex, experimental results show FDFPN-CNN performs notably better than FDFPN, suggesting a promising direction for better solving Hex positions where learning from strong players is possible.",
"title": ""
},
{
"docid": "57cbffa039208b85df59b7b3bc1718d5",
"text": "This paper provides an in-depth analysis of the technological and social factors that led to the successful adoption of groupware by a virtual team in a educational setting. Drawing on a theoretical framework based on the concept of technological frames, we conducted an action research study to analyse the chronological sequence of events in groupware adoption. We argue that groupware adoption can be conceptualised as a three-step process of expanding and aligning individual technological frames towards groupware. The first step comprises activities that bring knowledge of new technological opportunities to the participants. The second step involves facilitating the participants to articulate and evaluate their work practices and their use of tech© Scandinavian Journal of Information Systems, 2006, 18(2):29-68 nology. The third and final step deals with the participants' commitment to, and practical enactment of, groupware technology. The alignment of individual technological frames requires the articulation and re-evaluation of experience with collaborative practice and with the use of technology. One of the key findings is that this activity cannot take place at the outset of groupware adoption.",
"title": ""
},
{
"docid": "5f109b71bf1e39030db2594e54718ce5",
"text": "Following the hierarchical Bayesian framework for blind deconvolution problems, in this paper, we propose the use of simultaneous autoregressions as prior distributions for both the image and blur, and gamma distributions for the unknown parameters (hyperparameters) of the priors and the image formation noise. We show how the gamma distributions on the unknown hyperparameters can be used to prevent the proposed blind deconvolution method from converging to undesirable image and blur estimates and also how these distributions can be inferred in realistic situations. We apply variational methods to approximate the posterior probability of the unknown image, blur, and hyperparameters and propose two different approximations of the posterior distribution. One of these approximations coincides with a classical blind deconvolution method. The proposed algorithms are tested experimentally and compared with existing blind deconvolution methods",
"title": ""
},
{
"docid": "46533c7b42e2bad3fb0b65722479a552",
"text": "Agarwal, R., Krudys, G., and Tanniru, M. 1997. “Infusing Learning into the Information Systems Organization,” European Journal of Information Systems (6:1), pp. 25-40. Alavi, M., and Leidner, D. E. 2001. “Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues,” MIS Quarterly (25:1), pp. 107-136. Andersen, T. J. 2001. “Information Technology, Strategic Decision Making Approaches and Organizational Performance in Different Industrial Settings,” Journal of Strategic Information Systems (10:2), pp. 101-119. Andersen, T. J., and Segars, A. H. 2001. “The Impact of IT on Decision Structure and Firm Performance: Evidence from the Textile and Apparel Industry,” Information & Management (39:2), pp. 85-100. Andersson, M., Lindgren, R., and Henfridsson, A. 2008. “Architectural Knowledge in Inter-Organizational IT Innovation,” Journal of Strategic Information Systems (17:1), pp. 19-38. Armstrong, C. P., and Sambamurthy, V. 1999. “Information Technology Assimilation in Firms: The Influence of Senior Leadership and IT Infrastructures,” Information Systems Research (10:4), pp. 304-327. Auer, T. 1998. “Quality of IS Use,” European Journal of Information Systems (7:3), pp. 192-201. Bassellier, G., Benbasat, I., and Reich, B. H. 2003. “The Influence of Business Managers’ IT Competence on Championing IT,” Information Systems Research (14:4), pp. 317-336.",
"title": ""
},
{
"docid": "d6dfa1f279a5df160814e1d378162c02",
"text": "Understanding and forecasting mobile traffic of large scale cellular networks is extremely valuable for service providers to control and manage the explosive mobile data, such as network planning, load balancing, and data pricing mechanisms. This paper targets at extracting and modeling traffic patterns of 9,000 cellular towers deployed in a metropolitan city. To achieve this goal, we design, implement, and evaluate a time series analysis approach that is able to decompose large scale mobile traffic into regularity and randomness components. Then, we use time series prediction to forecast the traffic patterns based on the regularity components. Our study verifies the effectiveness of our utilized time series decomposition method, and shows the geographical distribution of the regularity and randomness component. Moreover, we reveal that high predictability of the regularity component can be achieved, and demonstrate that the prediction of randomness component of mobile traffic data is impossible.",
"title": ""
},
{
"docid": "0d40f7ddda91227fab3cc62a4ca2847c",
"text": "Coherent texts are not just simple sequences of clauses and sentences, but rather complex artifacts that have highly elaborate rhetorical structure. This paper explores the extent to which well-formed rhetorical structures can be automatically derived by means of surface-form-based algorithms. These algorithms identify discourse usages of cue phrases and break sentences into clauses, hypothesize rhetorical relations that hold among textual units, and produce valid rhetorical structure trees for unrestricted natural language texts. The algorithms are empirically grounded in a corpus analysis of cue phrases and rely on a first-order formalization of rhetorical structure trees. The algorithms are evaluated both intrinsically and extrinsically. The intrinsic evaluation assesses the resemblance between automatically and manually constructed rhetorical structure trees. The extrinsic evaluation shows that automatically derived rhetorical structures can be successfully exploited in the context of text summarization.",
"title": ""
},
{
"docid": "e133f005e6bae09d7d67da1b4e4ec176",
"text": "Because of broad range of applications and distinctive properties of aptamer, the global market size was valued at USD 723.6 million in 2016 and is projected to grow at the compound annual growth rate (CAGR) of 28.2%,1 and expected to reach $8.91 Billion by 2025, growing rapidly. Aptamers and the derivatives are also referred to as “synthetic antibodies” or “chemical antibodies”2‒5 that are able to bind with high affinity and specificity to almost all types of molecules as well as antigens, cells. Because of their unique properties, aptamers have a wide range of applications, particularly in biological and medical sciences, including diagnosis, therapies, forensics, and biodefense.6‒9 So far, hundreds of aptamer reagents have been developed for the applications,10 which are faster, cheaper, and less or without the predictable problems associated with the production of recombinant antibodies. This review summarizes the resent technologies of modified analogous of aptamer, so called pseudo aptamers in this script.",
"title": ""
},
{
"docid": "743424b3b532b16f018e92b2563458d5",
"text": "We consider the problem of finding a few representatives for a dataset, i.e., a subset of data points that efficiently describes the entire dataset. We assume that each data point can be expressed as a linear combination of the representatives and formulate the problem of finding the representatives as a sparse multiple measurement vector problem. In our formulation, both the dictionary and the measurements are given by the data matrix, and the unknown sparse codes select the representatives via convex optimization. In general, we do not assume that the data are low-rank or distributed around cluster centers. When the data do come from a collection of low-rank models, we show that our method automatically selects a few representatives from each low-rank model. We also analyze the geometry of the representatives and discuss their relationship to the vertices of the convex hull of the data. We show that our framework can be extended to detect and reject outliers in datasets, and to efficiently deal with new observations and large datasets. The proposed framework and theoretical foundations are illustrated with examples in video summarization and image classification using representatives.",
"title": ""
},
{
"docid": "32e430c84b64d123763ed2e034696e20",
"text": "The Internet of Things (IoT) is becoming a key infrastructure for the development of smart ecosystems. However, the increased deployment of IoT devices with poor security has already rendered them increasingly vulnerable to cyber attacks. In some cases, they can be used as a tool for committing serious crimes. Although some researchers have already explored such issues in the IoT domain and provided solutions for them, there remains the need for a thorough analysis of the challenges, solutions, and open problems in this domain. In this paper, we consider this research gap and provide a systematic analysis of security issues of IoT-based systems. Then, we discuss certain existing research projects to resolve the security issues. Finally, we highlight a set of open problems and provide a detailed description for each. We posit that our systematic approach for understanding the nature and challenges in IoT security will motivate researchers to addressing and solving these problems.",
"title": ""
},
{
"docid": "bb1b8e5d3a53b82cffd4d91163d95829",
"text": "PURPOSE\nThis study was designed to evaluate the feasibility and oncologic and functional outcomes of intersphincteric resection for very low rectal cancer.\n\n\nMETHODS\nA feasibility study was performed using 213 specimens from abdominoperineal resections of rectal cancer. Oncologic and functional outcomes were investigated in 228 patients with rectal cancer located <5 cm from the anal verge who underwent intersphincteric resection at seven institutions in Japan between 1995 and 2004.\n\n\nRESULTS\nCurative operations were accomplished by intersphincteric resection in 86 percent of patients who underwent abdominoperineal resection. Complete microscopic curative surgery was achieved by intersphincteric resection in 225 of 228 patients. Morbidity was 24 percent, and mortality was 0.4 percent. During the median observation time of 41 months, rate of local recurrence was 5.8 percent at three years, and five-year overall and disease-free survival rates were 91.9 percent and 83.2 percent, respectively. In 181 patients who received stoma closure, 68 percent displayed good continence, and only 7 percent showed worsened continence at 24 months after stoma closure. Patients with total intersphincteric resection displayed significantly worse continence than patients with partial or subtotal resection.\n\n\nCONCLUSIONS\nCurability with intersphincteric resection was verified histologically, and acceptable oncologic and functional outcomes were obtained by using these procedures in patients with very low rectal cancer. However, information on potential functional adverse effects after intersphincteric resection should be provided to patients preoperatively.",
"title": ""
},
{
"docid": "73973ae6c858953f934396ab62276e0d",
"text": "The unsolicited bulk messages are widespread in the applications of short messages. Although the existing spam filters have satisfying performance, they are facing the challenge of an adversary who misleads the spam filters by manipulating samples. Until now, the vulnerability of spam filtering technique for short messages has not been investigated. Different from the other spam applications, a short message only has a few words and its length usually has an upper limit. The current adversarial learning algorithms may not work efficiently in short message spam filtering. In this paper, we investigate the existing good word attack and its counterattack method, i.e. the feature reweighting, in short message spam filtering in an effort to understand whether, and to what extent, they can work efficiently when the length of a message is limited. This paper proposes a good word attack strategy which maximizes the influence to a classifier with the least number of inserted characters based on the weight values and also the length of words. On the other hand, we also proposes the feature reweighting method with a new rescaling function which minimizes the importance of the feature representing a short word in order to require more inserted characters for a successful evasion. The methods are evaluated experimentally by using the SMS and the comment spam dataset. The results confirm that the length of words is a critical factor of the robustness of short message spam filtering to good word attack. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0df1a06896fc4a98ee2d98f9e81a6969",
"text": "Today, 77GHz FMCW (Frequency Modulation Continuous Wave) radar sensors are used for automotive applications. In typical automotive radar, the target of interest is a moving target. Thus, to improve the detection probability and reduce the false alarm rate, an MTD(Moving Target Detection) algorithm should be required. This paper describes the proposed two-step MTD algorithm. The 1st MTD processing consists of a clutter cancellation step and a noise cancellation step. The two steps can cancel almost all clutter including stationary targets. However, clutter still remains among the interest beat frequencies detected during the 1st MTD and CFAR (Constant False Alarm) processing. Thus, in the 2nd MTD step, we remove the rest of the clutter with zero phase variation.",
"title": ""
},
{
"docid": "fb1a178c7c097fbbf0921dcef915dc55",
"text": "AIMS\nThe management of open lower limb fractures in the United Kingdom has evolved over the last ten years with the introduction of major trauma networks (MTNs), the publication of standards of care and the wide acceptance of a combined orthopaedic and plastic surgical approach to management. The aims of this study were to report recent changes in outcome of open tibial fractures following the implementation of these changes.\n\n\nPATIENTS AND METHODS\nData on all patients with an open tibial fracture presenting to a major trauma centre between 2011 and 2012 were collected prospectively. The treatment and outcomes of the 65 Gustilo Anderson Grade III B tibial fractures were compared with historical data from the same unit.\n\n\nRESULTS\nThe volume of cases, the proportion of patients directly admitted and undergoing first debridement in a major trauma centre all increased. The rate of limb salvage was maintained at 94% and a successful limb reconstruction rate of 98.5% was achieved. The rate of deep bone infection improved to 1.6% (one patient) in the follow-up period.\n\n\nCONCLUSION\nThe reasons for these improvements are multifactorial, but the major trauma network facilitating early presentation to the major trauma centre, senior orthopaedic and plastic surgical involvement at every stage and proactive microbiological management, may be important factors.\n\n\nTAKE HOME MESSAGE\nThis study demonstrates that a systemised trauma network combined with evidence based practice can lead to improvements in patient care.",
"title": ""
},
{
"docid": "f75ae6fedddde345109d33499853256d",
"text": "Deaths due to prescription and illicit opioid overdose have been rising at an alarming rate, particularly in the USA. Although naloxone injection is a safe and effective treatment for opioid overdose, it is frequently unavailable in a timely manner due to legal and practical restrictions on its use by laypeople. As a result, an effort spanning decades has resulted in the development of strategies to make naloxone available for layperson or \"take-home\" use. This has included the development of naloxone formulations that are easier to administer for nonmedical users, such as intranasal and autoinjector intramuscular delivery systems, efforts to distribute naloxone to potentially high-impact categories of nonmedical users, as well as efforts to reduce regulatory barriers to more widespread distribution and use. Here we review the historical and current literature on the efficacy and safety of naloxone for use by nonmedical persons, provide an evidence-based discussion of the controversies regarding the safety and efficacy of different formulations of take-home naloxone, and assess the status of current efforts to increase its public distribution. Take-home naloxone is safe and effective for the treatment of opioid overdose when administered by laypeople in a community setting, shortening the time to reversal of opioid toxicity and reducing opioid-related deaths. Complementary strategies have together shown promise for increased dissemination of take-home naloxone, including 1) provision of education and training; 2) distribution to critical populations such as persons with opioid addiction, family members, and first responders; 3) reduction of prescribing barriers to access; and 4) reduction of legal recrimination fears as barriers to use. Although there has been considerable progress in decreasing the regulatory and legal barriers to effective implementation of community naloxone programs, significant barriers still exist, and much work remains to be done to integrate these programs into efforts to provide effective treatment of opioid use disorders.",
"title": ""
}
] |
scidocsrr
|
c1f02e25a9e97206b807844b752a6ae5
|
SIRIUS-LTG-UiO at SemEval-2018 Task 7: Convolutional Neural Networks with Shortest Dependency Paths for Semantic Relation Extraction and Classification in Scientific Papers
|
[
{
"docid": "7927dffe38cec1ce2eb27dbda644a670",
"text": "This paper describes our system for SemEval-2010 Task 8 on multi-way classification of semantic relations between nominals. First, the type of semantic relation is classified. Then a relation typespecific classifier determines the relation direction. Classification is performed using SVM classifiers and a number of features that capture the context, semantic role affiliation, and possible pre-existing relations of the nominals. This approach achieved an F1 score of 82.19% and an accuracy of 77.92%.",
"title": ""
},
{
"docid": "6e8cf6a53e1a9d571d5e5d1644c56e57",
"text": "Previous research on relation classification has verified the effectiveness of using dependency shortest paths or subtrees. In this paper, we further explore how to make full use of the combination of these dependency information. We first propose a new structure, termed augmented dependency path (ADP), which is composed of the shortest dependency path between two entities and the subtrees attached to the shortest path. To exploit the semantic representation behind the ADP structure, we develop dependency-based neural networks (DepNN): a recursive neural network designed to model the subtrees, and a convolutional neural network to capture the most important features on the shortest path. Experiments on the SemEval-2010 dataset show that our proposed method achieves state-of-art results.",
"title": ""
}
] |
[
{
"docid": "c9af9d5f461cb0aa196221c926ac4252",
"text": "The validation of software quality metrics lacks statistical significance. One reason for this is that the data collection requires quite some effort. To help solve this problem, we develop tools for metrics analysis of a large number of software projects (146 projects with ca. 70.000 classes and interfaces and over 11 million lines of code). Moreover, validation of software quality metrics should focus on relevant metrics, i.e., correlated metrics need not to be validated independently. Based on our statistical basis, we identify correlation between several metrics from well-known object-oriented metrics suites. Besides, we present early results of typical metrics values and possible thresholds.",
"title": ""
},
{
"docid": "cf5452e43b6141728da673892c680b6e",
"text": "This paper presents another approach of Thai word segmentation, which is composed of two processes : syllable segmentation and syllable merging. Syllable segmentation is done on the basis of trigram statistics. Syllable merging is done on the basis of collocation between syllables. We argue that many of word segmentation ambiguities can be resolved at the level of syllable segmentation. Since a syllable is a more well-defined unit and more consistent in analysis than a word, this approach is more reliable than other approaches that use a wordsegmented corpus. This approach can perform well at the level of accuracy 81-98% depending on the dictionary used in the segmentation.",
"title": ""
},
{
"docid": "12a3e52c3af78663698e7b907f6ee912",
"text": "A novel graph-based language-independent stemming algorithm suitable for information retrieval is proposed in this article. The main features of the algorithm are retrieval effectiveness, generality, and computational efficiency. We test our approach on seven languages (using collections from the TREC, CLEF, and FIRE evaluation platforms) of varying morphological complexity. Significant performance improvement over plain word-based retrieval, three other language-independent morphological normalizers, as well as rule-based stemmers is demonstrated.",
"title": ""
},
{
"docid": "481931c78a24020a02245075418a26c3",
"text": "Bayesian optimization has been successful at global optimization of expensiveto-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to find good solutions with fewer objective function evaluations. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledgegradient (d-KG), which is one-step Bayes-optimal, asymptotically consistent, and provides greater one-step value of information than in the derivative-free setting. d-KG accommodates noisy and incomplete derivative information, comes in both sequential and batch forms, and can optionally reduce the computational cost of inference through automatically selected retention of a single directional derivative. We also compute the d-KG acquisition function and its gradient using a novel fast discretization-free technique. We show d-KG provides state-of-the-art performance compared to a wide range of optimization procedures with and without gradients, on benchmarks including logistic regression, deep learning, kernel learning, and k-nearest neighbors.",
"title": ""
},
{
"docid": "dd4edd271de8483fc3ce25f16763ffd1",
"text": "Computer vision is a rapidly evolving discipline. It includes methods for acquiring, processing, and understanding still images and video to model, replicate, and sometimes, exceed human vision and perform useful tasks.\n Computer vision will be commonly used for a broad range of services in upcoming devices, and implemented in everything from movies, smartphones, cameras, drones and more. Demand for CV is driving the evolution of image sensors, mobile processors, operating systems, application software, and device form factors in order to meet the needs of upcoming applications and services that benefit from computer vision. The resulting impetus means rapid advancements in:\n • visual computing performance\n • object recognition effectiveness\n • speed and responsiveness\n • power efficiency\n • video image quality improvement\n • real-time 3D reconstruction\n • pre-scanning for movie animation\n • image stabilization\n • immersive experiences\n • and more...\n Comprised of innovation leaders of computer vision, this panel will cover recent developments, as well as how CV will be enabled and used in 2016 and beyond.",
"title": ""
},
{
"docid": "c77042cb1a8255ac99ebfbc74979c3c6",
"text": "Machine translation systems require semantic knowledge and grammatical understanding. Neural machine translation (NMT) systems often assume this information is captured by an attention mechanism and a decoder that ensures fluency. Recent work has shown that incorporating explicit syntax alleviates the burden of modeling both types of knowledge. However, requiring parses is expensive and does not explore the question of what syntax a model needs during translation. To address both of these issues we introduce a model that simultaneously translates while inducing dependency trees. In this way, we leverage the benefits of structure while investigating what syntax NMT must induce to maximize performance. We show that our dependency trees are 1. language pair dependent and 2. improve translation quality.",
"title": ""
},
{
"docid": "70e80f9546215593862063af3fcf4a34",
"text": "1 Corresponding Author 2 The two lead authors made substantially similar contributions to this paper. First authorship was determined by rotation among papers.",
"title": ""
},
{
"docid": "d593f5205c84536ea1dfc4a561b86fca",
"text": "State of the art approaches for visual-inertial sensor fusion use filter-based or optimization-based algorithms. Due to the nonlinearity of the system, a poor initialization can have a dramatic impact on the performance of these estimation methods. Recently, a closed-form solution providing such an initialization was derived in [1]. That solution determines the velocity (angular and linear) of a monocular camera in metric units by only using inertial measurements and image features acquired in a short time interval. In this letter, we study the impact of noisy sensors on the performance of this closed-form solution. We show that the gyroscope bias, not accounted for in [1], significantly affects the performance of the method. Therefore, we introduce a new method to automatically estimate this bias. Compared to the original method, the new approach now models the gyroscope bias and is robust to it. The performance of the proposed approach is successfully demonstrated on real data from a quadrotor MAV.",
"title": ""
},
{
"docid": "c052c9e920ae871fbf20a8560b87d887",
"text": "This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic setting and the deterministic setting of the universal prediction problem are described with emphasis on the analogy and the differences between results in the two settings.",
"title": ""
},
{
"docid": "7579b5cb9f18e3dc296bcddc7831abc5",
"text": "Unlike conventional anomaly detection research that focuses on point anomalies, our goal is to detect anomalous collections of individual data points. In particular, we perform group anomaly detection (GAD) with an emphasis on irregular group distributions (e.g. irregular mixtures of image pixels). GAD is an important task in detecting unusual and anomalous phenomena in real-world applications such as high energy particle physics, social media and medical imaging. In this paper, we take a generative approach by proposing deep generative models: Adversarial autoencoder (AAE) and variational autoencoder (VAE) for group anomaly detection. Both AAE and VAE detect group anomalies using point-wise input data where group memberships are known a priori. We conduct extensive experiments to evaluate our models on real world datasets. The empirical results demonstrate that our approach is effective and robust in detecting group anomalies.",
"title": ""
},
{
"docid": "14c3d8cee12007dc8af75c7e0df77f00",
"text": "A modular magic sudoku solution is a sudoku solution with symbols in {0, 1, ..., 8} such that rows, columns, and diagonals of each subsquare add to zero modulo nine. We count these sudoku solutions by using the action of a suitable symmetry group and we also describe maximal mutually orthogonal families.",
"title": ""
},
{
"docid": "930f368fd668bb98527d60c526b4c991",
"text": "Limited research efforts have been made for Mobile CrowdSensing (MCS) to address quality of the recruited crowd, i.e., quality of services/data each individual mobile user and the whole crowd are potentially capable of providing, which is the main focus of the paper. Moreover, to improve flexibility and effectiveness, we consider fine-grained MCS, in which each sensing task is divided into multiple subtasks and a mobile user may make contributions to multiple subtasks. In this paper, we first introduce mathematical models for characterizing the quality of a recruited crowd for different sensing applications. Based on these models, we present a novel auction formulation for quality-aware and fine-grained MCS, which minimizes the expected expenditure subject to the quality requirement of each subtask. Then we discuss how to achieve the optimal expected expenditure, and present a practical incentive mechanism to solve the auction problem, which is shown to have the desirable properties of truthfulness, individual rationality and computational efficiency. We conducted trace-driven simulation using the mobility dataset of San Francisco taxies. Extensive simulation results show the proposed incentive mechanism achieves noticeable expenditure savings compared to two well-designed baseline methods, and moreover, it produces close-to-optimal solutions.",
"title": ""
},
{
"docid": "f955d211ee27ac428e54116667913975",
"text": "The authors are collaborating with a manufacturer of custom built steel frame modular units which are then transported for rapid erection onsite (volumetric building system). As part of its strategy to develop modular housing, Enemetric, is taking the opportunity to develop intelligent buildings, integrating a wide range of sensors and control systems for optimising energy efficiency and directly monitoring structural health. Enemetric have recently been embracing Building Information Modeling (BIM) to improve workflow, in particular cost estimation and to simplify computer aided manufacture (CAM). By leveraging the existing data generated during the design phases, and projecting it to all other aspects of construction management, less errors are made and productivity is significantly increased. Enemetric may work on several buildings at once, and scheduling and priorities become especially important for effective workflow, and implementing Enterprise Resource Planning (ERP). The parametric nature of BIM is also very useful for improving building management, whereby real-time data collection can be logically associated with individual components of the BIM stored in a local Building Management System performing structural health monitoring and environmental monitoring and control. BIM reuse can be further employed in building simulation tools, to apply simulation assisted control strategies, in order to reduce energy consumption, and increase occupant comfort. BIM Integrated Workflow Management and Monitoring System for Modular Buildings",
"title": ""
},
{
"docid": "52dbfe369d1875c402220692ef985bec",
"text": "Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.",
"title": ""
},
{
"docid": "dda2fdd40378ba3340354f836e6cd131",
"text": "Successful face analysis requires robust methods. It has been hard to compare the methods due to different experimental setups. We carried out a comparison study for the state-of-the-art gender classification methods to find out their actual reliability. The main contributions are comprehensive and comparable classification results for the gender classification methods combined with automatic real-time face detection and, in addition, with manual face normalization. We also experimented by combining gender classifier outputs arithmetically. This lead to increased classification accuracies. Furthermore, we contribute guidelines to carry out classification experiments, knowledge on the strengths and weaknesses of the gender classification methods, and two new variants of the known methods. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dc48b68a202974f62ae63d1d14002adf",
"text": "In the speed sensorless vector control system, the amended method of estimating the rotor speed about model reference adaptive system (MRAS) based on radial basis function neural network (RBFN) for PMSM sensorless vector control system was presented. Based on the PI regulator, the radial basis function neural network which is more prominent learning efficiency and performance is combined with MRAS. The reference model and the adjust model are the PMSM itself and the PMSM current, respectively. The proposed scheme only needs the error signal between q axis estimated current and q axis actual current. Then estimated speed is gained by using RBFN regulator which adjusted error signal. Comparing study of simulation and experimental results between this novel sensorless scheme and the scheme in reference literature, the results show that this novel method is capable of precise estimating the rotor position and speed under the condition of high or low speed. It also possesses good performance of static and dynamic.",
"title": ""
},
{
"docid": "639729ba7b21f8b73e6dc363fe0f217f",
"text": "Various magnetic nanoparticles have been extensively investigated as novel magnetic resonance imaging (MRI) contrast agents owing to their unique characteristics, including efficient contrast effects, biocompatibility, and versatile surface functionalization capability. Nanoparticles with high relaxivity are very desirable because they would increase the accuracy of MRI. Recent progress in nanotechnology enables fine control of the size, crystal structure, and surface properties of iron oxide nanoparticles. In this tutorial review, we discuss how MRI contrast effects can be improved by controlling the size, composition, doping, assembly, and surface properties of iron-oxide-based nanoparticles.",
"title": ""
},
{
"docid": "5a4d0254c1331f8577c462343a8cfb0a",
"text": "In this paper, we address the problem of realizing a human following task in a crowded environment. We consider an active perception system, consisting of a camera mounted on a pan-tilt unit and a 360◦ RFID detection system, both embedded on a mobile robot. To perform such a task, it is necessary to efficiently track humans in crowds. In a first step, we have dealt with this problem using the particle filtering framework because it enables the fusion of heterogeneous data, which improves the tracking robustness. In a second step, we have considered the problem of controlling the robot motion to make the robot follow the person of interest. To this aim, we have designed a multisensor-based control strategy based on the tracker outputs and on the RFID data. Finally, we have implemented the tracker and the control strategy on our robot. The obtained experimental results highlight the relevance of the developed perceptual functions. Possible extensions of this work are discussed at the end of the article.",
"title": ""
},
{
"docid": "9f04ac4067179aadf5e429492c7625e9",
"text": "We provide a model that links an asset’s market liquidity — i.e., the ease with which it is traded — and traders’ funding liquidity — i.e., the ease with which they can obtain funding. Traders provide market liquidity, and their ability to do so depends on their availability of funding. Conversely, traders’ funding, i.e., their capital and the margins they are charged, depend on the assets’ market liquidity. We show that, under certain conditions, margins are destabilizing and market liquidity and funding liquidity are mutually reinforcing, leading to liquidity spirals. The model explains the empirically documented features that market liquidity (i) can suddenly dry up, (ii) has commonality across securities, (iii) is related to volatility, (iv) is subject to “flight to quality”, and (v) comoves with the market, and it provides new testable predictions.",
"title": ""
},
{
"docid": "37927017353dc0bab9c081629d33d48c",
"text": "Generating a secret key between two parties by extracting the shared randomness in the wireless fading channel is an emerging area of research. Previous works focus mainly on single-antenna systems. Multiple-antenna devices have the potential to provide more randomness for key generation than single-antenna ones. However, the performance of key generation using multiple-antenna devices in a real environment remains unknown. Different from the previous theoretical work on multiple-antenna key generation, we propose and implement a shared secret key generation protocol, Multiple-Antenna KEy generator (MAKE) using off-the-shelf 802.11n multiple-antenna devices. We also conduct extensive experiments and analysis in real indoor and outdoor mobile environments. Using the shared randomness extracted from measured Received Signal Strength Indicator (RSSI) to generate keys, our experimental results show that using laptops with three antennas, MAKE can increase the bit generation rate by more than four times over single-antenna systems. Our experiments validate the effectiveness of using multi-level quantization when there is enough mutual information in the channel. Our results also show the trade-off between bit generation rate and bit agreement ratio when using multi-level quantization. We further find that even if an eavesdropper has multiple antennas, she cannot gain much more information about the legitimate channel.",
"title": ""
}
] |
scidocsrr
|
4332bf6d4447da9631af865db1c437fc
|
Efficient Online Novelty Detection in News Streams
|
[
{
"docid": "0d41a6d4cf8c42ccf58bccd232a46543",
"text": "Novelty detection is the ident ification of new or unknown data or signal that a machine learning system is not aware of during training. In this paper we focus on neural network based approaches for novelty detection. Statistical approaches are covered in part-I paper.",
"title": ""
}
] |
[
{
"docid": "ad48ca7415808c4337c0b6eb593005d6",
"text": "Neuroscience is experiencing a data revolution in which many hundreds or thousands of neurons are recorded simultaneously. Currently, there is little consensus on how such data should be analyzed. Here we introduce LFADS (Latent Factor Analysis via Dynamical Systems), a method to infer latent dynamics from simultaneously recorded, single-trial, high-dimensional neural spiking data. LFADS is a sequential model based on a variational auto-encoder. By making a dynamical systems hypothesis regarding the generation of the observed data, LFADS reduces observed spiking to a set of low-dimensional temporal factors, per-trial initial conditions, and inferred inputs. We compare LFADS to existing methods on synthetic data and show that it significantly out-performs them in inferring neural firing rates and latent dynamics.",
"title": ""
},
{
"docid": "b31ebdbd7edc0b30b0529a85fab0b612",
"text": "In this paper, we present RFMS, the real-time flood monitoring system with wireless sensor networks, which is deployed in two volcanic islands Ulleung-do and Dok-do located in the East Sea near to the Korean peninsula and developed for flood monitoring. RFMS measures river and weather conditions through wireless sensor nodes equipped with different sensors. Measured information is employed for early-warning via diverse types of services such as SMS (short message service) and a Web service.",
"title": ""
},
{
"docid": "a235657ae9c608b349e185ca73053058",
"text": "Four cases of a distinctive soft-tissue tumor of the vulva are described. They were characterized by occurrence in middle-aged women (39-50 years), small size (< 3 cm), and a usually well-circumscribed margin. The preoperative clinical diagnosis was that of a labial or Bartholin gland cyst in three of the four cases. The microscopic appearance was remarkably consistent and was characterized by a cellular neoplasm composed of uniform, bland, spindled stromal cells, numerous thick-walled and often hyalinized vessels, and a scarce component of mature adipocytes. Mitotic activity was brisk in three cases (up to 11 mitoses per 10 high power fields). The stromal cells were positive for vimentin and negative for CD34, S-100 protein, actin, desmin, and epithelial membrane antigen, suggesting fibroblastic differentiation. Two patients with follow-up showed no evidence of recurrence. The differential diagnosis of this distinctive tumor includes aggressive angiomyxoma, angiomyofibroblastoma, spindle cell lipoma, solitary fibrous tumor, perineurioma, and leiomyoma. The designation of \"cellular angiofibroma\" is chosen to emphasize the two principal components of this tumor: the cellular spindle cell component and the prominent blood vessels.",
"title": ""
},
{
"docid": "51f686a1056f389ff69855887e3f4f3b",
"text": "Pipelining has been used in the design of many PRAM algorithms to reduce their asymptotic running time. Paul, Vishkin, and Wagener (PVW) used the approach in a parallel implementation of 2-3 trees. The approach was later used by Cole in the first O( lg n) time sorting algorithm on the PRAM not based on the AKS sorting network, and has since been used to improve the time of several other algorithms. Although the approach has improved the asymptotic time of many algorithms, there are two practical problems: maintaining the pipeline is quite complicated for the programmer, and the pipelining forces highly synchronous code execution. Synchronous execution is less practical on asynchronous machines and makes it difficult to modify a schedule to use less memory or to take better advantage of locality. In this paper we show how futures (a parallel language construct) can be used to implement pipelining without requiring the user to code it explicitly, allowing for much simpler code and more asynchronous execution. A runtime system manages the pipelining implicitly. As with user-managed pipelining, we show how the technique reduces the depth of many algorithms by a logarithmic factor over the nonpipelined version. We describe and analyze four algorithms for which this is the case: a parallel merging algorithm on trees, parallel algorithms for finding the union and difference of two randomized balanced trees (treaps), and insertion into a variant of the PVW 2-3 trees. For three of these, the pipeline delays are data dependent making them particularly difficult to pipeline by hand. To determine the runtime of algorithms we first analyze the algorithms in a language-based cost model in terms of the work w and depth d of the computations, and then show universal bounds for implementing the language on various machine models.",
"title": ""
},
{
"docid": "aa8e351d9e4d4065e5ce59718b7f085e",
"text": "A hybrid metal-dielectric nanoantenna promises to harness the large Purcell factor of metallic nanostructures while taking advantage of the high scattering directivity and low dissipative losses of dielectric nanostructures. Here, we investigate a compact hybrid metal-dielectric nanoantenna that is inspired by the Yagi-Uda design. It comprises a metallic gold bowtie nanoantenna feed element and three silicon nanorod directors, exhibiting high unidirectional in-plane directivity and potential beam redirection capability in the visible spectral range. The entire device has a footprint of only 0.38 λ2, and its forward directivity is robust against fabrication imperfections. We use the photoluminescence from the gold bowtie nanoantenna itself as an elegant emitter to characterize the directivity of the device and experimentally demonstrate a directivity of ∼49.2. In addition, we demonstrate beam redirection with our device, achieving a 5° rotation of the main emission lobe with a feed element displacement of only 16 nm. These results are promising for various applications, including on-chip wireless communications, quantum computing, display technologies, and nanoscale alignment.",
"title": ""
},
{
"docid": "940994951108186b57c88217ffda9c88",
"text": "A small phallus causes great concern regarding genital adequacy. A concealed penis, although of normal size, appears small either because it is buried in prepubic tissues, enclosed in scrotal tissue penis palmatus (PP), or trapped due to phimosis or a scar following circumcision or trauma. From July 1978 to January 2001 we operated upon 92 boys with concealed penises; 49 had buried penises (BP), while PP of varying degrees was noted in 14. Of 29 patients with a trapped penis, phimosis was noted in 9, post-circumcision cicatrix (PCC) in 17, radical circumcision in 2, and posttraumatic scarring in 1. The BP was corrected at 2–3 years of age by incising the inner prepuce circumferentially, degloving the penis to the penopubic junction, dividing dysgenetic bands, and suturing the dermis of the penopubic skin to Buck's fascia with nonabsorbable sutures. Patients with PP required displacement of the scrotum in addition to correction of the BP. Phimosis was treated by circumcision. Patients with a PCC were recircumcised carefully, preserving normal skin, but Z-plasties and Byars flaps were often required for skin coverage. After radical circumcision and trauma, vascularized flaps were raised to cover the defect. Satisfactory results were obtained in all cases although 2 patients with BP required a second operation. The operation required to correct a concealed penis has to be tailored to its etiology.",
"title": ""
},
{
"docid": "3e98e933aff32193fe4925f39fd04198",
"text": "Estimating surface normals is an important task in computer vision, e.g. in surface reconstruction, registration and object detection. In stereo vision, the error of depth reconstruction increases quadratically with distance. This makes estimation of surface normals an especially demanding task. In this paper, we analyze how error propagates from noisy disparity data to the orientation of the estimated surface normal. Firstly, we derive a transformation for normals between disparity space and world coordinates. Afterwards, the propagation of disparity noise is analyzed by means of a Monte Carlo method. Normal reconstruction at a pixel position requires to consider a certain neighborhood of the pixel. The extent of this neighborhood affects the reconstruction error. Our method allows to determine the optimal neighborhood size required to achieve a pre specified deviation of the angular reconstruction error, defined by a confidence interval. We show that the reconstruction error only depends on the distance of the surface point to the camera, the pixel distance to the principal point in the image plane and the angle at which the viewing ray intersects the surface.",
"title": ""
},
{
"docid": "0ef01fb9322ed10529f074ef73e9a19f",
"text": "Detecting the document focus time, defined as the time the content of a document refers to, is an important task to support temporal information retrieval systems. In this paper we propose a novel approach to focus time estimation based on a bag-of-entity representation. In particular, we are interested in understanding if and to what extent existing open data sources can be leveraged to achieve focus time estimation. We leverage state of the art Named Entity Extraction tools and exploit links to Wikipedia and DBpedia to derive temporal information relevant to entities, namely years and intervals of years. We then estimate focus time as the point in time that is more relevant to the entity set associated to a document. Our method does not rely on explicit temporal expressions in the documents, so it is therefore applicable to a general context. We tested our methodology on two datasets of historical events and evaluated it against a state of the art approach, measuring improvement in average estimation error.",
"title": ""
},
{
"docid": "44b7ed6c8297b6f269c8b872b0fd6266",
"text": "vii",
"title": ""
},
{
"docid": "9722a9895e51b86f9fc5ff51f8ac1582",
"text": "Performance and responsiveness of visual analytics sytems for exploratory data analysis of large datasets has been a long standing problem. We propose a method for incrementally computing visualizations in a distributed fashion by combining a modified MapReduce-style algorithm with a compressed columnar data store, resulting in significant improvements in performance and responsiveness for constructing commonly encountered information visualizations, e.g. bar charts, scatterplots, heat maps, cartograms and parallel coordinate plots. We compare our method with one that queries three other readily available database and data warehouse systems - PostgreSQL, Cloudera Impala and the MapReduce-based Apache Hive - in order to build visualizations. We show that our end-to-end approach allows for greater speed and guaranteed end-user responsiveness, even in the face of large, long-running queries.",
"title": ""
},
{
"docid": "9be5326deba6eaab21150edf882188f1",
"text": "CARS 2016—Computer Assisted Radiology and Surgery Proceedings of the 30th International Congress and Exhibition Heidelberg, Germany, June 21–25, 2016",
"title": ""
},
{
"docid": "428697d3ec6992c3158f3f0b2690c155",
"text": "Severe infections represent the main cause of neonatal mortality accounting for more than one million neonatal deaths worldwide every year. Antibiotics are the most commonly prescribed medications in neonatal intensive care units. The benefits of antibiotic therapy when indicated are clearly enormous, but the continued and widespread use of antibiotics has generated over the years a strong selective pressure on microorganisms, favoring the emergence of resistant strains. Health agencies worldwide are galvanizing attention toward antibiotic resistance in gram-positive and gram-negative bacteria. Infections in neonatal units due to multidrug and extensively multidrug resistant bacteria are rising and are already seriously challenging antibiotic treatment options. While there is a growing choice of agents against multi-resistant gram-positive bacteria, new options for multi-resistant gram-negative bacteria in the clinical practice have decreased significantly in the last 20 years making the treatment of infections caused by multidrug-resistant pathogens challenging mostly in neonates. Treatment options are currently limited and will be some years before any new treatment for neonates become available for clinical use, if ever. The aim of the review is to highlight the current knowledge on antibiotic resistance in the neonatal population, the possible therapeutic choices, and the prevention strategies to adopt in order to reduce the emergency and spread of resistant strains.",
"title": ""
},
{
"docid": "3b5dcd12c1074100ffede33c8b3a680c",
"text": "This paper proposes a two-stream flow-guided convolutional attention networks for action recognition in videos. The central idea is that optical flows, when properly compensated for the camera motion, can be used to guide attention to the human foreground. We thus develop crosslink layers from the temporal network (trained on flows) to the spatial network (trained on RGB frames). These crosslink layers guide the spatial-stream to pay more attention to the human foreground areas and be less affected by background clutter. We obtain promising performances with our approach on the UCF101, HMDB51 and Hollywood2 datasets.",
"title": ""
},
{
"docid": "a2217cd5f5e6b54ad0329a8703204ccb",
"text": "Knowledge bases are useful resources for many natural language processing tasks, however, they are far from complete. In this paper, we define a novel entity representation as a mixture of its neighborhood in the knowledge base and apply this technique on TransE—a well-known embedding model for knowledge base completion. Experimental results show that the neighborhood information significantly helps to improve the results of the TransE, leading to better performance than obtained by other state-of-the-art embedding models on three benchmark datasets for triple classification, entity prediction and relation prediction tasks.",
"title": ""
},
{
"docid": "148b7445ec2cd811d64fd81c61c20e02",
"text": "Using sensors to measure parameters of interest in rotating environments and communicating the measurements in real-time over wireless links, requires a reliable power source. In this paper, we have investigated the possibility to generate electric power locally by evaluating six different energy-harvesting technologies. The applicability of the technology is evaluated by several parameters that are important to the functionality in an industrial environment. All technologies are individually presented and evaluated, a concluding table is also summarizing the technologies strengths and weaknesses. To support the technology evaluation on a more theoretical level, simulations has been performed to strengthen our claims. Among the evaluated and simulated technologies, we found that the variable reluctance-based harvesting technology is the strongest candidate for further technology development for the considered use-case.",
"title": ""
},
{
"docid": "51a6b1868082fc2963dd8bae513f6a9b",
"text": "The red blood cells or erythrocytes are biconcave shaped cells and consist mostly in a membrane delimiting a cytosol with a high concentration in hemoglobin. This membrane is highly deformable and allows the cells to go through narrow passages like the capillaries which diameters can be much smaller than red blood cells one. They carry oxygen thanks to hemoglobin, a complex molecule that have very high affinity for oxygen. The capacity of erythrocytes to load and unload oxygen is thus a determinant factor in their efficacy. In this paper, we will focus on the pulmonary capillary where red blood cells capture oxygen. In order to numerically study the behavior of red blood cells along a whole capillary, we propose a camera method that consists in working in a reference frame that follows the red blood cells. More precisely, the domain of study is reduced to a neighborhood of the red blood cells and moves along at erythrocytes mean velocity. This method avoids too large mesh deformation. Our goal is to understand how erythrocytes geometrical changes along the capillary can affect its capacity to capture oxygen. The first part of this document presents the model chosen for the red blood cells along with the numerical method used to determine and follow their shapes along the capillary. The membrane of the red blood cell is complex and has been modelled by an hyper-elastic approach coming from [16]. This camera method is then validated and confronted with a standard Arbitrary Lagrangian Eulerian (ALE) method in which the displacements of the red blood cells are correlated with the deformation of an initial mesh of the whole capillary with red blood cells at start positions. Some geometrical properties of the red blood cells observed in our simulations are then studied and discussed. The second part of this paper deals with the modeling of oxygen and hemoglobin chemistry in the geometries obtained in the first part. We have implemented a full complex hemoglobin behavior with allosteric states inspired from [4]. 1 Laboratoire MSC, Université Paris 7 / CNRS, 10 rue Alice Domon et Léonie Duquet, F-75205 Paris cedex 13 c © EDP Sciences, SMAI 2008",
"title": ""
},
{
"docid": "f1681e1c8eef93f15adb5a4d7313c94c",
"text": "The paper investigates techniques for extracting data from HTML sites through the use of automatically generated wrappers. To automate the wrapper generation and the data extraction process, the paper develops a novel technique to compare HTML pages and generate a wrapper based on their similarities and differences. Experimental results on real-life data-intensive Web sites confirm the feasibility of the approach.",
"title": ""
},
{
"docid": "7314977f3af06253fdc2631a6a0a64a2",
"text": "In this paper, a wheelchair robot equipped with new-style variable-geometry-tracked mechanism is proposed. This new-style mechanism can adapt to convex terrain and turn to concave geometry by active control of track tension, based on which the terrain adaptability of the wheelchair robot is improved. Aiming at climbing stairs, the transformation rule of robot configuration is presented, the description of passenger's attitude and action is established. Following that, the tip-over stability analysis and simulation are performed with Force-Angle stability measure, and the variation of the tip-over stability margin of the robot under different conditions of passenger's attitude and action during stair-climbing is obtained. The analysis and simulation results provide a valid reference to the wheelchair robot's potential application.",
"title": ""
},
{
"docid": "2c3bdb3dc3bf4aedc36a49e82a2dca50",
"text": "We report the implementation of a text input application (speller) based on the P300 event related potential. We obtain high accuracies by using an SVM classifier and a novel feature. These techniques enable us to maintain fast performance without sacrificing the accuracy, thus making the speller usable in an online mode. In order to further improve the usability, we perform various studies on the data with a view to minimizing the training time required. We present data collected from nine healthy subjects, along with the high accuracies (of the order of 95% or more) measured online. We show that the training time can be further reduced by a factor of two from its current value of about 20 min. High accuracy, fast learning, and online performance make this P300 speller a potential communication tool for severely disabled individuals, who have lost all other means of communication and are otherwise cut off from the world, provided their disability does not interfere with the performance of the speller.",
"title": ""
},
{
"docid": "d27ccd837bd82cf5d79c777c459944ec",
"text": "Wireless sensor networks (WSNs) play an increasingly important role in monitoring applications in many areas. With the emergence of the Internet-of-Things (IoT), many more lowpower sensors will need to be deployed in various environments to collect and monitor data about environmental factors in real time. Providing power supply to these sensor nodes becomes a critical challenge for realizations of IoT applications as sensor nodes are normally battery-powered and have a limited lifetime. This paper proposes a wireless sensor network that is powered by solar energy harvesting. The sensor network monitors the environmental data with low-power sensor electronics and forms a network using multiple XBee wireless modules. A detailed performance analysis of the network system under solar energy harvesting has been presented. The sensor network system and the proposed energy-harvesting techniques are configured to achieve a continuous energy source for the sensor network. The proposed energy-harvesting system has been successfully designed to enable an energy solution in order to keep sensor nodes active and reliable for a whole day. The paper also outlines some of our experiences in real-time implementation of a sensor network system with energy harvesting.",
"title": ""
}
] |
scidocsrr
|
e2c7d4c765e44eef0fb28d1f512257f7
|
Time Domain Passivity Control of Haptic Interface
|
[
{
"docid": "dbfdb9251e8b9738eaebae3bcd708926",
"text": "Stable Haptic Interaction with Virtual Environments",
"title": ""
}
] |
[
{
"docid": "f10b3f34e63f1c8a1cba703b62cc1043",
"text": "BACKGROUND\nDespite the increasing use of very low carbohydrate ketogenic diets (VLCKD) in weight control and management of the metabolic syndrome there is a paucity of research about effects of VLCKD on sport performance. Ketogenic diets may be useful in sports that include weight class divisions and the aim of our study was to investigate the influence of VLCKD on explosive strength performance.\n\n\nMETHODS\n8 athletes, elite artistic gymnasts (age 20.9 ± 5.5 yrs) were recruited. We analyzed body composition and various performance aspects (hanging straight leg raise, ground push up, parallel bar dips, pull up, squat jump, countermovement jump, 30 sec continuous jumps) before and after 30 days of a modified ketogenic diet. The diet was based on green vegetables, olive oil, fish and meat plus dishes composed of high quality protein and virtually zero carbohydrates, but which mimicked their taste, with the addition of some herbal extracts. During the VLCKD the athletes performed the normal training program. After three months the same protocol, tests were performed before and after 30 days of the athletes' usual diet (a typically western diet, WD). A one-way Anova for repeated measurements was used.\n\n\nRESULTS\nNo significant differences were detected between VLCKD and WD in all strength tests. Significant differences were found in body weight and body composition: after VLCKD there was a decrease in body weight (from 69.6 ± 7.3 Kg to 68.0 ± 7.5 Kg) and fat mass (from 5.3 ± 1.3 Kg to 3.4 ± 0.8 Kg p < 0.001) with a non-significant increase in muscle mass.\n\n\nCONCLUSIONS\nDespite concerns of coaches and doctors about the possible detrimental effects of low carbohydrate diets on athletic performance and the well known importance of carbohydrates there are no data about VLCKD and strength performance. The undeniable and sudden effect of VLCKD on fat loss may be useful for those athletes who compete in sports based on weight class. We have demonstrated that using VLCKD for a relatively short time period (i.e. 30 days) can decrease body weight and body fat without negative effects on strength performance in high level athletes.",
"title": ""
},
{
"docid": "79c2623b0e1b51a216fffbc6bbecd9ec",
"text": "Visual notations form an integral part of the language of software engineering (SE). Yet historically, SE researchers and notation designers have ignored or undervalued issues of visual representation. In evaluating and comparing notations, details of visual syntax are rarely discussed. In designing notations, the majority of effort is spent on semantics, with graphical conventions largely an afterthought. Typically, no design rationale, scientific or otherwise, is provided for visual representation choices. While SE has developed mature methods for evaluating and designing semantics, it lacks equivalent methods for visual syntax. This paper defines a set of principles for designing cognitively effective visual notations: ones that are optimized for human communication and problem solving. Together these form a design theory, called the Physics of Notations as it focuses on the physical (perceptual) properties of notations rather than their logical (semantic) properties. The principles were synthesized from theory and empirical evidence from a wide range of fields and rest on an explicit theory of how visual notations communicate. They can be used to evaluate, compare, and improve existing visual notations as well as to construct new ones. The paper identifies serious design flaws in some of the leading SE notations, together with practical suggestions for improving them. It also showcases some examples of visual notation design excellence from SE and other fields.",
"title": ""
},
{
"docid": "0259066962633694e027b059567d722f",
"text": "In order to improve real-time and robustness of the lane detection and get more ideal lane, in the image preprocessing, the filter is used in strengthening lane information of the binary image, reducing the noise and removing irrelevant information. The lane edge detection is by using Canny operator, then the corner detection method is used in getting the Image corners coordinates and finally using the RANSAC to circulation fit for corners, according to the optimal lanes parameters drawing lane. Through experiment of different scenes, this method can not only effectively rule out linear pixel interference of outside the road in multiple complex environments, but also quickly and accurately identify lane. This method improves the stability of the lane detection to a certain extent, which has good robust and real-time.",
"title": ""
},
{
"docid": "982253c9f0c05e50a070a0b2e762abd7",
"text": "In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface. To generate a set of multi-content images following a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model considering content along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in the real-world such as those on movie posters or infographics. We seek to transfer both the typographic stylization (ex. serifs and ears) as well as the textual stylization (ex. color gradients and effects.) We base our experiments on our collected data set including 10,000 fonts with different styles and demonstrate effective generalization from a very small number of observed glyphs.",
"title": ""
},
{
"docid": "9c41df95c11ec4bed3e0b19b20f912bb",
"text": "Text mining has been defined as “the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources” [6]. Many other industries and areas can also benefit from the text mining tools that are being developed by a number of companies. This paper provides an overview of the text mining tools and technologies that are being developed and is intended to be a guide for organizations who are looking for the most appropriate text mining techniques for their situation. This paper also concentrates to design text and data mining tool to extract the valuable information from curriculum vitae according to concerned requirements. The tool clusters the curriculum vitae into several segments which will help the public and private concerns for their recruitment. Rule based approach is used to develop the algorithm for mining and also it is implemented to extract the valuable information from the curriculum vitae on the web. Analysis of Curriculum vitae is until now, a costly and manual activity. It is subject to all typical variations and limitations in its quality, depending of who is doing it. Automating this analysis using algorithms might deliver much more consistency and preciseness to support the human experts. The experiments involve cooperation with many people having their CV online, as well as several recruiters etc. The algorithms must be developed and improved for processing of existing sets of semi-structured documents information retrieval under uncertainity about quality of the sources.",
"title": ""
},
{
"docid": "f1334528988d79724146d29d67cdb460",
"text": "Long-term outcome after endarterectomy of the femoral bifurcation has not been widely investigated, and the aim of this study was to assess its late results from a community-wide perspective. Between 1983 and 2006 111 isolated endarterectomies of the common femoral artery and/or the proximal part of the superficial femoral artery or profunda femoris were performed in 90 patients at the Oulu University Hospital, Oulu, Finland. A total of 77 limbs were treated surgically for claudication and 34 others for critical limb ischemia. Angiographic findings of 100 extremities were evaluated. The in-hospital mortality rate was 1.8%. The mean follow-up period was 5.9 years. At 5-, 10-, and 15-year follow-up the overall survival was 60.5%, 32.7%, and 17.6%, respectively (S.E < 0.05). A C-reactive protein value ≥ 10 mg/l was predictive of poor late survival (p = 0.008). Limb salvage rates after isolated femoral endarterectomy at 5-, 10-, and 15-year follow-up were 93.7%, 93.7%, and 85.2%, respectively (S.E. < 0.08). Critical limb ischemia (p = 0.006) and current smoking (p = 0.027) were independent predictors of major lower limb amputation. A total of 41 limbs were subjected to ipsilateral vascular procedures after femoral endarterectomy, only one of which was re-endarterectomy. Freedom from any ipsilateral revascularization procedure at 5-, 10-, and 15-year follow-up was calculated at 68.0%, 50.6%, and 42.5%, respectively (S.E. < 0.08). The overall linearized rate of reintervention on the ipsilateral limb was 0.16 ± 0.44/year. The linearized rate among patients who had any ipsilateral vascular reintervention was 0.43 ± 0.66/year. Isolated femoral endarterectomy is a rather low-risk and durable procedure. However, a significant number of reinterventions distal or proximal to the endarterectomized site can be expected in one third of patients.",
"title": ""
},
{
"docid": "a526a2254f4408048828a9112e475020",
"text": "Fast Fourier transform (FFT)-based restorations are fast, but at the expense of assuming that the blurring and deblurring are based on circular convolution. Unfortunately, when the opposite sides of the image do not match up well in intensity, this assumption can create significant artifacts across the image. If the pixels outside the measured image window are modeled as unknown values in the restored image, boundary artifacts are avoided. However, this approach destroys the structure that makes the use of the FFT directly applicable, since the unknown image is no longer the same size as the measured image. Thus, the restoration methods available for this problem no longer have the computational efficiency of the FFT. We propose a new restoration method for the unknown boundary approach that can be implemented in a fast and flexible manner. We decompose the restoration into a sum of two independent restorations. One restoration yields an image that comes directly from a modified FFT-based approach. The other restoration involves a set of unknowns whose number equals that of the unknown boundary values. By summing the two, the artifacts are canceled. Because the second restoration has a significantly reduced set of unknowns, it can be calculated very efficiently even though no circular convolution structure exists.",
"title": ""
},
{
"docid": "6ee2ee4a1cff7b1ddb8e5e1e2faf3aa5",
"text": "An array of four uniform half-width microstrip leaky-wave antennas (MLWAs) was designed and tested to obtain maximum radiation in the boresight direction. To achieve this, uniform MLWAs are placed at 90 ° and fed by a single probe at the center. Four beams from four individual branches combine to form the resultant directive beam. The measured matched bandwidth of the array is 300 MHz (3.8-4.1 GHz). Its beam toward boresight occurs over a relatively wide 6.4% (3.8-4.05 GHz) band. The peak measured boresight gain of the array is 10.1 dBi, and its variation within the 250-MHz boresight radiation band is only 1.7 dB.",
"title": ""
},
{
"docid": "43a7e786704b5347f3b67c08ac9c4f70",
"text": "Before beginning any robot task, users must position the robot's base, a task that now depends entirely on user intuition. While slight perturbation is tolerable for robots with moveable bases, correcting the problem is imperative for fixed- base robots if some essential task sections are out of reach. For mobile manipulation robots, it is necessary to decide on a specific base position before beginning manipulation tasks. This paper presents Reuleaux, an open source library for robot reachability analyses and base placement. It reduces the amount of extra repositioning and removes the manual work of identifying potential base locations. Based on the reachability map, base placement locations of a whole robot or only the arm can be efficiently determined. This can be applied to both statically mounted robots, where the position of the robot and workpiece ensure the maximum amount of work performed, and to mobile robots, where the maximum amount of workable area can be reached. The methods were tested on different robots of different specifications and evaluated for tasks in simulation and real world environment. Evaluation results indicate that Reuleaux had significantly improved performance than prior existing methods in terms of time-efficiency and range of applicability.",
"title": ""
},
{
"docid": "5365f6f5174c3d211ea562c8a7fa0aab",
"text": "Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the ba- sics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing discrimina- tor class with strong distinguishing power against the particular generator class (instead of against all possible generators.)",
"title": ""
},
{
"docid": "602e58608bdb78cab9186293d7efc171",
"text": "Crowdfunding provides a new way for creatives to share their work and acquire resources from their social network to influence what new ideas are realized. Yet, we understand very little about this growing phenomenon. Grounded in existing work on social network analysis, we interview 58 crowdfunding project creators to investigate how crowdfunders use their social network to reach their campaign goals. We identified three main challenges, which include understanding network capabilities, activating network connections, and expanding network reach. From our findings, we develop initial design implications for support tools to help crowdfunding project creators better understand and leverage their social network.",
"title": ""
},
{
"docid": "01e5485dc7801f2497a03a6666970e03",
"text": "KinectFusion is a method for real-time capture of dense 3D geometry of the physical environment using a depth sensor. The system allows capture of a large dataset of 3D scene reconstructions at very low cost. In this paper we discuss the properties of the generated data and evaluate in which situations the method is accurate enough to provide ground truth models for low-level image processing tasks like stereo and optical flow estimation. The results suggest that the method is suitable for the fast acquisition of medium scale scenes (a few meters across), filling a gap between structured light and LiDAR scanners. For these scenes e.g. ground truth optical flow fields with accuracies of approximately 0.1 pixel can be created. We reveal an initial, high-quality dataset consisting of 57 scenes which can be used by researchers today, as well as a new, interactive tool implementing the KinectFusion method. Such datasets can then also be used as training data, e.g. for 3D recognition and depth inpainting.",
"title": ""
},
{
"docid": "ecf289d121a9d0ac8f2467879a7f285f",
"text": "An increasing number of people are using the Internet, in many instances unaware of the information being collected about them. In contrast, other people concerned about the privacy and security issues are limiting their use of the Internet, abstaining from purchasing products online. Businesses should be aware that consumers are looking for privacy protection and a privacy statement can help to ease consumers' concerns. New Zealand based web sites are expected to have privacy statements on their web sites under the New Zealand Privacy Act 1993. The incidence of the information gathered from New Zealand web sites and their use of privacy statements is examined here. In particular, web sites utilizing cookies and statements about them are scanned. Global consistency on Internet privacy protection is important to boost the growth of electronic commerce. To protect consumers in a globally consistent manner, legislation, self-regulation, technical solutions and combination solutions are different ways that can be implemented.",
"title": ""
},
{
"docid": "a1a81d420ef5702483859b01633bb14c",
"text": "Many sorting algorithms have been studied in the past, but there are only a few algorithms that can effectively exploit both SIMD instructions and thread-level parallelism. In this paper, we propose a new parallel sorting algorithm, called aligned-access sort (AA-sort), for shared-memory multi processors. The AA-sort algorithm takes advantage of SIMD instructions. The key to high performance is eliminating unaligned memory accesses that would reduce the effectiveness of SIMD instructions. We implemented and evaluated the AA-sort on PowerPCreg 970MP and Cell Broadband Enginetrade. In summary, a sequential version of the AA-sort using SIMD instructions outperformed IBM's optimized sequential sorting library by 1.8 times and GPUTeraSort using SIMD instructions by 3.3 times on PowerPC 970MP when sorting 32 M of random 32-bit integers. Furthermore, a parallel version of AA-sort demonstrated better scalability with increasing numbers of cores than a parallel version of GPUTeraSort on both platforms.",
"title": ""
},
{
"docid": "83530198697ed04a3870a1e9d403728b",
"text": "Conventional charge pump circuits use a fixed switching frequency that leads to power efficiency degradation for loading less than the rated loading. This paper proposes a level shifter design that also functions as a frequency converter to automatically vary the switching frequency of a dual charge pump circuit according to the loading. The switching frequency is designed to be 25 kHz with 12 mA loading on both inverting and noninverting outputs. The switching frequency is automatically reduced when loading is lighter to improve the power efficiency. The frequency tuning range of this circuit is designed to be from 100 Hz to 25 kHz. A start-up circuit is included to ensure proper pumping action and avoid latch-up during power-up. A slow turn-on, fast turn-off driving scheme is used in the clock buffer to reduce power dissipation. The new dual charge pump circuit was fabricated in a 3m p-well double-poly single-metal CMOS technology with breakdown voltage of 18 V, the die size is 4.7 4.5 mm2. For comparison, a charge pump circuit with conventional level shifter and clock buffer was also fabricated. The measured results show that the new charge pump has two advantages: 1) the power dissipation of the charge pump is improved by a factor of 32 at no load and by 2% at rated loading of 500 and 2) the breakdown voltage requirement is reduced from 19.2 to 17 V.",
"title": ""
},
{
"docid": "82c557b21509c30f34ac8d0463a027af",
"text": "Formant frequency data for /l/ in 23 languages/dialects where the consonant may be typically clear or dark show that the two varieties of /l/ are set in contrast mostly in the context of /i/ but also next to /a/, and that a few languages/dialects may exhibit intermediate degrees of darkness in the consonant. F2 for /l/ is higher utterance initially than utterance finally, more so if the lateral is clear than if it is dark; moreover, the initial and final allophones may be characterized as intrinsic (in most languages/dialects) or extrinsic (in several English dialects, Czech and Dutch) depending on whether the position-dependent frequency difference in question is below or above 200/ 300 Hz. The paper also reports a larger degree of vowel coarticulation for clear /l/ than for dark /l/ and in initial than in final position. These results are interpreted in terms of the production mechanisms involved in the realization of the two /l/ varieties in the different positional and vowel context conditions subjected to investigation. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5faa5e5eff47883711ba8e285cd9aefb",
"text": "At present smart phone usage is increasing dramatically due to their extended functionality than cell phones. Smartphones are like small computer which accompany us everywhere and allow us to access various functionalities. Smart phone is a personnel device which provides entertainment, information, making call, writing SMS and accessing different applications like check the email, to browse the Internet or to play games with our friends. We have to install applications on our smartphone in order to take all the advantage that these devices offer. The increasing importance of smart phones has increased competition among technology giants to take over the bigger part of the market share for mobile platform. As a result, in 2005 Google introduced Android (developed by Andy Rubin the Director of Mobile Platforms for Google), an open source mobile platform for smart phone devices which is consisting of a Linux Kernel, runtime environment, development framework, libraries and key applications. This paper aims to deal with the comparison between different smartphones like Android OS (Google), iOS (Apple), Symbian (Nokia) & Blackberry OS (RIM).",
"title": ""
},
{
"docid": "34441c93c20072074e4f0bef4f681ac0",
"text": "In object detection, object proposal methods have been widely used to generate candidate regions which may contain objects. Object proposal based on superpixel merging is one kind of object proposal methods, and the merging strategies of superpixels have been extensively explored. However, the ranking of generated candidate proposals still remains to be further studied. In this paper, we formulate the ranking of object proposals as a learning to rank problem, and propose a novel object proposals ranking method based on ListNet. In the proposed method, Selective Search, which is one of the state-of-the-art object proposal methods based on superpixel merging, is adopted to generate the candidate proposals. During the superpixel merging process, five discriminative objectness features are extracted from superpixel sets and the corresponding bounding boxes. Then, to weight each feature, a linear neural network is learned based on ListNet. Consequently, objectness scores can be computed for final candidate proposals ranking. Extensive experiments demonstrate the effectiveness and robustness of the proposed method. Preprint submitted to Neurocomputing June 8, 2017",
"title": ""
},
{
"docid": "89cc39369eeb6c12a12c61e210c437e3",
"text": "Multimodal learning with deep Boltzmann machines (DBMs) is an generative approach to fuse multimodal inputs, and can learn the shared representation via Contrastive Divergence (CD) for classification and information retrieval tasks. However, it is a 2-fan DBM model, and cannot effectively handle multiple prediction tasks. Moreover, this model cannot recover the hidden representations well by sampling from the conditional distribution when more than one modalities are missing. In this paper, we propose a Kfan deep structure model, which can handle the multi-input and muti-output learning problems effectively. In particular, the deep structure has K-branch for different inputs where each branch can be composed of a multi-layer deep model, and a shared representation is learned in an discriminative manner to tackle multimodal tasks. Given the deep structure, we propose two objective functions to handle two multi-input and multi-output tasks: joint visual restoration and labeling, and the multi-view multi-calss object recognition tasks. To estimate the model parameters, we initialize the deep model parameters with CD to maximize the joint distribution, and then we use backpropagation to update the model according to specific objective function. The experimental results demonstrate that the model can effectively leverages multi-source information and predict multiple tasks well over competitive baselines.",
"title": ""
},
{
"docid": "e9e620742992a6b6aa50e6e0e5894b6f",
"text": "A significant amount of information in today’s world is stored in structured and semistructured knowledge bases. Efficient and simple methods to query these databases are essential and must not be restricted to only those who have expertise in formal query languages. The field of semantic parsing deals with converting natural language utterances to logical forms that can be easily executed on a knowledge base. In this survey, we examine the various components of a semantic parsing system and discuss prominent work ranging from the initial rule based methods to the current neural approaches to program synthesis. We also discuss methods that operate using varying levels of supervision and highlight the key challenges involved in the learning of such systems.",
"title": ""
}
] |
scidocsrr
|
f9061c4761deded68bf0942b5b8cfe60
|
Metal object detection circuit with non-overlapped coils for wireless EV chargers
|
[
{
"docid": "32d6b6d362896c1d880cb26bbd034034",
"text": "For commercialization of wireless stationary EV chargers, the foreign object detection (FOD) on a power supply coil and the location detection (LOD) of electric vehicles (EVs) are needed. In this paper, a dual-purpose non-overlapped coil sets as both FOD and LOD are newly proposed. Not only the existence of conductive object debris on a power supply coil but also the location of them are determined by an induced voltage difference of the coil sets. By measuring the induced voltage of the coil sets, displacements between a power supply coil and a pick-up coil can be also found to inform drivers of their EV alignments. Moreover, the proposed FOD and LOD methods make no contribution to any power losses. The proposed non-overlapped coil sets have been demonstrated by simulations and experiments for a prototype coil set. Throughout experiments, the induced voltage difference of a coil set shows 2.19 mV (ideally zero) without foreign objects while the induced voltage difference significantly increases by 78.2 mV, which is about 35 times of the value without objects, when eight conductive coins are located on a power supply coil. Also, it is found that the LOD can be achieved by measuring the variation of the induced voltages in coil sets when a pick-up coil moves on a power supply coil.",
"title": ""
}
] |
[
{
"docid": "c8e5257c2ed0023dc10786a3071c6e6a",
"text": "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.",
"title": ""
},
{
"docid": "caa6f0769cc62cbde30b96ae31dabb3f",
"text": "ThyssenKrupp Transrapid developed a new motor winding for synchronous long stator propulsion with optimized grounding system. The motor winding using a cable without metallic screen is presented. The function as well as the mechanical and electrical design of the grounding system is illustrated. The new design guarantees a much lower electrical stress than the load capacity of the system. The main design parameters, simulation and testing results as well as calculations of the electrical stress of the grounding system are described.",
"title": ""
},
{
"docid": "f421fc30b899181ee7c819fe21fcb5d6",
"text": "Realizing a true three-dimensional (3D) display environment has been an ultimate goal of visual computing communities. Burton Inc. in Japan and others built upon the modern laser-plasma technology to come up with 3D Aerial Display device in 2006, with which the users are allowed to plot a unicursal series of illuminants freely in the midair, and thus the surrounding audience can enjoy watching different aspects of the 3D image from different positions, without any eye strain [Kimura et al. 2006].",
"title": ""
},
{
"docid": "dd54483344a58ec7822237d1a222d67e",
"text": "It is widely recognized that the risk of fractures is closely related to the typical decline in bone mass during the ageing process in both women and men. Exercise has been reported as one of the best non-pharmacological ways to improve bone mass throughout life. However, not all exercise regimens have the same positive effects on bone mass, and the studies that have evaluated the role of exercise programmes on bone-related variables in elderly people have obtained inconclusive results. This systematic review aims to summarize and update present knowledge about the effects of different types of training programmes on bone mass in older adults and elderly people as a starting point for developing future interventions that maintain a healthy bone mass and higher quality of life in people throughout their lifetime. A literature search using MEDLINE and the Cochrane Central Register of Controlled Trials databases was conducted and bibliographies for studies discussing the effect of exercise interventions in older adults published up to August 2011 were examined. Inclusion criteria were met by 59 controlled trials, 7 meta-analyses and 8 reviews. The studies included in this review indicate that bone-related variables can be increased, or at least the common decline in bone mass during ageing attenuated, through following specific training programmes. Walking provides a modest increase in the loads on the skeleton above gravity and, therefore, this type of exercise has proved to be less effective in osteoporosis prevention. Strength exercise seems to be a powerful stimulus to improve and maintain bone mass during the ageing process. Multi-component exercise programmes of strength, aerobic, high impact and/or weight-bearing training, as well as whole-body vibration (WBV) alone or in combination with exercise, may help to increase or at least prevent decline in bone mass with ageing, especially in postmenopausal women. This review provides, therefore, an overview of intervention studies involving training and bone measurements among older adults, especially postmenopausal women. Some novelties are that WBV training is a promising alternative to prevent bone fractures and osteoporosis. Because this type of exercise under prescription is potentially safe, it may be considered as a low impact alternative to current methods combating bone deterioration. In other respects, the ability of peripheral quantitative computed tomography (pQCT) to assess bone strength and geometric properties may prove advantageous in evaluating the effects of training on bone health. As a result of changes in bone mass becoming evident by pQCT even when dual energy X-ray absortiometry (DXA) measurements were unremarkable, pQCT may provide new knowledge about the effects of exercise on bone that could not be elucidated by DXA. Future research is recommended including longest-term exercise training programmes, the addition of pQCT measurements to DXA scanners and more trials among men, including older participants.",
"title": ""
},
{
"docid": "205c0c94d3f2dbadbc7024c9ef868d97",
"text": "Solid dispersions (SD) of curcuminpolyvinylpyrrolidone in the ratio of 1:2, 1:4, 1:5, 1:6, and 1:8 were prepared in an attempt to increase the solubility and dissolution. Solubility, dissolution, powder X-ray diffraction (XRD), differential scanning calorimetry (DSC) and Fourier transform infrared spectroscopy (FTIR) of solid dispersions, physical mixtures (PM) and curcumin were evaluated. Both solubility and dissolution of curcumin solid dispersions were significantly greater than those observed for physical mixtures and intact curcumin. The powder X-ray diffractograms indicated that the amorphous curcumin was obtained from all solid dispersions. It was found that the optimum weight ratio for curcumin:PVP K-30 is 1:6. The 1:6 solid dispersion still in the amorphous from after storage at ambient temperature for 2 years and the dissolution profile did not significantly different from freshly prepared. Keywords—Curcumin, polyvinylpyrrolidone K-30, solid dispersion, dissolution, physicochemical.",
"title": ""
},
{
"docid": "83071476dae1d2a52e137683616668c2",
"text": "We present a strategy to make productive use of semantically-related social data, from a user-centered semantic network, in order to help users (tourists and citizens in general) to discover cultural heritage, points of interest and available services in a smart city. This data can be used to personalize recommendations in a smart tourism application. Our approach is based on flow centrality metrics typically used in social network analysis: flow betweenness, flow closeness and eccentricity. These metrics are useful to discover relevant nodes within the network yielding nodes that can be interpreted as suggestions (venues or services) to users. We describe the semantic network built on graph model, as well as social metrics algorithms used to produce recommendations. We also present challenges and results from a prototypical implementation applied to the case study of the City of Puebla, Mexico.",
"title": ""
},
{
"docid": "ff83b2522f73ac53b6f362a0b4a20e90",
"text": "The mammalian striatum receives its main excitatory input from the two types of cortical pyramidal neurons of layer 5 of the cerebral cortex - those with only intratelencephalic connections (IT-type) and those sending their main axon to the brainstem via the pyramidal tract (PT-type). These two neurons types are present in layer 5 of all cortical regions, and thus they appear to project together to all parts of striatum. These two neuron types, however, differ genetically, morphologically, and functionally, with IT-type neurons conveying sensory and motor planning information to striatum and PT-type neurons conveying an efference copy of motor commands (for motor cortex at least). Anatomical and physiological data for rats, and more recent data for primates, indicate that these two cortical neuron types also differ in their targeting of the two main types of striatal projection neurons, with the IT-type input preferentially innervating direct pathway neurons and the PT-type input preferentially innervating indirect pathway striatal neurons. These findings have implications for understanding how the direct and indirect pathways carry out their respective roles in movement facilitation and movement suppression, and they have implications for understanding the role of corticostriatal synaptic plasticity in adaptive motor control by the basal ganglia.",
"title": ""
},
{
"docid": "5300e9938a545895c8b97fe6c9d06aa5",
"text": "Background subtraction is a common computer vision task. We analyze the usual pixel-level approach. We develop an efficient adaptive algorithm using Gaussian mixture probability density. Recursive equations are used to constantly update the parameters and but also to simultaneously select the appropriate number of components for each pixel.",
"title": ""
},
{
"docid": "8e5d18e33e4024d686d6c63cd879d616",
"text": "Integrated Layer Processing (ILP) is an implementation concept which \"permit[s] the implementor the option of performing all the [data] manipulation steps in one or two integrated processing loops\" [1]. To estimate the achievable benefits of ILP a file transfer application with an encryption function on top of a user-level TCP has been implemented and the performance of the application in terms of throughput and packet processing times has been measured. The results show that it is possible to obtain performance benefits by integrating marshalling, encryption and TCP checksum calculation. They also show that the benefits are smaller than in simple experiments, where ILP effects have not been evaluated in a complete protocol environment. Simulations of memory access and cache hit rate show that the main benefit of ILP is reduced memory accesses rather than an improved cache hit rate. The results further show that data manipulation characteristics may significantly influence the cache behavior and the achievable performance gain of ILP.",
"title": ""
},
{
"docid": "0bf227d17e76d1fb16868ff90d75e94c",
"text": "The high-efficiency current-mode (CM) and voltage-mode (VM) Class-E power amplifiers (PAs) for MHz wireless power transfer (WPT) systems are first proposed in this paper and the design methodology for them is presented. The CM/VM Class-E PA is able to deliver the increasing/decreasing power with the increasing load and the efficiency maintains high even when the load varies in a wide range. The high efficiency and certain operation mode are realized by introducing an impedance transformation network with fixed components. The efficiency, output power, circuit tolerance, and robustness are all taken into consideration in the design procedure, which makes the CM and the VM Class-E PAs especially practical and efficient to real WPT systems. 6.78-MHz WPT systems with the CM and the VM Class-E PAs are fabricated and compared to that with the classical Class-E PA. The measurement results show that the output power is proportional to the load for the CM Class-E PA and is inversely proportional to the load for the VM Class-E PA. The efficiency for them maintains high, over 83%, when the load of PA varies from 10 to 100 $\\Omega$, while the efficiency of the classical Class-E is about 60% in the worst case. The experiment results validate the feasibility of the proposed design methodology and show that the CM and the VM Class-E PAs present superior performance in WPT systems compared to the traditional Class-E PA.",
"title": ""
},
{
"docid": "e7772ed75853d4d16641b41ad2abdcfe",
"text": "A 3D shape signature is a compact representation for some essence of a shape. Shape signatures are commonly utilized as a fast indexing mechanism for shape retrieval. Effective shape signatures capture some global geometric properties which are scale, translation, and rotation invariant. In this paper, we introduce an effective shape signature which is also pose-oblivious. This means that the signature is also insensitive to transformations which change the pose of a 3D shape such as skeletal articulations. Although some topology-based matching methods can be considered pose-oblivious as well, our new signature retains the simplicity and speed of signature indexing. Moreover, contrary to topology-based methods, the new signature is also insensitive to the topology change of the shape, allowing us to match similar shapes with different genus. Our shape signature is a 2D histogram which is a combination of the distribution of two scalar functions defined on the boundary surface of the 3D shape. The first is a definition of a novel function called the local-diameter function. This function measures the diameter of the 3D shape in the neighborhood of each vertex. The histogram of this function is an informative measure of the shape which is insensitive to pose changes. The second is the centricity function that measures the average geodesic distance from one vertex to all other vertices on the mesh. We evaluate and compare a number of methods for measuring the similarity between two signatures, and demonstrate the effectiveness of our pose-oblivious shape signature within a 3D search engine application for different databases containing hundreds of models",
"title": ""
},
{
"docid": "d212d81105e3573b5a7a33695fa3a764",
"text": "To achieve tasks in unknown environments with high reliability, highly accurate localization during task execution is necessary for humanoid robots. In this paper, we discuss a localization system which can be applied to a humanoid robot when executing tasks in the real world. During such tasks, humanoid robots typically do not possess a referential to a constant horizontal plane which can in turn be used as part of fast and cost efficient localization methods. We solve this problem by first computing an improved odometry estimate through fusing visual odometry, feedforward commands from gait generator and orientation from inertia sensors. This estimate is used to generate a 3D point cloud from the accumulation of successive laser scans and such point cloud is then properly sliced to create a constant height horizontal virtual scan. Finally, this slice is used as an observation base and fed to a 2D SLAM method. The fusion process uses a velocity error model to achieve greater accuracy, which parameters are measured on the real robot. We evaluate our localization system in a real world task execution experiment using the JAXON robot and show how our system can be used as a practical solution for humanoid robots localization during complex tasks execution processes.",
"title": ""
},
{
"docid": "085ec38c3e756504be93ac0b94483cea",
"text": "Low power wide area (LPWA) networks are making spectacular progress from design, standardization, to commercialization. At this time of fast-paced adoption, it is of utmost importance to analyze how well these technologies will scale as the number of devices connected to the Internet of Things inevitably grows. In this letter, we provide a stochastic geometry framework for modeling the performance of a single gateway LoRa network, a leading LPWA technology. Our analysis formulates the unique peculiarities of LoRa, including its chirp spread-spectrum modulation technique, regulatory limitations on radio duty cycle, and use of ALOHA protocol on top, all of which are not as common in today’s commercial cellular networks. We show that the coverage probability drops exponentially as the number of end-devices grows due to interfering signals using the same spreading sequence. We conclude that this fundamental limiting factor is perhaps more significant toward LoRa scalability than for instance spectrum restrictions. Our derivations for co-spreading factor interference found in LoRa networks enables rigorous scalability analysis of such networks.",
"title": ""
},
{
"docid": "0619308f0a79fb33d91a3a8db2a0db14",
"text": "FPGA CAD tool parameters controlling synthesis optimizations, place and route effort, mapping criteria along with user-supplied physical constraints can affect timing results of the circuit by as much as 70% without any change in original source code. A correct selection of these parameters across a diverse set of benchmarks with varying characteristics and design goals is challenging. The sheer number of parameters and option values that can be selected is large (thousands of combinations for modern CAD tools) with often conflicting interactions. In this paper, we present InTime, a machine-learning approach supported by a cloud-based (or cluster-based) compilation infrastructure for automating the selection of these parameters effectively to minimize timing costs. InTime builds a database of results from a series of preliminary runs based on canned configurations of CAD options. It then learns from these runs to predict the next series of CAD tool options to improve timing results. Towards the end, we rely on a limited degree of statistical sampling of certain options like placer and synthesis seeds to further tighten results. Using our approach, we show 70% reduction in final timing results across industrial benchmark problems for the Altera CAD flow. This is 30% better than vendor-supplied design space exploration tools that attempts a similar optimization using canned heuristics.",
"title": ""
},
{
"docid": "1f6bf9c06b7ee774bc08848293b5c94a",
"text": "The success of a virtual learning environment (VLE) depends to a considerable extent on student acceptance and use of such an e-learning system. After critically assessing models of technology adoption, including the Technology Acceptance Model (TAM), TAM2, and the Unified Theory of Acceptance and Usage of Technology (UTAUT), we build a conceptual model to explain the differences between individual students in the level of acceptance and use of a VLE. This model extends TAM2 and includes subjective norm, personal innovativeness in the domain of information technology, and computer anxiety. Data were collected from 45 Chinese participants in an Executive MBA program. After performing satisfactory reliability and validity checks, the structural model was tested with the use of PLS. Results indicate that perceived usefulness has a direct effect on VLE use. Perceived ease of use and subjective norm have only indirect effects via perceived usefulness. Both personal innovativeness and computer anxiety have direct effects on perceived ease of use only. Implications are that program managers in education should not only concern themselves with basic system design but also explicitly address individual differences between VLE users. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c66d556686c60af51f007ec36c29bd38",
"text": "The main question we try to answer in this work is whether it is feasible to employ super-resolution (SR) algorithms to increase the spatial resolution of endoscopic high-definition (HD) images in order to reveal new details which may have got lost due to the limited endoscope magnification of the HD endoscope used (e.g. mucosal structures). For this purpose we compare the quality achieved of different SR methods. This is done on standard test images as well as on images obtained from endoscopic video frames. We also investigate whether compression artifacts have a noticeable effect on the SR results. We show that, due to several limitations in case of endoscopic videos, we are not consistently able to achieve a higher visual quality when using SR algorithms instead of bicubic interpolation.",
"title": ""
},
{
"docid": "7bdbfd11a4aa723d3b5361f689d93698",
"text": "We discuss the characteristics of constructive news comments, and present methods to identify them. First, we define the notion of constructiveness. Second, we annotate a corpus for constructiveness. Third, we explore whether available argumentation corpora can be useful to identify constructiveness in news comments. Our model trained on argumentation corpora achieves a top accuracy of 72.59% (baseline=49.44%) on our crowdannotated test data. Finally, we examine the relation between constructiveness and toxicity. In our crowd-annotated data, 21.42% of the non-constructive comments and 17.89% of the constructive comments are toxic, suggesting that non-constructive comments are not much more toxic than constructive comments.",
"title": ""
},
{
"docid": "229288405fbbc0779c42fb311754ca1d",
"text": "We present a system for monocular simultaneous localization and mapping (mono-SLAM) relying solely on video input. Our algorithm makes it possible to precisely estimate the camera trajectory without relying on any motion model. The estimation is completely incremental: at a given time frame, only the current location is estimated while the previous camera positions are never modified. In particular, we do not perform any simultaneous iterative optimization of the camera positions and estimated 3D structure (local bundle adjustment). The key aspect of the system is a fast and simple pose estimation algorithm that uses information not only from the estimated 3D map, but also from the epipolar constraint. We show that the latter leads to a much more stable estimation of the camera trajectory than the conventional approach. We perform high precision camera trajectory estimation in urban scenes with a large amount of clutter. Using an omnidirectional camera placed on a vehicle, we cover one of the longest distance ever reported, up to 2.5 kilometers.",
"title": ""
},
{
"docid": "486417082d921eba9320172a349ee28f",
"text": "Circulating tumor cells (CTCs) are a popular topic in cancer research because they can be obtained by liquid biopsy, a minimally invasive procedure with more sample accessibility than tissue biopsy, to monitor a patient's condition. Over the past decades, CTC research has covered a wide variety of topics such as enumeration, profiling, and correlation between CTC number and patient overall survival. It is important to isolate and enrich CTCs before performing CTC analysis because CTCs in the blood stream are very rare (0⁻10 CTCs/mL of blood). Among the various approaches to separating CTCs, here, we review the research trends in the isolation and analysis of CTCs using microfluidics. Microfluidics provides many attractive advantages for CTC studies such as continuous sample processing to reduce target cell loss and easy integration of various functions into a chip, making \"do-everything-on-a-chip\" possible. However, tumor cells obtained from different sites within a tumor exhibit heterogenetic features. Thus, heterogeneous CTC profiling should be conducted at a single-cell level after isolation to guide the optimal therapeutic path. We describe the studies on single-CTC analysis based on microfluidic devices. Additionally, as a critical concern in CTC studies, we explain the use of CTCs in cancer research, despite their rarity and heterogeneity, compared with other currently emerging circulating biomarkers, including exosomes and cell-free DNA (cfDNA). Finally, the commercialization of products for CTC separation and analysis is discussed.",
"title": ""
},
{
"docid": "1fdefb217531d57dbae14a3e9572e861",
"text": "Quantum teleportation — the transmission and reconstruction over arbitrary distances of the state of a quantum system — is demonstrated experimentally. During teleportation, an initial photon which carries the polarization that is to be transferred and one of a pair of entangled photons are subjected to a measurement such that the second photon of the entangled pair acquires the polarization of the initial photon. This latter photon can be arbitrarily far away from the initial one. Quantum teleportation will be a critical ingredient for quantum computation networks.",
"title": ""
}
] |
scidocsrr
|
f126b61049bbb51f626739997889d900
|
Investigating users' query formulations for cognitive search intents
|
[
{
"docid": "b585947e882fca6f07b65dc940cc819f",
"text": "One way to help all users of commercial Web search engines be more successful in their searches is to better understand what those users with greater search expertise are doing, and use this knowledge to benefit everyone. In this paper we study the interaction logs of advanced search engine users (and those not so advanced) to better understand how these user groups search. The results show that there are marked differences in the queries, result clicks, post-query browsing, and search success of users we classify as advanced (based on their use of query operators), relative to those classified as non-advanced. Our findings have implications for how advanced users should be supported during their searches, and how their interactions could be used to help searchers of all experience levels find more relevant information and learn improved searching strategies.",
"title": ""
}
] |
[
{
"docid": "55dd9bf3372b1ae383d43664d60e9da8",
"text": "In this report, we consider the task of automated assessment of English as a Second Language (ESOL) examination scripts written in response to prompts eliciting free text answers. We review and critically evaluate previous work on automated assessment for essays, especially when applied to ESOL text. We formally define the task as discriminative preference ranking and develop a new system trained and tested on a corpus of manually-graded scripts. We show experimentally that our best performing system is very close to the upper bound for the task, as defined by the agreement between human examiners on the same corpus. Finally we argue that our approach, unlike extant solutions, is relatively prompt-insensitive and resistant to subversion, even when its operating principles are in the public domain. These properties make our approach significantly more viable for high-stakes assessment.",
"title": ""
},
{
"docid": "7aeb10faf8590ed9f4054bafcd4dee0c",
"text": "Concept, design, and measurement results of a frequency-modulated continuous-wave radar sensor in low-temperature co-fired ceramics (LTCC) technology is presented in this paper. The sensor operates in the frequency band between 77–81 GHz. As a key component of the system, wideband microstrip grid array antennas with a broadside beam are presented and discussed. The combination with a highly integrated feeding network and a four-channel transceiver chip based on SiGe technology results in a very compact LTCC RF frontend (23 mm <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\times$</tex></formula> 23 mm). To verify the feasibility of the concept, first radar measurement results are presented.",
"title": ""
},
{
"docid": "dcfb5ebabf07e87843668338d8d9927a",
"text": "Click Fraud Bots pose a significant threat to the online economy. To-date efforts to filter bots have been geared towards identifiable useragent strings, as epitomized by the IAB's Robots and Spiders list. However bots designed to perpetrate malicious activity or fraud, are designed to avoid detection with these kinds of lists, and many use very sophisticated schemes for cloaking their activities. In order to combat this emerging threat, we propose the creation of Bot Signatures for training and evaluation of candidate Click Fraud Detection Systems. Bot signatures comprise keyed records connected to case examples. We demonstrate the technique by developing 8 simulated examples of Bots described in the literature including Click Bot A.",
"title": ""
},
{
"docid": "1159d83815e18d7822b8eb39c50e438d",
"text": "Imbalanced time series classification (TSC) involving many real-world applications has increasingly captured attention of researchers. Previous work has proposed an intelligent-structure preserving over-sampling method (SPO), which the authors claimed achieved better performance than other existing over-sampling and state-of-the-art methods in TSC. The main disadvantage of over-sampling methods is that they significantly increase the computational cost of training a classification model due to the addition of new minority class instances to balance data-sets with high dimensional features. These challenging issues have motivated us to find a simple and efficient solution for imbalanced TSC. Statistical tests are applied to validate our conclusions. The experimental results demonstrate that this proposed simple random under-sampling technique with SVM is efficient and can achieve results that compare favorably with the existing complicated SPO method for imbalanced TSC.",
"title": ""
},
{
"docid": "71b5a4d02be14868302f1b60d0a26484",
"text": "In cloud computing, data owners host their data on cloud servers and users (data consumers) can access the data from cloud servers. Due to the data outsourcing, however, this new paradigm of data hosting service also introduces new security challenges, which requires an independent auditing service to check the data integrity in the cloud. Some existing remote integrity checking methods can only serve for static archive data and, thus, cannot be applied to the auditing service since the data in the cloud can be dynamically updated. Thus, an efficient and secure dynamic auditing protocol is desired to convince data owners that the data are correctly stored in the cloud. In this paper, we first design an auditing framework for cloud storage systems and propose an efficient and privacy-preserving auditing protocol. Then, we extend our auditing protocol to support the data dynamic operations, which is efficient and provably secure in the random oracle model. We further extend our auditing protocol to support batch auditing for both multiple owners and multiple clouds, without using any trusted organizer. The analysis and simulation results show that our proposed auditing protocols are secure and efficient, especially it reduce the computation cost of the auditor.",
"title": ""
},
{
"docid": "d4a9ebafbc8f35380ab2b3bbbefd5583",
"text": "We present a GPU implementation of LAMMPS, a widely-used parallel molecular dynamics (MD) software package, and show 5x to 13x single node speedups versus the CPU-only version of LAMMPS. This new CUDA package for LAMMPS also enables multi-GPU simulation on hybrid heterogeneous clusters, using MPI for inter-node communication, CUDA kernels on the GPU for all methods working with particle data, and standard LAMMPS C++ code for CPU execution. Cell and neighbor list approaches are compared for best performance on GPUs, with thread-peratom and block-per-atom neighbor list variants showing best performance at low and high neighbor counts, respectively. Computational performance results of GPU-enabled LAMMPS are presented for a variety of materials classes (e.g. biomolecules, polymers, metals, semiconductors), along with a speed comparison versus other available GPU-enabled MD software. Finally, we show strong and weak scaling performance on a CPU/GPU cluster using up to 128 dual GPU nodes.",
"title": ""
},
{
"docid": "fd0a441610f5aef8aa29edd469dcf88a",
"text": "We treat with tools from convex analysis the general problem of cutting planes, separating a point from a (closed convex) set P . Crucial for this is the computation of extreme points in the so-called reverse polar set, introduced by E. Balas in 1979. In the polyhedral case, this enables the computation of cuts that define facets of P . We exhibit three (equivalent) optimization problems to compute such extreme points; one of them corresponds to selecting a specific normalization to generate cuts. We apply the above development to the case where P is (the closed convex hull of) a union, and more particularly a union of polyhedra (case of disjunctive cuts). We conclude with some considerations on the design of efficient cut generators. The paper also contains an appendix, reviewing some fundamental concepts of convex analysis.",
"title": ""
},
{
"docid": "b5c8d34b75dbbfdeb666fd76ef524be7",
"text": "Systematic Literature Reviews (SLR) may not provide insight into the \"state of the practice\" in SE, as they do not typically include the \"grey\" (non-published) literature. A Multivocal Literature Review (MLR) is a form of a SLR which includes grey literature in addition to the published (formal) literature. Only a few MLRs have been published in SE so far. We aim at raising the awareness for MLRs in SE by addressing two research questions (RQs): (1) What types of knowledge are missed when a SLR does not include the multivocal literature in a SE field? and (2) What do we, as a community, gain when we include the multivocal literature and conduct MLRs? To answer these RQs, we sample a few example SLRs and MLRs and identify the missing and the gained knowledge due to excluding or including the grey literature. We find that (1) grey literature can give substantial benefits in certain areas of SE, and that (2) the inclusion of grey literature brings forward certain challenges as evidence in them is often experience and opinion based. Given these conflicting viewpoints, the authors are planning to prepare systematic guidelines for performing MLRs in SE.",
"title": ""
},
{
"docid": "c443ca07add67d6fc0c4901e407c68f2",
"text": "This paper proposes a compiler-based programming framework that automatically translates user-written structured grid code into scalable parallel implementation code for GPU-equipped clusters. To enable such automatic translations, we design a small set of declarative constructs that allow the user to express stencil computations in a portable and implicitly parallel manner. Our framework translates the user-written code into actual implementation code in CUDA for GPU acceleration and MPI for node-level parallelization with automatic optimizations such as computation and communication overlapping. We demonstrate the feasibility of such automatic translations by implementing several structured grid applications in our framework. Experimental results on the TSUBAME2.0 GPU-based supercomputer show that the performance is comparable as hand-written code and good strong and weak scalability up to 256 GPUs.",
"title": ""
},
{
"docid": "d7f5449cf398b56a29c64adada7cf7d2",
"text": "Review The Prefrontal Cortex—An Update: Time Is of the Essence many of the principles discussed below apply also to the PFC of nonprimate species. Anatomy and Connections The PFC is the association cortex of the frontal lobe. In Los Angeles, California 90095 primates, it comprises areas 8–13, 24, 32, 46, and 47 according to the cytoarchitectonic map of Brodmann The physiology of the cerebral cortex is organized in (1909), recently updated for the monkey by Petrides and hierarchical manner. At the bottom of the cortical organi-Pandya (Figure 1). Phylogenetically, it is one of the latest zation, sensory and motor areas support specific sen-cortices to develop, having attained maximum relative sory and motor functions. Progressively higher areas—of growth in the human brain (Brodmann, 1912; Jerison, later phylogenetic and ontogenetic development—support 1994), where it constitutes nearly one-third of the neocor-functions that are progressively more integrative. The tex. Furthermore, the PFC undergoes late development in prefrontal cortex (PFC) constitutes the highest level of the course of ontogeny. In the human, by myelogenic and the cortical hierarchy dedicated to the representation synaptogenic criteria, the PFC is clearly late-maturing and execution of actions. The PFC can be subdivided in three major regions: Huttenlocher and Dabholkar, 1997). In the monkey's orbital, medial, and lateral. The orbital and medial re-PFC, myelogenesis also seems to develop late (Gibson, gions are involved in emotional behavior. The lateral 1991). However, the assumption that the synaptic struc-region, which is maximally developed in the human, pro-ture of the PFC lags behind that of other neocortical vides the cognitive support to the temporal organization areas has been challenged with morphometric data of behavior, speech, and reasoning. This function of (Bourgeois et al., 1994). In any case, imaging studies temporal organization is served by several subordinate indicate that, in the human, prefrontal areas do not attain functions that are closely intertwined (e.g., temporal in-full maturity until adolescence (Chugani et al., 1987; tegration, working memory, set). Whatever areal special-Paus et al., 1999; Sowell et al., 1999). This conclusion ization can be discerned in the PFC is not so much is consistent with the behavioral evidence that these attributable to the topographical distribution of those areas are critical for those higher cognitive functions functions as to the nature of the cognitive information that develop late, such as propositional speech and with which they operate. Much of the prevalent confu-reasoning. sion in the PFC literature derives from …",
"title": ""
},
{
"docid": "0389a49d23b72bf29c0a186de9566939",
"text": "IEEE 1451 has been around for almost 20 years and in that time it has seen many changes in the world of smart sensors. One of the most distinct paradigms to arise was the Internet-of-Things and with it, the popularity of light-weight and simple to implement communication protocols. One of these protocols in particular, MQ Telemetry Transport has become synonymous with large cloud service providers such as Amazon Web Services, IBM Watson, and Microsoft Azure, along with countless other services. While MQTT had be traditionally used in controlled networks within server centers, the simplicity of the protocol has caused it to be utilized on the open internet. Now being called the language of the IoT, it seems obvious that any standard that is aiming to bring a common network service layer to the IoT architecture should be able to utilize MQTT. This paper proposes potential methodologies to extend the Common Architectures and Network services found in the IEEE 1451 Family of Standard into applications which utilize MQTT.",
"title": ""
},
{
"docid": "70c82bb98d0e558280973d67429cea8a",
"text": "We present an algorithm for separating the local gradient information and Lambertian color by using 4-source color photometric stereo in the presence of highlights and shadows. We assume that the surface reflectance can be approximated by the sum of a Lambertian and a specular component. The conventional photometric method is generalized for color images. Shadows and highlights in the input images are detected using either spectral or directional cues and excluded from the recovery process, thus giving more reliable estimates of local surface parameters.",
"title": ""
},
{
"docid": "e9229d3ab3e9ec7e5020e50ca23ada0b",
"text": "Human beings have been recently reviewed as ‘metaorganisms’ as a result of a close symbiotic relationship with the intestinal microbiota. This assumption imposes a more holistic view of the ageing process where dynamics of the interaction between environment, intestinal microbiota and host must be taken into consideration. Age-related physiological changes in the gastrointestinal tract, as well as modification in lifestyle, nutritional behaviour, and functionality of the host immune system, inevitably affect the gut microbial ecosystem. Here we review the current knowledge of the changes occurring in the gut microbiota of old people, especially in the light of the most recent applications of the modern molecular characterisation techniques. The hypothetical involvement of the age-related gut microbiota unbalances in the inflamm-aging, and immunosenescence processes will also be discussed. Increasing evidence of the importance of the gut microbiota homeostasis for the host health has led to the consideration of medical/nutritional applications of this knowledge through the development of probiotic and prebiotic preparations specific for the aged population. The results of the few intervention trials reporting the use of pro/prebiotics in clinical conditions typical of the elderly will be critically reviewed.",
"title": ""
},
{
"docid": "fce6ac500501d0096aac3513639c2627",
"text": "Recent technological advances made necessary the use of the robots in various types of applications. Currently, the traditional robot-like scenarios dedicated to industrial applications with repetitive tasks, were replaced by applications which require human interaction. The main field of such applications concerns the rehabilitation and aid of elderly persons. In this study, we present a state-of-the-art of the main research advances in lower limbs actuated orthosis/wearable robots in the literature. This will include a review on researches covering full limb exoskeletons, lower limb exoskeletons and particularly the knee joint orthosis. Rehabilitation using treadmill based device and use of Functional Electrical Stimulation (FES) are also investigated. We discuss finally the challenges not yet solved such as issues related to portability, energy consumption, social constraints and high costs of theses devices.",
"title": ""
},
{
"docid": "6c730f32b02ca58f66e98f9fc5181484",
"text": "When analyzing a visualized network, users need to explore different sections of the network to gain insight. However, effective exploration of large networks is often a challenge. While various tools are available for users to explore the global and local features of a network, these tools usually require significant interaction activities, such as repetitive navigation actions to follow network nodes and edges. In this paper, we propose a structure-based suggestive exploration approach to support effective exploration of large networks by suggesting appropriate structures upon user request. Encoding nodes with vectorized representations by transforming information of surrounding structures of nodes into a high dimensional space, our approach can identify similar structures within a large network, enable user interaction with multiple similar structures simultaneously, and guide the exploration of unexplored structures. We develop a web-based visual exploration system to incorporate this suggestive exploration approach and compare performances of our approach under different vectorizing methods and networks. We also present the usability and effectiveness of our approach through a controlled user study with two datasets.",
"title": ""
},
{
"docid": "3a95b876619ce4b666278810b80cae77",
"text": "On 14 November 2016, northeastern South Island of New Zealand was struck by a major moment magnitude (Mw) 7.8 earthquake. Field observations, in conjunction with interferometric synthetic aperture radar, Global Positioning System, and seismology data, reveal this to be one of the most complex earthquakes ever recorded. The rupture propagated northward for more than 170 kilometers along both mapped and unmapped faults before continuing offshore at the island’s northeastern extent. Geodetic and field observations reveal surface ruptures along at least 12 major faults, including possible slip along the southern Hikurangi subduction interface; extensive uplift along much of the coastline; and widespread anelastic deformation, including the ~8-meter uplift of a fault-bounded block. This complex earthquake defies many conventional assumptions about the degree to which earthquake ruptures are controlled by fault segmentation and should motivate reevaluation of these issues in seismic hazard models.",
"title": ""
},
{
"docid": "66a4aa1e96596221729611add5390daf",
"text": "Table characteristics vary widely. Consequently, a great variety of computational approaches have been applied to table recognition. In this survey, the table recognition literature is presented as an interaction of table models, observations, transformations, and inferences. A table model defines the physical and logical structure of tables; the model is used to detect tables and to analyze and decompose the detected tables. Observations perform feature measurements and data lookup, transformations alter or restructure data, and inferences generate and test hypotheses. This presentation clarifies both the decisions made by a table recognizer and the assumptions and inferencing techniques that underlie these decisions.",
"title": ""
},
{
"docid": "295809398866d81cab85c44b145df56d",
"text": "This paper discusses the “Building-In Reliability” (BIR) approach to process development, particularly for technologies integrating Bipolar, CMOS, and DMOS devices (so-called BCD technologies). Examples of BIR reliability assessments include gate oxide integrity (GOI) through Time-Dependent Dielectric Breakdown (TDDB) studies and degradation of laterally diffused MOS (LDMOS) devices by Hot-Carrier Injection (HCI) stress. TDDB allows calculation of gate oxide failure rates based on operating voltage waveforms and temperature. HCI causes increases in LDMOS resistance (Rdson), which decreases efficiency in power applications.",
"title": ""
},
{
"docid": "975bc281e14246e29da61495e1e5dae1",
"text": "We have introduced the biomechanical research on snakes and developmental research on snake-like robots that we have been working on. We could not introduce everything we developed. There were also a smaller snake-like active endoscope; a large-sized snake-like inspection robot for nuclear reactor related facility, Koryu, 1 m in height, 3.5 m in length, and 350 kg in weight; and several other snake-like robots. Development of snake-like robots is still one of our latest research topics. We feel that the technical difficulties in putting snake-like robots into practice have almost been overcome by past research, so we believe that such practical use of snake-like robots can be realized soon.",
"title": ""
},
{
"docid": "f0ab3049cb9f66176c34a57d27592537",
"text": "We take a new, scenario-based look at evaluation in information visualization. Our seven scenarios, evaluating visual data analysis and reasoning, evaluating user performance, evaluating user experience, evaluating environments and work practices, evaluating communication through visualization, evaluating visualization algorithms, and evaluating collaborative data analysis were derived through an extensive literature review of over 800 visualization publications. These scenarios distinguish different study goals and types of research questions and are illustrated through example studies. Through this broad survey and the distillation of these scenarios, we make two contributions. One, we encapsulate the current practices in the information visualization research community and, two, we provide a different approach to reaching decisions about what might be the most effective evaluation of a given information visualization. Scenarios can be used to choose appropriate research questions and goals and the provided examples can be consulted for guidance on how to design one's own study.",
"title": ""
}
] |
scidocsrr
|
5dd623ca5cc151e4f047f67a3e4c3cfa
|
Differentially Private Recommender Systems: Building Privacy into the Netflix Prize Contenders
|
[
{
"docid": "3f46d98f695da70d75cefdeefe6b9a15",
"text": "Our RMSE=0.8643 solution is a linear blend of over 100 results. Some of them are new to this year, whereas many others belong to the set that was reported a year ago in our 2007 Progress Prize report [3]. This report is structured accordingly. In Section 2 we detail methods new to this year. In general, our view is that those newer methods deliver a superior performance compared to the methods we used a year ago. Throughout the description of the methods, we highlight the specific predictors that participated in the final blended solution. Nonetheless, the older methods still play a role in the blend, and thus in Section 3 we list those methods repeated from a year ago. Finally, we conclude with general thoughts in Section 4.",
"title": ""
},
{
"docid": "e49aa0d0f060247348f8b3ea0a28d3c6",
"text": "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.",
"title": ""
},
{
"docid": "21384ea8d80efbf2440fb09a61b03be2",
"text": "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.",
"title": ""
},
{
"docid": "d77fcf9947573c228ac1e000f29153d7",
"text": "Our final solution (RMSE=0.8712) consists of blending 107 individual results. Since many of these results are close variants, we first describe the main approaches behind them. Then, we will move to describing each individual result. The core components of the solution are published in our ICDM'2007 paper [1] (or, KDD-Cup'2007 paper [2]), and also in the earlier KDD'2007 paper [3]. We assume that the reader is familiar with these works and our terminology there. A movie-oriented k-NN approach was thoroughly described in our KDD-Cup'2007 paper [kNN]. We apply it as a post-processor for most other models. Interestingly, it was most effective when applied on residuals of RBMs [5], thereby driving the Quiz RMSE from 0.9093 to 0.8888. An earlier k-NN approach was described in the KDD'2007 paper ([3], Sec. 3) [Slow-kNN]. It appears that this earlier approach can achieve slightly more accurate results than the newer one, at the expense of a significant increase in running time. Consequently, we dropped the older approach, though some results involving it survive within the final blend. We also tried more naïve k-NN models, where interpolation weights are based on pairwise similarities between movies (see [2], Sec. 2.2). Specifically, we based weights on corr 2 /(1-corr 2) [Corr-kNN], or on mse-10 [MSE-kNN]. Here, corr is the Pearson correlation coefficient between the two respective movies, and mse is the mean squared distance between two movies (see definition of s ij in Sec. 4.1 of [2]). We also tried taking the interpolation weights as the \"support-based similarities\", which will be defined shortly [Supp-kNN]. Other variants that we tried for computing the interpolation coefficients are: (1) using our KDD-Cup'2007 [2] method on a binary user-movie matrix, which replaces every rating with \" 1 \" , and sets non-rated user-movie pairs to \" 0 \" [Bin-kNN]. (2) Taking results of factorization, and regressing the factors associated with the target movie on the factors associated with its neighbors. Then, the resulting regression coefficients are used as interpolation weights [Fctr-kNN]. As explained in our papers, we also tried user-oriented k-NN approaches. Either in a profound way (see: [1], Sec. 4.3; [3], Sec. 5) [User-kNN], or by just taking weights as pairwise similarities among users [User-MSE-kNN], which is the user-oriented parallel of the aforementioned [MSE-kNN]. Prior to computing interpolation weights, one has to choose the set of neighbors. We find the most similar neighbors based on an appropriate similarity measure. In …",
"title": ""
}
] |
[
{
"docid": "b1a0a76e73aa5b0a893e50b2fadf0ad2",
"text": "The field of occupational therapy, as with all facets of health care, has been profoundly affected by the changing climate of health care delivery. The combination of cost-effectiveness and quality of care has become the benchmark for and consequent drive behind the rise of managed health care delivery systems. The spawning of outcomes research is in direct response to the need for comparative databases to provide results of effectiveness in health care treatment protocols, evaluations of health-related quality of life, and cost containment measures. Outcomes management is the application of outcomes research data by all levels of health care providers. The challenges facing occupational therapists include proving our value in an economic trend of downsizing, competing within the medical profession, developing and affiliating with new payer sources, and reengineering our careers to meet the needs of the new, nontraditional health care marketplace.",
"title": ""
},
{
"docid": "c8f6eac662b30768b2e64b3bd3502e73",
"text": "This paper discusses the use of genetic programming (GP) and genetic algorithms (GA) to evolve solutions to a problem in robot control. GP is seen as an intuitive evolutionary method while GAs require an extra layer of human intervention. The infrastructures for the different evolutionary approaches are compared.",
"title": ""
},
{
"docid": "abf91984fd590173faf616bbcb806d92",
"text": "As high performance clusters continue to grow in size, the mean time between failures shrinks. Thus, the issues of fault tolerance and reliability are becoming one of the challenging factors for application scalability. The traditional disk-based method of dealing with faults is to checkpoint the state of the entire application periodically to reliable storage and restart from the recent checkpoint. The recovery of the application from faults involves (often manually) restarting applications on all processors and having it read the data from disks on all processors. The restart can therefore take minutes after it has been initiated. Such a strategy requires that the failed processor can be replaced so that the number of processors at checkpoint-time and recovery-time are the same. We present FTC-Charms ++, a fault-tolerant runtime based on a scheme for fast and scalable in-memory checkpoint and restart. At restart, when there is no extra processor, the program can continue to run on the remaining processors while minimizing the performance penalty due to losing processors. The method is useful for applications whose memory footprint is small at the checkpoint state, while a variation of this scheme - in-disk checkpoint/restart can be applied to applications with large memory footprint. The scheme does not require any individual component to be fault-free. We have implemented this scheme for Charms++ and AMPI (an adaptive version of MPl). This work describes the scheme and shows performance data on a cluster using 128 processors.",
"title": ""
},
{
"docid": "0db229bd2dfd325c0f23bc9437141e69",
"text": "The emergence of Infrastructure as a Service framework brings new opportunities, which also accompanies with new challenges in auto scaling, resource allocation, and security. A fundamental challenge underpinning these problems is the continuous tracking and monitoring of resource usage in the system. In this paper, we present ATOM, an efficient and effective framework to automatically track, monitor, and orchestrate resource usage in an Infrastructure as a Service (IaaS) system that is widely used in cloud infrastructure. We use novel tracking method to continuously track important system usage metrics with low overhead, and develop a Principal Component Analysis (PCA) based approach to continuously monitor and automatically find anomalies based on the approximated tracking results. We show how to dynamically set the tracking threshold based on the detection results, and further, how to adjust tracking algorithm to ensure its optimality under dynamic workloads. Lastly, when potential anomalies are identified, we use introspection tools to perform memory forensics on VMs guided by analyzed results from tracking and monitoring to identify malicious behavior inside a VM. We demonstrate the extensibility of ATOM through virtual machine (VM) clustering. The performance of our framework is evaluated in an open source IaaS system.",
"title": ""
},
{
"docid": "a880c96ff3fc3c52af2be7374b7d9fed",
"text": "Researchers have studied how people use self-tracking technologies and discovered a long list of barriers including lack of time and motivation as well as difficulty in data integration and interpretation. Despite the barriers, an increasing number of Quantified-Selfers diligently track many kinds of data about themselves, and some of them share their best practices and mistakes through Meetup talks, blogging, and conferences. In this work, we aim to gain insights from these \"extreme users,\" who have used existing technologies and built their own workarounds to overcome different barriers. We conducted a qualitative and quantitative analysis of 52 video recordings of Quantified Self Meetup talks to understand what they did, how they did it, and what they learned. We highlight several common pitfalls to self-tracking, including tracking too many things, not tracking triggers and context, and insufficient scientific rigor. We identify future research efforts that could help make progress toward addressing these pitfalls. We also discuss how our findings can have broad implications in designing and developing self-tracking technologies.",
"title": ""
},
{
"docid": "2899b31339acbd774aff53fc99590a45",
"text": "An ultra-wideband patch antenna is presented for K-band communication. The antenna is designed by employing stacked geometry and aperture-coupled technique. The rectangular patch shape and coaxial fed configuration is used for particular design. The ultra-wideband characteristics are achieved by applying a specific surface resistance of 75Ω/square to the upper rectangular patch and it is excited through a rectangular slot made on the lower patch element (made of copper). The proposed patch antenna is able to operate in the frequency range of 12-27.3 GHz which is used in radar and satellite communication, commonly named as K-band. By employing a technique of thicker substrate and by applying a specific surface resistance to the upper patch element, an impedance bandwidth of 77.8% is achieved having VSWR ≤ 2. It is noted that the gain of proposed antenna is linearly increased in the frequency range of 12-26 GHz and after that the gain is decreased up to 6 dBi. Simulation results are presented to demonstrate the performance of proposed ultra-wideband microstrip patch antenna.",
"title": ""
},
{
"docid": "e743bfe8c4f19f1f9a233106919c99a7",
"text": "We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.",
"title": ""
},
{
"docid": "f1b32219b6cd38cf8514d3ae2e926612",
"text": "Creativity refers to the potential to produce novel ideas that are task-appropriate and high in quality. Creativity in a societal context is best understood in terms of a dialectical relation to intelligence and wisdom. In particular, intelligence forms the thesis of such a dialectic. Intelligence largely is used to advance existing societal agendas. Creativity forms the antithesis of the dialectic, questioning and often opposing societal agendas, as well as proposing new ones. Wisdom forms the synthesis of the dialectic, balancing the old with the new. Wise people recognize the need to balance intelligence with creativity to achieve both stability and change within a societal context.",
"title": ""
},
{
"docid": "20c6b7417a31aceb39bcf1b1fa3fce4b",
"text": "In the process of dealing with the cutting calculation of Multi-axis CNC Simulation, the traditional Voxel Model not only will cost large computation time when judging whether the cutting happens or not, but also the data points may occupy greater storage space. So it cannot satisfy the requirement of real-time emulation, In the construction method of Compressed Voxel Model, it can satisfy the need of Multi-axis CNC Simulation, and storage space is relatively small. Also the model reconstruction speed is faster, but the Boolean computation in the cutting judgment is very complex, so it affects the real-time of CNC Simulation indirectly. Aimed at the shortcomings of these methods, we propose an improved solid modeling technique based on the Voxel model, which can meet the demand of real-time in cutting computation and Graphic display speed.",
"title": ""
},
{
"docid": "9458b13e5a87594140d7ee759e06c76c",
"text": "Digital ecosystem, as a neoteric terminology, has emerged along with the appearance of Business Ecosystem which is a form of naturally existing business network of small and medium enterprises. However, few researches have been found in the field of defining digital ecosystem. In this paper, by means of ontology technology as our research methodology, we propose to develop a conceptual model for digital ecosystem. By introducing an innovative ontological notation system, we create the hierarchical framework of digital ecosystem form up to down, based on the related theories form Digital ecosystem and business intelligence institute.",
"title": ""
},
{
"docid": "e1be36e185b024561190bcf85ab4c756",
"text": "Molecular (nucleic acid)-based diagnostics tests have many advantages over immunoassays, particularly with regard to sensitivity and specificity. Most on-site diagnostic tests, however, are immunoassay-based because conventional nucleic acid-based tests (NATs) require extensive sample processing, trained operators, and specialized equipment. To make NATs more convenient, especially for point-of-care diagnostics and on-site testing, a simple plastic microfluidic cassette (\"chip\") has been developed for nucleic acid-based testing of blood, other clinical specimens, food, water, and environmental samples. The chip combines nucleic acid isolation by solid-phase extraction; isothermal enzymatic amplification such as LAMP (Loop-mediated AMPlification), NASBA (Nucleic Acid Sequence Based Amplification), and RPA (Recombinase Polymerase Amplification); and real-time optical detection of DNA or RNA analytes. The microfluidic cassette incorporates an embedded nucleic acid binding membrane in the amplification reaction chamber. Target nucleic acids extracted from a lysate are captured on the membrane and amplified at a constant incubation temperature. The amplification product, labeled with a fluorophore reporter, is excited with a LED light source and monitored in situ in real time with a photodiode or a CCD detector (such as available in a smartphone). For blood analysis, a companion filtration device that separates plasma from whole blood to provide cell-free samples for virus and bacterial lysis and nucleic acid testing in the microfluidic chip has also been developed. For HIV virus detection in blood, the microfluidic NAT chip achieves a sensitivity and specificity that are nearly comparable to conventional benchtop protocols using spin columns and thermal cyclers.",
"title": ""
},
{
"docid": "89dd97465c8373bb9dabf3cbb26a4448",
"text": "Unidirectional connections from the cortex to the matrix of the corpus striatum initiate the cortico-basal ganglia (BG)-thalamocortical loop, thought to be important in momentary action selection and in longer-term fine tuning of behavioural repertoire; a discrete set of striatal compartments, striosomes, has the complementary role of registering or anticipating reward that shapes corticostriatal plasticity. Re-entrant signals traversing the cortico-BG loop impact predominantly frontal cortices, conveyed through topographically ordered output channels; by contrast, striatal input signals originate from a far broader span of cortex, and are far more divergent in their termination. The term ‘disclosed loop’ is introduced to describe this organisation: a closed circuit that is open to outside influence at the initial stage of cortical input. The closed circuit component of corticostriatal afferents is newly dubbed ‘operative’, as it is proposed to establish the bid for action selection on the part of an incipient cortical action plan; the broader set of converging corticostriatal afferents is described as contextual. A corollary of this proposal is that every unit of the striatal volume, including the long, C-shaped tail of the caudate nucleus, should receive a mandatory component of operative input, and hence include at least one area of BG-recipient cortex amongst the sources of its corticostriatal afferents. Individual operative afferents contact twin classes of GABAergic striatal projection neuron (SPN), distinguished by their neurochemical character, and onward circuitry. This is the basis of the classic direct and indirect pathway model of the cortico-BG loop. Each pathway utilises a serial chain of inhibition, with two such links, or three, providing positive and negative feedback, respectively. Operative co-activation of direct and indirect SPNs is, therefore, pictured to simultaneously promote action, and to restrain it. The balance of this rival activity is determined by the contextual inputs, which summarise the external and internal sensory environment, and the state of ongoing behavioural priorities. Notably, the distributed sources of contextual convergence upon a striatal locus mirror the transcortical network harnessed by the origin of the operative input to that locus, thereby capturing a similar set of contingencies relevant to determining action. The disclosed loop formulation of corticostriatal and subsequent BG loop circuitry, as advanced here, refines the operating rationale of the classic model and allows the integration of more recent anatomical and physiological data, some of which can appear at variance with the classic model. Equally, it provides a lucid functional context for continuing cellular studies of SPN biophysics and mechanisms of synaptic plasticity.",
"title": ""
},
{
"docid": "b5cb64a0a17954310910d69c694ad786",
"text": "This paper proposes a hybrid of handcrafted rules and a machine learning method for chunking Korean. In the partially free word-order languages such as Korean and Japanese, a small number of rules dominate the performance due to their well-developed postpositions and endings. Thus, the proposed method is primarily based on the rules, and then the residual errors are corrected by adopting a memory-based machine learning method. Since the memory-based learning is an efficient method to handle exceptions in natural language processing, it is good at checking whether the estimates are exceptional cases of the rules and revising them. An evaluation of the method yields the improvement in F-score over the rules or various machine learning methods alone.",
"title": ""
},
{
"docid": "727e4b745037587df8e9789f978e0db4",
"text": "There is a growing number of courses delivered using elearning environments and their online discussions play an important role in collaborative learning of students. Even in courses with a few number of students, there could be thousands of messages generated in a few months within these forums. Manually evaluating the participation of students in such case is a significant challenge, considering the fact that current e-learning environments do not provide much information regarding the structure of interactions between students. There is a recent line of research on applying social network analysis (SNA) techniques to study these interactions.\n Here we propose to exploit SNA techniques, including community mining, in order to discover relevant structures in social networks we generate from student communications but also information networks we produce from the content of the exchanged messages. With visualization of these discovered relevant structures and the automated identification of central and peripheral participants, an instructor is provided with better means to assess participation in the online discussions. We implemented these new ideas in a toolbox, named Meerkat-ED, which automatically discovers relevant network structures, visualizes overall snapshots of interactions between the participants in the discussion forums, and outlines the leader/peripheral students. Moreover, it creates a hierarchical summarization of the discussed topics, which gives the instructor a quick view of what is under discussion. We believe exploiting the mining abilities of this toolbox would facilitate fair evaluation of students' participation in online courses.",
"title": ""
},
{
"docid": "aa7fe787492aa8aa3d50f748b2df17cb",
"text": "Smart Contracts sind rechtliche Vereinbarungen, die sich IT-Technologien bedienen, um die eigene Durchsetzbarkeit sicherzustellen. Es werden durch Smart Contracts autonom Handlungen initiiert, die zuvor vertraglich vereinbart wurden. Beispielsweise können vereinbarte Zahlungen von Geldbeträgen selbsttätig veranlasst werden. Basieren Smart Contracts auf Blockchains, ergeben sich per se vertrauenswürdige Transaktionen. Eine dritte Instanz zur Sicherstellung einer korrekten Transaktion, beispielsweise eine Bank oder ein virtueller Marktplatz, wird nicht benötigt. Echte Peer-to-Peer-Verträge sind möglich. Ein weiterer Anwendungsfall von Smart Contracts ist denkbar. Smart Contracts könnten statt Vereinbarungen von Vertragsparteien gesetzliche Regelungen ausführen. Beispielsweise die Regelungen des Patentgesetzes könnten durch einen Smart Contract implementiert werden. Die Verwaltung von IPRs (Intellectual Property Rights) entsprechend den gesetzlichen Regelungen würde dadurch sichergestellt werden. Bislang werden Spezialisten, beispielsweise Patentanwälte, benötigt, um eine akkurate Administration von Schutzrechten zu gewährleisten. Smart Contracts könnten die Dienstleistungen dieser Spezialisten auf dem Gebiet des geistigen Eigentums obsolet werden lassen.",
"title": ""
},
{
"docid": "bd6c2c591cd5fe1493968b98746175c0",
"text": "In this paper we investigate mapping stream programs (i.e., programs written in a streaming style for streaming architectures such as Imagine and Raw) onto a general-purpose CPU. We develop and explore a novel way of mapping these programs onto the CPU. We show how the salient features of stream programming such as computation kernels, local memories, and asynchronous bulk memory loads and stores can be easily mapped by a simple compilation system to CPU features such as the processor caches, simultaneous multi-threading, and fast inter-thread communication support, resulting in an executable that efficiently uses CPU resources. We present an evaluation of our mapping on a hyperthreaded Intel Pentium 4 CPU as a canonical example of a general-purpose processor. We compare the mapped stream program against the same program coded in a more conventional style for the general-purpose processor. Using both micro-benchmarks and scientific applications we show that programs written in a streaming style can run comparably to equivalent programs written in a traditional style. Our results show that coding programs in a streaming style can improve performance on today¿s machines and smooth the way for significant performance improvements with the deployment of streaming architectures.",
"title": ""
},
{
"docid": "986a2771edc62a5658c0099e5cc0a920",
"text": "Very-low-energy diets (VLEDs) and ketogenic low-carbohydrate diets (KLCDs) are two dietary strategies that have been associated with a suppression of appetite. However, the results of clinical trials investigating the effect of ketogenic diets on appetite are inconsistent. To evaluate quantitatively the effect of ketogenic diets on subjective appetite ratings, we conducted a systematic literature search and meta-analysis of studies that assessed appetite with visual analogue scales before (in energy balance) and during (while in ketosis) adherence to VLED or KLCD. Individuals were less hungry and exhibited greater fullness/satiety while adhering to VLED, and individuals adhering to KLCD were less hungry and had a reduced desire to eat. Although these absolute changes in appetite were small, they occurred within the context of energy restriction, which is known to increase appetite in obese people. Thus, the clinical benefit of a ketogenic diet is in preventing an increase in appetite, despite weight loss, although individuals may indeed feel slightly less hungry (or more full or satisfied). Ketosis appears to provide a plausible explanation for this suppression of appetite. Future studies should investigate the minimum level of ketosis required to achieve appetite suppression during ketogenic weight loss diets, as this could enable inclusion of a greater variety of healthy carbohydrate-containing foods into the diet.",
"title": ""
},
{
"docid": "2672e9f29c0c54d09758dd10dc7441f4",
"text": "An examination of test manuals and published research indicates that widely used memory tests (e.g., Verbal Paired Associates and Word List tests of the Wechsler Memory Scale, Rey Auditory Verbal Learning Test, and California Verbal Learning Test) are afflicted by severe ceiling effects. In the present study, the true extent of memory ability in healthy young adults was tested by giving 208 college undergraduates verbal paired-associate and verbal learning tests of various lengths; the findings demonstrate that healthy adults can remember much more than is suggested by the normative data for the memory tests just mentioned. The findings highlight the adverse effects of low ceilings in memory assessment and underscore the severe consequences of ceiling effects on score distributions, means, standard deviations, and all variability-dependent indices, such as reliability, validity, and correlations with other tests. The article discusses the optimal test lengths for verbal paired-associate and verbal list-learning tests, shows how to identify ceiling-afflicted data in published research, and explains how proper attention to this phenomenon can improve future research and clinical practice.",
"title": ""
},
{
"docid": "5932b3f1f0523f07190855e51abc04b9",
"text": "This paper proposes an optimization algorithm based on how human fight and learn from each duelist. Since this algorithm is based on population, the proposed algorithm starts with an initial set of duelists. The duel is to determine the winner and loser. The loser learns from the winner, while the winner try their new skill or technique that may improve their fighting capabilities. A few duelists with highest fighting capabilities are called as champion. The champion train a new duelists such as their capabilities. The new duelist will join the tournament as a representative of each champion. All duelist are re-evaluated, and the duelists with worst fighting capabilities is eliminated to maintain the amount of duelists. Two optimization problem is applied for the proposed algorithm, together with genetic algorithm, particle swarm optimization and imperialist competitive algorithm. The results show that the proposed algorithm is able to find the better global optimum and faster iteration. Keywords—Optimization; global, algorithm; duelist; fighting",
"title": ""
},
{
"docid": "c8a9aff29f3e420a1e0442ae7caa46eb",
"text": "Four new species of Ixora (Rubiaceae, Ixoreae) from Brazil are described and illustrated and their relationships to morphologically similar species as well as their conservation status are discussed. The new species, Ixora cabraliensis, Ixora emygdioi, Ixora grazielae, and Ixora pilosostyla are endemic to the Atlantic Forest of southern Bahia and Espirito Santo. São descritas e ilustradas quatro novas espécies de Ixora (Rubiaceae, Ixoreae) para o Brasil bem como discutidos o relacionamento morfológico com espécies mais similares e o estado de conservação. As novas espécies Ixora cabraliensis, Ixora emygdioi, Ixora grazielae e Ixora pilosostyla são endêmicas da Floresta Atlântica, no trecho do sul do estado da Bahia e o estado do Espírito Santo.",
"title": ""
}
] |
scidocsrr
|
c5507c30d0e14b4e2c5dd9e1a4cd1f1d
|
A Lyapunov-based Approach to Safe Reinforcement Learning
|
[
{
"docid": "c85ee4139239b17d98b0d77836e00b72",
"text": "We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.",
"title": ""
}
] |
[
{
"docid": "39070a1f503e60b8709050fc2a250378",
"text": "Plants in their natural habitats adapt to drought stress in the environment through a variety of mechanisms, ranging from transient responses to low soil moisture to major survival mechanisms of escape by early flowering in absence of seasonal rainfall. However, crop plants selected by humans to yield products such as grain, vegetable, or fruit in favorable environments with high inputs of water and fertilizer are expected to yield an economic product in response to inputs. Crop plants selected for their economic yield need to survive drought stress through mechanisms that maintain crop yield. Studies on model plants for their survival under stress do not, therefore, always translate to yield of crop plants under stress, and different aspects of drought stress response need to be emphasized. The crop plant model rice ( Oryza sativa) is used here as an example to highlight mechanisms and genes for adaptation of crop plants to drought stress.",
"title": ""
},
{
"docid": "5c15dc63e21fa4ea0e2a096a711880e5",
"text": "We analyze the effect of the human capital or “quality” of the top management of a firm on its innovation activities. We extract a “management quality factor” using common factor analysis on various individual proxies for the quality of a firm’s management team, such as management team size, fraction of managers with MBAs, the average employmentand education-based connections of each manager in the management team, fraction of members with prior work experience in a top management position, the average number of prior board positions that each manager serves on, and the fraction of managers with doctoral degrees. We find that firms with higher quality management teams not only invest more in innovation (as measured by R&D expenditures), but also have a greater quantity and quality of innovation output, as measured by the number of patents and citations per patent, respectively. We control for the endogenous matching of higher quality managers and higher quality firms using an instrumental variable analysis where we use a function of the number of top managers who faced Vietnam War era drafts (and therefore had an incentive to go to graduate school to get a draft deferment) as an instrument for top management human capital. We also show that an important channel through which higher management quality firms achieve greater innovation success is by hiring a larger number of inventors (controlling for R&D expenditures), and also by hiring higher quality inventors (as measured by their prior citations per patent record). Finally, we show that firms with higher quality top management teams are able to develop a larger number of both exploratory and exploitative innovations.",
"title": ""
},
{
"docid": "fc9ddeeae99a4289d5b955c9ba90c682",
"text": "In recent years there have been growing calls for forging greater connections between education and cognitive neuroscience.As a consequence great hopes for the application of empirical research on the human brain to educational problems have been raised. In this article we contend that the expectation that results from cognitive neuroscience research will have a direct and immediate impact on educational practice are shortsighted and unrealistic. Instead, we argue that an infrastructure needs to be created, principally through interdisciplinary training, funding and research programs that allow for bidirectional collaborations between cognitive neuroscientists, educators and educational researchers to grow.We outline several pathways for scaffolding such a basis for the emerging field of ‘Mind, Brain and Education’ to flourish as well as the obstacles that are likely to be encountered along the path.",
"title": ""
},
{
"docid": "f64b1262b14385ad7d625a0697bd86ba",
"text": "Nowadays, the usage of resource constrained devices is increasing and these devices are primarily working with sensitive data. Consequently, data security has become crucial for both producers and users. Limitation of resources is deemed as the major issue that makes these devices vulnerable. Attackers might exploit these limitations to get access to the valuable data. Therefore, carefully chosen and practically tested encryption algorithm must be applied to increase the device efficiency and mitigate the risk of sensitive data loss. This study will compare elliptic curve cryptography (ECC) algorithm with Key size of 160-bit and Rivest-Shamir-Adleman (RSA) algorithm with Key size of 1024-bit. As a result of this study utilizing ECC in resource constrained devices has advantages over RSA but ECC needs continues enhancement to satisfy the limitations of newly designed chips.",
"title": ""
},
{
"docid": "691dccc83f11f97994480491ea8c0c0d",
"text": "The various physical factors affecting measured diffraction intensities are discussed, as are the scaling models which may be used to put the data on a consistent scale. After scaling, the intensities can be analysed to set the real resolution of the data set, to detect bad regions (e.g. bad images), to analyse radiation damage and to assess the overall quality of the data set. The significance of any anomalous signal may be assessed by probability and correlation analysis. The algorithms used by the CCP4 scaling program SCALA are described. A requirement for the scaling and merging of intensities is knowledge of the Laue group and point-group symmetries: the possible symmetry of the diffraction pattern may be determined from scores such as correlation coefficients between observations which might be symmetry-related. These scoring functions are implemented in a new program POINTLESS.",
"title": ""
},
{
"docid": "c470e4b10e452bc39e271a195303359b",
"text": "This paper presents KeypointNet, an end-to-end geometric reasoning framework to learn an optimal set of category-specific 3D keypoints, along with their detectors. Given a single image, KeypointNet extracts 3D keypoints that are optimized for a downstream task. We demonstrate this framework on 3D pose estimation by proposing a differentiable objective that seeks the optimal set of keypoints for recovering the relative pose between two views of an object. Our model discovers geometrically and semantically consistent keypoints across viewing angles and instances of an object category. Importantly, we find that our end-to-end framework using no ground-truth keypoint annotations outperforms a fully supervised baseline using the same neural network architecture on the task of pose estimation. The discovered 3D keypoints on the car, chair, and plane categories of ShapeNet [6] are visualized at keypointnet.github.io.",
"title": ""
},
{
"docid": "acf390e07ab773d3f82ba4f8e807669a",
"text": "The increasing popularity of server usage has brought a plenty of anomaly log events, which have threatened a vast collection of machines. Recognizing and categorizing the anomalous events thereby is a much salient work for our systems, especially the ones generate the massive amount of data and harness it for technology value creation and business development. To assist in focusing on the classification and the prediction of anomaly events, and gaining critical insights from system event records, we propose a novel log preprocessing method which is very effective to filter abundant information and retain critical characteristics. Additionally, a competitive approach for automated classification of anomalous events detected from the distributed system logs with the state-ofthe-art deep (Convolutional Neural Network) architectures is proposed in this paper. We measure a series of deep CNN algorithms with varied hyper-parameter combinations by using standard evaluation metrics, the results of our study reveals the advantages and potential capabilities of the proposed deep CNN models for anomaly event classification tasks on real-world systems. The optimal classification precision of our approach is 98.14%, which surpasses the popular traditional machine learning methods. Keywords-anomaly event classification; deep learning; convolutional neural network; log preprocessing; distributed system",
"title": ""
},
{
"docid": "0ec7a27ed4d89909887b08c5ea823756",
"text": "Brain responses to pain, assessed through positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) are reviewed. Functional activation of brain regions are thought to be reflected by increases in the regional cerebral blood flow (rCBF) in PET studies, and in the blood oxygen level dependent (BOLD) signal in fMRI. rCBF increases to noxious stimuli are almost constantly observed in second somatic (SII) and insular regions, and in the anterior cingulate cortex (ACC), and with slightly less consistency in the contralateral thalamus and the primary somatic area (SI). Activation of the lateral thalamus, SI, SII and insula are thought to be related to the sensory-discriminative aspects of pain processing. SI is activated in roughly half of the studies, and the probability of obtaining SI activation appears related to the total amount of body surface stimulated (spatial summation) and probably also by temporal summation and attention to the stimulus. In a number of studies, the thalamic response was bilateral, probably reflecting generalised arousal in reaction to pain. ACC does not seem to be involved in coding stimulus intensity or location but appears to participate in both the affective and attentional concomitants of pain sensation, as well as in response selection. ACC subdivisions activated by painful stimuli partially overlap those activated in orienting and target detection tasks, but are distinct from those activated in tests involving sustained attention (Stroop, etc.). In addition to ACC, increased blood flow in the posterior parietal and prefrontal cortices is thought to reflect attentional and memory networks activated by noxious stimulation. Less noted but frequent activation concerns motor-related areas such as the striatum, cerebellum and supplementary motor area, as well as regions involved in pain control such as the periaqueductal grey. In patients, chronic spontaneous pain is associated with decreased resting rCBF in contralateral thalamus, which may be reverted by analgesic procedures. Abnormal pain evoked by innocuous stimuli (allodynia) has been associated with amplification of the thalamic, insular and SII responses, concomitant to a paradoxical CBF decrease in ACC. It is argued that imaging studies of allodynia should be encouraged in order to understand central reorganisations leading to abnormal cortical pain processing. A number of brain areas activated by acute pain, particularly the thalamus and anterior cingulate, also show increases in rCBF during analgesic procedures. Taken together, these data suggest that hemodynamic responses to pain reflect simultaneously the sensory, cognitive and affective dimensions of pain, and that the same structure may both respond to pain and participate in pain control. The precise biochemical nature of these mechanisms remains to be investigated.",
"title": ""
},
{
"docid": "3da6fadaf2363545dfd0cea87fe2b5da",
"text": "It is a marketplace reality that marketing managers sometimes inflict switching costs on their customers, to inhibit them from defecting to new suppliers. In a competitive setting, such as the Internet market, where competition may be only one click away, has the potential of switching costs as an exit barrier and a binding ingredient of customer loyalty become altered? To address that issue, this article examines the moderating effects of switching costs on customer loyalty through both satisfaction and perceived-value measures. The results, evoked from a Web-based survey of online service users, indicate that companies that strive for customer loyalty should focus primarily on satisfaction and perceived value. The moderating effects of switching costs on the association of customer loyalty and customer satisfaction and perceived value are significant only when the level of customer satisfaction or perceived value is above average. In light of the major findings, the article sets forth strategic implications for customer loyalty in the setting of electronic commerce. © 2004 Wiley Periodicals, Inc. In the consumer marketing community, customer loyalty has long been regarded as an important goal (Reichheld & Schefter, 2000). Both marketing academics and professionals have attempted to uncover the most prominent antecedents of customer loyalty. Numerous studies have Psychology & Marketing, Vol. 21(10):799–822 (October 2004) Published online in Wiley InterScience (www.interscience.wiley.com) © 2004 Wiley Periodicals, Inc. DOI: 10.1002/mar.20030",
"title": ""
},
{
"docid": "af332b92495781f3ef08fd4f8463a917",
"text": "The goal of semantic parsing is to map natural language to a machine interpretable meaning representation language (MRL). One of the constraints that limits full exploration of deep learning technologies for semantic parsing is the lack of sufficient annotation training data. In this paper, we propose using sequence-to-sequence in a multi-task setup for semantic parsing with a focus on transfer learning. We explore three multi-task architectures for sequence-to-sequence modeling and compare their performance with an independently trained model. Our experiments show that the multi-task setup aids transfer learning from an auxiliary task with large labeled data to a target task with smaller labeled data. We see absolute accuracy gains ranging from 1.0% to 4.4% in our inhouse data set, and we also see good gains ranging from 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and semantic auxiliary tasks.",
"title": ""
},
{
"docid": "2eac0a94204b24132e496639d759f545",
"text": "Numerous algorithms have been proposed for transferring knowledge from a label-rich domain (source) to a label-scarce domain (target). Most of them are proposed for closed-set scenario, where the source and the target domain completely share the class of their samples. However, in practice, a target domain can contain samples of classes that are not shared by the source domain. We call such classes the “unknown class” and algorithms that work well in the open set situation are very practical. However, most existing distribution matching methods for domain adaptation do not work well in this setting because unknown target samples should not be aligned with the source. In this paper, we propose a method for an open set domain adaptation scenario, which utilizes adversarial training. This approach allows to extract features that separate unknown target from known target samples. During training, we assign two options to the feature generator: aligning target samples with source known ones or rejecting them as unknown target ones. Our method was extensively evaluated and outperformed other methods with a large margin in most settings.",
"title": ""
},
{
"docid": "228cd0696e0da6f18a22aa72f009f520",
"text": "Modern Convolutional Neural Networks (CNN) are extremely powerful on a range of computer vision tasks. However, their performance may degrade when the data is characterised by large intra-class variability caused by spatial transformations. The Spatial Transformer Network (STN) is currently the method of choice for providing CNNs the ability to remove those transformations and improve performance in an end-to-end learning framework. In this paper, we propose Densely Fused Spatial Transformer Network (DeSTNet), which, to our best knowledge, is the first dense fusion pattern for combining multiple STNs. Specifically, we show how changing the connectivity pattern of multiple STNs from sequential to dense leads to more powerful alignment modules. Extensive experiments on three benchmarks namely, MNIST, GTSRB, and IDocDB show that the proposed technique outperforms related state-of-the-art methods (i.e., STNs and CSTNs) both in terms of accuracy and robustness.",
"title": ""
},
{
"docid": "1006d47cf469f6b946de3df44bba8c55",
"text": "Semantic role labeling (SRL) is a well known task in Natural Language Processing, consisting of identifying and labeling verbal arguments. It has been widely studied in English, but scarcely explored in other languages. In this paper, we employ a two-step convolutional neural architecture to label semantic arguments in Brazilian Portuguese texts, and avoid the use of external NLP tools. We achieve an F1 score of 62.2, which, although considerably lower than the state-of-the-art for English, seems promising considering the available resources. Also, dividing the process into two easier subtasks makes it more feasible to further improve performance through semi-supervised learning. Our system is available online and ready to be used out of the box to label new texts.",
"title": ""
},
{
"docid": "8d17cd62276ad7c4142630f5b5940662",
"text": "We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them. Inspired by Kanerva’s sparse distributed memory, it has a robust distributed reading and writing mechanism. The memory is analytically tractable, which enables optimal on-line compression via a Bayesian update-rule. We formulate it as a hierarchical conditional generative model, where memory provides a rich data-dependent prior distribution. Consequently, the top-down memory and bottom-up perception are combined to produce the code representing an observation. Empirically, we demonstrate that the adaptive memory significantly improves generative models trained on both the Omniglot and CIFAR datasets. Compared with the Differentiable Neural Computer (DNC) and its variants, our memory model has greater capacity and is significantly easier to train.",
"title": ""
},
{
"docid": "df7e2e6431ebdaf41eea0b647106ede5",
"text": "We present a novel approach to automatic metaphor identification, that discovers both metaphorical associations and metaphorical expressions in unrestricted text. Our system first performs hierarchical graph factorization clustering (HGFC) of nouns and then searches the resulting graph for metaphorical connections between concepts. It then makes use of the salient features of the metaphorically connected clusters to identify the actual metaphorical expressions. In contrast to previous work, our method is fully unsupervised. Despite this fact, it operates with an encouraging precision (0.69) and recall (0.61). Our approach is also the first one in NLP to exploit the cognitive findings on the differences in organisation of abstract and concrete concepts in the human brain.",
"title": ""
},
{
"docid": "76c75c11ade707808a2d877674300685",
"text": "Modern aircraft increasingly rely on electric power, resulting in high safety criticality and complexity in their electric powergenerationanddistribution systems.Motivatedby the resulting rapid increase in the costs andduration of the design cycles for such systems, the use of formal specification and automated correct-by-construction control protocols synthesis for primarydistribution in vehicular electric power networks is investigated.Adesignworkflow is discussed that aims to transition from the traditional “design and verify” approach to a “specify and synthesize” approach. An overview is given of a subset of the recent advances in the synthesis of reactive control protocols. These techniques are applied in the context of reconfiguration of the networks in reaction to the changes in their operating environment. These automatically synthesized control protocols are also validated on high-fidelity simulationmodels and on an academic-scale hardware testbed.",
"title": ""
},
{
"docid": "0fbf7e46689102a4dd031eb54e6c083c",
"text": "The analyzing and extracting important information from a text document is crucial and has produced interest in the area of text mining and information retrieval. This process is used in order to notice particularly in the text. Furthermore, on view of the readers that people tend to read almost everything in text documents to find some specific information. However, reading a text document consumes time to complete and additional time to extract information. Thus, classifying text to a subject can guide a person to find relevant information. In this paper, a subject identification method which is based on term frequency to categorize groups of text into a particular subject is proposed. Since term frequency tends to ignore the semantics of a document, the term extraction algorithm is introduced for improving the result of the extracted relevant terms from the text. The evaluation of the extracted terms has shown that the proposed method is exceeded other extraction techniques.",
"title": ""
},
{
"docid": "ec7afd2f1ebb917f414a05061c906f62",
"text": "In the past many solutions for simultaneous localization and mapping (SLAM) have been presented. Recently these solutions have been extended to map large environments with six degrees of freedom (DoF) poses. To demonstrate the capabilities of these SLAM algorithms it is common practice to present the generated maps and successful loop closing. Unfortunately there is often no objective performance metric that allows to compare different approaches. This fact is attributed to the lack of ground truth data. For this reason we present a novel method that is able to generate this ground truth data based on reference maps. Further on, the resulting reference path is used to measure the absolute performance of different 6D SLAM algorithms building a large urban outdoor map.",
"title": ""
},
{
"docid": "bd50f2ab6c2779fdb848fd556078e2f1",
"text": "We present a pipelined architecture for a text simplification system and describe our implementation of the three stages— analysis, transformation and regeneration. Our architecture allows each component to be developed and evaluated independently. We lay particular emphasis on the discourse level aspects of syntactic simplification as these are crucial to the process and have not been dealt with by previous research in the field. These aspects include generating referring expressions, deciding determiners, deciding sentence order and preserving rhetorical and anaphoric structure.",
"title": ""
},
{
"docid": "72f9d32f241992d02990a7a2e9aad9bb",
"text": "— Improved methods are proposed for disk drive failure prediction. The SMART (Self Monitoring and Reporting Technology) failure prediction system is currently implemented in disk drives. Its purpose is to predict the near-term failure of an individual hard disk drive, and issue a backup warning to prevent data loss. Two experimentally tests of SMART showed only moderate accuracy at low false alarm rates. (A rate of 0.2% of total drives per year implies that 20% of drive returns would be good drives, relative to ≈1% annual failure rate of drives). This requirement for very low false alarm rates is well known in medical diagnostic tests for rare diseases, and methodology used there suggests ways to improve SMART. ACRONYMS ATA Standard drive interface, desktop computers FA Failure analysis of apparently failed drive FAR False alarm rate, 100 times probability value MVRS Multivariate rank sum statistical test NPF Drive failed, but “No problem found” in FA RS Rank sum statistical hypothesis test R Sum of ranks of warning set data Rc Predict fail if R> Rc critical value SCSI Standard drive interface, high-end computers SMART “Self monitoring and reporting technology” WA Failure warning accuracy (probability) Two improved SMART algorithms are proposed here. They use the SMART internal drive attribute measurements in present drives. The present warning algorithm based on maximum error thresholds is replaced by distribution-free statistical hypothesis tests. These improved algorithms are computationally simple enough to be implemented in drive microprocessor firmware code. They require only integer sort operations to put several hundred attribute values in rank order. Some tens of these ranks are added up and the SMART warning is issued if the sum exceeds a prestored limit. NOTATION: n Number of reference (old) measurements m Number of warning (new) measurements N Total ranked measurements (n+m) p Number of different attributes measured Q(X) Normal probability Pr(x>X) RS Rank sum statistical hypothesis test R Sum of ranks of warning set data Rc Predict fail if R> Rc critical value",
"title": ""
}
] |
scidocsrr
|
6c4201253760c8d371447fd68afc0e03
|
Gamifying software development scrum projects
|
[
{
"docid": "2ccb76e0cda888491ebb37bb316c5490",
"text": "For any Software Process Improvement (SPI) initiative to succeed human factors, in particular, motivation and commitment of the people involved should be kept in mind. In fact, Organizational Change Management (OCM) has been identified as an essential knowledge area for any SPI initiative. However, enough attention is still not given to the human factors and therefore, the high degree of failures in the SPI initiatives is directly linked to a lack of commitment and motivation. Gamification discipline allows us to define mechanisms that drive people’s motivation and commitment towards the development of tasks in order to encourage and accelerate the acceptance of an SPI initiative. In this paper, a gamification framework oriented to both organization needs and software practitioners groups involved in an SPI initiative is defined. This framework tries to take advantage of the transverse nature of gamification in order to apply its Critical Success Factors (CSF) to the organizational change management of an SPI. Gamification framework guidelines have been validated by some qualitative methods. Results show some limitations that threaten the reliability of this validation. These require further empirical validation of a software organization.",
"title": ""
}
] |
[
{
"docid": "6cca53a0b41a981bb6a1707c55e924da",
"text": "During sustained high-intensity military training or simulated combat exercises, significant decreases in physical performance measures are often seen. The use of dietary supplements is becoming increasingly popular among military personnel, with more than half of the US soldiers deployed or garrisoned reported to using dietary supplements. β-Alanine is a popular supplement used primarily by strength and power athletes to enhance performance, as well as training aimed at improving muscle growth, strength and power. However, there is limited research examining the efficacy of β-alanine in soldiers conducting operationally relevant tasks. The gains brought about by β-alanine use by selected competitive athletes appears to be relevant also for certain physiological demands common to military personnel during part of their training program. Medical and health personnel within the military are expected to extrapolate and implement relevant knowledge and doctrine from research performed on other population groups. The evidence supporting the use of β-alanine in competitive and recreational athletic populations suggests that similar benefits would also be observed among tactical athletes. However, recent studies in military personnel have provided direct evidence supporting the use of β-alanine supplementation for enhancing combat-specific performance. This appears to be most relevant for high-intensity activities lasting 60–300 s. Further, limited evidence has recently been presented suggesting that β-alanine supplementation may enhance cognitive function and promote resiliency during highly stressful situations.",
"title": ""
},
{
"docid": "653ca5c9478b1b1487fc24eeea8c1677",
"text": "A fundamental question in information theory and in computer science is how to measure similarity or the amount of shared information between two sequences. We have proposed a metric, based on Kolmogorov complexity, to answer this question and have proven it to be universal. We apply this metric in measuring the amount of shared information between two computer programs, to enable plagiarism detection. We have designed and implemented a practical system SID (Software Integrity Diagnosis system) that approximates this metric by a heuristic compression algorithm. Experimental results demonstrate that SID has clear advantages over other plagiarism detection systems. SID system server is online at http://software.bioinformatics.uwaterloo.ca/SID/.",
"title": ""
},
{
"docid": "1b919e6f56e908902480c90d6f0d4ce0",
"text": "Vehicular Ad-hoc Network (VANET) is an emerging new technology to enable communications among vehicles and nearby roadside infrastructures to provide intelligent transportation applications. In order to provide stable connections between vehicles, a reliable routing protocol is needed. Currently, there are several routing protocols designed for MANETs could be applied to VANETs. However, due to the unique characteristics of VANETs, the results are not encouraging. In this paper, we propose a new routing protocol named AODV-VANET, which incorporates the vehicles' movement information into the route discovery process based on Ad hoc On-Demand Distance Vector (AODV). A Total Weight of the Route is introduced to choose the best route together with an expiration time estimation to minimize the link breakages. With these modifications, the proposed protocol is able to achieve better routing performances.",
"title": ""
},
{
"docid": "bc4b545faba28a81202e3660c32c7ec2",
"text": "This paper describes a novel two-stage fully-differential CMOS amplifier comprising two self-biased inverter stages, with optimum compensation and high efficiency. Although it relies on a class A topology, it is shown through simulations, that it achieves the highest efficiency of its class and comparable to the best class AB amplifiers. Due to the self-biasing, a low variability in the DC gain over process, temperature, and supply is achieved. A detailed circuit analysis, a design methodology for optimization and the most relevant simulation results are presented, together with a final comparison among state-of-the-art amplifiers.",
"title": ""
},
{
"docid": "e24f60bc524a69976f727cb847ed92fa",
"text": "In large scale and complex IT service environments, a problematic incident is logged as a ticket and contains the ticket summary (system status and problem description). The system administrators log the step-wise resolution description when such tickets are resolved. The repeating service events are most likely resolved by inferring similar historical tickets. With the availability of reasonably large ticket datasets, we can have an automated system to recommend the best matching resolution for a given ticket summary. In this paper, we first identify the challenges in real-world ticket analysis and develop an integrated framework to efficiently handle those challenges. The framework first quantifies the quality of ticket resolutions using a regression model built on carefully designed features. The tickets, along with their quality scores obtained from the resolution quality quantification, are then used to train a deep neural network ranking model that outputs the matching scores of ticket summary and resolution pairs. This ranking model allows us to leverage the resolution quality in historical tickets when recommending resolutions for an incoming incident ticket. In addition, the feature vectors derived from the deep neural ranking model can be effectively used in other ticket analysis tasks, such as ticket classification and clustering. The proposed framework is extensively evaluated with a large real-world dataset.",
"title": ""
},
{
"docid": "60dd1689962a702e72660b33de1f2a17",
"text": "A grammar formalism called GHRG based on CHR is proposed analogously to the way Definite Clause Grammars are defined and implemented on top of Prolog. A CHRG executes as a robust bottom-up parser with an inherent treatment of ambiguity. The rules of a CHRG may refer to grammar symbols on either side of a sequence to be matched and this provides a powerful way to let parsing and attribute evaluation depend on linguistic context; examples show disambiguation of simple and ambiguous context-free rules and a handling of coordination in natural language. CHRGs may have rules to produce and consume arbitrary hypothesis and as an important application is shown an implementation of Assumption Grammars.",
"title": ""
},
{
"docid": "070ecf3890362cb4c24682aff5fa01c6",
"text": "This review builds on self-control theory (Carver & Scheier, 1998) to develop a theoretical framework for investigating associations of implicit theories with self-regulation. This framework conceptualizes self-regulation in terms of 3 crucial processes: goal setting, goal operating, and goal monitoring. In this meta-analysis, we included articles that reported a quantifiable assessment of implicit theories and at least 1 self-regulatory process or outcome. With a random effects approach used, meta-analytic results (total unique N = 28,217; k = 113) across diverse achievement domains (68% academic) and populations (age range = 5-42; 10 different nationalities; 58% from United States; 44% female) demonstrated that implicit theories predict distinct self-regulatory processes, which, in turn, predict goal achievement. Incremental theories, which, in contrast to entity theories, are characterized by the belief that human attributes are malleable rather than fixed, significantly predicted goal setting (performance goals, r = -.151; learning goals, r = .187), goal operating (helpless-oriented strategies, r = -.238; mastery-oriented strategies, r = .227), and goal monitoring (negative emotions, r = -.233; expectations, r = .157). The effects for goal setting and goal operating were stronger in the presence (vs. absence) of ego threats such as failure feedback. Discussion emphasizes how the present theoretical analysis merges an implicit theory perspective with self-control theory to advance scholarship and unlock major new directions for basic and applied research.",
"title": ""
},
{
"docid": "c4dbf075f91d1a23dda421261911a536",
"text": "In cultures of the Litopenaeus vannamei with biofloc, the concentrations of nitrate rise during the culture period, which may cause a reduction in growth and mortality of the shrimps. Therefore, the aim of this study was to determine the effect of the concentration of nitrate on the growth and survival of shrimp in systems using bioflocs. The experiment consisted of four treatments with three replicates each: The concentrations of nitrate that were tested were 75 (control), 150, 300, and 600 mg NO3 −-N/L. To achieve levels above 75 mg NO3 −-N/L, different dosages of sodium nitrate (PA) were added. For this purpose, twelve experimental units with a useful volume of 45 L were stocked with 15 juvenile L. vannamei (1.30 ± 0.31 g), corresponding to a stocking density of 333 shrimps/m3, that were reared for an experimental period of 42 days. Regarding the water quality parameters measured throughout the study, no significant differences were detected (p > 0.05). Concerning zootechnical performance, a significant difference (p < 0.05) was verified with the 75 (control) and 150 treatments presenting the best performance indexes, while the 300 and 600 treatments led to significantly poorer results (p < 0.05). The histopathological damage was observed in the gills and hepatopancreas of the shrimps exposed to concentrations ≥300 mg NO3 −-N/L for 42 days, and poorer zootechnical performance and lower survival were observed in the shrimps reared at concentrations ≥300 mg NO3 −-N/L under a salinity of 23. The results obtained in this study show that concentrations of nitrate up to 177 mg/L are acceptable for the rearing of L. vannamei in systems with bioflocs, without renewal of water, at a salinity of 23.",
"title": ""
},
{
"docid": "1014860e267cf8b36c118bb32995b34f",
"text": "Recently, several indoor localization solutions based on WiFi, Bluetooth, and UWB have been proposed. Due to the limitation and complexity of the indoor environment, the solution to achieve a low-cost and accurate positioning system remains open. This article presents a WiFibased positioning technique that can improve the localization performance from the bottleneck in ToA/AoA. Unlike the traditional approaches, our proposed mechanism relaxes the need for wide signal bandwidth and large numbers of antennas by utilizing the transmission of multiple predefined messages while maintaining high-accuracy performance. The overall system structure is demonstrated by showing localization performance with respect to different numbers of messages used in 20/40 MHz bandwidth WiFi APs. Simulation results show that our WiFi-based positioning approach can achieve 1 m accuracy without any hardware change in commercial WiFi products, which is much better than the conventional solutions from both academia and industry concerning the trade-off of cost and system complexity.",
"title": ""
},
{
"docid": "c10829be320a9be6ecbc9ca751e8b56e",
"text": "This article analyzes two decades of research regarding the mass media's role in shaping, perpetuating, and reducing the stigma of mental illness. It concentrates on three broad areas common in media inquiry: production, representation, and audiences. The analysis reveals that descriptions of mental illness and the mentally ill are distorted due to inaccuracies, exaggerations, or misinformation. The ill are presented not only as peculiar and different, but also as dangerous. Thus, the media perpetuate misconceptions and stigma. Especially prominent is the absence of agreed-upon definitions of \"mental illness,\" as well as the lack of research on the inter-relationships in audience studies between portrayals in the media and social perceptions. The analysis concludes with suggestions for further research on mass media's inter-relationships with mental illness.",
"title": ""
},
{
"docid": "ca6d23374e0caa125a91618164284b9a",
"text": "We propose a spectral clustering algorithm for the multi-view setting where we have access to multiple views of the data, each of which can be independently used for clustering. Our spectral clustering algorithm has a flavor of co-training, which is already a widely used idea in semi-supervised learning. We work on the assumption that the true underlying clustering would assign a point to the same cluster irrespective of the view. Hence, we constrain our approach to only search for the clusterings that agree across the views. Our algorithm does not have any hyperparameters to set, which is a major advantage in unsupervised learning. We empirically compare with a number of baseline methods on synthetic and real-world datasets to show the efficacy of the proposed algorithm.",
"title": ""
},
{
"docid": "9430b0f220538e878d99ef410fdc1ab2",
"text": "The prevalence of pregnancy, substance abuse, violence, and delinquency among young people is unacceptably high. Interventions for preventing problems in large numbers of youth require more than individual psychological interventions. Successful interventions include the involvement of prevention practitioners and community residents in community-level interventions. The potential of community-level interventions is illustrated by a number of successful studies. However, more inclusive reviews and multisite comparisons show that although there have been successes, many interventions did not demonstrate results. The road to greater success includes prevention science and newer community-centered models of accountability and technical assistance systems for prevention.",
"title": ""
},
{
"docid": "c2ade16afaf22ac6cc546134a1227d68",
"text": "In this work we present a novel method for the challenging problem of depth image up sampling. Modern depth cameras such as Kinect or Time-of-Flight cameras deliver dense, high quality depth measurements but are limited in their lateral resolution. To overcome this limitation we formulate a convex optimization problem using higher order regularization for depth image up sampling. In this optimization an an isotropic diffusion tensor, calculated from a high resolution intensity image, is used to guide the up sampling. We derive a numerical algorithm based on a primal-dual formulation that is efficiently parallelized and runs at multiple frames per second. We show that this novel up sampling clearly outperforms state of the art approaches in terms of speed and accuracy on the widely used Middlebury 2007 datasets. Furthermore, we introduce novel datasets with highly accurate ground truth, which, for the first time, enable to benchmark depth up sampling methods using real sensor data.",
"title": ""
},
{
"docid": "784d75662234e45f78426c690356d872",
"text": "Chinese-English parallel corpora are key resources for Chinese-English cross-language information processing, Chinese-English bilingual lexicography, Chinese-English language research and teaching. But so far large-scale Chinese-English corpus is still unavailable yet, given the difficulties and the intensive labours required. In this paper, our work towards building a large-scale Chinese-English parallel corpus is presented. We elaborate on the collection, annotation and mark-up of the parallel Chinese-English texts and the workflow that we used to construct the corpus. In addition, we also present our work toward building tools for constructing and using the corpus easily for different purposes. Among these tools, a parallel concordance tool developed by us is examined in detail. Several applications of the corpus being conducted are also introduced briefly in the paper.",
"title": ""
},
{
"docid": "bf11641b432e551d61c56180d8f0e8eb",
"text": "Deep Reinforcement Learning algorithms lead to agents that can solve difficult decision making problems in complex environments. However, many difficult multi-agent competitive games, especially real-time strategy games are still considered beyond the capability of current deep reinforcement learning algorithms, although there has been a recent effort to change this (OpenAI, 2017; Vinyals et al., 2017). Moreover, when the opponents in a competitive game are suboptimal, the current Nash Equilibrium seeking, selfplay algorithms are often unable to generalize their strategies to opponents that play strategies vastly different from their own. This suggests that a learning algorithm that is beyond conventional self-play is necessary. We develop Hierarchical Agent with Self-Play , a learning approach for obtaining hierarchically structured policies that can achieve higher performance than conventional self-play on competitive games through the use of a diverse pool of sub-policies we get from Counter Self-Play (CSP). We demonstrate that the ensemble policy generated by Hierarchical Agent with Self-Play can achieve better performance while facing unseen opponents that use sub-optimal policies. On a motivating iterated Rock-Paper-Scissor game and a partially observable real-time strategic game (http://generals.io/), we are led to the conclusion that Hierarchical Agent with Self-Play can perform better than conventional self-play as well as achieve 77% win rate against FloBot, an open-source agent which has ranked at position number 2 on the online leaderboards.",
"title": ""
},
{
"docid": "a6cf168632efb2a4c4a4d91c4161dc24",
"text": "This paper presents a systematic approach to transform various fault models to a unified model such that all faults of interest can be handled in one ATPG run. The fault models that can be transformed include, but are not limited to, stuck-at faults, various types of bridging faults, and cell-internal faults. The unified model is the aggressor-victim type of bridging fault model. Two transformation methods, namely fault-based and pattern-based transformations, are developed for cell-external and cell-internal faults, respectively. With the proposed approach, one can use an ATPG tool for bridging faults to deal with the test generation problems of multiple fault models simultaneously. Hence the total test generation time can be reduced and highly compact test sets can be obtained. Experimental results show that on average 54.94% (16.45%) and 47.22% (17.51%) test pattern volume reductions are achieved compared to the method that deals with the three fault models separately without (with) fault dropping for ISCAS'89 andIWLS'05 circuits, respectively.",
"title": ""
},
{
"docid": "1b556f4e0c69c81780973a7da8ba2f8e",
"text": "We explore ways of allowing for the offloading of computationally rigorous tasks from devices with slow logical processors onto a network of anonymous peer-processors. Recent advances in secret sharing schemes, decentralized consensus mechanisms, and multiparty computation (MPC) protocols are combined to create a P2P MPC market. Unlike other computational ”clouds”, ours is able to generically compute any arithmetic circuit, providing a viable platform for processing on the semantic web. Finally, we show that such a system works in a hostile environment, that it scales well, and that it adapts very easily to any future advances in the complexity theoretic cryptography used. Specifically, we show that the feasibility of our system can only improve, and is historically guaranteed to do so.",
"title": ""
},
{
"docid": "4c5eb84d510b9a2d064bfd53d981934f",
"text": "Video-game playing is popular among college students. Cognitive and negative consequences have been studied frequently. However, little is known about the influence of gaming behavior on IT college students’ academic performance. An increasing number of college students take online courses, use social network websites for social interactions, and play video games online. To analyze the relationship between college students’ gaming behavior and their academic performance, a research model is proposed and a survey study is conducted. The study result of a multiple regression analysis shows that self-control capability, social interaction using face-to-face or phone communications, and playing video games using a personal computer make statistically significant contributions to the IT college students’ academic performance measured by GPA.",
"title": ""
},
{
"docid": "ab1b4a5694e17772b01a2156afc08f55",
"text": "Clunealgia is caused by neuropathy of inferior cluneal branches of the posterior femoral cutaneous nerve resulting in pain in the inferior gluteal region. Image-guided anesthetic nerve injections are a viable and safe therapeutic option in sensory peripheral neuropathies that provides significant pain relief when conservative therapy fails and surgery is not desired or contemplated. The authors describe two cases of clunealgia, where computed-tomography-guided technique for nerve blocks of the posterior femoral cutaneous nerve and its branches was used as a cheaper, more convenient, and faster alternative with similar face validity as the previously described magnetic-resonance-guided injection.",
"title": ""
},
{
"docid": "857d8003dff05b8e1ba5eeb8f6b3c14e",
"text": "Traditional static spectrum allocation policies have been to grant each wireless service exclusive usage of certain frequency bands, leaving several spectrum bands unlicensed for industrial, scientific and medical purposes. The rapid proliferation of low-cost wireless applications in unlicensed spectrum bands has resulted in spectrum scarcity among those bands. Since most applications in Wireless Sensor Networks (WSNs) utilize the unlicensed spectrum, network-wide performance of WSNs will inevitably degrade as their popularity increases. Sharing of under-utilized licensed spectrum among unlicensed devices is a promising solution to the spectrum scarcity issue. Cognitive Radio (CR) is a new paradigm in wireless communication that allows sensor nodes as the unlicensed users or Secondary Users (SUs) to detect and use the under-utilized licensed spectrum temporarily. Given that the licensed or Primary Users (PUs) are oblivious to the presence of SUs, the SUs access the licensed spectrum opportunistically without interfering the PUs, while improving their own performance. In this paper, we propose an approach to build Cognitive Radio-based Wireless Sensor Networks (CR-WSNs). We believe that CR-WSN is the next-generation WSN. Realizing that both WSNs and CR present unique challenges to the design of CR-WSNs, we provide an overview and conceptual design of WSNs from the perspective of CR. The open issues are discussed to motivate new research interests in this field. We also present our method to achieving context-awareness and intelligence, which are the key components in CR networks, to address an open issue in CR-WSN.",
"title": ""
}
] |
scidocsrr
|
4da0339551db4f6f476a0688fdffb3e2
|
Fast, Scalable and Secure Onloading of Edge Functions Using AirBox
|
[
{
"docid": "00a3504c21cf0a971a717ce676d76933",
"text": "In recent years, researchers have proposed systems for running trusted code on an untrusted operating system. Protection mechanisms deployed by such systems keep a malicious kernel from directly manipulating a trusted application's state. Under such systems, the application and kernel are, conceptually, peers, and the system call API defines an RPC interface between them.\n We introduce Iago attacks, attacks that a malicious kernel can mount in this model. We show how a carefully chosen sequence of integer return values to Linux system calls can lead a supposedly protected process to act against its interests, and even to undertake arbitrary computation at the malicious kernel's behest.\n Iago attacks are evidence that protecting applications from malicious kernels is more difficult than previously realized.",
"title": ""
}
] |
[
{
"docid": "d2b45d76e93f07ededbab03deee82431",
"text": "A cordless battery charger will greatly improve the user friendliness of electric vehicles (EVs), accelerating the replacement of traditional internal combustion engine (ICE) vehicles with EVs and improving energy sustainability as a result. Resonant circuits are used for both the power transmitter and receiver of a cordless charger to compensate their coils and improve power transfer efficiency. However, conventional compensation circuit topology is not suitable for application to an EV, which involves very large power, a wide gap between the transmitter and receiver coils, and large horizontal misalignment. This paper proposes a novel compensation circuit topology that has a carefully designed series capacitor added to the parallel resonant circuit of the receiver. The proposed circuit has been implemented and tested on an EV. The simulation and experimental results are presented to show that the circuit can improve the power factor and power transfer efficiency, and as a result, allow a larger gap between the transmitter and receiver coils.",
"title": ""
},
{
"docid": "9afb086e38b883676a503bb10fba3e8f",
"text": "This paper reports a structured literature survey of research in wearable technology for upper-extremity rehabilitation, e.g., after stroke, spinal cord injury, for multiple sclerosis patients or even children with cerebral palsy. A keyword based search returned 61 papers relating to this topic. Examination of the abstracts of these papers identified 19 articles describing distinct wearable systems aimed at upper extremity rehabilitation. These are classified in three categories depending on their functionality: movement and posture monitoring; monitoring and feedback systems that support rehabilitation exercises, serious games for rehabilitation training. We characterize the state of the art considering respectively the reported performance of these technologies, availability of clinical evidence, or known clinical applications.",
"title": ""
},
{
"docid": "bf2746e237446a477919b3d6c2940237",
"text": "In this paper, we first introduce the RF performance of Globalfoundries 45RFSOI process. NFET Ft > 290GHz and Fmax >380GHz. Then we present several mm-Wave circuit block designs, i.e., Switch, Power Amplifier, and LNA, based on 45RFSOI process for 5G Front End Module (FEM) applications. For the SPDT switch, insertion loss (IL) < 1dB at 30GHz with 32dBm P1dB and > 25dBm Pmax. For the PA, with a 2.9V power supply, the PA achieves 13.1dB power gain and a saturated output power (Psat) of 16.2dBm with maximum power-added efficiency (PAE) of 41.5% at 24Ghz continuous-wave (CW). With 960Mb/s 64QAM signal, 22.5% average PAE, −29.6dB EVM, and −30.5dBc ACLR are achieved with 9.5dBm average output power.",
"title": ""
},
{
"docid": "7621e0dcdad12367dc2cfcd12d31c719",
"text": "Microblogging sites have emerged as major platforms for bloggers to create and consume posts as well as to follow other bloggers and get informed of their updates. Due to the large number of users, and the huge amount of posts they create, it becomes extremely difficult to identify relevant and interesting blog posts. In this paper, we propose a novel convex collective matrix completion (CCMC) method that effectively utilizes user-item matrix and incorporates additional user activity and topic-based signals to recommend relevant content. The key advantage of CCMC over existing methods is that it can obtain a globally optimal solution and can easily scale to large-scale matrices using Hazan’s algorithm. To the best of our knowledge, this is the first work which applies and studies CCMC as a recommendation method in social media. We conduct a large scale study and show significant improvement over existing state-ofthe-art approaches.",
"title": ""
},
{
"docid": "1183b3ea7dd929de2c18af49bf549ceb",
"text": "Robust and time-efficient skeletonization of a (planar) shape, which is connectivity preserving and based on Euclidean metrics, can be achieved by first regularizing the Voronoi diagram (VD) of a shape’s boundary points, i.e., by removal of noise-sensitive parts of the tessellation and then by establishing a hierarchic organization of skeleton constituents . Each component of the VD is attributed with a measure of prominence which exhibits the expected invariance under geometric transformations and noise. The second processing step, a hierarchic clustering of skeleton branches, leads to a multiresolution representation of the skeleton, termed skeleton pyramid. Index terms — Distance transform, hierarchic skeletons, medial axis, regularization, shape description, thinning, Voronoi tessellation.",
"title": ""
},
{
"docid": "3baf8d673b5ecf130cf770019aaa3e3c",
"text": "Fuzzy logic may be considered as an assortment of decision making techniques. In many applications like process control, the algorithm’s outcome is ruled by a number of key decisions which are made in the algorithm. Defining the best decision requires extensive knowledge of the system. When experience or understanding of the problem is not available, optimising the algorithm becomes very difficult. This is the reason why fuzzy logic is useful.",
"title": ""
},
{
"docid": "2eab1c44f9d31ff6ddbc650677cd57fe",
"text": "Customer churn prediction in Mobile industry is a latest research topic in recent years. A huge amount of data is generated in Mobile industry every minute. Data mining techniques have also developed in various ways. Customer churn is considered as one of the major issues in Mobile industry. The research signifies that it is more expensive to gain a new customer than to retain an existing one. The knowledge extracted from Mobile industry helps to understand the reasons of customer churn and Telecom providers use the data to retain existing customers. This paper surveys the commonly used data mining techniques to identify customer churn patterns. Classification and association models are the two most commonly used models for data mining in Customer Relationship Management. This study focuses on data mining techniques for reducing customer churn and also likely to reduce error ratio. The recent literature in the area of predictive data mining techniques in customer churn behavior is reviewed and a discussion on the future research directions is offered. Keywords— Customer churn, Customer retention, Customer relationship management (CRM), Data mining, C4.5, Naive Bayes classifier.",
"title": ""
},
{
"docid": "bd0e01675a12193752588e6bc730edd5",
"text": "Online safety is everyone's responsibility---a concept much easier to preach than to practice.",
"title": ""
},
{
"docid": "71e9bb057e90f754f658c736e4f02b7a",
"text": "When tourists visit a city or region, they cannot visit every point of interest available, as they are constrained in time and budget. Tourist recommender applications help tourists by presenting a personal selection. Providing adequate tour scheduling support for these kinds of applications is a daunting task for the application developer. The objective of this paper is to demonstrate how existing models from the field of Operations Research (OR) fit this scheduling problem, and enable a wide range of tourist trip planning functionalities. Using the Orienteering Problem (OP) and its extensions to model the tourist trip planning problem, allows to deal with a vast number of practical planning problems.",
"title": ""
},
{
"docid": "614e98183bc64accab99e44117cc8c50",
"text": "Spatial crowding is a well-known deficit in amblyopia. We have previously reported evidence suggesting that the inability to isolate stimuli in space in crowded displays (spatial crowding) is a largely independent component of the amblyopic deficit in visual acuity, which is typically found in strabismic amblyopia [Bonneh, Y., Sagi, D., & Polat, U. (2004a). Local and non-local deficits in amblyopia: Acuity and spatial interactions. Vision Research, 44, 3009-3110]. Here, we extend this result to the temporal domain by measuring visual acuity (VA) for a single pattern in a rapid serial visual presentation (RSVP-VA, N=15) for fast (\"crowded\") and slow (\"uncrowded\") presentations. We found that strabismic amblyopes but not anisometropic amblyopes or normal controls exhibited a significant difference between VA under the fast and slow conditions. We further compared the \"temporal crowding\" measure to two measures of spatial crowding: (1) static Tumbling-E acuity in multi-pattern crowded displays (N=26) and (2) Gabor alignment with lateral flankers (N=20). We found that all three measures of crowding (one temporal and two spatial) were highly correlated across subjects while being largely independent of the visual acuity for a single isolated pattern, with both spatial and temporal crowding being high and correlated in strabismus and low in anisometropia. This suggests that time and space are related in crowding, at least in amblyopia.",
"title": ""
},
{
"docid": "20af5209de71897158820f935018d877",
"text": "This paper presents a new bag-of-entities representation for document ranking, with the help of modern knowledge bases and automatic entity linking. Our system represents query and documents by bag-of-entities vectors constructed from their entity annotations, and ranks documents by their matches with the query in the entity space. Our experiments with Freebase on TREC Web Track datasets demonstrate that current entity linking systems can provide sufficient coverage of the general domain search task, and that bag-of-entities representations outperform bag-of-words by as much as 18% in standard document ranking tasks.",
"title": ""
},
{
"docid": "955feaf32277aa431473554514e81b60",
"text": "This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.",
"title": ""
},
{
"docid": "35e6ad2d7c84a5a96c44234962eea57d",
"text": "Material Recognition Ting-Chun Wang1, Jun-Yan Zhu1, Ebi Hiroaki2, Manmohan Chandraker2, Alexei A. Efros1, Ravi Ramamoorthi2 1University of California, Berkeley University of California, San Diego Motivation • Light-field images should help recognize materials since reflectance can be estimated • CNNs have recently been very successful in material recognition • We combine these two and propose a new light-field dataset since no one is currently available",
"title": ""
},
{
"docid": "d97518a615c4f963d86e36c9dd30b643",
"text": "In this paper, the Polyjet technology was applied to build high-Q X-band resonators and low loss filters for the first time. As one of state-of-the-art 3-D printing technologies, the Polyjet technique produces RF models with finest resolution and outstanding surface finish in a clean, fast and affordable way. The measured resonator with 0.3% frequency shift yielded a quality factor of 214 at 10.26 GHz. A Vertically stacked two-cavity bandpass filter with an insertion loss of 2.1 dB and 5.1% bandwidth (BW) was realized successfully. The dimensional tolerance of this process was found to be less than 0.5%. The well matched performance of the resonator and the filter, as well as the fine feature size indicate that the Polyjet process is suitable for the implementation of low loss and low cost RF devices.",
"title": ""
},
{
"docid": "22bbeceff175ee2e9a462b753ce24103",
"text": "BACKGROUND\nEUS-guided FNA can help diagnose and differentiate between various pancreatic and other lesions.The aim of this study was to compare approaches among involved/relevant physicians to the controversies surrounding the use of FNA in EUS.\n\n\nMETHODS\nA five-case survey was developed, piloted, and validated. It was collected from a total of 101 physicians, who were all either gastroenterologists (GIs), surgeons or oncologists. The survey compared the management strategies chosen by members of these relevant disciplines regarding EUS-guided FNA.\n\n\nRESULTS\nFor CT operable T2NOM0 pancreatic tumors the research demonstrated variance as to whether to undertake EUS-guided FNA, at p < 0.05. For inoperable pancreatic tumors 66.7% of oncologists, 62.2% of surgeons and 79.1% of GIs opted for FNA (p < 0.05). For cystic pancreatic lesions, oncologists were more likely to send patients to surgery without FNA. For stable simple pancreatic cysts (23 mm), most physicians (66.67%) did not recommend FNA. For a submucosal gastric 19 mm lesion, 63.2% of surgeons recommended FNA, vs. 90.0% of oncologists (p < 0.05).\n\n\nCONCLUSIONS\nControversies as to ideal application of EUS-FNA persist. Optimal guidelines should reflect the needs and concerns of the multidisciplinary team who treat patients who need EUS-FNA. Multi-specialty meetings assembled to manage patients with these disorders may be enlightening and may help develop consensus.",
"title": ""
},
{
"docid": "f1e97086c14f6d3d2a408aeca029c645",
"text": "Unifying principles of movement have emerged from the computational study of motor control. We review several of these principles and show how they apply to processes such as motor planning, control, estimation, prediction and learning. Our goal is to demonstrate how specific models emerging from the computational approach provide a theoretical framework for movement neuroscience.",
"title": ""
},
{
"docid": "9ce5377315e50c70337aa4b7d6512de0",
"text": "This paper discusses two main software engineering methodologies to system development, the waterfall model and the objectoriented approach. A review of literature reveals that waterfall model uses linear approach and is only suitable for sequential or procedural design. In waterfall, errors can only be detected at the end of the whole process and it may be difficult going back to repeat the entire process because the processes are sequential. Also, software based on waterfall approach is difficult to maintain and upgrade due to lack of integration between software components. On the other hand, the Object Oriented approach enables software systems to be developed as integration of software objects that work together to make a holistic and functional system. The software objects are independent of each other, allowing easy upgrading and maintenance of software codes. The paper also highlighted the merits and demerits of each of the approaches. This work concludes with the appropriateness of each approach in relation to the complexity of the problem domain.",
"title": ""
},
{
"docid": "66d21320fab73188fa7023a87e102092",
"text": "Topic models represent latent topics as probability distributions over words which can be hard to interpret due to the lack of grounded semantics. In this paper, we propose a structured topic representation based on an entity taxonomy from a knowledge base. A probabilistic model is developed to infer both hidden topics and entities from text corpora. Each topic is equipped with a random walk over the entity hierarchy to extract semantically grounded and coherent themes. Accurate entity modeling is achieved by leveraging rich textual features from the knowledge base. Experiments show significant superiority of our approach in topic perplexity and key entity identification, indicating potentials of the grounded modeling for semantic extraction and language understanding applications.",
"title": ""
},
{
"docid": "28aa6f270e578881abb710ca2ddb904d",
"text": "An implantable real-time blood pressure monitoring microsystem for laboratory mice has been demonstrated. The system achieves a 10-bit blood pressure sensing resolution and can wirelessly transmit the pressure information to an external unit. The implantable device is operated in a batteryless manner, powered by an external RF power source. The received RF power level can be sensed and wirelessly transmitted along with blood pressure signal for feedback control of the external RF power. The microsystem employs an instrumented silicone cuff, wrapped around a blood vessel with a diameter of approximately 200 ¿m, for blood pressure monitoring. The cuff is filled by low-viscosity silicone oil with an immersed MEMS capacitive pressure sensor and integrated electronic system to detect a down-scaled vessel blood pressure waveform with a scaling factor of approximately 0.1. The integrated electronic system, consisting of a capacitance-to-voltage converter, an 11-bit ADC, an adaptive RF powering system, an oscillator-based 433 MHz FSK transmitter and digital control circuitry, is fabricated in a 1.5 ¿m CMOS process and dissipates a power of 300 ¿W. The packaged microsystem weighs 130 milligram and achieves a capacitive sensing resolution of 75 aF over 1 kHz bandwidth, equivalent to a pressure sensing resolution of 1 mmHg inside an animal vessel, with a dynamic range of 60 dB. Untethered laboratory animal in vivo evaluation demonstrates that the microsystem can capture real-time blood pressure information with a high fidelity under an adaptive RF powering and wireless data telemetry condition.",
"title": ""
},
{
"docid": "3ac1f139546b3675d191b2a3c7b18ba0",
"text": "We consider the question: what can be learnt by looking at and listening to a large number of unlabelled videos? There is a valuable, but so far untapped, source of information contained in the video itself – the correspondence between the visual and the audio streams, and we introduce a novel “Audio-Visual Correspondence” learning task that makes use of this. Training visual and audio networks from scratch, without any additional supervision other than the raw unconstrained videos themselves, is shown to successfully solve this task, and, more interestingly, result in good visual and audio representations. These features set the new state-of-the-art on two sound classification benchmarks, and perform on par with the state-of-the-art selfsupervised approaches on ImageNet classification. We also demonstrate that the network is able to localize objects in both modalities, as well as perform fine-grained recognition tasks.",
"title": ""
}
] |
scidocsrr
|
6bcc9572787b80f0f3422e02d6d5bcd3
|
Advancing Software-Defined Networks: A Survey
|
[
{
"docid": "e93c5395f350d44b59f549a29e65d75c",
"text": "Software Defined Networking (SDN) is an exciting technology that enables innovation in how we design and manage networks. Although this technology seems to have appeared suddenly, SDN is part of a long history of efforts to make computer networks more programmable. In this paper, we trace the intellectual history of programmable networks, including active networks, early efforts to separate the control and data plane, and more recent work on OpenFlow and network operating systems. We highlight key concepts, as well as the technology pushes and application pulls that spurred each innovation. Along the way, we debunk common myths and misconceptions about the technologies and clarify the relationship between SDN and related technologies such as network virtualization.",
"title": ""
},
{
"docid": "bbd219f59ab4211a387cb7a721c797c8",
"text": "Wireless network virtualization and information-centric networking (ICN) are two promising techniques in software-defined 5G mobile wireless networks. Traditionally, these two technologies have been addressed separately. In this paper we show that integrating wireless network virtualization with ICN techniques can significantly improve the end-to-end network performance. In particular, we propose an information- centric wireless network virtualization architecture for integrating wireless network virtualization with ICN. We develop the key components of this architecture: radio spectrum resource, wireless network infrastructure, virtual resources (including content-level slicing, network-level slicing, and flow-level slicing), and informationcentric wireless virtualization controller. Then we formulate the virtual resource allocation and in-network caching strategy as an optimization problem, considering the gain of not only virtualization but also in-network caching in our proposed information-centric wireless network virtualization architecture. The obtained simulation results show that our proposed information-centric wireless network virtualization architecture and the related schemes significantly outperform the other existing schemes.",
"title": ""
}
] |
[
{
"docid": "6fab2f7c340b6edbffe30b061bcd991e",
"text": "A Majority-Inverter Graph (MIG) is a recently introduced logic representation form whose algebraic and Boolean properties allow for efficient logic optimization. In particular, when considering logic depth reduction, MIG algorithms obtained significantly superior synthesis results as compared to the state-of-the-art approaches based on AND-inverter graphs and commercial tools. In this paper, we present a new MIG optimization algorithm targeting size minimization based on functional hashing. The proposed algorithm makes use of minimum MIG representations which are precomputed for functions up to 4 variables using an approach based on Satisfiability Modulo Theories (SMT). Experimental results show that heavily-optimized MIGs can be further minimized also in size, thanks to our proposed methodology. When using the optimized MIGs as starting point for technology mapping, we were able to improve both depth and area for the arithmetic instances of the EPFL benchmarks beyond the current results achievable by state-of-the-art logic synthesis algorithms.",
"title": ""
},
{
"docid": "9ec8a4b8e052b352775b5f6fb98ff914",
"text": "For most of the existing commercial driver assistance systems the use of a single environmental sensor and a tracking model tied to the characteristics of this sensor is sufficient. When using a multi-sensor fusion approach with heterogeneous sensors the information available for tracking depends on the sensors detecting the object. This paper describes an approach where multiple models are used for tracking moving objects. The best model for tracking is chosen based on the available sensor information. The architecture of the tracking system along with the tracking models and algorithm for model selection are presented. The design of the architecture and algorithms allows an extension of the system with new sensors and tracking models without changing existing software. The approach was implemented and successfully used in Tartan Racing’s autonomous vehicle for the Urban Grand Challenge. The advantages of the multisensor approach are explained and practical results of a representative scenario are presented.",
"title": ""
},
{
"docid": "8ff810556a5c5d2da7d446dc78cdf93d",
"text": "Panoramic stitching technology is the focus of current panoramic technology, and cylindrical panoramas are commonly used because of their ease of capture and storage. Moreover it is a simple method for constructing panoramic video. This paper presents a cylindrical panoramic generation method based on multi-cameras. First, we use a backward-division model, which can rapidly solve the distortion of fish eye lens. Second, in order to maintain consistency of stitching, use the cylindrical projection. Third, we apply SIFT feature detection method to image mosaic process. To ensure the accuracy of matching, we use the RANSAC algorithm to purify the detected feature points. In addition, we use an image fusion method based on Lapalace pyramid. The original images captured by fisheye multi-cameras device are processed according to the procedure described in this paper, and then the cylindrical panoramic image is obtained.",
"title": ""
},
{
"docid": "4fa688e986d177771c5992262cf342b5",
"text": "The TIPSTER Text Summarization Evaluation (SUMMAC) has developed several new extrinsic and intrinsic methods for evaluating summaries. It has established definitively that automatic text summarization is very effective in relevance assessment tasks on news articles. Summaries as short as 17% of full text length sped up decision-making by almost a factor of 2 with no statistically significant degradation in accuracy. Analysis of feedback forms filled in after each decision indicated that the intelligibility of present-day machine-generated summaries is high. Systems that performed most accurately in the production of indicative and informative topic-related summaries used term frequency and co-occurrence statistics, and vocabulary overlap comparisons between text passages. However, in the absence of a topic, these statistical methods do not appear to provide any additional leverage: in the case of generic summaries, the systems were indistinguishable in accuracy. The paper discusses some of the tradeoffs and challenges faced by the evaluation, and also lists some of the lessons learned, impacts, and possible future directions. The evaluation methods used in the SUMMAC evaluation are of interest to both summarization evaluation as well as evaluation of other 'output-related' NLP technologies, where there may be many potentially acceptable outputs, with no automatic way to compare them.",
"title": ""
},
{
"docid": "08dcf41de314afe40b4430132be40380",
"text": "Robust speech recognition in everyday conditions requires the solution to a number of challenging problems, not least the ability to handle multiple sound sources. The specific case of speech recognition in the presence of a competing talker has been studied for several decades, resulting in a number of quite distinct algorithmic solutions whose focus ranges from modeling both target and competing speech to speech separation using auditory grouping principles. The purpose of the monaural speech separation and recognition challenge was to permit a large-scale comparison of techniques for the competing talker problem. The task was to identify keywords in sentences spoken by a target talker when mixed into a single channel with a background talker speaking similar sentences. Ten independent sets of results were contributed, alongside a baseline recognition system. Performance was evaluated using common training and test data and common metrics. Listeners’ performance in the same task was also measured. This paper describes the challenge problem, compares the performance of the contributed algorithms, and discusses the factors which distinguish the systems. One highlight of the comparison was the finding that several systems achieved near-human performance in some conditions, and one out-performed listeners overall. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8e18fa3850177d016a85249555621723",
"text": "Obstacle fusion algorithms usually perform obstacle association and gating in order to improve the obstacle position if it was detected by multiple sensors. However, this strategy is not common in multi sensor occupancy grid fusion. Thus, the quality of the fused grid, in terms of obstacle position accuracy, largely depends on the sensor with the lowest accuracy. In this paper an efficient method to associate obstacles across sensor grids is proposed. Imprecise sensors are discounted locally in cells where a more accurate sensor, that detected the same obstacle, derived free space. Furthermore, fixed discount factors to optimize false negative and false positive rates are used. Because of its generic formulation with the covariance of each sensor grid, the method is scalable to any sensor setup. The quantitative evaluation with a highly precise navigation map shows an increased obstacle position accuracy compared to standard evidential occupancy grid fusion.",
"title": ""
},
{
"docid": "77951641fea1115aae1bafcd589dfb7e",
"text": "We provide an overview of current approaches to DNA-based storage system design and of accompanying synthesis, sequencing and editing methods. We also introduce and analyze a suite of new constrained coding schemes for both archival and random access DNA storage channels. The analytic contribution of our work is the construction and design of sequences over discrete alphabets that avoid pre-specified address patterns, have balanced base content, and exhibit other relevant substring constraints. These schemes adapt the stored signals to the DNA medium and thereby reduce the inherent error-rate of the system.",
"title": ""
},
{
"docid": "b7521521277f944a9532dc4435a2bda7",
"text": "The NDN project investigates Jacobson's proposed evolution from today's host-centric network architecture (IP) to a data-centric network architecture (NDN). This conceptually simple shift has far-reaching implications in how we design, develop, deploy and use networks and applications. The NDN design and development has attracted significant attention from the networking community. To facilitate broader participation in addressing NDN research and development challenges, this tutorial will describe the vision of this new architecture and its basic components and operations.",
"title": ""
},
{
"docid": "db83931d7fef8174acdb3a1f4ef0d043",
"text": "Physical fatigue has been identified as a risk factor associated with the onset of occupational injury. Muscular fatigue developed from repetitive hand-gripping tasks is of particular concern. This study examined the use of a maximal, repetitive, static power grip test of strength-endurance in detecting differences in exertions between workers with uninjured and injured hands, and workers who were asked to provide insincere exertions. The main dependent variable of interest was power grip muscular force measured with a force strain gauge. Group data showed that the power grip protocol, used in this study, provided a valid and reliable estimate of wrist-hand strength-endurance. Force fatigue curves showed both linear and curvilinear effects among the study groups. An endurance index based on force decrement during repetitive power grip was shown to differentiate between uninjured, injured, and insincere groups.",
"title": ""
},
{
"docid": "30e22be2c7383e90a6fd16becc34a586",
"text": "SUMMARY\nThe etiology of age-related facial changes has many layers. Multiple theories have been presented over the past 50-100 years with an evolution of understanding regarding facial changes related to skin, soft tissue, muscle, and bone. This special topic will provide an overview of the current literature and evidence and theories of facial changes of the skeleton, soft tissues, and skin over time.",
"title": ""
},
{
"docid": "e5e3cbe942723ef8e3524baf56121bf5",
"text": "Requirements prioritization is recognized as an important activity in product development. In this paper, we describe the current state of requirements prioritization practices in two case companies and present the practical challenges involved. Our study showed that requirements prioritization is an ambiguous concept and current practices in the companies are informal. Requirements prioritization requires complex context-specific decision-making and must be performed iteratively in many phases during development work. Practitioners are seeking more systematic ways to prioritize requirements but they find it difficult to pay attention to all the relevant factors that have an effect on priorities and explicitly to draw different stakeholder views together. In addition, practitioners need more information about real customer preferences.",
"title": ""
},
{
"docid": "c1317791c1f1aa1de90b3be47ab036a1",
"text": "Although injuries to the posterolateral corner of the knee were previously considered to be a rare condition, they have been shown to be present in almost 16% of all knee injuries and are responsible for sustained instability and failure of concomitant reconstructions if not properly recognized. Although also once considered to be the \"dark side of the knee\", increased knowledge of the posterolateral corner anatomy and biomechanics has led to improved diagnostic ability with better understanding of physical and imaging examinations. The management of posterolateral corner injuries has also evolved and good outcomes have been reported after operative treatment following anatomical reconstruction principles.",
"title": ""
},
{
"docid": "e4694f9cdbc8756398e5996b9cd78989",
"text": "In this paper, a 3D computer vision system for cognitive assessment and rehabilitation based on the Kinect device is presented. It is intended for individuals with body scheme dysfunctions and left-right confusion. The system processes depth information to overcome the shortcomings of a previously presented 2D vision system for the same application. It achieves left and right-hand tracking, and face and facial feature detection (eye, nose, and ears) detection. The system is easily implemented with a consumer-grade computer and an affordable Kinect device and is robust to drastic background and illumination changes. The system was tested and achieved a successful monitoring percentage of 96.28%. The automation of the human body parts motion monitoring, its analysis in relation to the psychomotor exercise indicated to the patient, and the storage of the result of the realization of a set of exercises free the rehabilitation experts of doing such demanding tasks. The vision-based system is potentially applicable to other tasks with minor changes.",
"title": ""
},
{
"docid": "52e148ddb2d1448bca390b896ac4eb1f",
"text": "This study looked at Human Capital Investment and Economic Growth in Nigeria – the Role of Education. Even though there are different perspectives to economic growth, there is a general consensus that growth will lead to a good change manifested in increased capacity of people to have control over material assets, intellectual resources and ideology, and obtain physical necessities of life like food, clothing, shelter, employment, e.t.c. This is why some people have argued that the purpose of growth is to improve peoples’ lives by expanding their choices, freedom and dignity. The belief in human capital as a necessity for growth started in Nigeria during the implementation of the 1955-60 Development Plan and today, with the importance of knowledge in the economy, human capital has increasingly attracted both academic and public interest. This study made use of the Unit Root and Augmented Dickey Fuller (ADF) tests and found out that a positive relationship exists between government expenditure on education and economic growth while a negative relationship exists between government expenditure on health and economic growth. Therefore, based on these findings, the study recommended that the Government should increase not just the amount of expenditure made on the education and health sectors, but also the percentage of its total expenditure accorded to these sectors. The ten percent benchmark proffered by the present national plan should be adopted.",
"title": ""
},
{
"docid": "e9e2887e7aae5315a8661c9d7456aa2e",
"text": "It has been shown that learning distributed word representations is highly useful for Twitter sentiment classification. Most existing models rely on a single distributed representation for each word. This is problematic for sentiment classification because words are often polysemous and each word can contain different sentiment polarities under different topics. We address this issue by learning topic-enriched multi-prototype word embeddings (TMWE). In particular, we develop two neural networks which 1) learn word embeddings that better capture tweet context by incorporating topic information, and 2) learn topic-enriched multiple prototype embeddings for each word. Experiments on Twitter sentiment benchmark datasets in SemEval 2013 show that TMWE outperforms the top system with hand-crafted features, and the current best neural network model.",
"title": ""
},
{
"docid": "7adb0a3079fb3b64f7a503bd8eae623e",
"text": "Attack trees have found their way to practice because they have proved to be an intuitive aid in threat analysis. Despite, or perhaps thanks to, their apparent simplicity, they have not yet been provided with an unambiguous semantics. We argue that such a formal interpretation is indispensable to precisely understand how attack trees can be manipulated during construction and analysis. We provide a denotational semantics, based on a mapping to attack suites, which abstracts from the internal structure of an attack tree, we study transformations between attack trees, and we study the attribution and projection of an attack tree.",
"title": ""
},
{
"docid": "ebb43198da619d656c068f2ab1bfe47f",
"text": "Remote data integrity checking (RDIC) enables a server to prove to an auditor the integrity of a stored file. It is a useful technology for remote storage such as cloud storage. The auditor could be a party other than the data owner; hence, an RDIC proof is based usually on publicly available information. To capture the need of data privacy against an untrusted auditor, Hao et al. formally defined “privacy against third party verifiers” as one of the security requirements and proposed a protocol satisfying this definition. However, we observe that all existing protocols with public verifiability supporting data update, including Hao et al.’s proposal, require the data owner to publish some meta-data related to the stored data. We show that the auditor can tell whether or not a client has stored a specific file and link various parts of those files based solely on the published meta-data in Hao et al.’s protocol. In other words, the notion “privacy against third party verifiers” is not sufficient in protecting data privacy, and hence, we introduce “zero-knowledge privacy” to ensure the third party verifier learns nothing about the client’s data from all available information. We enhance the privacy of Hao et al.’s protocol, develop a prototype to evaluate the performance and perform experiment to demonstrate the practicality of our proposal.",
"title": ""
},
{
"docid": "ac1a7abbf9101e24ea49649a8eedd46a",
"text": "issues that involves very large numbers of heterogeneous agents in the hostile environment. The intention of the RoboCup Rescue project is to promote research and development in this socially significant domain at various levels, involving multiagent teamwork coordination, physical agents for search and rescue, information infrastructures, personal digital assistants, a standard simulator and decision-support systems, evaluation benchmarks for rescue strategies, and robotic systems that are all integrated into a comprehensive system in the future. For this effort, which was built on the success of the RoboCup Soccer project, we will provide forums of technical discussions and competitive evaluations for researchers and practitioners. Although the rescue domain is intuitively appealing as a large-scale multiagent and intelligent system domain, analysis has not yet revealed its domain characteristics. The first research evaluation meeting will be held at RoboCup-2001, in conjunction with the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI-2001), as part of the RoboCup Rescue Simulation League and RoboCup/AAAI Rescue Robot Competition. In this article, we present a detailed analysis of the task domain and elucidate characteristics necessary for multiagent and intelligent systems for this domain. Then, we present an overview of the RoboCup Rescue project.",
"title": ""
}
] |
scidocsrr
|
024700bfa86c0117953acb56cc5cc266
|
Scatter/Gather Clustering: Flexibly Incorporating User Feedback to Steer Clustering Results
|
[
{
"docid": "b7a4eec912eb32b3b50f1b19822c44a1",
"text": "Mining numerical data is a relatively difficult problem in data mining. Clustering is one of the techniques. We consider a database with numerical attributes, in which each transaction is viewed as a multi-dimensional vector. By studying the clusters formed by these vectors, we can discover certain behaviors hidden in the data. Traditional clustering algorithms find clusters in the full space of the data sets. This results in high dimensional clusters, which are poorly comprehensible to human. One important task in this setting is the ability to discover clusters embedded in the subspaces of a high-dimensional data set. This problem is known as subspace clustering. We follow the basic assumptions of previous work CLIQUE. It is found that the number of subspaces with clustering is very large, and a criterion called the coverage is proposed in CLIQUE for the pruning. In addition to coverage, we identify new useful criteria for this problem and propose an entropybased algorithm called ENCLUS to handle the criteria. Our major contributions are: (1) identify new meaningful criteria of high density and correlation of dimensions for goodness of clustering in subspaces, (2) introduce the use of entropy and provide evidence to support its use, (3) make use of two closure properties based on entropy to prune away uninteresting subspaces efficiently, (4) propose a mechanism to mine non-minimally correlated subspaces which are of interest because of strong clustering, (5) experiments are carried out to show the effectiveness of the proposed method.",
"title": ""
}
] |
[
{
"docid": "af956aac653d1da6c7cf658640ab82a8",
"text": "In this study, we successfully developed a high signal-to-noise ratio (SNR) rangefinder based on a piezoelectric micromachined ultrasonic transducer (pMUT). A monocrystalline Pb(Mn<inf>1/3</inf>, Nb<inf>2</inf>/3)O<inf>3</inf>-Pb(Zr, Ti)O<inf>3</inf> (PMnN-PZT) thin film was used because it has large figures-of-merit (FOM) for SNR due to its high piezoelectric coefficient and small relative permittivity (typical values: e<inf>31,f</inf> = −14 C/m<sup>2</sup>, ε<inf>r</inf> = 200∼300). The rangefinding ability of the monocrystalline PMnN-PZT pMUT was evaluated using a pair of the devices as transmitter and receiver. The maximum range was estimated to be over 2 m at a low actuating voltage of 1 V<inf>p-p</inf>, when 12 dB was set as the threshold SNR for reliable rangefinding. The energy consumption of the transmitter was as small as ∼55 pJ for the generation of an ultrasonic burst. This performance is suitable for rangefinding applications in consumer electronics.",
"title": ""
},
{
"docid": "250fe1b4b9cb3ea8efc8e7b039dcba45",
"text": "In this paper we present a WebVRGIS based Interactive On line 3D Virtual Community which is achieved based on WebGIS technology and web VR technology. It is Multi-Dimensional(MD) web geographic information system (WebGIS) based 3D interactive on line virtual community which is a virtual real-time 3D communication systems and web systems development platform. It is capable of running on a variety of browsers. In this work, four key issues are studied: (1) Multi-source MD geographical data fusion of the WebGIS, (2) scene combination with 3D avatar, (3) massive data network dispatch, and (4) multi-user avatar real-time interactive. Our system is divided into three modules: data preprocessing, background management and front end user interaction. The core of the front interaction module is packaged in the MD map expression engine 3GWebMapper and the free plug-in network 3D rendering engine WebFlashVR. We have evaluated the robustness of our system on three campus of Ocean University of China(OUC) as a testing base. The results shows high efficiency, easy to use and robustness of our system.",
"title": ""
},
{
"docid": "f1fcc04fdc1a8c45b0ef670328c3e98e",
"text": "T digital divide has loomed as a public policy issue for over a decade. Yet, a theoretical account for the effects of the digital divide is currently lacking. This study examines three levels of the digital divide. The digital access divide (the first-level digital divide) is the inequality of access to information technology (IT) in homes and schools. The digital capability divide (the second-level digital divide) is the inequality of the capability to exploit IT arising from the first-level digital divide and other contextual factors. The digital outcome divide (the third-level digital divide) is the inequality of outcomes (e.g., learning and productivity) of exploiting IT arising from the second-level digital divide and other contextual factors. Drawing on social cognitive theory and computer self-efficacy literature, we developed a model to show how the digital access divide affects the digital capability divide and the digital outcome divide among students. The digital access divide focuses on computer ownership and usage in homes and schools. The digital capability divide and the digital outcome divide focus on computer self-efficacy and learning outcomes, respectively. This model was tested using data collected from over 4,000 students in Singapore. The results generate insights into the relationships among the three levels of the digital divide and provide a theoretical account for the effects of the digital divide. While school computing environments help to increase computer self-efficacy for all students, these factors do not eliminate knowledge the gap between students with and without home computers. Implications for theory and practice are discussed.",
"title": ""
},
{
"docid": "8b3a1137c44932bbb0e9315f04565dfd",
"text": "Many complex and interesting spatiotemporal patterns have been observed in a wide range of scientific areas. In this paper, two kinds of spatiotemporal patterns including spot replication and Turing systems are investigated and new identification methods are proposed to obtain Coupled Map Lattice (CML) models for this class of systems. Initially, a new correlation analysis method is introduced to determine an appropriate temporal and spatial data sampling step procedure for the identification of spatiotemporal systems. A new combined Orthogonal Forward Regression and Bayesian Learning algorithm with Laplace priors is introduced to identify sparse and robust CML models for complex spatiotemporal patterns. The final identified CML models are validated using correlation based model validation tests for spatiotemporal systems. Numerical results illustrate the identification procedure and demonstrate the validity of the identified models.",
"title": ""
},
{
"docid": "9043a5aae40471cb9f671a33725b0072",
"text": "In a software development group of IBM Retail Store Solutions, we built a non-trivial software system based on a stable standard specification using a disciplined, rigorous unit testing and build approach based on the test- driven development (TDD) practice. Using this practice, we reduced our defect rate by about 50 percent compared to a similar system that was built using an ad-hoc unit testing approach. The project completed on time with minimal development productivity impact. Additionally, the suite of automated unit test cases created via TDD is a reusable and extendable asset that will continue to improve quality over the lifetime of the software system. The test suite will be the basis for quality checks and will serve as a quality contract between all members of the team.",
"title": ""
},
{
"docid": "6960f6c70ffa1ea8325a54cd73b60cde",
"text": "The CUDA programming model provides a straightforward means of describing inherently parallel computations, and NVIDIA's Tesla GPU architecture delivers high computational throughput on massively parallel problems. This article surveys experiences gained in applying CUDA to a diverse set of problems and the parallel speedups over sequential codes running on traditional CPU architectures attained by executing key computations on the GPU.",
"title": ""
},
{
"docid": "ffeb8ab86966a7ac9b8c66bdec7bfc32",
"text": "Electrophysiological connectivity patterns in cortex often have a few strong connections, which are sometimes bidirectional, among a lot of weak connections. To explain these connectivity patterns, we created a model of spike timing–dependent plasticity (STDP) in which synaptic changes depend on presynaptic spike arrival and the postsynaptic membrane potential, filtered with two different time constants. Our model describes several nonlinear effects that are observed in STDP experiments, as well as the voltage dependence of plasticity. We found that, in a simulated recurrent network of spiking neurons, our plasticity rule led not only to development of localized receptive fields but also to connectivity patterns that reflect the neural code. For temporal coding procedures with spatio-temporal input correlations, strong connections were predominantly unidirectional, whereas they were bidirectional under rate-coded input with spatial correlations only. Thus, variable connectivity patterns in the brain could reflect different coding principles across brain areas; moreover, our simulations suggested that plasticity is fast.",
"title": ""
},
{
"docid": "6a04e07937d1c5beef84acb0a4e0e328",
"text": "Linear hashing and spiral storage are two dynamic hashing schemes originally designed for external files. This paper shows how to adapt these two methods for hash tables stored in main memory. The necessary data structures and algorithms are described, the expected performance is analyzed mathematically, and actual execution times are obtained and compared with alternative techniques. Linear hashing is found to be both faster and easier to implement than spiral storage. Two alternative techniques are considered: a simple unbalanced binary tree and double hashing with periodic rehashing into a larger table. The retrieval time of linear hashing is similar to double hashing and substantially faster than a binary tree, except for very small trees. The loading times of double hashing (with periodic reorganization), a binary tree, and linear hashing are similar. Overall, linear hashing is a simple and efficient technique for applications where the cardinality of the key set is not known in advance.",
"title": ""
},
{
"docid": "68abef37fe49bb675d7a2ce22f7bf3a7",
"text": "Objective: The case for exercise and health has primarily been made on its impact on diseases such coronary heart disease, obesity and diabetes. However, there is a very high cost attributed to mental disorders and illness and in the last 15 years there has been increasing research into the role of exercise a) in the treatment of mental health, and b) in improving mental well-being in the general population. There are now several hundred studies and over 30 narrative or meta-analytic reviews of research in this field. These have summarised the potential for exercise as a therapy for clinical or subclinical depression or anxiety, and the use of physical activity as a means of upgrading life quality through enhanced self-esteem, improved mood states, reduced state and trait anxiety, resilience to stress, or improved sleep. The purpose of this paper is to a) provide an updated view of this literature within the context of public health promotion and b) investigate evidence for physical activity and dietary interactions affecting mental well-being. Design: Narrative review and summary. Conclusions: Sufficient evidence now exists for the effectiveness of exercise in the treatment of clinical depression. Additionally, exercise has a moderate reducing effect on state and trait anxiety and can improve physical self-perceptions and in some cases global self-esteem. Also there is now good evidence that aerobic and resistance exercise enhances mood states, and weaker evidence that exercise can improve cognitive function (primarily assessed by reaction time) in older adults. Conversely, there is little evidence to suggest that exercise addiction is identifiable in no more than a very small percentage of exercisers. Together, this body of research suggests that moderate regular exercise should be considered as a viable means of treating depression and anxiety and improving mental well-being in the general public.",
"title": ""
},
{
"docid": "224defa4906e121e42218f17c6efa4f2",
"text": "This paper presents a particular model of heuristic search as a path-finding problem in a directed graph. A class of graph-searching procedures is described which uses a heuristic function to guide search. Heuristic functions are estimates of the number o f edges that remain to be traversed in reaching a goal node. A number of theoretical results for this model, and the intuition for these results, are presented. They relate the e])~ciency o f search to the accuracy o f the heuristic function. The results also explore efficiency as a consequence of the reliance or weight placed on the heuristics used.",
"title": ""
},
{
"docid": "74de053230e7b96ee4e1aee844813723",
"text": "OBJECTIVE\nTo investigate the immediate effects of Kinesio Taping® (KT) on sit-to-stand (STS) movement, balance and dynamic postural control in children with cerebral palsy (CP).\n\n\nMETHODS\nFour children diagnosed with left hemiplegic CP level I by the Gross Motor Function Classification System were evaluated under conditions without taping as control condition (CC); and with KT as kinesio condition. A motion analysis system was used to measure total duration of STS movement and angular movements of each joint. Clinical instruments such as Pediatric Balance Scale (PBS) and Timed up and Go (TUG) were also applied.\n\n\nRESULTS\nCompared to CC, decreased total duration of STS, lower peak ankle flexion, higher knee extension at the end of STS, and decreased total time in TUG; but no differences were obtained on PBS score in KT.\n\n\nCONCLUSION\nNeuromuscular taping seems to be beneficial on dynamic activities, but not have the same performance in predominantly static activities studied.",
"title": ""
},
{
"docid": "7b64650fc5eb117ddf5a2611a5964cab",
"text": "Recent studies have provided long-sought evidence that behavioural learning involves specific synapse gain and elimination processes, which lead to memory traces that influence behaviour. The connectivity rearrangements are preceded by enhanced synapse turnover, which can be modulated through changes in inhibitory connectivity. Behaviourally related synapse rearrangement events tend to co-occur spatially within short stretches of dendrites, and involve signalling pathways partially overlapping with those controlling the functional plasticity of synapses. The new findings suggest that a mechanistic understanding of learning and memory processes will require monitoring ensembles of synapses in situ and the development of synaptic network models that combine changes in synaptic function and connectivity.",
"title": ""
},
{
"docid": "1b78fd9e2d90393ee877c49f582d23ee",
"text": "Many “big data” applications need to act on data arriving in real time. However, current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of state across the system and fault recovery. Furthermore, the models that provide fault recovery do so in an expensive manner, requiring either hot replication or long recovery times. We propose a new programming model, discretized streams (D-Streams), that offers a high-level functional API, strong consistency, and efficient fault recovery. D-Streams support a new recovery mechanism that improves efficiency over the traditional replication and upstream backup schemes in streaming databases— parallel recovery of lost state—and unlike previous systems, also mitigate stragglers. We implement D-Streams as an extension to the Spark cluster computing engine that lets users seamlessly intermix streaming, batch and interactive queries. Our system can process over 60 million records/second at sub-second latency on 100 nodes.",
"title": ""
},
{
"docid": "8091d32fd96df9fed309ff6f7d1579d9",
"text": "The dynamics of neural networks is influenced strongly by the spectrum of eigenvalues of the matrix describing their synaptic connectivity. In large networks, elements of the synaptic connectivity matrix can be chosen randomly from appropriate distributions, making results from random matrix theory highly relevant. Unfortunately, classic results on the eigenvalue spectra of random matrices do not apply to synaptic connectivity matrices because of the constraint that individual neurons are either excitatory or inhibitory. Therefore, we compute eigenvalue spectra of large random matrices with excitatory and inhibitory columns drawn from distributions with different means and equal or different variances.",
"title": ""
},
{
"docid": "b12bae586bc49a12cebf11cca49c0386",
"text": "Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations—small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93% ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.",
"title": ""
},
{
"docid": "5542f4693a4251edcf995e7608fbda56",
"text": "This paper investigates the antecedents and consequences of customer loyalty in an online business-to-consumer (B2C) context. We identify eight factors (the 8Cs—customization, contact interactivity, care, community, convenience, cultivation, choice, and character) that potentially impact e-loyalty and develop scales to measure these factors. Data collected from 1,211 online customers demonstrate that all these factors, except convenience, impact e-loyalty. The data also reveal that e-loyalty has an impact on two customer-related outcomes: word-ofmouth promotion and willingness to pay more. © 2002 by New York University. All rights reserved.",
"title": ""
},
{
"docid": "adfbf14b5fd7ecf9f62870e855bef051",
"text": "The growth of fingerprint databases creates a need for strategies to reduce the identification time. Fingerprint classification reduces the search penetration rate by grouping the fingerprints into several classes. Typically, features describing the visual patterns of a fingerprint are extracted and fed to a classifier. The extraction can be time-consuming and error-prone, especially for fingerprints whose visual classification is dubious, and often includes a criterion to reject ambiguous fingerprints. In this paper, we propose to improve on this manually designed process by using deep neural networks, which extract implicit features directly from the images and perform the classification within a single learning process. An extensive experimental study assesses that convolutional neural networks outperform all other tested approaches by achieving a very high accuracy with no rejection. Moreover, multiple copies of the same fingerprint are consistently classified. The runtime of convolutional networks is also lower than that of combining feature extraction procedures with classification algorithms.",
"title": ""
},
{
"docid": "5b17c5637af104b1f20ff1ca9ce9c700",
"text": "According to the traditional understanding of cerebrospinal fluid (CSF) physiology, the majority of CSF is produced by the choroid plexus, circulates through the ventricles, the cisterns, and the subarachnoid space to be absorbed into the blood by the arachnoid villi. This review surveys key developments leading to the traditional concept. Challenging this concept are novel insights utilizing molecular and cellular biology as well as neuroimaging, which indicate that CSF physiology may be much more complex than previously believed. The CSF circulation comprises not only a directed flow of CSF, but in addition a pulsatile to and fro movement throughout the entire brain with local fluid exchange between blood, interstitial fluid, and CSF. Astrocytes, aquaporins, and other membrane transporters are key elements in brain water and CSF homeostasis. A continuous bidirectional fluid exchange at the blood brain barrier produces flow rates, which exceed the choroidal CSF production rate by far. The CSF circulation around blood vessels penetrating from the subarachnoid space into the Virchow Robin spaces provides both a drainage pathway for the clearance of waste molecules from the brain and a site for the interaction of the systemic immune system with that of the brain. Important physiological functions, for example the regeneration of the brain during sleep, may depend on CSF circulation.",
"title": ""
},
{
"docid": "f4f2f6e7801bed3331eb5c162d9edcfa",
"text": "This paper presents the design method for a compact dual-band bandpass filter with a large ratio of center frequencies. Emphasis is placed on circuit synthesis for simultaneously matching the in-band responses at the two designated passbands. In an interdigital configuration, a 0.9/5.8 GHz bandpass filter is designed and fabricated. It is believed that this is the filter has the largest ratio of the center frequencies in comparison with those in open literature. The measured responses show good agreement with the simulation results.",
"title": ""
},
{
"docid": "228678ad5d18d21d4bc7c1819329274f",
"text": "Intentional frequency perturbation by recently researched active islanding detection techniques for inverter based distributed generation (DG) define new threshold settings for the frequency relays. This innovation has enabled the modern frequency relays to operate inside the non-detection zone (NDZ) of the conventional frequency relays. However, the effect of such perturbation on the performance of the rate of change of frequency (ROCOF) relays has not been researched so far. This paper evaluates the performance of ROCOF relays under such perturbations for an inverter interfaced DG and proposes an algorithm along with the new threshold settings to enable it work under the NDZ. The proposed algorithm is able to differentiate between an islanding and a non-islanding event. The operating principle of relay is based on low frequency current injection through grid side voltage source converter (VSC) control of doubly fed induction generator (DFIG) and therefore, the relay is defined as “active ROCOF relay”. Simulations are done in MATLAB.",
"title": ""
}
] |
scidocsrr
|
f781e26add873ec2f58316feec41fdf8
|
A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks
|
[
{
"docid": "c2845a8a4f6c2467c7cd3a1a95a0ca37",
"text": "In this report I introduce ReSuMe a new supervised learning method for Spiking Neural Networks. The research on ReSuMe has been primarily motivated by the need of inventing an efficient learni ng method for control of movement for the physically disabled. Howeve r, thorough analysis of the ReSuMe method reveals its suitability not on ly to the task of movement control, but also to other real-life applicatio ns including modeling, identification and control of diverse non-statio nary, nonlinear objects. ReSuMe integrates the idea of learning windows, known from t he spikebased Hebbian rules, with a novel concept of remote supervis ion. General overview of the method, the basic definitions, the netwo rk architecture and the details of the learning algorithm are presented . The properties of ReSuMe such as locality, computational simplicity a nd the online processing suitability are discussed. ReSuMe learning abi lities are illustrated in a verification experiment.",
"title": ""
}
] |
[
{
"docid": "3144f076574e5e67a6c69862cc8e2063",
"text": "As the number of alerts generated by collaborative applications grows, users receive more unwanted alerts. FeedMe is a general alert management system based on XML feed protocols such as RSS and ATOM. In addition to traditional rule-based alert filtering, FeedMe uses techniques from machine-learning to infer alert preferences based on user feedback. In this paper, we present and evaluate a new collaborative naïve Bayes filtering algorithm. Using FeedMe, we collected alert ratings from 33 users over 29 days. We used the data to design and verify the accuracy of the filtering algorithm and provide insights into alert prediction.",
"title": ""
},
{
"docid": "5172a41cd749c7b2f6eed3a7e25969dd",
"text": "Missing values in inputs, outputs cannot be handled by the original data envelopment analysis (DEA) models. In this paper we introduce an approach based on interval DEA that allows the evaluation of the units with missing values along with the other units with available crisp data. The missing values are replaced by intervals in which the unknown values are likely to belong. The constant bounds of the intervals, depending on the application, can be estimated by using statistical or experiential techniques. For the units with missing values, the proposed models are able to identify an upper and a lower bound of their efficiency scores. The efficiency analysis is further extended by estimating new values for the initial interval bounds that may turn the unit to an efficient one. The proposed methodology is illustrated by an application which evaluates the efficiency of a set of secondary public schools in Greece, a number of which appears to have missing values in some inputs and outputs. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "6b8281957b0fd7e9ff88f64b8b6462aa",
"text": "As Critical National Infrastructures are becoming more vulnerable to cyber attacks, their protection becomes a significant issue for any organization as well as a nation. Moreover, the ability to attribute is a vital element of avoiding impunity in cyberspace. In this article, we present main threats to critical infrastructures along with protective measures that one nation can take, and which are classified according to legal, technical, organizational, capacity building, and cooperation aspects. Finally we provide an overview of current methods and practices regarding cyber attribution and cyber peace keeping.",
"title": ""
},
{
"docid": "49218bcad26390909d0309bc7e04c780",
"text": "Credit card fraud costs consumers and the financial industry billions of dollars annually. However, there is a dearth of published literature on credit card fraud detection. In this study we employed transaction aggregation strategy to detect credit card fraud. We aggregated transactions to capture consumer buying behavior prior to each transaction and used these aggregations for model estimation to identify fraudulent transactions. We use real-life data of credit card transactions from an international credit card operation for transaction aggregation and model estimation. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3da4b3ec70a371b4748e552a5752305c",
"text": "In big cities, taxi service is imbalanced. In some areas, passengers wait too long for a taxi, while in others, many taxis roam without passengers. Knowledge of where a taxi will become available can help us solve the taxi demand imbalance problem. In this paper, we employ a holistic approach to predict taxi demand at high spatial resolution. We showcase our techniques using two real-world data sets, yellow cabs and Uber trips in New York City, and perform an evaluation over 9,940 building blocks in Manhattan. Our approach consists of two key steps. First, we use entropy and the temporal correlation of human mobility to measure the demand uncertainty at the building block level. Second, to identify which predictive algorithm can approach the theoretical maximum predictability, we implement and compare three predictors: the Markov predictor (a probability-based predictive algorithm), the Lempel-Ziv-Welch predictor (a sequence-based predictive algorithm), and the Neural Network predictor (a predictive algorithm that uses machine learning). The results show that predictability varies by building block and, on average, the theoretical maximum predictability can be as high as 83%. The performance of the predictors also vary: the Neural Network predictor provides better accuracy for blocks with low predictability, and the Markov predictor provides better accuracy for blocks with high predictability. In blocks with high maximum predictability, the Markov predictor is able to predict the taxi demand with an 89% accuracy, 11% better than the Neural Network predictor, while requiring only 0.03% computation time. These findings indicate that the maximum predictability can be a good metric for selecting prediction algorithms.",
"title": ""
},
{
"docid": "22c643e0a13c3510f0099ac61282fcfb",
"text": "We propose and study a novel panoptic segmentation (PS) task. Panoptic segmentation unifies the typically distinct tasks of semantic segmentation (assign a class label to each pixel) and instance segmentation (detect and segment each object instance). The proposed task requires generating a coherent scene segmentation that is rich and complete, an important step toward real-world vision systems. While early work in computer vision addressed related image/scene parsing tasks, these are not currently popular, possibly due to lack of appropriate metrics or associated recognition challenges. To address this, we first propose a novel panoptic quality (PQ) metric that captures performance for all classes (stuff and things) in an interpretable and unified manner. Using the proposed metric, we perform a rigorous study of both human and machine performance for PS on three existing datasets, revealing interesting insights about the task. Second, we are working to introduce panoptic segmentation tracks at upcoming recognition challenges. The aim of our work is to revive the interest of the community in a more unified view of image segmentation.",
"title": ""
},
{
"docid": "a8bfa82740973038b08bb03df0ad55dd",
"text": "This study tested predictions from W. Ickes and J. A. Simpson's (1997, 2001) empathic accuracy model. Married couples were videotaped as they tried to resolve a problem in their marriage. Both spouses then viewed a videotape of the interaction, recorded the thoughts and feelings they had at specific time points, and tried to infer their partner's thoughts and feelings. Consistent with the model, when the partner's thoughts and feelings were relationship-threatening (as rated by both the partners and by trained observers), greater empathic accuracy on the part of the perceiver was associated with pre-to-posttest declines in the perceiver's feelings of subjective closeness. The reverse was true when the partner's thoughts and feelings were nonthreatening. Exploratory analyses revealed that these effects were partially mediated through observer ratings of the degree to which partners tried to avoid the discussion issue.",
"title": ""
},
{
"docid": "210e26d5d11582be68337a0cc387ab8e",
"text": "This paper presents the results of experiments carried out with the goal of applying the machine learning techniques of reinforcement learning and neural networks with reinforcement learning to the game of Tetris. Tetris is a well-known computer game that can be played either by a single player or competitively with slight variations, toward the end of accumulating a high score or defeating the opponent. The fundamental hypothesis of this paper is that if the points earned in Tetris are used as the reward function for a machine learning agent, then that agent should be able to learn to play Tetris without other supervision. Toward this end, a state-space that summarizes the essential feature of the Tetris board is designed, high-level actions are developed to interact with the game, and agents are trained using Q-Learning and neural networks. As a result of these efforts, agents learn to play Tetris and to compete with other players. While the learning agents fail to accumulate as many points as the most advanced AI agents, they do learn to play more efficiently.",
"title": ""
},
{
"docid": "b4b500e4a59224162b7f1192c9d07d17",
"text": "The purpose of this paper is to highlight the costs, benefits, and externalities associated with organizations׳ use of big data. Specifically, it investigates how various inherent characteristics of big data are related to privacy, security and consumer welfare. The relation between characteristics of big data and privacy, security and consumer welfare issues are examined from the standpoints of data collection, storing, sharing and accessibility. The paper also discusses how privacy, security and welfare effects of big data are likely to vary across consumers of different levels of sophistication, vulnerability and technological savviness. Big data | Externalities | Privacy | Security | Personally identifiable information |",
"title": ""
},
{
"docid": "9c447f9a2b00a2e27433601fce4ab4ce",
"text": "The Hypertext Transfer Protocol (HTTP) has been widely adopted and deployed as the key protocol for video streaming over the Internet. One of the consequences of leveraging traditional HTTP for video streaming is the significantly increased request overhead due to the segmentation of the video content into HTTP resources. The overhead becomes even more significant when non-multiplexed video and audio segments are deployed. In this paper, we investigate and address the request overhead problem by employing the server push technology in the new HTTP 2.0 protocol. In particular, we develop a set of push strategies that actively deliver video and audio content from the HTTP server without requiring a request for each individual segment. We evaluate our approach in a Dynamic Adaptive Streaming over HTTP (DASH) streaming system. We show that the request overhead can be significantly reduced by using our push strategies. Also, we validate that the server push based approach is compatible with the existing HTTP streaming features, such as adaptive bitrate switching.",
"title": ""
},
{
"docid": "cc8ce41d7ae2bb0d92fa51cb26769aa1",
"text": "185 All Rights Reserved © 2012 IJARCET Abstract-With increasing amounts of data being generated by businesses and researchers there is a need for fast, accurate and robust algorithms for data analysis. Improvements in databases technology, computing performance and artificial intelligence have contributed to the development of intelligent data analysis. Support vector machines are a specific type of machine learning algorithm that are among the most widelyused for many statistical learning problems, such as spam filtering, text classification, handwriting analysis, face and object recognition, and countless others. Support vector machines have also come into widespread use in practically every area of bioinformatics within the last ten years, and their area of influence continues to expand today. The support vector machine has been developed as robust tool for classification and regression in noisy, complex domains. The two key features of support vector machines are generalization theory, which leads to a principled way to choose an hypothesis; and, kernel functions, which introduce nonlinearity in the hypothesis space without explicitly requiring a non-linear algorithm.",
"title": ""
},
{
"docid": "5efc720f54c94dffc52390d9d5eb7d3f",
"text": "Software-Defined Networking (SDN) is an emerging technology which brings flexibility and programmability to networks and introduces new services and features. However, most SDN architectures have been designed for wired infrastructures, especially in the data center space, and primary trends for wireless and mobile SDN are on the access network and the wireless backhaul. In this paper, we propose several designs for SDN-based Mobile Cloud architectures, focusing on Ad hoc networks. We present the required core components to build SDN-based Mobile Cloud, including variations that are required to accommodate different wireless environments, such as mobility and unreliable wireless link conditions. We also introduce several instances of the proposed architectures based on frequency selection of wireless transmission that are designed around different use cases of SDN-based Mobile Cloud. We demonstrate the feasibility of our architecture by implementing SDN-based routing in the mobile cloud and comparing it with traditional Mobile Ad Hoc Network (MANET) routing. The feasibility of our architecture is shown by achieving high packet delivery ratio with acceptable overhead.",
"title": ""
},
{
"docid": "b51f3871cf5354c23e5ffd18881fe951",
"text": "As the Internet grows in importance, concerns about online privacy have arisen. We describe the development and validation of three short Internet-administered scales measuring privacy related attitudes ('Privacy Concern') and behaviors ('General Caution' and 'Technical Protection'). Internet Privacy Scales 1 In Press: Journal of the American Society for Information Science and Technology UNCORRECTED proofs. This is a preprint of an article accepted for publication in Journal of the American Society for Information Science and Technology copyright 2006 Wiley Periodicals, Inc. Running Head: INTERNET PRIVACY SCALES Development of measures of online privacy concern and protection for use on the",
"title": ""
},
{
"docid": "6498337f1ad2a5bdbc0e3f41363a6c06",
"text": "Due to their ability to navigate in 6 degree of freedom space, Unmanned Aerial Vehicles (UAVs) can access many locations that are inaccessible to ground vehicles. While mobile manipulation is an extremely active field of research for ground traveling host platforms, UAVs have historically been used for applications that avoid interaction with their environment at all costs. Recent efforts have been aimed at equipping UAVs with dexterous manipulators in an attempt to allow these Mobile Manipulating UAVs (MM-UAVs) to perform meaningful tasks such as infrastructure repair, disaster response, casualty extraction, and cargo resupply. Among many challenges associated with the successful manipulation of objects from a UAV host platform include: a) the manipulator's movements and interaction with objects negatively impact the host platform's stability and b) movements of the host platform, even when using highly accurate motion capture systems for position control, translate to poor end effector position control relative to fixed objects. To address these two problems, we propose the use of a hyper-redundant manipulator for MM-UAV applications. The benefits of such a manipulator are that it: a) can be controlled in such a way that links are moved within the arm's free space to help reduce negative impacts on the host platform's stability and b) the redundancy of the arm affords a highly reachable workspace for the end effector, allowing the end effector to track environmental objects smoothly despite host platform motions. This paper describes the design of a hyper-redundant manipulator suitable for studying its applicability to MM-UAV applications and provides preliminary results from its initial testing while mounted on a stationary scaffold.",
"title": ""
},
{
"docid": "ecf2b2d6a951d84aad15321f029fd014",
"text": "This paper reports the design principles and evaluation results of a new experimental hybrid intrusion detection system (HIDS). This hybrid system combines the advantages of low false-positive rate of signature-based intrusion detection system (IDS) and the ability of anomaly detection system (ADS) to detect novel unknown attacks. By mining anomalous traffic episodes from Internet connections, we build an ADS that detects anomalies beyond the capabilities of signature-based SNORT or Bro systems. A weighted signature generation scheme is developed to integrate ADS with SNORT by extracting signatures from anomalies detected. HIDS extracts signatures from the output of ADS and adds them into the SNORT signature database for fast and accurate intrusion detection. By testing our HIDS scheme over real-life Internet trace data mixed with 10 days of Massachusetts Institute of Technology/Lincoln Laboratory (MIT/LL) attack data set, our experimental results show a 60 percent detection rate of the HIDS, compared with 30 percent and 22 percent in using the SNORT and Bro systems, respectively. This sharp increase in detection rate is obtained with less than 3 percent false alarms. The signatures generated by ADS upgrade the SNORT performance by 33 percent. The HIDS approach proves the vitality of detecting intrusions and anomalies, simultaneously, by automated data mining and signature generation over Internet connection episodes",
"title": ""
},
{
"docid": "8eb5e5d7c224782506aba37dcb91614f",
"text": "With adolescents’ frequent use of social media, electronic bullying has emerged as a powerful platform for peer victimization. The present two studies explore how adolescents perceive electronic vs. traditional bullying in emotional impact and strategic responses. In Study 1, 97 adolescents (mean age = 15) viewed hypothetical peer victimization scenarios, in parallel electronic and traditional forms, with female characters experiencing indirect relational aggression and direct verbal aggression. In Study 2, 47 adolescents (mean age = 14) viewed the direct verbal aggression scenario from Study 1, and a new scenario, involving male characters in the context of direct verbal aggression. Participants were asked to imagine themselves as the victim in all scenarios and then rate their emotional reactions, strategic responses, and goals for the outcome. Adolescents reported significant negative emotions and disruptions in typical daily activities as the victim across divergent bullying scenarios. In both studies few differences emerged when comparing electronic to traditional bullying, suggesting that online and off-line bullying are subtypes of peer victimization. There were expected differences in strategic responses that fit the medium of the bullying. Results also suggested that embarrassment is a common and highly relevant negative experience in both indirect relational and direct verbal aggression among",
"title": ""
},
{
"docid": "42616fa0c56be96e84dc86d463a926d3",
"text": "Forensic dentistry delineates the overlap between the dental and the legal professions. Forensic identifi cations by their nature are multidisciplinary team eff orts. Odontologists can examine the structure of the teeth and jaws for clues that may support anthropological age estimates. Apart from dental identifi cation, forensic odontology is also applied in the investigation of crimes caused by dentition, such as bite marks. The importance of pedodontist in forensic odontology is to apply his expertise in various fi elds like child abuse and neglect, mass disaster, accidental and non-accidental oral trauma, age determination, and dental records. The aim of this paper is to discuss about the pedodontist perspective in forensic dentistry.",
"title": ""
},
{
"docid": "1aa8cb45c495f6086706648318147a83",
"text": "In what ways, and to what extent, is social cognition distinguished from cognition in general? And how do data from cognitive neuroscience speak to this question? I review recent findings that argue social cognition may indeed be specialized, and at multiple levels. One particularly interesting respect in which social cognition differs from the rest of cognition is in its close interaction with the social environment. We actively probe other people in order to make inferences about what is going on in their minds (e.g., by asking them questions, and directing our gaze onto them), and we use the minds of other people as a collective resource. Experiments from our own laboratory point to the amygdala as one structure that is critically involved in such processes.",
"title": ""
},
{
"docid": "58fbd637f7c044aeb0d55ba015c70f61",
"text": "This paper outlines an innovative software development that utilizes Quality of Service (QoS) and parallel technologies in Cisco Catalyst Switches to increase the analytical performance of a Network Intrusion Detection and Protection System (NIDPS) when deployed in highspeed networks. We have designed a real network to present experiments that use a Snort NIDPS. Our experiments demonstrate the weaknesses of NIDPSes, such as inability to process multiple packets and propensity to drop packets in heavy traffic and high-speed networks without analysing them. We tested Snort’s analysis performance, gauging the number of packets sent, analysed, dropped, filtered, injected, and outstanding. We suggest using QoS configuration technologies in a Cisco Catalyst 3560 Series Switch and parallel Snorts to improve NIDPS performance and to reduce the number of dropped packets. Our results show that our novel configuration improves performance.",
"title": ""
},
{
"docid": "cc8c46399664594cdaa1bfc6c480a455",
"text": "INTRODUCTION\nPatients will typically undergo awake surgery for permanent implantation of spinal cord stimulation (SCS) in an attempt to optimize electrode placement using patient feedback about the distribution of stimulation-induced paresthesia. The present study compared efficacy of first-time electrode placement under awake conditions with that of neurophysiologically guided placement under general anesthesia.\n\n\nMETHODS\nA retrospective review was performed of 387 SCS surgeries among 259 patients which included 167 new stimulator implantation to determine whether first time awake surgery for placement of spinal cord stimulators is preferable to non-awake placement.\n\n\nRESULTS\nThe incidence of device failure for patients implanted using neurophysiologically guided placement under general anesthesia was one-half that for patients implanted awake (14.94% vs. 29.7%).\n\n\nCONCLUSION\nNon-awake surgery is associated with fewer failure rates and therefore fewer re-operations, making it a viable alternative. Any benefits of awake implantation should carefully be considered in the future.",
"title": ""
}
] |
scidocsrr
|