| query_id (string, 32 chars) | query (string, 5 - 5.38k chars) | positive_passages (list, 1 - 23 items) | negative_passages (list, 4 - 100 items) | subset (string, 7 classes) |
|---|---|---|---|---|
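Each row pairs a query with a list of relevant (positive) and a list of non-relevant (negative) passages; every passage is a JSON object with `docid`, `text`, and `title` keys, and `subset` names the source collection (e.g. `scidocsrr`). The sketch below is a minimal, hypothetical illustration of how a row with this schema could be flattened into (query, passage, label) training pairs; the `Row`/`Passage` type names and the abbreviated example row are invented for illustration and are not part of the dataset.

```python
from typing import List, Tuple, TypedDict


class Passage(TypedDict):
    docid: str
    text: str
    title: str


class Row(TypedDict):
    query_id: str
    query: str
    positive_passages: List[Passage]
    negative_passages: List[Passage]
    subset: str


def to_pairs(row: Row) -> List[Tuple[str, str, int]]:
    """Flatten one row into (query, passage_text, label) pairs; label 1 = relevant."""
    pairs = [(row["query"], p["text"], 1) for p in row["positive_passages"]]
    pairs += [(row["query"], p["text"], 0) for p in row["negative_passages"]]
    return pairs


# Abbreviated dummy row illustrating the schema (not verbatim dataset content).
example: Row = {
    "query_id": "d5448b21752877b65d6e37847d950e32",
    "query": "Rice crop yield forecasting ... using data mining techniques",
    "positive_passages": [
        {"docid": "8fb10190...", "text": "This paper presents crop yield prediction methods ...", "title": ""}
    ],
    "negative_passages": [
        {"docid": "164e5bde...", "text": "This paper reviews software tools for social media ...", "title": ""}
    ],
    "subset": "scidocsrr",
}

for query, passage, label in to_pairs(example):
    print(label, query[:40], "->", passage[:50])
```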
d5448b21752877b65d6e37847d950e32
|
Rice crop yield forecasting of tropical wet and dry climatic zone of India using data mining techniques
|
[
{
"docid": "8fb10190ba586026ff5235432c438c47",
"text": "This paper presents the various crop yield prediction methods using data mining techniques. Agricultural system is very complex since it deals with large data situation which comes from a number of factors. Crop yield prediction has been a topic of interest for producers, consultants, and agricultural related organizations. In this paper our focus is on the applications of data mining techniques in agricultural field. Different Data Mining techniques such as K-Means, K-Nearest Neighbor(KNN), Artificial Neural Networks(ANN) and Support Vector Machines(SVM) for very recent applications of data mining techniques in agriculture field. Data mining technology has received a great progress with the rapid development of computer science, artificial intelligence. Data Mining is an emerging research field in agriculture crop yield analysis. Data Mining is the process of identifying the hidden patterns from large amount of data. Yield prediction is a very important agricultural problem that remains to be solved based on the available data. The problem of yield prediction can be solved by employing data mining techniques.",
"title": ""
}
] |
[
{
"docid": "164e5bde10882e3f7a6bcdf473eb7387",
"text": "This paper is written for (social science) researchers seeking to analyze the wealth of social media now available. It presents a comprehensive review of software tools for social networking media, wikis, really simple syndication feeds, blogs, newsgroups, chat and news feeds. For completeness, it also includes introductions to social media scraping, storage, data cleaning and sentiment analysis. Although principally a review, the paper also provides a methodology and a critique of social media tools. Analyzing social media, in particular Twitter feeds for sentiment analysis, has become a major research and business activity due to the availability of web-based application programming interfaces (APIs) provided by Twitter, Facebook and News services. This has led to an ‘explosion’ of data services, software tools for scraping and analysis and social media analytics platforms. It is also a research area undergoing rapid change and evolution due to commercial pressures and the potential for using social media data for computational (social science) research. Using a simple taxonomy, this paper provides a review of leading software tools and how to use them to scrape, cleanse and analyze the spectrum of social media. In addition, it discussed the requirement of an experimental computational environment for social media research and presents as an illustration the system architecture of a social media (analytics) platform built by University College London. The principal contribution of this paper is to provide an overview (including code fragments) for scientists seeking to utilize social media scraping and analytics either in their research or business. The data retrieval techniques that are presented in this paper are valid at the time of writing this paper (June 2014), but they are subject to change since social media data scraping APIs are rapidly changing.",
"title": ""
},
{
"docid": "586d89b6d45fd49f489f7fb40c87eb3a",
"text": "Little research has examined the impacts of enterprise resource planning (ERP) systems implementation on job satisfaction. Based on a 12-month study of 2,794 employees in a telecommunications firm, we found that ERP system implementation moderated the relationships between three job characteristics (skill variety, autonomy, and feedback) and job satisfaction. Our findings highlight the key role that ERP system implementation can have in altering wellestablished relationships in the context of technology-enabled organizational change situations. This work also extends research on technology diffusion by moving beyond a focus on technology-centric outcomes, such as system use, to understanding broader job outcomes. Carol Saunders was the accepting senior editor for this paper.",
"title": ""
},
{
"docid": "0070d6e21bdb8bac260178603cfbf67d",
"text": "Sound is a medium that conveys functional and emotional information in a form of multilayered streams. With the use of such advantage, robot sound design can open a way for being more efficient communication in human-robot interaction. As the first step of research, we examined how individuals perceived the functional and emotional intention of robot sounds and whether the perceived information from sound is associated with their previous experience with science fiction movies. The sound clips were selected based on the context of the movie scene (i.e., Wall-E, R2-D2, BB8, Transformer) and classified as functional (i.e., platform, monitoring, alerting, feedback) and emotional (i.e., positive, neutral, negative). A total of 12 participants were asked to identify the perceived properties for each of the 30 items. We found that the perceived emotional and functional messages varied from those originally intended and differed by previous experience.",
"title": ""
},
{
"docid": "09ab256473ab66272ccf0862b6a51891",
"text": "The Mediterranean Diet has been associated with greater longevity and quality of life in epidemiological studies, the majority being observational. The application of evidence-based medicine to the area of public health nutrition involves the necessity of developing clinical trials and systematic reviews to develop sound recommendations. The purpose of this study was to analyze and review the experimental studies on Mediterranean diet and disease prevention. A systematic review was made and a total of 43 articles corresponding to 35 different experimental studies were selected. Results were analyzed for the effects of the Mediterranean diet on lipoproteins, endothelial resistance, diabetes and antioxidative capacity, cardiovascular diseases, arthritis, cancer, body composition, and psychological function. The Mediterranean diet showed favorable effects on lipoprotein levels, endothelium vasodilatation, insulin resistance, metabolic syndrome, antioxidant capacity, myocardial and cardiovascular mortality, and cancer incidence in obese patients and in those with previous myocardial infarction. Results disclose the mechanisms of the Mediterranean diet in disease prevention, particularly in cardiovascular disease secondary prevention, but also emphasize the need to undertake experimental research and systematic reviews in the areas of primary prevention of cardiovascular disease, hypertension, diabetes, obesity, infectious diseases, age-related cognitive impairment, and cancer, among others. Interventions should use food scores or patterns to ascertain adherence to the Mediterranean diet. Further experimental research is needed to corroborate the benefits of the Mediterranean diet and the underlying mechanisms, and in this sense the methodology of the ongoing PREDIMED study is explained.",
"title": ""
},
{
"docid": "11c245ca7bc133155ff761374dfdea6e",
"text": "Received Nov 12, 2017 Revised Jan 20, 2018 Accepted Feb 11, 2018 In this paper, a modification of PVD (Pixel Value Differencing) algorithm is used for Image Steganography in spatial domain. It is normalizing secret data value by encoding method to make the new pixel edge difference less among three neighbors (horizontal, vertical and diagonal) and embedding data only to less intensity pixel difference areas or regions. The proposed algorithm shows a good improvement for both color and gray-scale images compared to other algorithms. Color images performances are better than gray images. However, in this work the focus is mainly on gray images. The strenght of this scheme is that any random hidden/secret data do not make any shuttle differences to Steg-image compared to original image. The bit plane slicing is used to analyze the maximum payload that has been embeded into the cover image securely. The simulation results show that the proposed algorithm is performing better and showing great consistent results for PSNR, MSE values of any images, also against Steganalysis attack.",
"title": ""
},
{
"docid": "89432b112f153319d3a2a816c59782e3",
"text": "The Eyelink Toolbox software supports the measurement of eye movements. The toolbox provides an interface between a high-level interpreted language (MATLAB), a visual display programming toolbox (Psychophysics Toolbox), and a video-based eyetracker (Eyelink). The Eyelink Toolbox enables experimenters to measure eye movements while simultaneously executing the stimulus presentation routines provided by the Psychophysics Toolbox. Example programs are included with the toolbox distribution. Information on the Eyelink Toolbox can be found at http://psychtoolbox.org/.",
"title": ""
},
{
"docid": "2a67a524cb3279967207b1fa8748cd04",
"text": "Recent work in Information Retrieval (IR) using Deep Learning models has yielded state of the art results on a variety of IR tasks. Deep neural networks (DNN) are capable of learning ideal representations of data during the training process, removing the need for independently extracting features. However, the structures of these DNNs are often tailored to perform on specific datasets. In addition, IR tasks deal with text at varying levels of granularity from single factoids to documents containing thousands of words. In this paper, we examine the role of the granularity on the performance of common state of the art DNN structures in IR.",
"title": ""
},
{
"docid": "1ef2e54d021f9d149600f0bc7bebb0cd",
"text": "The field of open-domain conversation generation using deep neural networks has attracted increasing attention from researchers for several years. However, traditional neural language models tend to generate safe, generic reply with poor logic and no emotion. In this paper, an emotional conversation generation orientated syntactically constrained bidirectional-asynchronous framework called E-SCBA is proposed to generate meaningful (logical and emotional) reply. In E-SCBA, pre-generated emotion keyword and topic keyword are asynchronously introduced into the reply during the generation, and the process of decoding is much different from the most existing methods that generates reply from the first word to the end. A newly designed bidirectional-asynchronous decoder with the multi-stage strategy is proposed to support this idea, which ensures the fluency and grammaticality of reply by making full use of syntactic constraint. Through the experiments, the results show that our framework not only improves the diversity of replies, but gains a boost on both logic and emotion compared with baselines as well.",
"title": ""
},
{
"docid": "211cf327b65cbd89cf635bbeb5fa9552",
"text": "BACKGROUND\nAdvanced mobile communications and portable computation are now combined in handheld devices called \"smartphones\", which are also capable of running third-party software. The number of smartphone users is growing rapidly, including among healthcare professionals. The purpose of this study was to classify smartphone-based healthcare technologies as discussed in academic literature according to their functionalities, and summarize articles in each category.\n\n\nMETHODS\nIn April 2011, MEDLINE was searched to identify articles that discussed the design, development, evaluation, or use of smartphone-based software for healthcare professionals, medical or nursing students, or patients. A total of 55 articles discussing 83 applications were selected for this study from 2,894 articles initially obtained from the MEDLINE searches.\n\n\nRESULTS\nA total of 83 applications were documented: 57 applications for healthcare professionals focusing on disease diagnosis (21), drug reference (6), medical calculators (8), literature search (6), clinical communication (3), Hospital Information System (HIS) client applications (4), medical training (2) and general healthcare applications (7); 11 applications for medical or nursing students focusing on medical education; and 15 applications for patients focusing on disease management with chronic illness (6), ENT-related (4), fall-related (3), and two other conditions (2). The disease diagnosis, drug reference, and medical calculator applications were reported as most useful by healthcare professionals and medical or nursing students.\n\n\nCONCLUSIONS\nMany medical applications for smartphones have been developed and widely used by health professionals and patients. The use of smartphones is getting more attention in healthcare day by day. Medical applications make smartphones useful tools in the practice of evidence-based medicine at the point of care, in addition to their use in mobile clinical communication. Also, smartphones can play a very important role in patient education, disease self-management, and remote monitoring of patients.",
"title": ""
},
{
"docid": "b93446bab637abd4394338615a5ef6e9",
"text": "Genetic programming is a methodology inspired by biological evolution. By using computational analogs to biological crossover and mutation new versions of a program are generated automatically. This population of new programs is then evaluated by an user defined fittness function to only select the programs that show an improved behavior as compared to the original program. In this case the desired behavior is to retain all original functionality and additionally fixing bugs found in the program code.",
"title": ""
},
{
"docid": "cfeb97c3be1c697fb500d54aa43af0e1",
"text": "The development of accurate and robust palmprint verification algorithms is a critical issue in automatic palmprint authentication systems. Among various palmprint verification approaches, the orientation based coding methods, such as competitive code (CompCode), palmprint orientation code (POC) and robust line orientation code (RLOC), are state-of-the-art ones. They extract and code the locally dominant orientation as features and could match the input palmprint in real-time and with high accuracy. However, using only one dominant orientation to represent a local region may lose some valuable information because there are cross lines in the palmprint. In this paper, we propose a novel feature extraction algorithm, namely binary orientation co-occurrence vector (BOCV), to represent multiple orientations for a local region. The BOCV can better describe the local orientation features and it is more robust to image rotation. Our experimental results on the public palmprint database show that the proposed BOCV outperforms the CompCode, POC and RLOC by reducing the equal error rate (EER) significantly. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "759f5b6d1889e09cfc78b2539283fa38",
"text": "CONTEXT\nVentilator management protocols shorten the time required to wean adult patients from mechanical ventilation. The efficacy of such weaning protocols among children has not been studied.\n\n\nOBJECTIVE\nTo evaluate whether weaning protocols are superior to standard care (no defined protocol) for infants and children with acute illnesses requiring mechanical ventilator support and whether a volume support weaning protocol using continuous automated adjustment of pressure support by the ventilator (ie, VSV) is superior to manual adjustment of pressure support by clinicians (ie, PSV).\n\n\nDESIGN AND SETTING\nRandomized controlled trial conducted in the pediatric intensive care units of 10 children's hospitals across North America from November 1999 through April 2001.\n\n\nPATIENTS\nOne hundred eighty-two spontaneously breathing children (<18 years old) who had been receiving ventilator support for more than 24 hours and who failed a test for extubation readiness on minimal pressure support.\n\n\nINTERVENTIONS\nPatients were randomized to a PSV protocol (n = 62), VSV protocol (n = 60), or no protocol (n = 60).\n\n\nMAIN OUTCOME MEASURES\nDuration of weaning time (from randomization to successful extubation); extubation failure (any invasive or noninvasive ventilator support within 48 hours of extubation).\n\n\nRESULTS\nExtubation failure rates were not significantly different for PSV (15%), VSV (24%), and no protocol (17%) (P =.44). Among weaning successes, median duration of weaning was not significantly different for PSV (1.6 days), VSV (1.8 days), and no protocol (2.0 days) (P =.75). Male children more frequently failed extubation (odds ratio, 7.86; 95% confidence interval, 2.36-26.2; P<.001). Increased sedative use in the first 24 hours of weaning predicted extubation failure (P =.04) and, among extubation successes, duration of weaning (P<.001).\n\n\nCONCLUSIONS\nIn contrast with adult patients, the majority of children are weaned from mechanical ventilator support in 2 days or less. Weaning protocols did not significantly shorten this brief duration of weaning.",
"title": ""
},
{
"docid": "2f522594c025614bf6c44913e8cc672b",
"text": "Electronic-based communication (such as Immersive Virtual Environments; IVEs) may offer new ways of satisfying the need for social connection, but they also provide ways this need can be thwarted. Ostracism, being ignored and excluded, is a common social experience that threatens fundamental human needs (i.e., belonging, control, self-esteem, and meaningful existence). Previous ostracism research has made use of a variety of paradigms, including minimal electronic-based interactions (e.g., Cyberball) and communication (e.g., chatrooms and Short Message Services). These paradigms, however, lack the mundane realism that many IVEs now offer. Further, IVE paradigms designed to measure ostracism may allow researchers to test more nuanced hypotheses about the effects of ostracism. We created an IVE in which ostracism could be manipulated experimentally, emulating a previously validated minimal ostracism paradigm. We found that participants who were ostracized in this IVE experienced the same negative effects demonstrated in other ostracism paradigms, providing, to our knowledge, the first evidence of the negative effects of ostracism in virtual environments. Though further research directly exploring these effects in online virtual environments is needed, this research suggests that individuals encountering ostracism in other virtual environments (such as massively multiplayer online role playing games; MMORPGs) may experience negative effects similar to those of being ostracized in real life. This possibility may have serious implications for individuals who are marginalized in their real life and turn to IVEs to satisfy their need for social connection.",
"title": ""
},
{
"docid": "fcd9a80d35a24c7222392c11d3376c72",
"text": "A dual-band coplanar waveguide (CPW)-fed hybrid antenna consisting of a 5.4 GHz high-band CPW-fed inductive slot antenna and a 2.4 GHz low-band bifurcated F-shaped monopole antenna is proposed and investigated experimentally. This antenna possesses an appealing characteristic that the CPW-fed inductive slot antenna reinforces and thus improves the radiation efficiency of the bifurcated monopole antenna. Moreover, due to field orthogonality, one band resonant frequency and return loss bandwidth of the proposed hybrid antenna allows almost independent optimization without noticeably affecting those of the other band.",
"title": ""
},
{
"docid": "52ab2c3f6f47d3b9e5ce60fbbe3385a6",
"text": "Nosologically, Alzheimer disease may not be considered to be a single disorder in spite of a common clinical phenotype. Only a small proportion of about 5% to 10% of all Alzheimer cases is due to genetic mutations (type I) whereas the great majority of patients was found to be sporadic in origin. It may be assumed that susceptibility genes along with lifestyle risk factors contribute to the causation of the age-related sporadic Alzheimer disease (type II). In this context, the desensitization of the neuronal insulin receptor similar to not-insulin dependent diabetes mellitus may be of pivotal significance. This abnormality along with a reduction in brain insulin concentration is assumed to induce a cascade-like process of disturbances including cellular glucose, acetylcholine, cholesterol, and ATP associated with abnormalities in membrane pathology and the formation of both amyloidogenic derivatives and hyperphosphorylated tau protein. Sporadic Alzheimer disease may, thus, be considered to be the brain type of diabetes mellitus II. Experimental evidence is provided and discussed.",
"title": ""
},
{
"docid": "6e4798c01a0a241d1f3746cd98ba9421",
"text": "BACKGROUND\nLarge blood-based prospective studies can provide reliable assessment of the complex interplay of lifestyle, environmental and genetic factors as determinants of chronic disease.\n\n\nMETHODS\nThe baseline survey of the China Kadoorie Biobank took place during 2004-08 in 10 geographically defined regions, with collection of questionnaire data, physical measurements and blood samples. Subsequently, a re-survey of 25,000 randomly selected participants was done (80% responded) using the same methods as in the baseline. All participants are being followed for cause-specific mortality and morbidity, and for any hospital admission through linkages with registries and health insurance (HI) databases.\n\n\nRESULTS\nOverall, 512,891 adults aged 30-79 years were recruited, including 41% men, 56% from rural areas and mean age was 52 years. The prevalence of ever-regular smoking was 74% in men and 3% in women. The mean blood pressure was 132/79 mmHg in men and 130/77 mmHg in women. The mean body mass index (BMI) was 23.4 kg/m(2) in men and 23.8 kg/m(2) in women, with only 4% being obese (>30 kg/m(2)), and 3.2% being diabetic. Blood collection was successful in 99.98% and the mean delay from sample collection to processing was 10.6 h. For each of the main baseline variables, there is good reproducibility but large heterogeneity by age, sex and study area. By 1 January 2011, over 10,000 deaths had been recorded, with 91% of surviving participants already linked to HI databases.\n\n\nCONCLUSION\nThis established large biobank will be a rich and powerful resource for investigating genetic and non-genetic causes of many common chronic diseases in the Chinese population.",
"title": ""
},
{
"docid": "971ec0f5c672c12b2ea58f9306053e6c",
"text": "The random forest algorithm (RF) has several hyperparameters that have to be set by the user, e.g., the number of observations drawn randomly for each tree and whether they are drawn with or without replacement, the number of variables drawn randomly for each split, the splitting rule, the minimum number of samples that a node must contain and the number of trees. In this paper, we first provide a literature review on the parameters’ influence on the prediction performance and on variable importance measures. It is well known that in most cases RF works reasonably well with the default values of the hyperparameters specified in software packages. Nevertheless, tuning the hyperparameters can improve the performance of RF. In the second part of this paper, after a brief overview of tuning strategies we demonstrate the application of one of the most established tuning strategies, model-based optimization (MBO). To make it easier to use, we provide the tuneRanger R package that tunes RF with MBO automatically. In a benchmark study on several datasets, we compare the prediction performance and runtime of tuneRanger with other tuning implementations in R and RF with default hyperparameters.",
"title": ""
},
{
"docid": "b9c40aa4c8ac9d4b6cbfb2411c542998",
"text": "This review will summarize molecular and genetic analyses aimed at identifying the mechanisms underlying the sequence of events during plant zygotic embryogenesis. These events are being studied in parallel with the histological and morphological analyses of somatic embryogenesis. The strength and limitations of somatic embryogenesis as a model system will be discussed briefly. The formation of the zygotic embryo has been described in some detail, but the molecular mechanisms controlling the differentiation of the various cell types are not understood. In recent years plant molecular and genetic studies have led to the identification and characterization of genes controlling the establishment of polarity, tissue differentiation and elaboration of patterns during embryo development. An investigation of the developmental basis of a number of mutant phenotypes has enabled the identification of gene activities promoting (1) asymmetric cell division and polarization leading to heterogeneous partitioning of the cytoplasmic determinants necessary for the initiation of embryogenesis (e.g. GNOM), (2) the determination of the apical-basal organization which is established independently of the differentiation of the tissues of the radial pattern elements (e.g. KNOLLE, FACKEL, ZWILLE), (3) the differentiation of meristems (e.g. SHOOT-MERISTEMLESS), and (4) the formation of a mature embryo characterized by the accumulation of LEA and storage proteins. The accumulation of these two types of proteins is controlled by ABA-dependent regulatory mechanisms as shown using both ABA-deficient and ABA-insensitive mutants (e.g. ABA, ABI3). Both types of embryogenesis have been studied by different techniques and common features have been identified between them. In spite of the relative difficulty of identifying the original cells involved in the developmental processes of somatic embryogenesis, common regulatory mechanisms are probably involved in the first stages up to the globular form. Signal molecules, such as growth regulators, have been shown to play a role during development of both types of embryos. The most promising method for identifying regulatory mechanisms responsible for the key events of embryogenesis will come from molecular and genetic analyses. The mutations already identified will shed light on the nature of the genes that affect developmental processes as well as elucidating the role of the various regulatory genes that control plant embryogenesis.",
"title": ""
},
{
"docid": "8adf698c03f01dced7d021cc103d51a4",
"text": "Real world data, especially in the domain of robotics, is notoriously costly to collect. One way to circumvent this can be to leverage the power of simulation in order to produce large amounts of labelled data. However, training models on simulated images does not readily transfer to real-world ones. Using domain adaptation methods to cross this “reality gap” requires at best a large amount of unlabelled real-world data, whilst domain randomization alone can waste modeling power, rendering certain reinforcement learning (RL) methods unable to learn the task of interest. In this paper, we present Randomized-to-Canonical Adaptation Networks (RCANs), a novel approach to crossing the visual reality gap that uses no real-world data. Our method learns to translate randomized rendered images into their equivalent non-randomized, canonical versions. This in turn allows for real images to also be translated into canonical sim images. We demonstrate the effectiveness of this sim-to-real approach by training a vision-based closed-loop grasping reinforcement learning agent in simulation, and then transferring it to the real world to attain 70% zeroshot grasp success on unseen objects, a result that almost doubles the success of learning the same task directly on domain randomization alone. Additionally, by joint finetuning in the real-world with only 5,000 real-world grasps, our method achieves 91%, outperforming a state-of-the-art system trained with 580,000 real-world grasps, resulting in a reduction of real-world data by more than 99%.",
"title": ""
},
{
"docid": "a091e8885bd30e58f6de7d14e8170199",
"text": "This paper represents the design and implementation of an indoor based navigation system for visually impaired people using a path finding algorithm and a wearable cap. This development of the navigation system consists of two modules: a Wearable part and a schematic of the area where the navigation system works by guiding the user. The wearable segment consists of a cap designed with IR receivers, an Arduino Nano processor, a headphone and an ultrasonic sensor. The schematic segment plans for the movement directions inside a room by dividing the room area into cells with a predefined matrix containing location information. For navigating the user, sixteen IR transmitters which continuously monitor the user position are placed at equal interval in the XY (8 in X-plane and 8 in Y-plane) directions of the indoor environment. A Braille keypad is used by the user where he gave the cell number for determining destination position. A path finding algorithm has been developed for determining the position of the blind person and guide him/her to his/her destination. The developed algorithm detects the position of the user by receiving continuous data from transmitter and guide the user to his/her destination by voice command. The ultrasonic sensor mounted on the cap detects the obstacles along the pathway of the visually impaired person. This proposed navigation system does not require any complex infrastructure design or the necessity of holding any extra assistive device by the user (i.e. augmented cane, smartphone, cameras). In the proposed design, prerecorded voice command will provide movement guideline to every edge of the indoor environment according to the user's destination choice. This makes this navigation system relatively simple and user friendly for those who are not much familiar with the most advanced technology and people with physical disabilities. Moreover, this proposed navigation system does not need GPS or any telecommunication networks which makes it suitable for use in rural areas where there is no telecommunication network coverage. In conclusion, the proposed system is relatively cheaper to implement in comparison to other existing navigation system, which will contribute to the betterment of the visually impaired people's lifestyle of developing and under developed countries.",
"title": ""
}
] |
scidocsrr
|
526a12a1da4c359479ce419be9393b2c
|
Beyond Left-to-Right: Multiple Decomposition Structures for SMT
|
[
{
"docid": "afd00b4795637599f357a7018732922c",
"text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.",
"title": ""
},
{
"docid": "4292a60a5f76fd3e794ce67d2ed6bde3",
"text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.",
"title": ""
}
] |
[
{
"docid": "d1eb2bf9d265017450a8a891540afa30",
"text": "Air-gapped networks are isolated, separated both logically and physically from public networks. Although the feasibility of invading such systems has been demonstrated in recent years, exfiltration of data from air-gapped networks is still a challenging task. In this paper we present GSMem, a malware that can exfiltrate data through an air-gap over cellular frequencies. Rogue software on an infected target computer modulates and transmits electromagnetic signals at cellular frequencies by invoking specific memory-related instructions and utilizing the multichannel memory architecture to amplify the transmission. Furthermore, we show that the transmitted signals can be received and demodulated by a rootkit placed in the baseband firmware of a nearby cellular phone. We present crucial design issues such as signal generation and reception, data modulation, and transmission detection. We implement a prototype of GSMem consisting of a transmitter and a receiver and evaluate its performance and limitations. Our current results demonstrate its efficacy and feasibility, achieving an effective transmission distance of 1 5.5 meters with a standard mobile phone. When using a dedicated, yet affordable hardware receiver, the effective distance reached over 30 meters.",
"title": ""
},
{
"docid": "793082d8e5367625145a7d7993bec19f",
"text": "Future advanced driver assistant systems put high demands on the environmental perception especially in urban environments. Today's on-board sensors and on-board algorithms still do not reach a satisfying level of development from the point of view of robustness and availability. Thus, map data is often used as an additional data input to support the on-board sensor system and algorithms. The usage of map data requires a highly correct pose within the map even in cases of positioning errors by global navigation satellite systems or geometrical errors in the map data. In this paper we propose and compare two approaches for map-relative localization exclusively using a lane-level map. These approaches deliberately avoid the usage of detailed a priori maps containing point-landmarks, grids or road-markings. Additionally, we propose a grid-based on-board fusion of road-marking information and stationary obstacles addressing the problem of missing or incomplete road-markings in urban scenarios.",
"title": ""
},
{
"docid": "0257589dc59f1ddd4ec19a2450e3156f",
"text": "Drawing upon the literatures on beliefs about magical contagion and property transmission, we examined people's belief in a novel mechanism of human-to-human contagion, emotional residue. This is the lay belief that people's emotions leave traces in the physical environment, which can later influence others or be sensed by others. Studies 1-4 demonstrated that Indians are more likely than Americans to endorse a lay theory of emotions as substances that move in and out of the body, and to claim that they can sense emotional residue. However, when the belief in emotional residue is measured implicitly, both Indians and American believe to a similar extent that emotional residue influences the moods and behaviors of those who come into contact with it (Studies 5-7). Both Indians and Americans also believe that closer relationships and a larger number of people yield more detectable residue (Study 8). Finally, Study 9 demonstrated that beliefs about emotional residue can influence people's behaviors. Together, these finding suggest that emotional residue is likely to be an intuitive concept, one that people in different cultures acquire even without explicit instruction.",
"title": ""
},
{
"docid": "f91a507a9cb7bdee2e8c3c86924ced8d",
"text": "a r t i c l e i n f o It is often stated that bullying is a \" group process \" , and many researchers and policymakers share the belief that interventions against bullying should be targeted at the peer-group level rather than at individual bullies and victims. There is less insight into what in the group level should be changed and how, as the group processes taking place at the level of the peer clusters or school classes have not been much elaborated. This paper reviews the literature on the group involvement in bullying, thus providing insight into the individuals' motives for participation in bullying, the persistence of bullying, and the adjustment of victims across different peer contexts. Interventions targeting the peer group are briefly discussed and future directions for research on peer processes in bullying are suggested. Bullying is a subtype of aggressive behavior, in which an individual or a group of individuals repeatedly attacks, humiliates, and/or excludes a relatively powerless person. The majority of studies on the topic have been conducted in schools, focusing on bullying among the concept of bullying is used to refer to peer-to-peer bullying among school-aged children and youth, when not otherwise mentioned. It is known that a sizable minority of primary and secondary school students is involved in peer-to-peer bullying either as perpetrators or victims — or as both, being both bullied themselves and harassing others. In WHO's Health Behavior in School-Aged Children survey (HBSC, see Craig & Harel, 2004), the average prevalence of victims across the 35 countries involved was 11%, whereas bullies represented another 11%. Children who report both bullying others and being bullied by others (so-called bully–victims) were not identified in the HBSC study, but other studies have shown that approximately 4–6% of the children can be classified as bully–victims (Haynie et al., 2001; Nansel et al., 2001). Bullying constitutes a serious risk for the psychosocial and academic adjustment of both victims",
"title": ""
},
{
"docid": "2a4eb1eddf0b83f1a7cd77d35c95b684",
"text": "Recent developments in machine learning have the potential to revolutionize education by providing an optimized, personalized learning experience for each student. We study the problem of selecting the best personalized learning action that each student should take next given their learning history; possible actions could include reading a textbook section, watching a lecture video, interacting with a simulation or lab, solving a practice question, and so on. We first estimate each student’s knowledge profile from their binary-valued graded responses to questions in their previous assessments using the SPARFA framework. We then employ these knowledge profiles as contexts in the contextual (multi-armed) bandits framework to learn a policy that selects the personalized learning actions that maximize each student’s immediate success, i.e., their performance on their next assessment. We develop two algorithms for personalized learning action selection. While one is mainly of theoretical interest, we experimentally validate the other using a real-world educational dataset. Our experimental results demonstrate that our approach achieves superior or comparable performance as compared to existing algorithms in terms of maximizing the students’ immediate success.",
"title": ""
},
{
"docid": "323abed1a623e49db50bed383ab26a92",
"text": "Robust object detection is a critical skill for robotic applications in complex environments like homes and offices. In this paper we propose a method for using multiple cameras to simultaneously view an object from multiple angles and at high resolutions. We show that our probabilistic method for combining the camera views, which can be used with many choices of single-image object detector, can significantly improve accuracy for detecting objects from many viewpoints. We also present our own single-image object detection method that uses large synthetic datasets for training. Using a distributed, parallel learning algorithm, we train from very large datasets (up to 100 million image patches). The resulting object detector achieves high performance on its own, but also benefits substantially from using multiple camera views. Our experimental results validate our system in realistic conditions and demonstrates significant performance gains over using standard single-image classifiers, raising accuracy from 0.86 area-under-curve to 0.97.",
"title": ""
},
{
"docid": "90563706ada80e880b7fcf25489f9b27",
"text": "We describe the large vocabulary automatic speech recognition system developed for Modern Standard Arabic by the SRI/Nightingale team, and used for the 2007 GALE evaluation as part of the speech translation system. We show how system performance is affected by different development choices, ranging from text processing and lexicon to decoding system architecture design. Word error rate results are reported on broadcast news and conversational data from the GALE development and evaluation test sets.",
"title": ""
},
{
"docid": "83463fcefea45df04278bbfab15ab053",
"text": "Software metrics as a subject area is over 30 years old, but it has barely penetrated into mainstream software engineering. A key reason for this is that most software metrics activities have not addressed their most important requirement: to provide information to support quantitative managerial decision-making during the software lifecycle. Good support for decision-making implies support for risk assessment and reduction. Yet traditional metrics approaches, often driven by regression-based models for cost estimation and defects prediction, provide little support for managers wishing to use measurement to analyse and minimise risk. The future for software metrics lies in using relatively simple existing metrics to build management decision-support tools that combine different aspects of software development and testing and enable managers to make many kinds of predictions, assessments and trade-offs during the software life-cycle. Our recommended approach is to handle the key factors largely missing from the usual metrics approaches, namely: causality, uncertainty, and combining different (often subjective) evidence. Thus the way forward for software metrics research lies in causal modelling (we propose using Bayesian nets), empirical software engineering, and multi-criteria decision aids.",
"title": ""
},
{
"docid": "a01333e16abb503cf6d26c54ac24d473",
"text": "Topic models could have a huge impact on improving the ways users find and discover content in digital libraries and search interfaces through their ability to automatically learn and apply subject tags to each and every item in a collection, and their ability to dynamically create virtual collections on the fly. However, much remains to be done to tap this potential, and empirically evaluate the true value of a given topic model to humans. In this work, we sketch out some sub-tasks that we suggest pave the way towards this goal, and present methods for assessing the coherence and interpretability of topics learned by topic models. Our large-scale user study includes over 70 human subjects evaluating and scoring almost 500 topics learned from collections from a wide range of genres and domains. We show how scoring model -- based on pointwise mutual information of word-pair using Wikipedia, Google and MEDLINE as external data sources - performs well at predicting human scores. This automated scoring of topics is an important first step to integrating topic modeling into digital libraries",
"title": ""
},
{
"docid": "ffd84e3418a6d1d793f36bfc2efed6be",
"text": "Anterior cingulate cortex (ACC) is a part of the brain's limbic system. Classically, this region has been related to affect, on the basis of lesion studies in humans and in animals. In the late 1980s, neuroimaging research indicated that ACC was active in many studies of cognition. The findings from EEG studies of a focal area of negativity in scalp electrodes following an error response led to the idea that ACC might be the brain's error detection and correction device. In this article, these various findings are reviewed in relation to the idea that ACC is a part of a circuit involved in a form of attention that serves to regulate both cognitive and emotional processing. Neuroimaging studies showing that separate areas of ACC are involved in cognition and emotion are discussed and related to results showing that the error negativity is influenced by affect and motivation. In addition, the development of the emotional and cognitive roles of ACC are discussed, and how the success of this regulation in controlling responses might be correlated with cingulate size. Finally, some theories are considered about how the different subdivisions of ACC might interact with other cortical structures as a part of the circuits involved in the regulation of mental and emotional activity.",
"title": ""
},
{
"docid": "e444dcc97882005658aca256991e816e",
"text": "The terms superordinate, hyponym, and subordinate designate the hierarchical taxonomic relationship of words. They also represent categories and concepts. This relationship is a subject of interest for anthropology, cognitive psychology, psycholinguistics, linguistic semantics, and cognitive linguistics. Taxonomic hierarchies are essentially classificatory systems, and they are supposed to reflect the way that speakers of a language categorize the world of experience. A well-formed taxonomy offers an orderly and efficient set of categories at different levels of specificity (Cruse 2000:180). However, the terms and levels of taxonomic hierarchy used in each discipline vary. This makes it difficult to carry out cross-disciplinary readings on the hierarchical taxonomy of words or categories, which act as an interface in these cognitive-based cross-disciplinary ventures. Not only words— terms and concepts differ but often the nature of the problem is compounded as some terms refer to differing word classes, categories and concepts at the same time. Moreover, the lexical relationship of terms among these lexical hierarchies is far from clear. As a result two lines of thinking can be drawn from the literature: (1) technical terms coined for the hierarchical relationship of words are conflicting and do not reflect reality or environment, and (2) the relationship among these hierarchies of word levels and the underlying principles followed to explain them are uncertain except that of inclusion.",
"title": ""
},
{
"docid": "97f153d8139958fd00002e6a2365d965",
"text": "A method is proposed for fused three-dimensional (3-D) shape estimation and visibility analysis of an unknown, markerless, deforming object through a multicamera vision system. Complete shape estimation is defined herein as the process of 3-D reconstruction of a model through fusion of stereo triangulation data and a visual hull. The differing accuracies of both methods rely on the number and placement of the cameras. Stereo triangulation yields a high-density, high-accuracy reconstruction of a surface patch from a small surface area, while a visual hull yields a complete, low-detail volumetric approximation of the object. The resultant complete 3-D model is, then, temporally projected based on the tracked object’s deformation, yielding a robust deformed shape prediction. Visibility and uncertainty analyses, on the projected model, estimate the expected accuracy of reconstruction at the next sampling instant. In contrast to common techniques that rely on a priori known models and identities of static objects, our method is distinct in its direct application to unknown, markerless, deforming objects, where the object model and identity are unknown to the system. Extensive simulations and comparisons, some of which are presented herein, thoroughly demonstrate the proposed method and its benefits over individual reconstruction techniques. © 2016 SPIE and IS&T [DOI: 10.1117/1.JEI.25.4.041009]",
"title": ""
},
{
"docid": "c63fa63e8af9d5b25ca7f40a710cfcc2",
"text": "With the recent development of deep learning, research in AI has gained new vigor and prominence. While machine learning has succeeded in revitalizing many research fields, such as computer vision, speech recognition, and medical diagnosis, we are yet to witness impressive progress in natural language understanding. One of the reasons behind this unmatched expectation is that, while a bottom-up approach is feasible for pattern recognition, reasoning and understanding often require a top-down approach. In this work, we couple sub-symbolic and symbolic AI to automatically discover conceptual primitives from text and link them to commonsense concepts and named entities in a new three-level knowledge representation for sentiment analysis. In particular, we employ recurrent neural networks to infer primitives by lexical substitution and use them for grounding common and commonsense knowledge by means of multi-dimensional scaling.",
"title": ""
},
{
"docid": "316f7f744db9f8f66c9f4d5b69e7431d",
"text": "We propose automated sport game models as a novel technical means for the analysis of team sport games. The basic idea is that automated sport game models are based on a conceptualization of key notions in such games and probabilistically derived from a set of previous games. In contrast to existing approaches, automated sport game models provide an analysis that is sensitive to their context and go beyond simple statistical aggregations allowing objective, transparent and meaningful concept definitions. Based on automatically gathered spatio-temporal data by a computer vision system, a model hierarchy is built bottom up, where context-sensitive concepts are instantiated by the application of machine learning techniques. We describe the current state of implementation of the ASPOGAMO system including its computer vision subsystem that realizes the idea of automated sport game models. Their usage is exemplified with an analysis of the final of the soccer World Cup 2006.",
"title": ""
},
{
"docid": "df9d74df931a596b7025150d11a18364",
"text": "In recent years, ''gamification'' has been proposed as a solution for engaging people in individually and socially sustainable behaviors, such as exercise, sustainable consumption, and education. This paper studies demographic differences in perceived benefits from gamification in the context of exercise. On the basis of data gathered via an online survey (N = 195) from an exercise gamification service Fitocracy, we examine the effects of gender, age, and time using the service on social, hedonic, and utilitarian benefits and facilitating features of gamifying exercise. The results indicate that perceived enjoyment and usefulness of the gamification decline with use, suggesting that users might experience novelty effects from the service. The findings show that women report greater social benefits from the use of gamification. Further, ease of use of gamification is shown to decline with age. The implications of the findings are discussed. The question of how we understand gamer demographics and gaming behaviors, along with use cultures of different demographic groups, has loomed over the last decade as games became one of the main veins of entertainment and consumer culture (Yi, 2004). The deeply established perception of games being a field of entertainment dominated by young males has been challenged. Nowadays, digital gaming is a mainstream activity with broad demographics. The gender divide has been diminishing, the age span has been widening, and the average age is higher than An illustrative study commissioned by PopCap (Information Solutions Group, 2011) reveals that it is actually women in their 30s and 40s who play the popular social games on social networking services (see e.g. most – outplaying men and younger people. It is clear that age and gender perspectives on gaming activities and motivations require further scrutiny. The expansion of the game industry and the increased competition within the field has also led to two parallel developments: (1) using game design as marketing (Hamari & Lehdonvirta, 2010) and (2) gamification – going beyond what traditionally are regarded as games and implementing game design there often for the benefit of users. For example, services such as Mindbloom, Fitocracy, Zombies, Run!, and Nike+ are aimed at assisting the user toward beneficial behavior related to lifestyle and health choices. However, it is unclear whether we can see age and gender discrepancies in use of gamified services similar to those in other digital gaming contexts. The main difference between games and gamifica-tion is that gamification is commonly …",
"title": ""
},
{
"docid": "246c00f833bf74645eabd8bd773f93d7",
"text": "What kinds of content do children and teenagers author and share on public video platforms? We approached this question through a qualitative directed content analysis of over 250 youth-authored videos filtered by crowdworkers from public videos on YouTube and Vine. We found differences between YouTube and Vine platforms in terms of the age of the youth authors, the type of collaborations witnessed in the videos, and the significantly greater amount of violent, sexual, and obscene content on Vine. We also highlight possible differences in how adults and youths approach online video sharing. Specifically, we consider that adults may view online video as an archive to keep precious memories of everyday life with their family, friends, and pets, humorous moments, and special events, while children and teenagers treat online video as a stage to perform, tell stories, and express their opinions and identities in a performative way.",
"title": ""
},
{
"docid": "86712db837d9057d1c1084c39c871649",
"text": "This paper reports the measurement of the properties of dry or pasteless conductive electrodes to be used for long-term recording of the human electrocardiogram (ECG). Knowledge of these properties is essential for the correct design of the input stage of associated recording amplifiers. Measurements were made on three commercially available conductive carbon based electrodes at pressures of 5mmHg and 20mmHg, located on the lower abdomen of the body on three subjects having different skin types. Parameter values were fitted to a two-time-constant based model of the electrode using data measured over a period of 10s. Values of resistance, ranging from 40kOmega to 1590kOmega and of capacitance ranging from 0.05muF to 38muF were obtained for the components, while the values of the time-constants varied from 0.07s to 3.9s.",
"title": ""
},
{
"docid": "fa7174afedb6b5ed1af73714e086bbab",
"text": "Software failures in server applications are a significant problem for preserving system availability. We present ASSURE, a system that introduces rescue points that recover software from unknown faults while maintaining both system integrity and availability, by mimicking system behavior under known error conditions. Rescue points are locations in existing application code for handling a given set of programmer-anticipated failures, which are automatically repurposed and tested for safely enabling fault recovery from a larger class of (unanticipated) faults. When a fault occurs at an arbitrary location in the program, ASSURE restores execution to an appropriate rescue point and induces the program to recover execution by virtualizing the program's existing error-handling facilities. Rescue points are identified using fuzzing, implemented using a fast coordinated checkpoint-restart mechanism that handles multi-process and multi-threaded applications, and, after testing, are injected into production code using binary patching. We have implemented an ASSURE Linux prototype that operates without application source code and without base operating system kernel changes. Our experimental results on a set of real-world server applications and bugs show that ASSURE enabled recovery for all of the bugs tested with fast recovery times, has modest performance overhead, and provides automatic self-healing orders of magnitude faster than current human-driven patch deployment methods.",
"title": ""
},
{
"docid": "7c85a62d9fd756f729b01024256d9728",
"text": "WiFi are easily available almost everywhere nowadays. Due to this, there is increasing interest in harnessing this technology for purposes other than communication. Therefore, this research was carried out with the main idea of using WiFi in developing an efficient, low cost control system for small office home office (SOHO) indoor environment. The main objective of the research is to develop a proof of concept that WiFi received signal strength indicator (RSSI) can be harnessed and used to develop a control system. The control system basically will help to save energy in an intelligent manner with a very minimum cost for the controller circuit. There are two main parts in the development of the system. First is extracting the RSSI monitoring feed information and analyzing it for designing the control system. The second is the development of the controller circuit for real environment. The simple yet inexpensive controller was tested in an indoor environment and results showed successful operation of the circuit developed.",
"title": ""
},
{
"docid": "394fa55cbbaa5afc7b4cf9b316b4d2ff",
"text": "Paralysis following spinal cord injury, brainstem stroke, amyotrophic lateral sclerosis and other disorders can disconnect the brain from the body, eliminating the ability to perform volitional movements. A neural interface system could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with long-standing tetraplegia can use a neural interface system to move and click a computer cursor and to control physical devices. Able-bodied monkeys have used a neural interface system to control a robotic arm, but it is unknown whether people with profound upper extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here we demonstrate the ability of two people with long-standing tetraplegia to use neural interface system-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm and hand over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor 5 years earlier, also used a robotic arm to drink coffee from a bottle. Although robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after injury to the central nervous system, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals.",
"title": ""
}
] |
scidocsrr
|
9b39d94bcbed362dab33dbbebc376b29
|
Entropy-Driven Adaptive
|
[
{
"docid": "9eba7766cfd92de0593937defda6ce64",
"text": "A basic classifier system, ZCS, is presented that keeps much of Holland's original framework but simplifies it to increase understandability and performance. ZCS's relation to Q-learning is brought out, and their performances compared in environments of two difficulty levels. Extensions to ZCS are proposed for temporary memory, better action selection, more efficient use of the genetic algorithm, and more general classifier representation.",
"title": ""
}
] |
[
{
"docid": "e91d3ae1224ca4c86f72646fd86cc661",
"text": "We examine the functional cohesion of procedures using a data slice abstraction. Our analysis identi es the data tokens that lie on more than one slice as the \\glue\" that binds separate components together. Cohesion is measured in terms of the relative number of glue tokens, tokens that lie on more than one data slice, and super-glue tokens, tokens that lie on all data slices in a procedure, and the adhesiveness of the tokens. The intuition and measurement scale factors are demonstrated through a set of abstract transformations and composition operators. Index terms | software metrics, cohesion, program slices, measurement theory",
"title": ""
},
{
"docid": "48c49e1f875978ec4e2c1d4549a98ffd",
"text": "Deep neural networks have been shown to be very powerful modeling tools for many supervised learning tasks involving complex input patterns. However, they can also easily overfit to training set biases and label noises. In addition to various regularizers, example reweighting algorithms are popular solutions to these problems, but they require careful tuning of additional hyperparameters, such as example mining schedules and regularization hyperparameters. In contrast to past reweighting methods, which typically consist of functions of the cost value of each example, in this work we propose a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions. To determine the example weights, our method performs a meta gradient descent step on the current mini-batch example weights (which are initialized from zero) to minimize the loss on a clean unbiased validation set. Our proposed method can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class imbalance and corrupted label problems where only a small amount of clean validation data is available.",
"title": ""
},
{
"docid": "5f58ed86b89e6967e5c32019415ba4e6",
"text": "Orthogonal matrix has shown advantages in training Recurrent Neural Networks (RNNs), but such matrix is limited to be square for the hidden-to-hidden transformation in RNNs. In this paper, we generalize such square orthogonal matrix to orthogonal rectangular matrix and formulating this problem in feed-forward Neural Networks (FNNs) as Optimization over Multiple Dependent Stiefel Manifolds (OMDSM). We show that the orthogonal rectangular matrix can stabilize the distribution of network activations and regularize FNNs. We propose a novel orthogonal weight normalization method to solve OMDSM. Particularly, it constructs orthogonal transformation over proxy parameters to ensure the weight matrix is orthogonal. To guarantee stability, we minimize the distortions between proxy parameters and canonical weights over all tractable orthogonal transformations. In addition, we design orthogonal linear module (OLM) to learn orthogonal filter banks in practice, which can be used as an alternative to standard linear module. Extensive experiments demonstrate that by simply substituting OLM for standard linear module without revising any experimental protocols, our method improves the performance of the state-of-the-art networks, including Inception and residual networks on CIFAR and ImageNet datasets.",
"title": ""
},
{
"docid": "f069501007d4c9d1ada190353d01c7e9",
"text": "A discrimination theory of selective perception was used to predict that a given trait would be spontaneously salient in a person's self-concept to the exten that this trait was distinctive for the person within her or his social groups. Sixth-grade students' general and physical spontaneous self-concepts were elicited in their classroom settings. The distinctiveness within the classroom of each student's characteristics on each of a variety of dimensions was determined, and it was found that in a majority of cases the dimension was significantly more salient in the spontaneous self-concepts of those students whose characteristic on thedimension was more distinctive. Also reported are incidental findings which include a description of the contents of spontaneous self-comcepts as well as determinants of their length and of the spontaneous mention of one's sex as part of one's self-concept.",
"title": ""
},
{
"docid": "3402901e3f28447d618f5db0371e5ffa",
"text": "A number of studies have found that today's Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage development of models geared towards the latter, we propose a new setting for VQA where for every question type, train and test sets have different prior distributions of answers. Specifically, we present new splits of the VQA v1 and VQA v2 datasets, which we call Visual Question Answering under Changing Priors (VQA-CP v1 and VQA-CP v2 respectively). First, we evaluate several existing VQA models under this new setting and show that their performance degrades significantly compared to the original VQA setting. Second, we propose a novel Grounded Visual Question Answering model (GVQA) that contains inductive biases and restrictions in the architecture specifically designed to prevent the model from 'cheating' by primarily relying on priors in the training data. Specifically, GVQA explicitly disentangles the recognition of visual concepts present in the image from the identification of plausible answer space for a given question, enabling the model to more robustly generalize across different distributions of answers. GVQA is built off an existing VQA model - Stacked Attention Networks (SAN). Our experiments demonstrate that GVQA significantly outperforms SAN on both VQA-CP v1 and VQA-CP v2 datasets. Interestingly, it also outperforms more powerful VQA models such as Multimodal Compact Bilinear Pooling (MCB) in several cases. GVQA offers strengths complementary to SAN when trained and evaluated on the original VQA v1 and VQA v2 datasets. Finally, GVQA is more transparent and interpretable than existing VQA models.",
"title": ""
},
{
"docid": "664b9bb1f132a87e2f579945a31852b7",
"text": "Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. The identification of the terminology is crucial to build ontologies. Term extraction techniques allow the identification of the domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at being used by teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned up using a textbook on object oriented programming and then tested with two textbooks of different domains—astronomy and molecular biology. Introduction",
"title": ""
},
{
"docid": "404ab94fd25ff1d708e8225968557db9",
"text": "This paper presents a technique to select a representative set of test cases from a test suite that provides the same coverage as the entire test suite. This selection is performed by identifying, and then eliminating, the redundant and obsolete test cases in the test suite. The representative set replaces the original test suite and thus, potentially produces a smaller test suite. The representative set can also be used to identify those test cases that should be rerun to test the program after it has been changed. Our technique is independent of the testing methodology and only requires an association between a testing requirement and the test cases that satisfy the requirement. We illustrate the technique using the data flow testing methodology. The reduction that is possible with our technique is illustrated by experimental results.",
"title": ""
},
{
"docid": "1766d61252101e10d0fde31ba3c304e7",
"text": "The mobile ecosystem is constantly changing. The roles of each actor are uncertain and the question how each actor cooperates with each other is of interest of researchers both in academia and industry. In this paper we examine the mobile ecosystem from a business perspective. We used five mobile companies as case studies, which were investigated through interviews and questionnaire surveys. The companies covered different roles in the ecosystem, including network operator, device manufacturer, and application developer. With our empirical data as a starting point, we analyze the revenue streams of different actors in the ecosystem. The results will contribute to an understanding of the business models and dependencies that characterize actors in the current mobile ecosystem.",
"title": ""
},
{
"docid": "0eba5306a558f2a4018f135ff6e4d29d",
"text": "The impact of Automated Trading Systems (ATS) on financial markets is growing every year and the trades generated by an algorithm now account for the majority of orders that arrive at stock exchanges. In this paper we explore how to find a trading strategy via Reinforcement Learning (RL), a branch of Machine Learning (ML) that allows to find an optimal strategy for a sequential decision problem by directly interacting with the environment. We show that the the long-short strategy learned for a synthetic asset, whose price follows a stochastic process with some exploitable patterns, consistently outperforms the market. RL thus shows the potential to deal with many financial problems, that can be often formulated as sequential decision problems.",
"title": ""
},
{
"docid": "ac6410d8891491d050b32619dc2bdd50",
"text": "Due to the increase of generation sources in distribution networks, it is becoming very complex to develop and maintain models of these networks. Network operators need to determine reduced models of distribution networks to be used in grid management functions. This paper presents a novel method that synthesizes steady-state models of unbalanced active distribution networks with the use of dynamic measurements (time series) from phasor measurement units (PMUs). Since phasor measurement unit (PMU) measurements may contain errors and bad data, this paper presents the application of a Kalman filter technique for real-time data processing. In addition, PMU data capture the power system's response at different time-scales, which are generated by different types of power system events; the presented Kalman filter has been improved to extract the steady-state component of the PMU measurements to be fed to the steady-state model synthesis application. Performance of the proposed methods has been assessed by real-time hardware-in-the-loop simulations on a sample distribution network.",
"title": ""
},
{
"docid": "480f5f6db398e8793675965587232a5f",
"text": "Cellular systems are rapidly evolving from a homogeneous macrocell deployment to a heterogeneous deployment of macrocells overlaid with many small cells. The striking features of small cells include small coverage area, ad-hoc deployment, and flexible backhaul connectivity. These features call for a profound rethinking of traditional cellular concepts including mobility management and interference management among others. Owing to the unique features, coordinated small cells or commonly referred to as network of small cells promises several benefits including efficient mobility management in a rapid and scalable fashion. The problem of handover in a high-density small cell deployment is studied in this work. A novel local anchor-based architecture for a static cluster of small cells is proposed using which new handover schemes are presented. Such clusters are prevalent in the evolving cellular systems involving high-density small cell deployments in urban, residential, and enterprise environments. A mathematical framework is developed using discrete-time-Markov-models to evaluate the proposed schemes. Using this, closed-form expressions for key handover metrics including handover cost and interruption time are derived. Extensive numerical and simulation studies indicate significant savings of over 50 percent in the handover costs and, more importantly, up to 80 percent in the handover interruption time compared to the existing 3GPP scheme for coordinated small cells.",
"title": ""
},
{
"docid": "1118318698695880432c689ce33e14ee",
"text": "The paper describes a multi-resolution algorithm based on fractal geometry for texture analysis and detection of oil spills in SAR images. The multi-resolution approach reduces the problems of speckle and sea clutter and preserves subtle variations of oil slicks. The use of fractal dimension as a feature for classification improves the oil spill detection, since enhances texture discrimination. The proposed approach computes the fractal dimension from power spectra: the ratio between powers at different scales is straightforwardly related to fractal dimension. The proposed method to compute a fractal signature is based on the multi-scale image decomposition obtained by the Normalized Laplacian Pyramid which provides a reliable estimation of the fractal dimension even in the presence of speckle.",
"title": ""
},
{
"docid": "15ccdecd20bbd9c4b93c57717cbfb787",
"text": "As a crucial challenge for video understanding, exploiting the spatial-temporal structure of video has attracted much attention recently, especially on video captioning. Inspired by the insight that people always focus on certain interested regions of video content, we propose a novel approach which will automatically focus on regions-of-interest and catch their temporal structures. In our approach, we utilize a specific attention model to adaptively select regions-of-interest for each video frame. Then a Dual Memory Recurrent Model (DMRM) is introduced to incorporate temporal structure of global features and regions-of-interest features in parallel, which will obtain rough understanding of video content and particular information of regions-of-interest. Since the attention model could not always catch the right interests, we additionally adopt semantic supervision to attend to interested regions more correctly. We evaluate our method for video captioning on two public benchmarks: the Microsoft Video Description Corpus (MSVD) and the Montreal Video Annotation Dataset (M-VAD). The experiments demonstrate that catching temporal regions-of-interest information really enhances the representation of input videos and our approach obtains the state-of-the-art results on popular evaluation metrics like BLEU-4, CIDEr, and METEOR.",
"title": ""
},
{
"docid": "81d41d9ba03aec8b3d908d757f0a7464",
"text": "The increasing demand of new functionalities in next generation vehicles, leads to a growth of the complexity level for the E/E automotive systems. On the same way, the automotive software also tends to follow the same pace, so new methods should be adopted to deal with this scenario of complexity. The next generation of automotive embedded software is rapidly migrating to the AUTOSAR standard, which is an architectural composition of software components idealized to establish an open industry standard for the automotive industry. AUTOSAR aims to increase the reuse of these software components, in particular between different vehicle platforms, and between OEMs and suppliers. Inside this development process, software control suppliers are able to check if the system functionalities are attending to the requirements already in preliminary phases, even if the ECU is not yet available. In this paper the authors show the workflow to develop a virtual validation based on the AUTOSAR standard with a Virtual ECU (V-ECU) using a toolchain consisting of dSPACE (SystemDesk, VEOS and TargetLink) and MathWorks (Matlab, Simulink and Stateflow) software. The simulation of the architecture has been realized considering the communication inside a same V-ECU and also between two different V-ECUs considering a distributed architecture. As result, a point-to-point explanation about AUTOSAR methodology is done to show how the process is done.",
"title": ""
},
{
"docid": "3c043f939416aa7e3e93900639683015",
"text": "Programmable Logic Controllers are used for smart homes, in production processes or to control critical infrastructures. Modern industrial devices in the control level are often communicating over proprietary protocols on top of TCP/IP with each other and SCADA systems. The networks in which the controllers operate are usually considered as trustworthy and thereby they are not properly secured. Due to the growing connectivity caused by the Internet of Things (IoT) and Industry 4.0 the security risks are rising. Therefore, the demand of security assessment tools for industrial networks is high. In this paper, we introduce a new fuzzing framework called PropFuzz, which is capable to fuzz proprietary industrial control system protocols and monitor the behavior of the controller. Furthermore, we present first results of a security assessment with our framework.",
"title": ""
},
{
"docid": "fb34a0868942928ada71cf8d1c746c19",
"text": "We introduce the new Multimodal Named Entity Disambiguation (MNED) task for multimodal social media posts such as Snapchat or Instagram captions, which are composed of short captions with accompanying images. Social media posts bring significant challenges for disambiguation tasks because 1) ambiguity not only comes from polysemous entities, but also from inconsistent or incomplete notations, 2) very limited context is provided with surrounding words, and 3) there are many emerging entities often unseen during training. To this end, we build a new dataset called SnapCaptionsKB, a collection of Snapchat image captions submitted to public and crowd-sourced stories, with named entity mentions fully annotated and linked to entities in an external knowledge base. We then build a deep zeroshot multimodal network for MNED that 1) extracts contexts from both text and image, and 2) predicts correct entity in the knowledge graph embeddings space, allowing for zeroshot disambiguation of entities unseen in training set as well. The proposed model significantly outperforms the stateof-the-art text-only NED models, showing efficacy and potentials of the MNED task.",
"title": ""
},
{
"docid": "e57c5a868c879efbe48eb6c55727c816",
"text": "Follicular lymphoma (FL), the second most common non-Hodgkin's lymphoma (NHL), is well characterised by a classic histological appearance and an indolent course. Current treatment protocols for FL range from close observation to immunotherapy, chemotherapy and/or radiotherapies. We report the case of a 42-year-old woman diagnosed by excisional biopsy with stage IIIa, grade 1 FL. In addition to close observation, the patient underwent a medically supervised, 21-day water-only fast after which enlarged lymph nodes were substantially reduced in size. The patient then consumed a diet of minimally processed plant foods free of added sugar, oil and salt (SOS), and has remained on the diet since leaving the residential facility. At 6 and 9-month follow-up visits, the patient's lymph nodes were non-palpable and she remained asymptomatic. This case establishes a basis for further studies evaluating water-only fasting and a plant foods, SOS-free diet as a treatment protocol for FL.",
"title": ""
},
{
"docid": "6eb85c1a42dd2e4eaa6835e924fdfebf",
"text": "The concept of ‘sleeping on a problem’ is familiar to most of us. But with myriad stages of sleep, forms of memory and processes of memory encoding and consolidation, sorting out how sleep contributes to memory has been anything but straightforward. Nevertheless, converging evidence, from the molecular to the phenomenological, leaves little doubt that offline memory reprocessing during sleep is an important component of how our memories are formed and ultimately shaped.",
"title": ""
},
{
"docid": "8721c71811ae7b35f378904651a43ce7",
"text": "The literature suggests that firms cannot be competitive if their business and information technology strategies are not aligned. Yet achieving strategic alignment continues to be a major concern for business executives. A number of alignment models have been offered in the literature, primary among them the strategic alignment model (SAM). However, there is little published research that attempts to validate SAM or describe its use in practice. This paper reports on the use of SAM in a financial services firm. Data from completed projects are applied to the model to determine whether SAM is useful as a management tool to create, assess and sustain strategic alignment between information technology and the business. The paper demonstrates that SAM has conceptual and practical value. The paper also proposes a practical framework that allows management, particularly technology management, to determine current alignment levels and to monitor and change future alignment as required. Through the use of this framework, alignment is more likely to be achieved in practice. q 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c971c19f8006f92cb013adca941e36aa",
"text": "In this work, we address the human parsing task with a novel Contextualized Convolutional Neural Network (Co-CNN) architecture, which well integrates the cross-layer context, global image-level context, semantic edge context, within-super-pixel context and cross-super-pixel neighborhood context into a unified network. Given an input human image, Co-CNN produces the pixelwise categorization in an end-to-end way. First, the cross-layer context is captured by our basic local-to-global-to-local structure, which hierarchically combines the global semantic information and the local fine details across different convolutional layers. Second, the global image-level label prediction is used as an auxiliary objective in the intermediate layer of the Co-CNN, and its outputs are further used for guiding the feature learning in subsequent convolutional layers to leverage the global image-level context. Third, semantic edge context is further incorporated into Co-CNN, where the high-level semantic boundaries are leveraged to guide pixel-wise labeling. Finally, to further utilize the local super-pixel contexts, the within-super-pixel smoothing and cross-super-pixel neighbourhood voting are formulated as natural sub-components of the Co-CNN to achieve the local label consistency in both training and testing process. Comprehensive evaluations on two public datasets well demonstrate the significant superiority of our Co-CNN over other state-of-the-arts for human parsing. In particular, the F-1 score on the large dataset [1] reaches 81.72 percent by Co-CNN, significantly higher than 62.81 percent and 64.38 percent by the state-of-the-art algorithms, M-CNN [2] and ATR [1], respectively. By utilizing our newly collected large dataset for training, our Co-CNN can achieve 85.36 percent in F-1 score.",
"title": ""
}
] |
scidocsrr
|
2bddcd883d10222a9c464129ae1e1b0f
|
Idiom Token Classification using Sentential Distributed Semantics
|
[
{
"docid": "28f0b9aeba498777e1f4a946f2bb4e65",
"text": "Idiomatic expressions are plentiful in everyday language, yet they remain mysterious, as it is not clear exactly how people learn and understand them. They are of special interest to linguists, psycholinguists, and lexicographers, mainly because of their syntactic and semantic idiosyncrasies as well as their unclear lexical status. Despite a great deal of research on the properties of idioms in the linguistics literature, there is not much agreement on which properties are characteristic of these expressions. Because of their peculiarities, idiomatic expressions have mostly been overlooked by researchers in computational linguistics. In this article, we look into the usefulness of some of the identified linguistic properties of idioms for their automatic recognition. Specifically, we develop statistical measures that each model a specific property of idiomatic expressions by looking at their actual usage patterns in text. We use these statistical measures in a type-based classification task where we automatically separate idiomatic expressions (expressions with a possible idiomatic interpretation) from similar-on-the-surface literal phrases (for which no idiomatic interpretation is possible). In addition, we use some of the measures in a token identification task where we distinguish idiomatic and literal usages of potentially idiomatic expressions in context.",
"title": ""
}
] |
[
{
"docid": "77c8f9723134571d11ae9fc193fd377e",
"text": "s of Invited Talks From Relational to Semantic Data Mining",
"title": ""
},
{
"docid": "90469bbf7cf3216b2ab1ee8441fbce14",
"text": "This work presents the evolution of a solution for predictive maintenance to a Big Data environment. The proposed adaptation aims for predicting failures on wind turbines using a data-driven solution deployed in the cloud and which is composed by three main modules. (i) A predictive model generator which generates predictive models for each monitored wind turbine by means of Random Forest algorithm. (ii) A monitoring agent that makes predictions every 10 minutes about failures in wind turbines during the next hour. Finally, (iii) a dashboard where given predictions can be visualized. To implement the solution Apache Spark, Apache Kafka, Apache Mesos and HDFS have been used. Therefore, we have improved the previous work in terms of data process speed, scalability and automation. In addition, we have provided fault-tolerant functionality with a centralized access point from where the status of all the wind turbines of a company localized all over the world can be monitored, reducing O&M costs.",
"title": ""
},
{
"docid": "82234158dc94216222efa5f80eee0360",
"text": "We investigate the possibility to prove security of the well-known blind signature schemes by Chaum, and by Pointcheval and Stern in the standard model, i.e., without random oracles. We subsume these schemes under a more general class of blind signature schemes and show that finding security proofs for these schemes via black-box reductions in the standard model is hard. Technically, our result deploys meta-reduction techniques showing that black-box reductions for such schemes could be turned into efficient solvers for hard non-interactive cryptographic problems like RSA or discrete-log. Our technique yields significantly stronger impossibility results than previous meta-reductions in other settings by playing off the two security requirements of the blind signatures (unforgeability and blindness).",
"title": ""
},
{
"docid": "2175ec8bccfafe0e577c0793d87349e7",
"text": "Modern dog breeding has given rise to more than 400 breeds differing both in morphology and behaviour. Traditionally, kennel clubs have utilized an artificial category system based on the morphological similarity and historical function of each dog breed. Behavioural comparisons at the breed-group level produced ambiguous results as to whether the historical function still has an influence on the breed-typical behaviour. Recent genetic studies have uncovered genetic relatedness between dog breeds, which can be independent from their historical function and may offer an alternative explanation of behavioural differences among breeds. This exploratory study aimed to investigate the behaviour profiles of 98 breeds, and the behavioural differences among conventional breed groups based on historical utility and among genetic breed clusters. Owners of 5733 dogs (98 breeds) filled out an online questionnaire in German. Breed trait scores on trainability, boldness, calmness and dog sociability were calculated by averaging the scores of all individuals of the breed. Breeds were ranked on the four traits and a cluster analysis was performed to explore behavioural similarity between breeds. We found that two of the behaviour traits (trainability and boldness) significantly differed both among the conventional and the genetic breed groups. Using the conventional classification we revealed that Herding dogs were more trainable than Hounds, Working dogs, Toy dogs and Non-sporting dogs; Sporting dogs were also more trainable than Nonsporting dogs. In parallel, Terriers were bolder than Hounds and Herding dogs. Regarding genetic relatedness, breeds with ancient Asian or African origin (Ancient breeds) were less trainable than breeds in the Herding/sighthound cluster and the Hunting breeds. Breeds in the Mastiff/terrier cluster were bolder than the Ancient breeds, the breeds in the Herding/sighthound cluster and the Hunting breeds. Six breed clusters were created on the basis of behavioural similarity. All the conventional and genetic groups had representatives in at least three of these clusters. Thus, the behavioural breed clusters showed poor correspondence to both the functional and genetic categorisation, which may reflect the effect of recent selective processes. Behavioural breed clusters can provide a more reliable characterization of the breeds’ current typical behaviour. © 2011 Elsevier B.V. All rights reserved. ∗ Corresponding author. Tel.: +36 1 3812179; fax: +36 1 3812180. E-mail addresses: borbala.turcsan@gmail.com (B. Turcsán), kubinyie@gmail.com (E. Kubinyi), amiklosi62@gmail.com (Á. Miklósi).",
"title": ""
},
{
"docid": "32378690ded8920eb81689fea1ac8c23",
"text": "OBJECTIVE\nTo investigate the effect of Beri-honey-impregnated dressing on diabetic foot ulcer and compare it with normal saline dressing.\n\n\nSTUDY DESIGN\nA randomized, controlled trial.\n\n\nPLACE AND DURATION OF STUDY\nSughra Shafi Medical Complex, Narowal, Pakistan and Bhatti International Trust (BIT) Hospital, Affiliated with Central Park Medical College, Lahore, from February 2006 to February 2010.\n\n\nMETHODOLOGY\nPatients with Wagner's grade 1 and 2 ulcers were enrolled. Those patients were divided in two groups; group A (n=179) treated with honey dressing and group B (n=169) treated with normal saline dressing. Outcome measures were calculated in terms of proportion of wounds completely healed (primary outcome), wound healing time, and deterioration of wounds. Patients were followed-up for a maximum of 120 days.\n\n\nRESULTS\nOne hundred and thirty six wounds (75.97%) out of 179 were completely healed with honey dressing and 97 (57.39%) out of 169 wtih saline dressing (p=0.001). The median wound healing time was 18.00 (6 - 120) days (Median with IQR) in group A and 29.00 (7 - 120) days (Median with IQR) in group B (p < 0.001).\n\n\nCONCLUSION\nThe present results showed that honey is an effective dressing agent instead of conventional dressings, in treating patients of diabetic foot ulcer.",
"title": ""
},
{
"docid": "c3566171b68e4025931a72064e74e4ae",
"text": "Training a Fully Convolutional Network (FCN) for semantic segmentation requires a large number of pixel-level masks, which involves a large amount of human labour and time for annotation. In contrast, image-level labels are much easier to obtain. In this work, we propose a novel method for weakly supervised semantic segmentation with only image-level labels. The method relies on a large scale co-segmentation framework that can produce object masks for a group of images containing objects belonging to the same semantic class. We first retrieve images from search engines, e.g. Flickr and Google, using semantic class names as queries, e.g. class names in PASCAL VOC 2012. We then use high quality masks produced by co-segmentation on the retrieved images as well as the target dataset images with image level labels to train segmentation networks. We obtain IoU 56.9 on test set of PASCAL VOC 2012, which reaches state of the art performance.",
"title": ""
},
{
"docid": "5fa0e48da2045baa1f00a27a9baa4897",
"text": "The inferred cost of work-related stress call for prevention strategies that aim at detecting early warning signs at the workplace. This paper goes one step towards the goal of developing a personal health system for detecting stress. We analyze the discriminative power of electrodermal activity (EDA) in distinguishing stress from cognitive load in an office environment. A collective of 33 subjects underwent a laboratory intervention that included mild cognitive load and two stress factors, which are relevant at the workplace: mental stress induced by solving arithmetic problems under time pressure and psychosocial stress induced by social-evaluative threat. During the experiments, a wearable device was used to monitor the EDA as a measure of the individual stress reaction. Analysis of the data showed that the distributions of the EDA peak height and the instantaneous peak rate carry information about the stress level of a person. Six classifiers were investigated regarding their ability to discriminate cognitive load from stress. A maximum accuracy of 82.8% was achieved for discriminating stress from cognitive load. This would allow keeping track of stressful phases during a working day by using a wearable EDA device.",
"title": ""
},
{
"docid": "43c619d24864eb97498700315aea2d45",
"text": "BACKGROUND\nThe central nervous system (CNS) is involved in organic integration. Nervous modulation via bioactive compounds can modify metabolism in order to prevent systemic noncommunicable diseases (NCDs). Concerning this, plant polyphenols are proposed as neurotropic chemopreventive/ therapeutic agents, given their redox and regulating properties.\n\n\nOBJECTIVE\nTo review polyphenolic pharmacology and potential neurological impact on NCDs.\n\n\nMETHOD\nFirst, polyphenolic chemistry was presented, as well as pharmacology, i.e. kinetics and dynamics. Toxicology was particularly described. Then, functional relevance of these compounds was reviewed focusing on the metabolic CNS participation to modulate NCDs, with data being finally integrated.\n\n\nRESULTS\nOxidative stress is a major risk factor for NCDs. Polyphenols regulate the redox biology of different organic systems including the CNS, which participates in metabolic homeostasis. Polyphenolic neurotropism is determined by certain pharmacological characteristics, modifying nervous and systemic physiopathology, acting on several biological targets. Nonetheless, because these phytochemicals can trigger toxic effects, they should not be recommended indiscriminately.\n\n\nCONCLUSION\nSumming up, the modulating effects of polyphenols allow for the physiological role of CNS on metabolism and organic integration to be utilized in order to prevent NCDs, without losing sight of the risks.",
"title": ""
},
{
"docid": "9fd1ffabf31b3e6c4de126aac1e2baec",
"text": "This paper introduces Graph Convolutional Recurrent Network (GCRN), a deep learning model able to predict structured sequences of data. Precisely, GCRN is a generalization of classical recurrent neural networks (RNN) to data structured by an arbitrary graph. Such structured sequences can represent series of frames in videos, spatio-temporal measurements on a network of sensors, or random walks on a vocabulary graph for natural language modeling. The proposed model combines convolutional neural networks (CNN) on graphs to identify spatial structures and RNN to find dynamic patterns. We study two possible architectures of GCRN, and apply the models to two practical problems: predicting moving MNIST data, and modeling natural language with the Penn Treebank dataset. Experiments show that exploiting simultaneously graph spatial and dynamic information about data can improve both precision and learning speed.",
"title": ""
},
{
"docid": "4845233571c0572570445f4e3ca4ebc2",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. You may purchase this article from the Ask*IEEE Document Delivery Service at http://www.ieee.org/services/askieee/",
"title": ""
},
{
"docid": "e43ede0fe674fe92fbfa2f76165cf034",
"text": "In this communication, a compact circularly polarized (CP) substrate integrated waveguide (SIW) horn antenna is proposed and investigated. Through etching a sloping slot on the common broad wall of two SIWs, mode coupling is generated between the top and down SIWs, and thus, a new field component as TE01 mode is produced. During the coupling process along the sloping slot, the difference in guide wavelengths of the two orthogonal modes also brings a phase shift between the two modes, which provides a possibility for radiating the CP wave. Moreover, the two different ports will generate the electric field components of TE01 mode with the opposite direction, which indicates the compact SIW horn antenna with a dual CP property can be realized as well. Measured results indicate that the proposed antenna operates with a wide 3-dB axial ratio bandwidth of 11.8% ranging from 17.6 to 19.8 GHz. The measured results are in good accordance with the simulated ones.",
"title": ""
},
{
"docid": "0c177af9c2fffa6c4c667d1b4a4d3d79",
"text": "In the last decade, a large number of different software component models have been developed, with different aims and using different principles and technologies. This has resulted in a number of models which have many similarities, but also principal differences, and in many cases unclear concepts. Component-based development has not succeeded in providing standard principles, as has, for example, object-oriented development. In order to increase the understanding of the concepts and to differentiate component models more easily, this paper identifies, discusses, and characterizes fundamental principles of component models and provides a Component Model Classification Framework based on these principles. Further, the paper classifies a large number of component models using this framework.",
"title": ""
},
{
"docid": "b3881be74f7338038b53dc6ddfa1183d",
"text": "Molecular chaperones, ubiquitin ligases and proteasome impairment have been implicated in several neurodegenerative diseases, including Alzheimer's and Parkinson's disease, which are characterized by accumulation of abnormal protein aggregates (e.g. tau and alpha-synuclein respectively). Here we report that CHIP, an ubiquitin ligase that interacts directly with Hsp70/90, induces ubiquitination of the microtubule associated protein, tau. CHIP also increases tau aggregation. Consistent with this observation, diverse of tau lesions in human postmortem tissue were found to be immunopositive for CHIP. Conversely, induction of Hsp70 through treatment with either geldanamycin or heat shock factor 1 leads to a decrease in tau steady-state levels and a selective reduction in detergent insoluble tau. Furthermore, 30-month-old mice overexpressing inducible Hsp70 show a significant reduction in tau levels. Together these data demonstrate that the Hsp70/CHIP chaperone system plays an important role in the regulation of tau turnover and the selective elimination of abnormal tau species. Hsp70/CHIP may therefore play an important role in the pathogenesis of tauopathies and also represents a potential therapeutic target.",
"title": ""
},
{
"docid": "f7bc1678e45157246bd1cac50fe33aa0",
"text": "Histopathologic diagnosis of tubal intraepithelial carcinoma (TIC) has emerged as a significant challenge in the last few years. The avoidance of pitfalls in the diagnosis of TIC is crucial if a better understanding of its natural history and outcome is to be achieved. Herein, we present a case of a 52-year-old woman who underwent a risk-reducing salpingo-oophorectomy procedure. Histologic examination of a fallopian tube demonstrated a focus of atypical epithelial proliferation, which was initially considered to be a TIC. Complete study of the case indicated that the focus was, in fact, papillary syncytial metaplasia of tubal mucosal endometriosis. Papillary syncytial metaplasia may resemble TIC and should be considered in cases of proliferative lesions of the tubal epithelium.",
"title": ""
},
{
"docid": "322d23354a9bf45146e4cb7c733bf2ec",
"text": "In this chapter we consider the problem of automatic facial expression analysis. Our take on this is that the field has reached a point where it needs to move away from considering experiments and applications under in-the-lab conditions, and move towards so-called in-the-wild scenarios. We assume throughout this chapter that the aim is to develop technology that can be deployed in practical applications under unconstrained conditions. While some first efforts in this direction have been reported very recently, it is still unclear what the right path to achieving accurate, informative, robust, and real-time facial expression analysis will be. To illuminate the journey ahead, we first provide in Sec. 1 an overview of the existing theories and specific problem formulations considered within the computer vision community. Then we describe in Sec. 2 the standard algorithmic pipeline which is common to most facial expression analysis algorithms. We include suggestions as to which of the current algorithms and approaches are most suited to the scenario considered. In section 3 we describe our view of the remaining challenges, and the current opportunities within the field. This chapter is thus not intended as a review of different approaches, but rather a selection of what we believe are the most suitable state-of-the-art algorithms, and a selection of exemplars chosen to characterise a specific approach. We review in section 4 some of the exciting opportunities for the application of automatic facial expression analysis to everyday practical problems and current commercial applications being exploited. Section 5 ends the chapter by summarising the major conclusions drawn. Brais Martinez School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: brais.martinez@nottingham.ac.uk Michel F. Valstar School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: michel.valstar@nottingham.ac.uk",
"title": ""
},
{
"docid": "64e5cad1b64f1412b406adddc98cd421",
"text": "We examine the influence of venture capital on patented inventions in the United States across twenty industries over three decades. We address concerns about causality in several ways, including exploiting a 1979 policy shift that spurred venture capital fundraising. We find that increases in venture capital activity in an industry are associated with significantly higher patenting rates. While the ratio of venture capital to R&D averaged less than 3% from 1983–1992, our estimates suggest that venture capital may have accounted for 8% of industrial innovations in that period.",
"title": ""
},
{
"docid": "8a1a255a338a06c729f586b8c9b513ac",
"text": "In addition to maintaining the GenBank(R) nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's website. NCBI resources include Entrez, PubMed, PubMed Central, LocusLink, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR, OrfFinder, Spidey, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes and related tools, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, SARS Coronavirus Resource, SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD) and the Conserved Domain Architecture Retrieval Tool (CDART). Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at: http://www.ncbi.nlm.nih.gov.",
"title": ""
},
{
"docid": "7a09764d50a72214a0516e85f9a3e5c6",
"text": "The training complexity of deep learning-based channel decoders scales exponentially with the codebook size and therefore with the number of information bits. Thus, neural network decoding (NND) is currently only feasible for very short block lengths. In this work, we show that the conventional iterative decoding algorithm for polar codes can be enhanced when sub-blocks of the decoder are replaced by neural network (NN) based components. Thus, we partition the encoding graph into smaller sub-blocks and train them individually, closely approaching maximum a posteriori (MAP) performance per sub-block. These blocks are then connected via the remaining conventional belief propagation decoding stage(s). The resulting decoding algorithm is non-iterative and inherently enables a highlevel of parallelization, while showing a competitive bit error rate (BER) performance. We examine the degradation through partitioning and compare the resulting decoder to state-of-the art polar decoders such as successive cancellation list and belief propagation decoding.",
"title": ""
},
{
"docid": "690c27afb23df9778af47ddcbcfea48d",
"text": "Introduction Origanum vulgare (the scientific name of oregano) has been studied in depth due to a number of interesting and exciting potential clinical uses. There is also an ongoing interest in a number of industries to replace synthetic chemicals with natural products that have similar properties. Many bioactive compounds can be found in aromatic plants, and there are a number of different ways they can be extracted. In one study, the major components of oregano essential oils were found to be carvacrol, beta-fenchyl alcohol, thymol, and gamma-terpinene.[1] A hot-water extraction was found to be the best method of extracting antioxidant properties and provided the highest phenolic content. This study also tested the oregano extracts against seven bacterial cultures, but they were ineffective. However, the essential oil itself was able to inhibit the growth of all bacteria, causing greater reductions on both Listeria strains that were tested.",
"title": ""
}
] |
scidocsrr
|
ebd84f9c04ddf1989f448fc2bcd74756
|
Creativity: Generating Diverse Questions Using Variational Autoencoders
|
[
{
"docid": "4e5fba594bf9b6236123aa21ecf05075",
"text": "In this paper, we introduce a new dataset consisting of 360,001 focused natural language descriptions for 10,738 images. This dataset, the Visual Madlibs dataset, is collected using automatically produced fill-in-the-blank templates designed to gather targeted descriptions about: people and objects, their appearances, activities, and interactions, as well as inferences about the general scene or its broader context. We provide several analyses of the Visual Madlibs dataset and demonstrate its applicability to two new description generation tasks: focused description generation, and multiple-choice question-answering for images. Experiments using joint-embedding and deep learning methods show promising results on these tasks.",
"title": ""
},
{
"docid": "c879ee3945592f2e39bb3306602bb46a",
"text": "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.",
"title": ""
},
{
"docid": "7ebff2391401cef25b27d510675e9acd",
"text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real c ©2003 Kobus Barnard, Pinar Duygulu, David Forsyth, Nando de Freitas, David Blei and Michael Jordan. BARNARD, DUYGULU, FORSYTH, DE FREITAS, BLEI AND JORDAN scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.",
"title": ""
},
{
"docid": "4337f8c11a71533d38897095e5e6847a",
"text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.1. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?\". They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. ar X iv :1 60 6. 00 06 1v 3 [ cs .C V ] 2 6 O ct 2 01 6 Ques%on:\t\r What\t\r color\t\r on\t\r the stop\t\r light\t\r is\t\r lit\t\r up\t\r \t\r ? ...\t\r ... color\t\r stop\t\r light\t\r lit co-‐a7en%on color\t\r ...\t\r stop\t\r \t\r light\t\r \t\r ... What color\t\r ... the stop light light\t\r \t\r ... 
What color What\t\r color\t\r on\t\r the\t\r stop\t\r light\t\r is\t\r lit\t\r up ...\t\r ... the\t\r stop\t\r light ...\t\r ... stop Image Answer:\t\r green Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.",
"title": ""
}
] |
[
{
"docid": "1c15b05da7b2ac2237ece177bf0fb0e9",
"text": "The purpose of this paper is to present an introduction to Distributed Database. It contains two main parts: first one is fundamental concept in Distributed Database and second one is Different technique use in Distributed Database. Database with a production of huge data sets and their processing in real-time applications, the needs for environmental data management have grown significantly. Management Systems (DBMSs) are a ubiquitous and critical component of modern computing. The architecture and motivation for the design have also been presented in this paper. The Proposed Method is Distributed Data Mining. It is also use for to reduce the complexity of database. Keywords—Distributed Database, DBMS, Computing.",
"title": ""
},
{
"docid": "a922051835f239db76be1dbb8edead3e",
"text": "Among the simplest and most intuitively appealing classes of nonprobabilistic classification procedures are those that weight the evidence of nearby sample observations most heavily. More specifically, one might wish to weight the evidence of a neighbor close to an unclassified observation more heavily than the evidence of another neighbor which is at a greater distance from the unclassified observation. One such classification rule is described which makes use of a neighbor weighting function for the purpose of assigning a class to an unclassified sample. The admissibility of such a rule is also considered.",
"title": ""
},
{
"docid": "d150439e46201c3d3979bc243fb38c26",
"text": "Genetic Algorithms and Evolution Strategies represent two of the three major Evolutionary Algorithms. This paper examines the history, theory and mathematical background, applications, and the current direction of both Genetic Algorithms and Evolution Strategies.",
"title": ""
},
{
"docid": "eb344bf180467ccbd27d0aff2c57be73",
"text": "Most IP-geolocation mapping schemes [14], [16], [17], [18] take delay-measurement approach, based on the assumption of a strong correlation between networking delay and geographical distance between the targeted client and the landmarks. In this paper, however, we investigate a large region of moderately connected Internet and find the delay-distance correlation is weak. But we discover a more probable rule - with high probability the shortest delay comes from the closest distance. Based on this closest-shortest rule, we develop a simple and novel IP-geolocation mapping scheme for moderately connected Internet regions, called GeoGet. In GeoGet, we take a large number of webservers as passive landmarks and map a targeted client to the geolocation of the landmark that has the shortest delay. We further use JavaScript at targeted clients to generate HTTP/Get probing for delay measurement. To control the measurement cost, we adopt a multistep probing method to refine the geolocation of a targeted client, finally to city level. The evaluation results show that when probing about 100 landmarks, GeoGet correctly maps 35.4 percent clients to city level, which outperforms current schemes such as GeoLim [16] and GeoPing [14] by 270 and 239 percent, respectively, and the median error distance in GeoGet is around 120 km, outperforming GeoLim and GeoPing by 37 and 70 percent, respectively.",
"title": ""
},
{
"docid": "1de0fb2c19bf7a61ac2c89af49e3b386",
"text": "Many situations in human life present choices between (a) narrowly preferred particular alternatives and (b) narrowly less preferred (or aversive) particular alternatives that nevertheless form part of highly preferred abstract behavioral patterns. Such alternatives characterize problems of self-control. For example, at any given moment, a person may accept alcoholic drinks yet also prefer being sober to being drunk over the next few days. Other situations present choices between (a) alternatives beneficial to an individual and (b) alternatives that are less beneficial (or harmful) to the individual that would nevertheless be beneficial if chosen by many individuals. Such alternatives characterize problems of social cooperation; choices of the latter alternative are generally considered to be altruistic. Altruism, like self-control, is a valuable temporally-extended pattern of behavior. Like self-control, altruism may be learned and maintained over an individual's lifetime. It needs no special inherited mechanism. Individual acts of altruism, each of which may be of no benefit (or of possible harm) to the actor, may nevertheless be beneficial when repeated over time. However, because each selfish decision is individually preferred to each altruistic decision, people can benefit from altruistic behavior only when they are committed to an altruistic pattern of acts and refuse to make decisions on a case-by-case basis.",
"title": ""
},
{
"docid": "0d27b687287ea23c1eb2bcff307af818",
"text": "To cite: Suchak T, Hussey J, Takhar M, et al. J Fam Plann Reprod Health Care Published Online First: [please include Day Month Year] doi:10.1136/jfprhc-2014101091 BACKGROUND UK figures estimate that in 1998 there were 3170 people over the age of 15 years assigned as male at birth who had presented with gender dysphoria. This figure is comparable to that found in the Netherlands where 2440 have presented; however, far fewer people actually undergo sex reassignment surgery. Recent statistics from the Netherlands indicate that about 1 in 12 000 natal males undergo sex-reassignment and about 1 in 34 000 natal females. Since April 2013, English gender identity services have been among the specialised services commissioned centrally by NHS England and this body is therefore responsible for commissioning transgender surgical services. The growth in the incidence of revealed gender dysphoria amongst both young and adult people has major implications for commissioners and providers of public services. The present annual requirement is 480 genital and gonadal male-to-female reassignment procedures. There are currently three units in the UK offering this surgery for National Health Service (NHS) patients. Prior to surgery trans women will have had extensive evaluation, including blood tests, advice on smoking, alcohol and obesity, and psychological/psychiatric evaluation. They usually begin to take female hormones after 3 months of transition, aiming to encourage development of breast buds and alter muscle and fat distribution. Some patients may elect at this stage to have breast surgery. Before genital surgery can be considered the patient must have demonstrated they have lived for 1 year full-time as a woman. Figure 1 shows a typical post-surgical result. A trans person who has lived exclusively in their identified gender for at least 2 years (as required by the Gender Recognition Act 2004) can apply for a gender recognition certificate (GRC). This is independent of whether gender reassignment surgery has taken place. Once a trans person has a GRC they can then obtain a new birth certificate. The trans person will also have new hospital records in a new name. It is good practice for health providers to take practical steps to ensure that gender reassignment is not casually visible in records or communicated without the informed consent of the user. Consent must always be sought (and documented) for all medical correspondence where the surgery or life before surgery when living as a different gender is mentioned (exceptions include an order of court and prevention or investigation of crime). 5 It is advisable to seek medico-legal advice before disclosing. Not all trans women opt to undergo vaginoplasty. Patients have free choice as to how much surgery they wish to undertake. Trans women often live a considerable distance from where their surgery was performed and as a result many elect to see their own general practitioner or local Sexual Health Clinic if they have postoperative problems. Fortunately reported complications following surgery are rare. Lawrence summarised 15 papers investigating 232 cases of vaginoplasty surgery; 13 reported rectal-vaginal fistula, 39 reported vaginal stenosis and 33 urethral stenosis; however, it is likely that there is significant under-reporting of complications. Here we present some examples of post-vaginoplasty problems presenting to a Sexual Health Service in the North East of England, and how they were managed.",
"title": ""
},
{
"docid": "668b8d1475bae5903783159a2479cc32",
"text": "As environmental concerns and energy consumption continue to increase, utilities are looking at cost effective strategies for improved network operation and consumer consumption. Smart grid is a collection of next generation power delivery concepts that includes new power delivery components, control and monitoring throughout the power grid and more informed customer options. This session will cover utilization of AMI networks to realize some of the smart grid goals.",
"title": ""
},
{
"docid": "2cfb782a527b1806eda302c4c7b63219",
"text": "The latest version of the ISO 26262 standard from 2016 represents the state of the art for a safety-guided development of safety-critical electric/electronic vehicle systems. These vehicle systems include advanced driver assistance systems and vehicle guidance systems. The development process proposed in the ISO 26262 standard is based upon multiple V-models, and defines activities and work products for each process step. In many of these process steps, scenario based approaches can be applied to achieve the defined work products for the development of automated driving functions. To accomplish the work products of different process steps, scenarios have to focus on various aspects like a human understandable notation or a description via state variables. This leads to contradictory requirements regarding the level of detail and way of notation for the representation of scenarios. In this paper, the authors discuss requirements for the representation of scenarios in different process steps defined by the ISO 26262 standard, propose a consistent terminology based on prior publications for the identified levels of abstraction, and demonstrate how scenarios can be systematically evolved along the phases of the development process outlined in the ISO 26262 standard.",
"title": ""
},
{
"docid": "31555a5981fd234fe9dce3ed47f690f2",
"text": "An accredited biennial 2012 study by the Association of Certified Fraud Examiners claims that on average 5% of a company’s revenue is lost because of unchecked fraud every year. The reason for such heavy losses are that it takes around 18 months for a fraud to be caught and audits catch only 3% of the actual fraud. This begs the need for better tools and processes to be able to quickly and cheaply identify potential malefactors. In this paper, we describe a robust tool to identify procurement related fraud/risk, though the general design and the analytical components could be adapted to detecting fraud in other domains. Besides analyzing standard transactional data, our solution analyzes multiple public and private data sources leading to wider coverage of fraud types than what generally exists in the marketplace. Moreover, our approach is more principled in the sense that the learning component, which is based on investigation feedback has formal guarantees. Though such a tool is ever evolving, an initial deployment of this tool over the past 6 months has found many interesting cases from compliance risk and fraud point of view, increasing the number of true positives found by over 80% compared with other state-of-the-art tools that the domain experts were previously using.",
"title": ""
},
{
"docid": "92fb73e03b487d5fbda44e54cf59640d",
"text": "The eyes and periocular area are the central aesthetic unit of the face. Facial aging is a dynamic process that involves skin, subcutaneous soft tissues, and bony structures. An understanding of what is perceived as youthful and beautiful is critical for success. Knowledge of the functional aspects of the eyelid and periocular area can identify pre-preoperative red flags.",
"title": ""
},
{
"docid": "7cfeadc550f412bb92df4f265bf99de0",
"text": "AIM\nCorrective image reconstruction methods which produce reconstructed images with improved spatial resolution and decreased noise level became recently commercially available. In this work, we tested the performance of three new software packages with reconstruction schemes recommended by the manufacturers using physical phantoms simulating realistic clinical settings.\n\n\nMETHODS\nA specially designed resolution phantom containing three (99m)Tc lines sources and the NEMA NU-2 image quality phantom were acquired on three different SPECT/CT systems (General Electrics Infinia, Philips BrightView and Siemens Symbia T6). Measurement of both phantoms was done with the trunk filled with a (99m)Tc-water solution. The projection data were reconstructed using the GE's Evolution for Bone(®), Philips Astonish(®) and Siemens Flash3D(®) software. The reconstruction parameters employed (number of iterations and subsets, the choice of post-filtering) followed theses recommendations of each vendor. These results were compared with reference reconstructions using the ordered subset expectation maximization (OSEM) reconstruction scheme.\n\n\nRESULTS\nThe best results (smallest value for resolution, highest percent contrast values) for all three packages were found for the scatter corrected data without applying any post-filtering. The advanced reconstruction methods improve the full width at half maximum (FWHM) of the line sources from 11.4 to 9.5mm (GE), from 9.1 to 6.4mm (Philips), and from 12.1 to 8.9 mm (Siemens) if no additional post filter was applied. The total image quality control index measured for a concentration ratio of 8:1 improves for GE from 147 to 189, from 179. to 325 for Philips and from 217 to 320 for Siemens using the reference method for comparison. The same trends can be observed for the 4:1 concentration ratio. The use of a post-filter reduces the background variability approximately by a factor of two, but deteriorates significantly the spatial resolution.\n\n\nCONCLUSIONS\nUsing advanced reconstruction algorithms the largest improvement in image resolution and contrast is found for the scatter corrected slices without applying post-filtering. The user has to choose whether noise reduction by post-filtering or improved image resolution fits better a particular imaging procedure.",
"title": ""
},
{
"docid": "0f86e14cc3d47efe5762dc4db83a3b70",
"text": "Emotional intelligence (EI) involves the ability to carry out accurate reasoning about emotions and the ability to use emotions and emotional knowledge to enhance thought. We discuss the origins of the EI concept, define EI, and describe the scope of the field today. We review three approaches taken to date from both a theoretical and methodological perspective. We find that Specific-Ability and Integrative-Model approaches adequately conceptualize and measure EI. Pivotal in this review are those studies that address the relation between EI measures and meaningful criteria including social outcomes, performance, and psychological and physical well-being. The Discussion section is followed by a list of summary points and recommended issues for future research.",
"title": ""
},
{
"docid": "9f8ff3d7322aefafb99e5cc0dd3b33c2",
"text": "We report on the use of scenario-based methods for evaluating collaborative systems. We describe the method, the case study where it was applied, and provide results of its efficacy in the field. The results suggest that scenario-based evaluation is effective in helping to focus evaluation efforts and in identifying the range of technical, human, organizational and other contextual factors that impact system success. The method also helps identify specific actions, for example, prescriptions for design to enhance system effectiveness. However, we found the method somewhat less useful for identifying the measurable benefits gained from a CSCW implementation, which was one of our primary goals. We discuss challenges faced applying the technique, suggest recommendations for future research, and point to implications for practice.",
"title": ""
},
{
"docid": "3476246809afe4e6b7cef9bbbed1926e",
"text": "The aim of this study was to investigate the efficacy of a proposed new implant mediated drug delivery system (IMDDS) in rabbits. The drug delivery system is applied through a modified titanium implant that is configured to be implanted into bone. The implant is hollow and has multiple microholes that can continuously deliver therapeutic agents into the systematic body. To examine the efficacy and feasibility of the IMDDS, we investigated the pharmacokinetic behavior of dexamethasone in plasma after a single dose was delivered via the modified implant placed in the rabbit tibia. After measuring the plasma concentration, the areas under the curve showed that the IMDDS provided a sustained release for a relatively long period. The result suggests that the IMDDS can deliver a sustained release of certain drug components with a high bioavailability. Accordingly, the IMDDS may provide the basis for a novel approach to treating patients with chronic diseases.",
"title": ""
},
{
"docid": "aefa758e6b5681c213150ed674eae915",
"text": "This paper presents a solution to automatically recognize the correct left/right and upright/upside-down orientation of iris images. This solution can be used to counter spoofing attacks directed to generate fake identities by rotating an iris image or the iris sensor during the acquisition. Two approaches are compared on the same data, using the same evaluation protocol: 1) feature engineering, using hand-crafted features classified by a support vector machine (SVM) and 2) feature learning, using data-driven features learned and classified by a convolutional neural network (CNN). A data set of 20 750 iris images, acquired for 103 subjects using four sensors, was used for development. An additional subject-disjoint data set of 1,939 images, from 32 additional subjects, was used for testing purposes. Both same-sensor and cross-sensor tests were carried out to investigate how the classification approaches generalize to unknown hardware. The SVM-based approach achieved an average correct classification rate above 95% (89%) for recognition of left/right (upright/upside-down) orientation when tested on subject-disjoint data and camera-disjoint data, and 99% (97%) if the images were acquired by the same sensor. The CNN-based approach performed better for same-sensor experiments, and presented slightly worse generalization capabilities to unknown sensors when compared with the SVM. We are not aware of any other papers on the automatic recognition of upright/upside-down orientation of iris images, or studying both hand-crafted and data-driven features in same-sensor and cross-sensor subject-disjoint experiments. The data sets used in this paper, along with random splits of the data used in cross-validation, are being made available.",
"title": ""
},
{
"docid": "24e1a6f966594d4230089fc433e38ce6",
"text": "The need for omnidirectional antennas for wireless applications has increased considerably. The antennas are used in a variety of bands anywhere from 1.7 to 2.5 GHz, in different configurations which mainly differ in gain. The omnidirectionality is mostly obtained using back-to-back elements or simply using dipoles in different collinear-array configurations. The antenna proposed in this paper is a patch which was built in a cylindrical geometry rather than a planar one, and which generates an omnidirectional pattern in the H-plane.",
"title": ""
},
{
"docid": "d15804e98b58fa5ec0985c44f6bb6033",
"text": "Urrently, the most successful learning models in computer vision are based on learning successive representations followed by a decision layer. This is usually actualized through feedforward multilayer neural networks, e.g. ConvNets, where each layer forms one of such successive representations. However, an alternative that can achieve the same goal is a feedback based approach in which the representation is formed in an iterative manner based on a feedback received from previous iterations output. We establish that a feedback based approach has several core advantages over feedforward: it enables making early predictions at the query time, its output naturally conforms to a hierarchical structure in the label space (e.g. a taxonomy), and it provides a new basis for Curriculum Learning. We observe that feedback develops a considerably different representation compared to feedforward counterparts, in line with the aforementioned advantages. We provide a general feedback based learning architecture, instantiated using existing RNNs, with the endpoint results on par or better than existing feedforward networks and the addition of the above advantages.",
"title": ""
},
{
"docid": "5f72d4b335c16887c895e611431fa58f",
"text": "As Android OS establishes itself as the primary platform on smartphones, a substantial increase in malware targeted at Android devices is being observed in the wild. While anti-virus software is available, and Android limits applications to user approved permissions, many users remain unaware of the threat posed by malware and of actual infections on their devices. In this paper we explore techniques to enable mobile network operators to detect Android malware and violations of user privacy through network traffic analysis.",
"title": ""
},
{
"docid": "35c08abd57d2700164373c688c24b2a6",
"text": "Image enhancement is a common pre-processing step before the extraction of biometric features from a fingerprint sample. This can be essential especially for images of low image quality. An ideal fingerprint image enhancement should intend to improve the end-to-end biometric performance, i.e. the performance achieved on biometric features extracted from enhanced fingerprint samples. We use a model from Deep Learning for the task of image enhancement. This work's main contribution is a dedicated cost function which is optimized during training The cost function takes into account the biometric feature extraction. Our approach intends to improve the accuracy and reliability of the biometric feature extraction process: No feature should be missed and all features should be extracted as precise as possible. By doing so, the loss function forced the image enhancement to learn how to improve the suitability of a fingerprint sample for a biometric comparison process. The effectivity of the cost function was demonstrated for two different biometric feature extraction algorithms.",
"title": ""
},
{
"docid": "0f9b073461047d698b6bba8d9ee7bff2",
"text": "Different psychotherapeutic theories provide contradictory accounts of adult narcissism as the product of either parental coldness or excessive parental admiration during childhood. Yet, none of these theories has been tested systematically in a nonclinical sample. The authors compared four structural equation models predicting overt and covert narcissism among 120 United Kingdom adults. Both forms of narcissism were predicted by both recollections of parental coldness and recollections of excessive parental admiration. Moreover, a suppression relationship was detected between these predictors: The effects of each were stronger when modeled together than separately. These effects were found after controlling for working models of attachment; covert narcissism was predicted also by attachment anxiety. This combination of childhood experiences may help to explain the paradoxical combination of grandiosity and fragility in adult narcissism.",
"title": ""
}
] |
scidocsrr
|
756fcbce37d44dab3ebb080ee1b5f0c8
|
COnGRATS: Realistic simulation of traffic sequences for autonomous driving
|
[
{
"docid": "641b18d9173f4badc570662fd38859f7",
"text": "With the MPI-Sintel Flow dataset, we introduce a naturalistic dataset for optical flow evaluation derived from the open source CGI movie Sintel. In contrast to the well-known Middlebury dataset, the MPI-Sintel Flow dataset contains longer and more varied sequences with image degradations such as motion blur, defocus blur, and atmospheric effects. Animators use a variety of techniques that produce pleasing images but make the raw animation data inappropriate for computer vision applications if used “out of the box”. Several changes to the rendering software and animation files were necessary in order to produce data for flow evaluation and similar changes are likely for future efforts to construct a scientific dataset from an animated film. Here we distill our experience with Sintel into a set of best practices for using computer animation to generate scientific data for vision research.",
"title": ""
},
{
"docid": "7ff291833a25ca1a073ebc2a2e5274e7",
"text": "High precision ground truth data is a very important factor for the development and evaluation of computer vision algorithms and especially for advanced driver assistance systems. Unfortunately, some types of data, like accurate optical flow and depth as well as pixel-wise semantic annotations are very difficult to obtain. In order to address this problem, in this paper we present a new framework for the generation of high quality synthetic camera images, depth and optical flow maps and pixel-wise semantic annotations. The framework is based on a realistic driving simulator called VDrift [1], which allows us to create traffic scenarios very similar to those in real life. We show how we can use the proposed framework to generate an extensive dataset for the task of multi-class image segmentation. We use the dataset to train a pairwise CRF model and to analyze the effects of using various combinations of features in different image modalities.",
"title": ""
},
{
"docid": "54ddf9729582747d66e703dd72f51425",
"text": "Background subtraction is one of the key techniques for automatic video analysis, especially in the domain of video surveillance. Although its importance, evaluations of recent background subtraction methods with respect to the challenges of video surveillance suffer from various shortcomings. To address this issue, we first identify the main challenges of background subtraction in the field of video surveillance. We then compare the performance of nine background subtraction methods with post-processing according to their ability to meet those challenges. Therefore, we introduce a new evaluation data set with accurate ground truth annotations and shadow masks. This enables us to provide precise in-depth evaluation of the strengths and drawbacks of background subtraction methods.",
"title": ""
},
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
}
] |
[
{
"docid": "c20733b414a1b39122ef54d161885d81",
"text": "This paper discusses the role of clusters and focal firms in the economic performance of small firms in Italy. Using the example of the packaging industry of northern Italy, it shows how clusters of small firms have emerged around a few focal or leading companies. These companies have helped the clusters grow and diversify through technological and managerial spillover effects, through the provision of purchase orders, and sometimes through financial links. The role of common local training institutes, whose graduates often start up small firms within the local cluster, is also discussed.",
"title": ""
},
{
"docid": "0be3178ff2f412952934a49084ee8edc",
"text": "This article introduces the physics of information in the context of molecular biology and genomics. Entropy and information, the two central concepts of Shannon’s theory of information and communication, are often confused with each other but play transparent roles when applied to statistical ensembles (i.e., identically prepared sets) of symbolic sequences. Such an approach can distinguish between entropy and information in genes, predict the secondary structure of ribozymes, and detect the covariation between residues in folded proteins. We also review applications to molecular sequence and structure analysis, and introduce new tools in the characterization of resistance mutations, and in drug design. In a curious twist of history, the dawn of the age of genomics has both seen the rise of the science of bioinformatics as a tool to cope with the enormous amounts of data being generated daily, and the decline of the theory of information as applied to molecular biology. Hailed as a harbinger of a “new movement” (Quastler 1953) along with Cybernetics, the principles of information theory were thought to be applicable to the higher functions of living organisms, and able to analyze such functions as metabolism, growth, and differentiation (Quastler 1953). Today, the metaphors and the jargon of information theory are still widely used (Maynard Smith 1999a, 1999b), as opposed to the mathematical formalism, which is too often considered to be inapplicable to biological information. Clearly, looking back it appears that too much hope was laid upon this theory’s relevance for biology. However, there was well-founded optimism that information theory ought to be able to address the complex issues associated with the storage of information in the genetic code, only to be repeatedly questioned and rebuked (see, e.g., Vincent 1994, Sarkar 1996). In this article, I outline the concepts of entropy and information (as defined by Shannon) in the context of molecular biology. We shall see that not only are these terms well-defined and useful, they also coincide precisely with what we intuitively mean when we speak about information stored in genes, for example. I then present examples of applications of the theory to measuring the information content of biomolecules, the identification of polymorphisms, RNA and protein secondary structure prediction, the prediction and analysis of molecular interactions, and drug design. 1 Entropy and Information Entropy and information are often used in conflicting manners in the literature. A precise understanding, both mathematical and intuitive, of the notion of information (and its relationship to entropy) is crucial for applications in molecular biology. Therefore, let us begin by outlining Shannon’s original entropy concept (Shannon, 1948). 1.1 Shannon’s Uncertainty Measure Entropy in Shannon’s theory (defined mathematically below) is a measure of uncertainty about the identity of objects in an ensemble. Thus, while “en-",
"title": ""
},
{
"docid": "69548f662a286c0b3aca5374f36ce2c7",
"text": "A hallmark of glaucomatous optic nerve damage is retinal ganglion cell (RGC) death. RGCs, like other central nervous system neurons, have a limited capacity to survive or regenerate an axon after injury. Strategies that prevent or slow down RGC degeneration, in combination with intraocular pressure management, may be beneficial to preserve vision in glaucoma. Recent progress in neurobiological research has led to a better understanding of the molecular pathways that regulate the survival of injured RGCs. Here we discuss a variety of experimental strategies including intraocular delivery of neuroprotective molecules, viral-mediated gene transfer, cell implants and stem cell therapies, which share the ultimate goal of promoting RGC survival after optic nerve damage. The challenge now is to assess how this wealth of knowledge can be translated into viable therapies for the treatment of glaucoma and other optic neuropathies.",
"title": ""
},
{
"docid": "f9e3a402e1b36e27bada5499d958b2a8",
"text": "A miniaturized antipodal Vivaldi antenna to operate from 1 to 30 GHz is designed for nondestructive testing and evaluation of construction materials, such as concrete, polymers, and dielectric composites. A step-by-step procedure has been employed to design and optimize performance of the proposed antenna. First, a conventional antipodal Vivaldi antenna (CAVA) is designed as a reference. Second, the CAVA is shortened to have a small size of the CAVA. Third, to extend the low end of frequency band, the inner edges of the top and bottom radiators of the shortened CAVA have been bent. To enhance gain at lower frequencies, regular slit edge technique is employed. Finally, a half elliptical-shaped dielectric lens as an extension of the antenna substrate is added to the antenna to feature high gain and front-to-back ratio. A prototype of the antenna is employed as a part of the microwave imaging system to detect voids inside concrete specimen. High-range resolution images of voids are achieved by applying synthetic aperture radar algorithm.",
"title": ""
},
{
"docid": "7526ae3542d1e922bd73be0da7c1af72",
"text": "Cooperative coevolutionary algorithms (CCEAs) rely on multiple coevolving populations for the evolution of solutions composed of coadapted components. CCEAs enable, for instance, the evolution of cooperative multiagent systems composed of heterogeneous agents, where each agent is modelled as a component of the solution. Previous works have, however, shown that CCEAs are biased toward stability: the evolutionary process tends to converge prematurely to stable states instead of (near-)optimal solutions. In this study, we show how novelty search can be used to avoid the counterproductive attraction to stable states in coevolution. Novelty search is an evolutionary technique that drives evolution toward behavioural novelty and diversity rather than exclusively pursuing a static objective. We evaluate three novelty-based approaches that rely on, respectively (1) the novelty of the team as a whole, (2) the novelty of the agents’ individual behaviour, and (3) the combination of the two. We compare the proposed approaches with traditional fitness-driven cooperative coevolution in three simulated multirobot tasks. Our results show that team-level novelty scoring is the most effective approach, significantly outperforming fitness-driven coevolution at multiple levels. Novelty-driven cooperative coevolution can substantially increase the potential of CCEAs while maintaining a computational complexity that scales well with the number of populations.",
"title": ""
},
{
"docid": "d9273d3d9bb409ac3d46519a35f61d4a",
"text": "The increasing interest in using Near Field Communications (NFC) technology [1] at 13.5MHz is growing rapidly in the area of contactless payments, as well as numerous other applications, between devices that are within 10cm distance apart. However, there is growing concern that the use of such devices for contactless payments invites problems with regards to using metallic objects in the vicinity of the two devices to act as “rogue” antennas, which eavesdrop information during a financial transaction is taking place. This paper presents aspects of designing H-antennas both for the two devices communicating while also identifying the means by which rogue antennas can be created by exploiting real life metallic structures. In this paper, a shopping trolley is taken as an example.",
"title": ""
},
{
"docid": "53df69bf8750a7e97f12b1fcac14b407",
"text": "In photovoltaic (PV) power systems where a set of series-connected PV arrays (PVAs) is connected to a conventional two-level inverter, the occurrence of partial shades and/or the mismatching of PVAs leads to a reduction of the power generated from its potential maximum. To overcome these problems, the connection of the PVAs to a multilevel diode-clamped converter is considered in this paper. A control and pulsewidth-modulation scheme is proposed, capable of independently controlling the operating voltage of each PVA. Compared to a conventional two-level inverter system, the proposed system configuration allows one to extract maximum power, to reduce the devices voltage rating (with the subsequent benefits in device-performance characteristics), to reduce the output-voltage distortion, and to increase the system efficiency. Simulation and experimental tests have been conducted with three PVAs connected to a four-level three-phase diode-clamped converter to verify the good performance of the proposed system configuration and control strategy.",
"title": ""
},
{
"docid": "fa8e64cca3e690814cb950342ce43479",
"text": "We estimated the effect of human capital on leadership. Our human capital measures included not only the traditional measures of education and on-the-job learning but also measures of cognitive and noncognitive abilities. The measures of cognitive abilities included numeracy, literacy, and problem solving, and the noncognitive abilities measures included perseverance, openness to learning, and social trust. Our data came from the Programme for the International Assessment of Adult Competencies (PIAAC) survey for the United States. The results indicated that, in addition to education and on-the-job learning, both cognitive and noncognitive abilities were significant and substantial determinants of leadership. More specifically, out of the cognitive abilities, the most important factor was problem-solving ability; and among noncognitive abilities included, perseverance was most important.",
"title": ""
},
{
"docid": "40df4f2d0537bca3cf92dc3005d2b9f3",
"text": "The pages of this Sample Chapter may have slight variations in final published form. H istorically, we talk of first-force psychodynamic, second-force cognitive-behavioral, and third-force existential-humanistic counseling and therapy theories. Counseling and psychotherapy really began with Freud and psychoanalysis. James Watson and, later, B. F. Skinner challenged Freud's emphasis on the unconscious and focused on observable behavior. Carl Rogers, with his person-centered counseling, revolutionized the helping professions by focusing on the importance of nurturing a caring therapist-client relationship in the helping process. All three approaches are still alive and well in the fields of counseling and psychology, as discussed in Chapters 5 through 10. As you reflect on the new knowledge and skills you exercised by reading the preceding chapters and completing the competency-building activities in those chapters, hopefully you part three 319 will see that you have gained a more sophisticated foundational understanding of the three traditional theoretical forces that have shaped the fields of counseling and therapy over the past one hundred years. Efforts in this book have been intended to bring your attention to both the strengths and limitations of psychodynamic, cognitive-behavioral, and existential-humanistic perspectives. With these perspectives in mind, the following chapters examine the fourth major theoretical force that has emerged in the mental health professions over the past 40 years: the multicultural-feminist-social justice counseling world-view. The perspectives of the fourth force challenge you to learn new competencies you will need to acquire to work effectively, respectfully, and ethically in a culturally diverse 21st-century society. Part Three begins by discussing the rise of the feminist counseling and therapy perspective (Chapter 11) and multicultural counseling and therapy (MCT) theories (Chapter 12). To assist you in synthesizing much of the information contained in all of the preceding chapters, Chapter 13 presents a comprehensive and integrative helping theory referred to as developmental counseling and therapy (DCT). Chapter 14 offers a comprehensive examination of family counseling and therapy theories to further extend your knowledge of ways that mental health practitioners can assist entire families in realizing new and untapped dimensions of their collective well-being. Finally Chapter 15 provides guidelines to help you develop your own approach to counseling and therapy that complements a growing awareness of your own values, biases, preferences, and relational compe-tencies as a mental health professional. Throughout, competency-building activities offer you opportunities to continue to exercise new skills associated with the different theories discussed in Part Three. …",
"title": ""
},
{
"docid": "83fd53c0f9bbf4093d14e2784ebbae5f",
"text": "This paper present a short survey about recent trends on the arising field of big data. After a definition and explanation of ”Big Data” and a discussion why data sizes increase, appropriate methods to solve big data problems are introduced. In addition, recent applications and future potentials in smart buildings and smart grids are discussed.",
"title": ""
},
{
"docid": "4bcc299aaaea50bfbf11960b66d6d5d3",
"text": "The multigram model assumes that language can be described as the output of a memoryless source that emits variable-length sequences of words. The estimation of the model parameters can be formulated as a Maximum Likelihood estimation problem from incomplete data. We show that estimates of the model parameters can be computed through an iterative Expectation-Maximization algorithm and we describe a forward-backward procedure for its implementation. We report the results of a systematical evaluation of multi-grams for language modeling on the ATIS database. The objective performance measure is the test set perplexity. Our results show that multigrams outperform conventional n-grams for this task.",
"title": ""
},
{
"docid": "c1978e4936ed5bda4e51863dea7e93ee",
"text": "In needle-based medical procedures, beveled-tip flexible needles are steered inside soft tissue with the aim of reaching pre-defined target locations. The efficiency of needle-based interventions depends on accurate control of the needle tip. This paper presents a comprehensive mechanics-based model for simulation of planar needle insertion in soft tissue. The proposed model for needle deflection is based on beam theory, works in real-time, and accepts the insertion velocity as an input that can later be used as a control command for needle steering. The model takes into account the effects of tissue deformation, needle-tissue friction, tissue cutting force, and needle bevel angle on needle deflection. Using a robot that inserts a flexible needle into a phantom tissue, various experiments are conducted to separately identify different subsets of the model parameters. The validity of the proposed model is verified by comparing the simulation results to the empirical data. The results demonstrate the accuracy of the proposed model in predicting the needle tip deflection for different insertion velocities.",
"title": ""
},
{
"docid": "eadc810575416fccea879c571ddfbfd2",
"text": "In this paper, we propose a zoom-out-and-in network for generating object proposals. A key observation is that it is difficult to classify anchors of different sizes with the same set of features. Anchors of different sizes should be placed accordingly based on different depth within a network: smaller boxes on high-resolution layers with a smaller stride while larger boxes on low-resolution counterparts with a larger stride. Inspired by the conv/deconv structure, we fully leverage the low-level local details and high-level regional semantics from two feature map streams, which are complimentary to each other, to identify the objectness in an image. A map attention decision (MAD) unit is further proposed to aggressively search for neuron activations among two streams and attend the most contributive ones on the feature learning of the final loss. The unit serves as a decision-maker to adaptively activate maps along certain channels with the solely purpose of optimizing the overall training loss. One advantage of MAD is that the learned weights enforced on each feature channel is predicted on-the-fly based on the input context, which is more suitable than the fixed enforcement of a convolutional kernel. Experimental results on three datasets demonstrate the effectiveness of our proposed algorithm over other state-of-the-arts, in terms of average recall for region proposal and average precision for object detection.",
"title": ""
},
{
"docid": "d37388cba8f77630f47a419d6d7094a4",
"text": "The aim of the study was to develop waist circumference (WC) percentiles in Polish children and youth and to compare these with the results obtained in other countries. The study comprised a random group of 5663 Polish children aged 7-18 years. Smoothed WC percentile curves were computed using the LMS method. The curves displaying the values of the 50th (WC(50)) and the 90th (WC(90)) percentile were then compared with the results of similar studies carried out in children from the UK, Spain, Germany, Turkey, Cyprus, Canada and the USA. WC increased with age in both boys and girls and in all observed age periods the boys were seen to dominate. For 18-year-old Polish boys and girls the values of WC(90) were 86.5 and 78.2, respectively, and were lower than the current criteria developed by the International Diabetes Federation. Both WC(50) and WC(90) were higher in Polish boys and girls compared with their counterparts in the UK, Turkey and Canada and significantly lower than in children from the USA, Cyprus and Spain. The percentile curves for Polish children and youth, which were developed here for the first time, are base curves that can be applied in analysing trends as well as making comparisons with results of similar studies performed in other countries.",
"title": ""
},
{
"docid": "0ec1a33be6e06b4dbff7c906ccf970f0",
"text": "Free/Open Source Software (F/OSS) projects are people-oriented and knowledge intensive software development environments. Many researchers focused on mailing lists to study coding activities of software developers. How expert software developers interact with each other and with non-developers in the use of community products have received little attention. This paper discusses the altruistic sharing of knowledge between knowledge providers and knowledge seekers in the Developer and User mailing lists of the Debian project. We analyze the posting and replying activities of the participants by counting the number of email messages they posted to the lists and the number of replies they made to questions others posted. We found out that participants interact and share their knowledge a lot, their positing activity is fairly highly correlated with their replying activity, the characteristics of posting and replying activities are different for different kinds of lists, and the knowledge sharing activity of self-organizing Free/Open Source communities could best be explained in terms of what we called ‘‘Fractal Cubic Distribution’’ rather than the power-law distribution mostly reported in the literature. The paper also proposes what could be researched in knowledge sharing activities in F/OSS projects mailing list and for what purpose. The research findings add to our understanding of knowledge sharing activities in F/OSS projects. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "cd25829b5e42a77485ceefd18b682410",
"text": "Members of the Fleischner Society compiled a glossary of terms for thoracic imaging that replaces previous glossaries published in 1984 and 1996 for thoracic radiography and computed tomography (CT), respectively. The need to update the previous versions came from the recognition that new words have emerged, others have become obsolete, and the meaning of some terms has changed. Brief descriptions of some diseases are included, and pictorial examples (chest radiographs and CT scans) are provided for the majority of terms.",
"title": ""
},
{
"docid": "18a317b8470b4006ccea0e436f54cfcd",
"text": "Device-to-device communications enable two proximity users to transmit signal directly without going through the base station. It can increase network spectral efficiency and energy efficiency, reduce transmission delay, offload traffic for the BS, and alleviate congestion in the cellular core networks. However, many technical challenges need to be addressed for D2D communications to harvest the potential benefits, including device discovery and D2D session setup, D2D resource allocation to guarantee QoS, D2D MIMO transmission, as well as D2D-aided BS deployment in heterogeneous networks. In this article, the basic concepts of D2D communications are first introduced, and then existing fundamental works on D2D communications are discussed. In addition, some potential research topics and challenges are also identified.",
"title": ""
},
{
"docid": "02e3f296f7c0c30cc8320abb7456bc9c",
"text": "Purpose – This research aims to examine the relationship between information security strategy and organization performance, with organizational capabilities as important factors influencing successful implementation of information security strategy and organization performance. Design/methodology/approach – Based on existing literature in strategic management and information security, a theoretical model was proposed and validated. A self-administered survey instrument was developed to collect empirical data. Structural equation modeling was used to test hypotheses and to fit the theoretical model. Findings – Evidence suggests that organizational capabilities, encompassing the ability to develop high-quality situational awareness of the current and future threat environment, the ability to possess appropriate means, and the ability to orchestrate the means to respond to information security threats, are positively associated with effective implementation of information security strategy, which in turn positively affects organization performance. However, there is no significant relationship between decision making and information security strategy implementation success. Research limitations/implications – The study provides a starting point for further research on the role of decision-making in information security. Practical implications – Findings are expected to yield practical value for business leaders in understanding the viable predisposition of organizational capabilities in the context of information security, thus enabling firms to focus on acquiring the ones indispensable for improving organization performance. Originality/value – This study provides the body of knowledge with an empirical analysis of organization’s information security capabilities as an aggregation of sense making, decision-making, asset availability, and operations management constructs.",
"title": ""
},
{
"docid": "10a2fefd81b61e3184d3fdc018ff42ab",
"text": "Recently, models based on deep neural networks have dominated the fields of scene text detection and recognition. In this paper, we investigate the problem of scene text spotting, which aims at simultaneous text detection and recognition in natural images. An end-to-end trainable neural network model for scene text spotting is proposed. The proposed model, named as Mask TextSpotter, is inspired by the newly published work Mask R-CNN. Different from previous methods that also accomplish text spotting with end-to-end trainable deep neural networks, Mask TextSpotter takes advantage of simple and smooth end-to-end learning procedure, in which precise text detection and recognition are acquired via semantic segmentation. Moreover, it is superior to previous methods in handling text instances of irregular shapes, for example, curved text. Experiments on ICDAR2013, ICDAR2015 and Total-Text demonstrate that the proposed method achieves state-of-the-art results in both scene text detection and end-to-end text recognition tasks.",
"title": ""
},
{
"docid": "5cec6746f24246f6e99b1dae06f9a21a",
"text": "Recently there has been arising interest in automatically recognizing nonverbal behaviors that are linked with psychological conditions. Work in this direction has shown great potential for cases such as depression and post-traumatic stress disorder (PTSD), however most of the times gender differences have not been explored. In this paper, we show that gender plays an important role in the automatic assessment of psychological conditions such as depression and PTSD. We identify a directly interpretable and intuitive set of predictive indicators, selected from three general categories of nonverbal behaviors: affect, expression variability and motor variability. For the analysis, we employ a semi-structured virtual human interview dataset which includes 53 video recorded interactions. Our experiments on automatic classification of psychological conditions show that a gender-dependent approach significantly improves the performance over a gender agnostic one.",
"title": ""
}
] |
scidocsrr
|
5a6887e33ec830afafeae7b655b9823d
|
A Study on Outlier Detection for Temporal Data
|
[
{
"docid": "f598677e19789c92c31936440e709c4d",
"text": "Temporal datasets, in which data evolves continuously, exist in a wide variety of applications, and identifying anomalous or outlying objects from temporal datasets is an important and challenging task. Different from traditional outlier detection, which detects objects that have quite different behavior compared with the other objects, temporal outlier detection tries to identify objects that have different evolutionary behavior compared with other objects. Usually objects form multiple communities, and most of the objects belonging to the same community follow similar patterns of evolution. However, there are some objects which evolve in a very different way relative to other community members, and we define such objects as evolutionary community outliers. This definition represents a novel type of outliers considering both temporal dimension and community patterns. We investigate the problem of identifying evolutionary community outliers given the discovered communities from two snapshots of an evolving dataset. To tackle the challenges of community evolution and outlier detection, we propose an integrated optimization framework which conducts outlier-aware community matching across snapshots and identification of evolutionary outliers in a tightly coupled way. A coordinate descent algorithm is proposed to improve community matching and outlier detection performance iteratively. Experimental results on both synthetic and real datasets show that the proposed approach is highly effective in discovering interesting evolutionary community outliers.",
"title": ""
},
{
"docid": "90564374d0c72816f930bc629f97d277",
"text": "Outlier detection is an integral component of statistical modelling and estimation. For highdimensional data, classical methods based on the Mahalanobis distance are usually not applicable. We propose an outlier detection procedure that replaces the classical minimum covariance determinant estimator with a high-breakdown minimum diagonal product estimator. The cut-off value is obtained from the asymptotic distribution of the distance, which enables us to control the Type I error and deliver robust outlier detection. Simulation studies show that the proposed method behaves well for high-dimensional data.",
"title": ""
},
{
"docid": "a0ebe19188abab323122a5effc3c4173",
"text": "In this paper, we present LOADED, an algorithm for outlier detection in evolving data sets containing both continuous and categorical attributes. LOADED is a tunable algorithm, wherein one can trade off computation for accuracy so that domain-specific response times are achieved. Experimental results show that LOADED provides very good detection and false positive rates, which are several times better than those of existing distance-based schemes.",
"title": ""
}
] |
[
{
"docid": "b91b42da0e7ffe838bf9d7ab0bd54bea",
"text": "When creating line drawings, artists frequently depict intended curves using multiple, tightly clustered, or overdrawn, strokes. Given such sketches, human observers can readily envision these intended, aggregate, curves, and mentally assemble the artist's envisioned 2D imagery. Algorithmic stroke consolidation---replacement of overdrawn stroke clusters by corresponding aggregate curves---can benefit a range of sketch processing and sketch-based modeling applications which are designed to operate on consolidated, intended curves. We propose StrokeAggregator, a novel stroke consolidation method that significantly improves on the state of the art, and produces aggregate curve drawings validated to be consistent with viewer expectations. Our framework clusters strokes into groups that jointly define intended aggregate curves by leveraging principles derived from human perception research and observation of artistic practices. We employ these principles within a coarse-to-fine clustering method that starts with an initial clustering based on pairwise stroke compatibility analysis, and then refines it by analyzing interactions both within and in-between clusters of strokes. We facilitate this analysis by computing a common 1D parameterization for groups of strokes via common aggregate curve fitting. We demonstrate our method on a large range of line drawings, and validate its ability to generate consolidated drawings that are consistent with viewer perception via qualitative user evaluation, and comparisons to manually consolidated drawings and algorithmic alternatives.",
"title": ""
},
{
"docid": "ebe28ced7ecfccd52aa01b7740a617d3",
"text": "Converting handwritten formulas to LaTex is a challenging machine learning problem. An essential step in the recognition of mathematical formulas is the symbol recognition. In this paper we show that pyramids of oriented gradients (PHOG) are effective features for recognizing mathematical symbols. Our best results are obtained using PHOG features along with a one-againstone SVM classifier. We train our classifier using images extracted from XY coordinates of online data from the CHROHME dataset, which contains 22000 character samples. We limit our analysis to 59 characters. The classifier achieves a 96% generalization accuracy on these characters and makes reasonable mistakes. We also demonstrate that our classifier is able to generalize gracefully to phone images of mathematical symbols written by a new user. On a small experiment performed on images of 75 handwritten symbols, the symbol recognition rates is 92 %. The code is available at: https://github.com/nicodjimenez/",
"title": ""
},
{
"docid": "c478773f832e84e560b57a5ed74cbc76",
"text": "Structural variants are implicated in numerous diseases and make up the majority of varying nucleotides among human genomes. Here we describe an integrated set of eight structural variant classes comprising both balanced and unbalanced variants, which we constructed using short-read DNA sequencing data and statistically phased onto haplotype blocks in 26 human populations. Analysing this set, we identify numerous gene-intersecting structural variants exhibiting population stratification and describe naturally occurring homozygous gene knockouts that suggest the dispensability of a variety of human genes. We demonstrate that structural variants are enriched on haplotypes identified by genome-wide association studies and exhibit enrichment for expression quantitative trait loci. Additionally, we uncover appreciable levels of structural variant complexity at different scales, including genic loci subject to clusters of repeated rearrangement and complex structural variants with multiple breakpoints likely to have formed through individual mutational events. Our catalogue will enhance future studies into structural variant demography, functional impact and disease association.",
"title": ""
},
{
"docid": "b43178b53f927eb90473e2850f948cb6",
"text": "We study the problem of learning a navigation policy for a robot to actively search for an object of interest in an indoor environment solely from its visual inputs. While scene-driven visual navigation has been widely studied, prior efforts on learning navigation policies for robots to find objects are limited. The problem is often more challenging than target scene finding as the target objects can be very small in the view and can be in an arbitrary pose. We approach the problem from an active perceiver perspective, and propose a novel framework that integrates a deep neural network based object recognition module and a deep reinforcement learning based action prediction mechanism. To validate our method, we conduct experiments on both a simulation dataset (AI2-THOR)and a real-world environment with a physical robot. We further propose a new decaying reward function to learn the control policy specific to the object searching task. Experimental results validate the efficacy of our method, which outperforms competing methods in both average trajectory length and success rate.",
"title": ""
},
{
"docid": "ec1e79530ef20e2d8610475d07ee140d",
"text": "a School of Social Sciences, Faculty of Health, Education and Social Sciences, University of the West of Scotland, High St., Paisley Campus, Paisley PA1 2BE, Scotland, United Kingdom b School of Computing, Faculty of Science and Technology, University of the West of Scotland, Paisley Campus, Paisley PA1 2BE, Scotland, United Kingdom c School of Psychological Sciences and Health, Faculty of Humanities and Social Science, University of Strathclyde, Glasgow, Scotland, United Kingdom",
"title": ""
},
{
"docid": "155c9444bfdb61352eddd7140ae75125",
"text": "To the best of our knowledge, we present the first hardware implementation of isogeny-based cryptography available in the literature. Particularly, we present the first implementation of the supersingular isogeny Diffie-Hellman (SIDH) key exchange, which features quantum-resistance. We optimize this design for speed by creating a high throughput multiplier unit, taking advantage of parallelization of arithmetic in $\\mathbb {F}_{p^{2}}$ , and minimizing pipeline stalls with optimal scheduling. Consequently, our results are also faster than software libraries running affine SIDH even on Intel Haswell processors. For our implementation at 85-bit quantum security and 128-bit classical security, we generate ephemeral public keys in 1.655 million cycles for Alice and 1.490 million cycles for Bob. We generate the shared secret in an additional 1.510 million cycles for Alice and 1.312 million cycles for Bob. On a Virtex-7, these results are approximately 1.5 times faster than known software implementations running the same 512-bit SIDH. Our results and observations show that the isogeny-based schemes can be implemented with high efficiency on reconfigurable hardware.",
"title": ""
},
{
"docid": "b1a9a691c39ab778dcdcaab502dd13b2",
"text": "Point-of-Interest recommendation is an essential means to help people discover attractive locations, especially when people travel out of town or to unfamiliar regions. While a growing line of research has focused on modeling user geographical preferences for POI recommendation, they ignore the phenomenon of user interest drift across geographical regions, i.e., users tend to have different interests when they travel in different regions, which discounts the recommendation quality of existing methods, especially for out-of-town users. In this paper, we propose a latent class probabilistic generative model Spatial-Temporal LDA (ST-LDA) to learn region-dependent personal interests according to the contents of their checked-in POIs at each region. As the users' check-in records left in the out-of-town regions are extremely sparse, ST-LDA incorporates the crowd's preferences by considering the public's visiting behaviors at the target region. To further alleviate the issue of data sparsity, a social-spatial collective inference framework is built on ST-LDA to enhance the inference of region-dependent personal interests by effectively exploiting the social and spatial correlation information. Besides, based on ST-LDA, we design an effective attribute pruning (AP) algorithm to overcome the curse of dimensionality and support fast online recommendation for large-scale POI data. Extensive experiments have been conducted to evaluate the performance of our ST-LDA model on two real-world and large-scale datasets. The experimental results demonstrate the superiority of ST-LDA and AP, compared with the state-of-the-art competing methods, by making more effective and efficient mobile recommendations.",
"title": ""
},
{
"docid": "43fc501b2bf0802b7c1cc8c4280dcd85",
"text": "We propose a data-driven stochastic method (DSM) to study stochastic partial differential equations (SPDEs) in the multiquery setting. An essential ingredient of the proposed method is to construct a data-driven stochastic basis under which the stochastic solutions to the SPDEs enjoy a compact representation for a broad range of forcing functions and/or boundary conditions. Our method consists of offline and online stages. A data-driven stochastic basis is computed in the offline stage using the Karhunen–Loève (KL) expansion. A two-level preconditioning optimization approach and a randomized SVD algorithm are used to reduce the offline computational cost. In the online stage, we solve a relatively small number of coupled deterministic PDEs by projecting the stochastic solution into the data-driven stochastic basis constructed offline. Compared with a generalized polynomial chaos method (gPC), the ratio of the computational complexities between DSM (online stage) and gPC is of order O((m/Np) ). Herem andNp are the numbers of elements in the basis used in DSM and gPC, respectively. Typically we expect m Np when the effective dimension of the stochastic solution is small. A timing model, which takes into account the offline computational cost of DSM, is constructed to demonstrate the efficiency of DSM. Applications of DSM to stochastic elliptic problems show considerable computational savings over traditional methods even with a small number of queries. We also provide a method for an a posteriori error estimate and error correction.",
"title": ""
},
{
"docid": "31ed7a47aa5ca6cf55d4bc1fbb1413d5",
"text": "This article depicts the results of a study carried out to ascertain the information pattern based on the sources used by graduate students from the Islamic Studies Academy submitted at the University of Malaya, Kuala Lumpur. A total of 14377 citations consisting of 54 doctoral dissertations from the Year 2005 to 2009 were examined using the citation analysis. The highest citations per dissertation was 684, while the lowest being 105 citations. The result shows that the materials used by graduate students in this field vary and are multidisciplinary by nature. Books were cited more than other forms of sources contributing 65%, where journal articles contributed 20%.Conference proceedings contributed 11%, dissertations and thesis 3% and other categories consisted of web sites, interviews and legal documents contributing 9%. These findings corroborate with previous citations done in the Humanities discipline. Among the most popular cited journals are in-house journals namely Jurnal Syariah and Jurnal Usuluddin. In addition, graduate students used a substantial amount of Malaysian language sources at the rate of 60%, Arabic language scholarships contributed to 40% of the total citations. Approximately 30% of all sources cited are over 10 years of age. Hence, this study provides valuable insights to guide librarians in understanding the sources used and serves as an analytic tool for the development of source collection in the library services.",
"title": ""
},
{
"docid": "1a1c9b8fa2b5fc3180bc1b504def5ea1",
"text": "Wireless sensor networks can be deployed in any attended or unattended environments like environmental monitoring, agriculture, military, health care etc., where the sensor nodes forward the sensing data to the gateway node. As the sensor node has very limited battery power and cannot be recharged after deployment, it is very important to design a secure, effective and light weight user authentication and key agreement protocol for accessing the sensed data through the gateway node over insecure networks. Most recently, Turkanovic et al. proposed a light weight user authentication and key agreement protocol for accessing the services of the WSNs environment and claimed that the same protocol is efficient in terms of security and complexities than related existing protocols. In this paper, we have demonstrated several security weaknesses of the Turkanovic et al. protocol. Additionally, we have also illustrated that the authentication phase of the Turkanovic et al. is not efficient in terms of security parameters. In order to fix the above mentioned security pitfalls, we have primarily designed a novel architecture for the WSNs environment and basing upon which a proposed scheme has been presented for user authentication and key agreement scheme. The security validation of the proposed protocol has done by using BAN logic, which ensures that the protocol achieves mutual authentication and session key agreement property securely between the entities involved. Moreover, the proposed scheme has simulated using well popular AVISPA security tool, whose simulation results show that the protocol is SAFE under OFMC and CL-AtSe models. Besides, several security issues informally confirm that the proposed protocol is well protected in terms of relevant security attacks including the above mentioned security pitfalls. The proposed protocol not only resists the above mentioned security weaknesses, but also achieves complete security requirements including specially energy efficiency, user anonymity, mutual authentication and user-friendly password change phase. Performance comparison section ensures that the protocol is relatively efficient in terms of complexities. The security and performance analysis makes the system so efficient that the proposed protocol can be implemented in real-life application. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9544b2cc301e2e3f170f050de659dda4",
"text": "In SDN, the underlying infrastructure is usually abstracted for applications that can treat the network as a logical or virtual entity. Commonly, the ``mappings\" between virtual abstractions and their actual physical implementations are not one-to-one, e.g., a single \"big switch\" abstract object might be implemented using a distributed set of physical devices. A key question is, what abstractions could be mapped to multiple physical elements while faithfully preserving their native semantics? E.g., can an application developer always expect her abstract \"big switch\" to act exactly as a physical big switch, despite being implemented using multiple physical switches in reality?\n We show that the answer to that question is \"no\" for existing virtual-to-physical mapping techniques: behavior can differ between the virtual \"big switch\" and the physical network, providing incorrect application-level behavior. We also show that that those incorrect behaviors occur despite the fact that the most pervasive and commonly-used correctness invariants, such as per-packet consistency, are preserved throughout. These examples demonstrate that for practical notions of correctness, new systems and a new analytical framework are needed. We take the first steps by defining end-to-end correctness, a correctness condition that focuses on applications only, and outline a research vision to obtain virtualization systems with correct virtual to physical mappings.",
"title": ""
},
{
"docid": "99942af4e58325aeb3f733b04c337607",
"text": "For E-commerce, such as online trade and interactions on the Internet are on the rise, a key issue is how to use simple and effective evaluation methods to accomplish trust decision-making for customers. It is well known subjective trust holds uncertainty like randomness and fuzziness. However, existing approaches commonly based on probability or fuzzy set theory cannot attach enough importance to uncertainty. To remedy this problem, a new quantificational subjective trust evaluation approach is proposed based on the cloud model. The subjective trust may be modeled with cloud model, and expected value and hyper-entropy of subjective cloud is used to evaluate the reputation of trust objects. Our experimental data shows that the method can effectively support subjective trust decision, which provides a helpful exploitation for the subjective trust evaluation.",
"title": ""
},
{
"docid": "a245aca07bd707ee645cf5cb283e7c5e",
"text": "The paradox of blunted parathormone (PTH) secretion in patients with severe hypomagnesemia has been known for more than 20 years, but the underlying mechanism is not deciphered. We determined the effect of low magnesium on in vitro PTH release and on the signals triggered by activation of the calcium-sensing receptor (CaSR). Analogous to the in vivo situation, PTH release from dispersed parathyroid cells was suppressed under low magnesium. In parallel, the two major signaling pathways responsible for CaSR-triggered block of PTH secretion, the generation of inositol phosphates, and the inhibition of cAMP were enhanced. Desensitization or pertussis toxin-mediated inhibition of CaSR-stimulated signaling suppressed the effect of low magnesium, further confirming that magnesium acts within the axis CaSR-G-protein. However, the magnesium binding site responsible for inhibition of PTH secretion is not identical with the extracellular ion binding site of the CaSR, because the magnesium deficiency-dependent signal enhancement was not altered on CaSR receptor mutants with increased or decreased affinity for calcium and magnesium. By contrast, when the magnesium affinity of the G alpha subunit was decreased, CaSR activation was no longer affected by magnesium. Thus, the paradoxical block of PTH release under magnesium deficiency seems to be mediated through a novel mechanism involving an increase in the activity of G alpha subunits of heterotrimeric G-proteins.",
"title": ""
},
{
"docid": "960022742172d6d0e883a23c74d800ef",
"text": "A novel algorithm to remove rain or snow streaks from a video sequence using temporal correlation and low-rank matrix completion is proposed in this paper. Based on the observation that rain streaks are too small and move too fast to affect the optical flow estimation between consecutive frames, we obtain an initial rain map by subtracting temporally warped frames from a current frame. Then, we decompose the initial rain map into basis vectors based on the sparse representation, and classify those basis vectors into rain streak ones and outliers with a support vector machine. We then refine the rain map by excluding the outliers. Finally, we remove the detected rain streaks by employing a low-rank matrix completion technique. Furthermore, we extend the proposed algorithm to stereo video deraining. Experimental results demonstrate that the proposed algorithm detects and removes rain or snow streaks efficiently, outperforming conventional algorithms.",
"title": ""
},
{
"docid": "4daec6170f18cc8896411e808e53355f",
"text": "The goal of this note is to point out that any distributed representation can be turned into a classifier through inversion via Bayes rule. The approach is simple and modular, in that it will work with any language representation whose training can be formulated as optimizing a probability model. In our application to 2 million sentences from Yelp reviews, we also find that it performs as well as or better than complex purpose-built algorithms.",
"title": ""
},
{
"docid": "0e1610c6b54a6e819b5557bcac0274cb",
"text": "This work presents a novel broad-band dual-polarized microstrip patch antenna, which is fed by proximity coupling. The microstrip line with slotted ground plane is used at two ports to feed the patch antenna. By using only one patch, the prototype antenna yields a bandwidth of 22% and 21.3% at the input port 1 and 2, respectively. The isolation between two input ports is below -34 dB across the bandwidth. Good broadside radiation patterns are observed, and the cross-polar levels are below -21 dB at both E and H planes. Due to its simple structure, it is easy to form arrays by using this antenna as an element.",
"title": ""
},
{
"docid": "e1b6cc1dbd518760c414cd2ddbe88dd5",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Mind the Traps! Design Guidelines for Rigorous BCI Experiments Camille Jeunet, Stefan Debener, Fabien Lotte, Jeremie Mattout, Reinhold Scherer, Catharina Zich",
"title": ""
},
{
"docid": "a25fa0c0889b62b70bf95c16f9966cc4",
"text": "We deal with the problem of document representation for the task of measuring semantic relatedness between documents. A document is represented as a compact concept graph where nodes represent concepts extracted from the document through references to entities in a knowledge base such as DBpedia. Edges represent the semantic and structural relationships among the concepts. Several methods are presented to measure the strength of those relationships. Concepts are weighted through the concept graph using closeness centrality measure which reflects their relevance to the aspects of the document. A novel similarity measure between two concept graphs is presented. The similarity measure first represents concepts as continuous vectors by means of neural networks. Second, the continuous vectors are used to accumulate pairwise similarity between pairs of concepts while considering their assigned weights. We evaluate our method on a standard benchmark for document similarity. Our method outperforms state-of-the-art methods including ESA (Explicit Semantic Annotation) while our concept graphs are much smaller than the concept vectors generated by ESA. Moreover, we show that by combining our concept graph with ESA, we obtain an even further improvement.",
"title": ""
},
{
"docid": "8d5dca364cbe5e3825e2f267d1c41d50",
"text": "This paper describes an algorithm based on constrained variance maximization for the restoration of a blurred image. Blurring is a smoothing process by definition. Accordingly, the deblurring filter shall be able to perform as a high pass filter, which increases the variance. Therefore, we formulate a variance maximization object function for the deconvolution filter. Using principal component analysis (PCA), we find the filter maximizing the object function. PCA is more than just a high pass filter; by maximizing the variances, it is able to perform the decorrelation, by which the original image is extracted from the mixture (the blurred image). Our approach was experimentally compared with the adaptive Lucy-Richardson maximum likelihood (ML) algorithm. The comparative results on both synthesized and real blurred images are included.",
"title": ""
}
] |
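One of the passages in the list above (docid ebe28ced) recognizes handwritten mathematical symbols with PHOG features and a one-against-one SVM. The Python sketch below is a hedged approximation of such a pipeline using scikit-image and scikit-learn: HOG descriptors computed at several cell sizes are concatenated as a rough stand-in for a PHOG pyramid, and scikit-learn's SVC (which handles multiclass problems in a one-vs-one fashion) is used as the classifier. The image size, cell sizes and SVM hyperparameters are illustrative assumptions, not the authors' settings.

import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def phog_like_features(img, levels=((32, 32), (16, 16), (8, 8))):
    # Concatenate HOG descriptors computed at several cell sizes.
    # A rough stand-in for a PHOG pyramid; not the authors' exact descriptor.
    img = resize(img, (64, 64), anti_aliasing=True)
    feats = [hog(img, orientations=9, pixels_per_cell=cells,
                 cells_per_block=(1, 1), feature_vector=True)
             for cells in levels]
    return np.concatenate(feats)

def train_symbol_classifier(images, labels):
    # Fit an SVM on the pyramid features; SVC decomposes the multiclass
    # problem into one-against-one binary classifiers internally.
    X = np.stack([phog_like_features(im) for im in images])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    return clf.fit(X, labels)

# Hypothetical usage: `images` is a list of 2D grayscale arrays rendered from
# online stroke coordinates, `labels` their symbol classes.
# clf = train_symbol_classifier(images, labels)
# pred = clf.predict(np.stack([phog_like_features(new_img)]))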
scidocsrr
|
ef243730603d2b6d49ea72f6a6533abe
|
Joint Chinese Word Segmentation and POS Tagging on Heterogeneous Annotated Corpora with Multiple Task Learning
|
[
{
"docid": "5588fd19a3d0d73598197ad465315fd6",
"text": "The growing need for Chinese natural language processing (NLP) is largely in a range of research and commercial applications. However, most of the currently Chinese NLP tools or components still have a wide range of issues need to be further improved and developed. FudanNLP is an open source toolkit for Chinese natural language processing (NLP), which uses statistics-based and rule-based methods to deal with Chinese NLP tasks, such as word segmentation, part-ofspeech tagging, named entity recognition, dependency parsing, time phrase recognition, anaphora resolution and so on.",
"title": ""
}
] |
[
{
"docid": "3cb65071d699177b6217e5a334f7eb6f",
"text": "Understanding the why of culture care differences and similarities among and between cultures would offer explanatory power to support nursing as an academic discipline and practice profession . . . Using the theory, then, could help to establish the nature, essence, meanings, expressions, and forms of human care or caring—a highly unique, credible, reliable, and meaningful body of knowledge for nursing. (Leininger, 1991, p. 35)",
"title": ""
},
{
"docid": "ba8cb45d6082bff46b33c2fe5a7e8c07",
"text": "Curiosity, a fundamental drive amongst higher living organisms, is what enables exploration, learning and creativity. In our increasingly data-driven world, data exploration, i.e., Making sense of mounting haystacks of data, is akin to intelligence for science, business and individuals. However, modern data systems -- designed for data retrieval rather than exploration -- only let us retrieve data and ask if it is interesting. This makes knowledge discovery a game of hit-and-trial which can only be orchestrated by expert data scientists. We present the vision toward Queriosity, an automated and personalized data exploration system. Designed on the principles of autonomy, learning and usability, Queriosity envisions a paradigm shift in data exploration and aims to become a a personalized \"data robot\" that provides a direct answer to what is interesting in a user's data set, instead of just retrieving data. Queriosity autonomously and continuously navigates toward interesting findings based on trends, statistical properties and interactive user feedback.",
"title": ""
},
{
"docid": "98b9963b0b6a731184db8e60889ca86c",
"text": "This paper presents a new spatial spectrum-sharing strategy for massive multiple-input multiple-output (MIMO) cognitive radio (CR) systems, where two CR base stations (CBS) are employed at the adjacent sides of each cell to provide a full-space coverage for the whole cell. Thanks to the high spatial resolution of massive antennas, CRs are distinguished by their angular information and their uplink/downlink channels are also represented with reduced parameter dimensions by the proposed two-dimensional spatial basis expansion model (2D-SBEM). To improve the spectral efficiency and the scheduling probability of CRs, a greedy CR scheduling algorithm is designed for the dual CBS system. As the proposed strategy is mainly based on angular information and since the angle reciprocity holds for two frequency carriers with moderate distance, it can be applied for both TDD and FDD systems.",
"title": ""
},
{
"docid": "121d3572c5a60a66da6bb42d0f7bf1af",
"text": "The present study examined the relationships among grit, academic performance, perceived academic failure, and stress levels of Hong Kong associate degree students using path analysis. Three hundred and forty-five students from a community college in Hong Kong voluntarily participated in the study. They completed a questionnaire that measured their grit (operationalized as interest and perseverance) and stress levels. The students also provided their actual academic performance and evaluated their perception of their academic performance as a success or a failure. The results of the path analysis showed that interest and perseverance were negatively associated with stress, and only perceived academic failure was positively associated with stress. These findings suggest that psychological appraisal and resources are more important antecedents of stress than objective negative events. Therefore, fostering students' psychological resilience may alleviate the stress experienced by associate degree students or college students in general.",
"title": ""
},
{
"docid": "cf1c04b4d0c61632d7a3969668d5e751",
"text": "A 3 dB power divider/combiner in substrate integrated waveguide (SIW) technology is presented. The divider consists of an E-plane SIW bifurcation with an embedded thick film resistor. The transition divides a full-height SIW into two SIWs of half the height. The resistor provides isolation between these two. The divider is fabricated in a multilayer process using high frequency substrates. For the resistor carbon paste is printed on the middle layer of the stack-up. Simulation and measurement results are presented. The measured divider exhibits an isolation of better than 22 dB within a bandwidth of more than 3GHz at 20 GHz.",
"title": ""
},
{
"docid": "d06cb1f4699757d95a00014e340f927f",
"text": "Because of appearance variations, training samples of the tracked targets collected by the online tracker are required for updating the tracking model. However, this often leads to tracking drift problem because of potentially corrupted samples: 1) contaminated/outlier samples resulting from large variations (e.g. occlusion, illumination), and 2) misaligned samples caused by tracking inaccuracy. Therefore, in order to reduce the tracking drift while maintaining the adaptability of a visual tracker, how to alleviate these two issues via an effective model learning (updating) strategy is a key problem to be solved. To address these issues, this paper proposes a novel and optimal model learning (updating) scheme which aims to simultaneously eliminate the negative effects from these two issues mentioned above in a unified robust feature template learning framework. Particularly, the proposed feature template learning framework is capable of: 1) adaptively learning uncontaminated feature templates by separating out contaminated samples, and 2) resolving label ambiguities caused by misaligned samples via a probabilistic multiple instance learning (MIL) model. Experiments on challenging video sequences show that the proposed tracker performs favourably against several state-of-the-art trackers.",
"title": ""
},
{
"docid": "ebc107147884d89da4ef04eba2d53a73",
"text": "Twitter sentiment analysis (TSA) has become a hot research topic in recent years. The goal of this task is to discover the attitude or opinion of the tweets, which is typically formulated as a machine learning based text classification problem. Some methods use manually labeled data to train fully supervised models, while others use some noisy labels, such as emoticons and hashtags, for model training. In general, we can only get a limited number of training data for the fully supervised models because it is very labor-intensive and time-consuming to manually label the tweets. As for the models with noisy labels, it is hard for them to achieve satisfactory performance due to the noise in the labels although it is easy to get a large amount of data for training. Hence, the best strategy is to utilize both manually labeled data and noisy labeled data for training. However, how to seamlessly integrate these two different kinds of data into the same learning framework is still a challenge. In this paper, we present a novel model, called emoticon smoothed language model (ESLAM), to handle this challenge. The basic idea is to train a language model based on the manually labeled data, and then use the noisy emoticon data for smoothing. Experiments on real data sets demonstrate that ESLAM can effectively integrate both kinds of data to outperform those methods using only one of them.",
"title": ""
},
{
"docid": "72553ef6330b68e37f83db08cc9016e2",
"text": "Social network services have become a viable source of information for users. In Twitter, information deemed important by the community propagates through retweets. Studying the characteristics of such popular messages is important for a number of tasks, such as breaking news detection, personalized message recommendation, viral marketing and others. This paper investigates the problem of predicting the popularity of messages as measured by the number of future retweets and sheds some light on what kinds of factors influence information propagation in Twitter. We formulate the task into a classification problem and study two of its variants by investigating a wide spectrum of features based on the content of the messages, temporal information, metadata of messages and users, as well as structural properties of the users' social graph on a large scale dataset. We show that our method can successfully predict messages which will attract thousands of retweets with good performance.",
"title": ""
},
{
"docid": "7dadeadea2d281b981dcb72506f19366",
"text": "Spacecrafts, which are used for stereoscopic mapping, imaging and telecommunication applications, require fine attitude and stabilization control which has an important role in high precision pointing and accurate stabilization. The conventional techniques for attitude and stabilization control are thrusters, reaction wheels, control moment gyroscopes (CMG) and magnetic torquers. Since reaction wheel can generate relatively smaller torques, they provide very fine stabilization and attitude control. Although conventional PID framework solves many stabilization problems, it is reported that many PID feedback loops are poorly tuned. In this paper, a model reference adaptive LQR control for reaction wheel stabilization problem is implemented. The tracking performance and disturbance rejection capability of proposed controller is found to give smooth motion after abnormal disruptions.",
"title": ""
},
{
"docid": "86353e0272a3d6fed220eaa85f95e8de",
"text": "Large volumes of electronic health records, including free-text documents, are extensively generated within various sectors of healthcare. Medical concept annotation systems are designed to enrich these documents with key concepts in the domain using reference terminologies. Although there is a wide range of annotation systems, there is a lack of comparative analysis that enables thorough understanding of the effectiveness of both the concept extraction and concept recognition components of these systems, especially within the clinical domain. This paper analyses and evaluates four annotation systems (i.e., MetaMap, NCBO annotator, Ontoserver, and QuickUMLS) for the task of extracting medical concepts from clinical free-text documents. Empirical findings have shown that each annotator exhibits various levels of strengths in terms of overall precision or recall. The concept recognition component of each system, however, was found to be highly sensitive to the quality of the text spans output by the concept extraction component of the annotation system. The effects of these components on each other are quantified in such way as to provide evidence for an informed choice of an annotation system as well as avenues for future research.",
"title": ""
},
{
"docid": "30c980c96931938fff76dbf6fb8aa824",
"text": "English. Emojitalianobot and EmojiWorldBot are two new online tools and digital environments for translation into emoji on Telegram, the popular instant messaging platform. Emojitalianobot is the first open and free Emoji-Italian and Emoji-English translation bot based on Unicode descriptions. The bot was designed to support the translation of Pinocchio into emoji carried out by the followers of the \"Scritture brevi\" blog on Twitter and contains a glossary with all the uses of emojis in the translation of the famous Italian novel. EmojiWorldBot, an off-spring project of Emojitalianobot, is a multilingual dictionary that uses Emoji as a pivot language from dozens of different languages. Currently the emoji-word and word-emoji functions are available for 72 languages imported from the Unicode tables and provide users with an easy search capability to map words in each of these languages to emojis, and vice versa. This paper presents the projects, the background and the main characteristics of these applications. Italiano. Emojitalianobot e EmojiWorldBot sono due applicazioni online per la traduzione in e da emoji su Telegram, la popolare piattaforma di messaggistica istantanea. Emojitalianobot è il primo bot aperto e gratuito di traduzione che contiene i dizionari Emoji-Italiano ed Emoji-Inglese basati sule descrizioni Unicode. Il bot è stato ideato per coadiuvare la traduzione di Pinocchio in emoji su Twitter da parte dei follower del blog Scritture brevi e contiene pertanto anche il glossario con tutti gli usi degli emoji nella traduzione del celebre romanzo per ragazzi. EmojiWorldBot, epigono di Emojitalianobot, è un dizionario multilingue che usa gli emoji come lingua pivot tra dozzine di lingue differenti. Attualmente le funzioni emoji-parola e parola-emoji sono disponibili per 72 lingue importate dalle tabelle Unicode e forniscono agli utenti delle semplici funzioni di ricerca per trovare le corrispondenze in emoji delle parole e viceversa per ciascuna di queste lingue. Questo contributo presenta i progetti, il background e le principali caratteristiche di queste",
"title": ""
},
{
"docid": "c850d33681e618137dca96fafb5e2864",
"text": "There is steep rise in diseases due to current lifestyle and issues related with aging. Healthcare systems need to keep pace with the changing requirements of the health problems. Some of the requirements of current health monitoring systems are that they should be portable, consume less power, user friendly and so on. With the rapid development in technologies such as wireless, embedded, nanotechnology and so on it has become possible to develop such systems and devices. In this paper we have done survey of different types of wireless standards, type of sensors used along with some of the recent methodologies used in the field of health monitoring.",
"title": ""
},
{
"docid": "9de7af8824594b5de7d510c81585c61b",
"text": "The adoption of business process improvement strategies is a challenge to organizations trying to improve the quality and productivity of their services. The quest for the benefits of this improvement on resource optimization and the responsiveness of the organizations has raised several proposals for business process improvement approaches. However, proposals and results of scientific research on process improvement in higher education institutions, extremely complex and unique organizations, are still scarce. This paper presents a method that provides guidance about how practices and knowledge are gathered to contribute for business process improvement based on the communication between different stakeholders.",
"title": ""
},
{
"docid": "7fafa786fd387007479a737950b03004",
"text": "A longstanding goal of behavior-based robotics is to solve high-level navigation tasks using end to end navigation behaviors that directly map sensors to actions. Navigation behaviors, such as reaching a goal or following a path without collisions, can be learned from exploration and interaction with the environment, but are constrained by the type and quality of a robot’s sensors, dynamics, and actuators. Traditional motion planning handles varied robot geometry and dynamics, but typically assumes high-quality observations. Modern vision-based navigation typically considers imperfect or partial observations, but simplifies the robot action space. With both approaches, the transition from simulation to reality can be difficult. Here, we learn two end to end navigation behaviors that avoid moving obstacles: point to point and path following. These policies receive noisy lidar observations and output robot linear and angular velocities. We train these policies in small, static environments with Shaped-DDPG, an adaptation of the Deep Deterministic Policy Gradient (DDPG) reinforcement learning method which optimizes reward and network architecture. Over 500 meters of on-robot experiments show , these policies generalize to new environments and moving obstacles, are robust to sensor, actuator, and localization noise, and can serve as robust building blocks for larger navigation tasks. The path following and point and point policies are 83% and 56% more successful than the baseline, respectively.",
"title": ""
},
{
"docid": "69519dd7e60899acd8b81c141321b052",
"text": "In this paper we address the question of how closely everyday human teachers match a theoretically optimal teacher. We present two experiments in which subjects teach a concept to our robot in a supervised fashion. In the first experiment we give subjects no instructions on teaching and observe how they teach naturally as compared to an optimal strategy. We find that people are suboptimal in several dimensions. In the second experiment we try to elicit the optimal teaching strategy. People can teach much faster using the optimal teaching strategy, however certain parts of the strategy are more intuitive than others.",
"title": ""
},
{
"docid": "fdd0f4f2f495a00d540bf0d0e9630771",
"text": "Stock prediction aims to predict the future trends of a stock in order to help investors make good investment decisions. Traditional solutions for stock prediction are based on time-series models. With the recent success of deep neural networks in modeling sequential data, deep learning has become a promising choice for stock prediction.\n However, most existing deep learning solutions are not optimized toward the target of investment, i.e., selecting the best stock with the highest expected revenue. Specifically, they typically formulate stock prediction as a classification (to predict stock trends) or a regression problem (to predict stock prices). More importantly, they largely treat the stocks as independent of each other. The valuable signal in the rich relations between stocks (or companies), such as two stocks are in the same sector and two companies have a supplier-customer relation, is not considered.\n In this work, we contribute a new deep learning solution, named Relational Stock Ranking (RSR), for stock prediction. Our RSR method advances existing solutions in two major aspects: (1) tailoring the deep learning models for stock ranking, and (2) capturing the stock relations in a time-sensitive manner. The key novelty of our work is the proposal of a new component in neural network modeling, named Temporal Graph Convolution, which jointly models the temporal evolution and relation network of stocks. To validate our method, we perform back-testing on the historical data of two stock markets, NYSE and NASDAQ. Extensive experiments demonstrate the superiority of our RSR method. It outperforms state-of-the-art stock prediction solutions achieving an average return ratio of 98% and 71% on NYSE and NASDAQ, respectively.",
"title": ""
},
{
"docid": "054c2e8fa9421c77939091e5adfc07e5",
"text": "Visualization is a powerful paradigm for exploratory data analysis. Visualizing large graphs, however, often results in excessive edges crossings and overlapping nodes. We propose a new scalable approach called FACETS that helps users adaptively explore large million-node graphs from a local perspective, guiding them to focus on nodes and neighborhoods that are most subjectively interesting to users. We contribute novel ideas to measure this interestingness in terms of how surprising a neighborhood is given the background distribution, as well as how well it matches what the user has chosen to explore. FACETS uses Jensen-Shannon divergence over information-theoretically optimized histograms to calculate the subjective user interest and surprise scores. Participants in a user study found FACETS easy to use, easy to learn, and exciting to use. Empirical runtime analyses demonstrated FACETS’s practical scalability on large real-world graphs with up to 5 million edges, returning results in fewer than 1.5 seconds.",
"title": ""
},
{
"docid": "5846c9761ec90040feaf71656401d6dd",
"text": "Internet of Things (IoT) is an emergent technology that provides a promising opportunity to improve industrial systems by the smartly use of physical objects, systems, platforms and applications that contain embedded technology to communicate and share intelligence with each other. In recent years, a great range of industrial IoT applications have been developed and deployed. Among these applications, the Water and Oil & Gas Distribution System is tremendously important considering the huge amount of fluid loss caused by leakages and other possible hydraulic failures. Accordingly, to design an accurate Fluid Distribution Monitoring System (FDMS) represents a critical task that imposes a serious study and an adequate planning. This paper reviews the current state-of-the-art of IoT, major IoT applications in industries and focus more on the Industrial IoT FDMS (IIoT FDMS).",
"title": ""
},
{
"docid": "97680d32297b8c81388b463a7e98e2f3",
"text": "The research community has considered in the past the application of Artificial Intelligence (AI) techniques to control and operate networks. A notable example is the Knowledge Plane proposed by D.Clark et al. However, such techniques have not been extensively prototyped or deployed in the field yet. In this paper, we explore the reasons for the lack of adoption and posit that the rise of two recent paradigms: Software-Defined Networking (SDN) and Network Analytics (NA), will facilitate the adoption of AI techniques in the context of network operation and control. We describe a new paradigm that accommodates and exploits SDN, NA and AI, and provide use-cases that illustrate its applicability and benefits. We also present simple experimental results that support, for some relevant use-cases, its feasibility. We refer to this new paradigm as Knowledge-Defined Networking (KDN).",
"title": ""
},
{
"docid": "6242a55cb3361cdd17838217c7920a0a",
"text": "We propose a method for detecting geometric structures in an image, without any a priori information. Roughly speaking, we say that an observed geometric event is “meaningful” if the expectation of its occurences would be very small in a random image. We discuss the apories of this definition, solve several of them by introducing “maximal meaningful events” and analyzing their structure. This methodology is applied to the detection of alignments in images.",
"title": ""
}
] |
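The last passage in the list above defines a geometric event as "meaningful" when its expected number of occurrences in a random image is very small. A standard way to compute that expectation for point alignments, in the a contrario spirit of that passage, is the number-of-false-alarms bound sketched below in Python; the specific event, precision and number of tests are illustrative assumptions rather than values from the paper.

from math import comb

def binomial_tail(n, k, p):
    # P[B(n, p) >= k]: probability that at least k of n independent
    # points agree with the candidate structure under the background model.
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

def nfa(num_tests, n, k, p):
    # Number of false alarms: expected count of events at least this
    # extreme in a random image; the event is declared meaningful when NFA < 1.
    return num_tests * binomial_tail(n, k, p)

# Toy check: 40 points on a candidate segment, 25 aligned within an angular
# precision of p = 1/16, with roughly 1e4 candidate segments tested per image.
print(nfa(num_tests=10_000, n=40, k=25, p=1.0 / 16))  # far below 1 => meaningful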
scidocsrr
|
57a23c4c341035f716275590f03cd2e2
|
A One-Time Password Scheme with QR-Code Based on Mobile Phone
|
[
{
"docid": "4932aedacd73a8af2793242ca7683bfc",
"text": "In this article, we propose a new remote user authentication scheme using smart cards. The scheme is based on the ElGamal’s public key cryptosystem. Our scheme does not require a system to maintain a password table for verifying the legitimacy of the login users. In addition, our scheme can withstand message replaying attack.",
"title": ""
}
] |
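The positive passage above builds a remote user authentication scheme on the ElGamal public key cryptosystem. As background only, the hypothetical Python sketch below shows the textbook ElGamal signature primitives (key generation, signing a challenge, verification) that such challenge-response schemes typically rest on; it is not the paper's protocol, the parameters are toy values for demonstration, and a real deployment would need a large safe prime plus the scheme's smart-card specific steps.

import hashlib
from math import gcd
from secrets import randbelow

# Toy parameters only -- a real deployment needs a large safe prime and
# a vetted generator; these values are purely illustrative.
P = 30803
G = 2

def h(msg: bytes) -> int:
    # Hash the message and reduce it into the exponent group Z_{P-1}.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % (P - 1)

def keygen():
    x = 2 + randbelow(P - 4)          # private key x
    return x, pow(G, x, P)            # public key y = g^x mod p

def sign(x: int, msg: bytes):
    # Textbook ElGamal signature: s = k^{-1} (H(m) - x r) mod (p-1).
    while True:
        k = 2 + randbelow(P - 4)
        if gcd(k, P - 1) == 1:
            break
    r = pow(G, k, P)
    s = ((h(msg) - x * r) * pow(k, -1, P - 1)) % (P - 1)
    return r, s   # toy code omits the rare s == 0 re-draw

def verify(y: int, msg: bytes, sig) -> bool:
    # Accept iff y^r * r^s == g^H(m) (mod p).
    r, s = sig
    if not (0 < r < P):
        return False
    return (pow(y, r, P) * pow(r, s, P)) % P == pow(G, h(msg), P)

x, y = keygen()
challenge = b"server-nonce-1842"   # e.g. a fresh login challenge
assert verify(y, challenge, sign(x, challenge))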
[
{
"docid": "83d50f7c66b14116bfa627600ded28d6",
"text": "Diet can affect cognitive ability and behaviour in children and adolescents. Nutrient composition and meal pattern can exert immediate or long-term, beneficial or adverse effects. Beneficial effects mainly result from the correction of poor nutritional status. For example, thiamin treatment reverses aggressiveness in thiamin-deficient adolescents. Deleterious behavioural effects have been suggested; for example, sucrose and additives were once suspected to induce hyperactivity, but these effects have not been confirmed by rigorous investigations. In spite of potent biological mechanisms that protect brain activity from disruption, some cognitive functions appear sensitive to short-term variations of fuel (glucose) availability in certain brain areas. A glucose load, for example, acutely facilitates mental performance, particularly on demanding, long-duration tasks. The mechanism of this often described effect is not entirely clear. One aspect of diet that has elicited much research in young people is the intake/omission of breakfast. This has obvious relevance to school performance. While effects are inconsistent in well-nourished children, breakfast omission deteriorates mental performance in malnourished children. Even intelligence scores can be improved by micronutrient supplementation in children and adolescents with very poor dietary status. Overall, the literature suggests that good regular dietary habits are the best way to ensure optimal mental and behavioural performance at all times. Then, it remains controversial whether additional benefit can be gained from acute dietary manipulations. In contrast, children and adolescents with poor nutritional status are exposed to alterations of mental and/or behavioural functions that can be corrected, to a certain extent, by dietary measures.",
"title": ""
},
{
"docid": "32effb3b888c5b523c4288f270a9c7f3",
"text": "Deep Neural Networks (DNNs) have advanced the state-of-the-art in a variety of machine learning tasks and are deployed in increasing numbers of products and services. However, the computational requirements of training and evaluating large-scale DNNs are growing at a much faster pace than the capabilities of the underlying hardware platforms that they are executed upon. In this work, we propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep) to reduce the computational requirements of DNNs during inference. Previous efforts propose specialized hardware implementations for DNNs, statically prune the network, or compress the weights. Complementary to these approaches, DyVEDeep is a dynamic approach that exploits the heterogeneity in the inputs to DNNs to improve their compute efficiency with comparable classification accuracy. DyVEDeep equips DNNs with dynamic effort mechanisms that, in the course of processing an input, identify how critical a group of computations are to classify the input. DyVEDeep dynamically focuses its compute effort only on the critical computations, while skipping or approximating the rest. We propose 3 effort knobs that operate at different levels of granularity viz. neuron, feature and layer levels. We build DyVEDeep versions for 5 popular image recognition benchmarks — one for CIFAR-10 and four for ImageNet (AlexNet, OverFeat and VGG-16, weightcompressed AlexNet). Across all benchmarks, DyVEDeep achieves 2.1×-2.6× reduction in the number of scalar operations, which translates to 1.8×-2.3× performance improvement over a Caffe-based implementation, with < 0.5% loss in accuracy.",
"title": ""
},
{
"docid": "384f7f309e996d4cd289228a3f368d93",
"text": "With the advent of the ubiquitous era, many studies have been devoted to various situation-aware services in the semantic web environment. One of the most challenging studies involves implementing a situation-aware personalized music recommendation service which considers the user’s situation and preferences. Situation-aware music recommendation requires multidisciplinary efforts including low-level feature extraction and analysis, music mood classification and human emotion prediction. In this paper, we propose a new scheme for a situation-aware/user-adaptive music recommendation service in the semantic web environment. To do this, we first discuss utilizing knowledge for analyzing and retrieving music contents semantically, and a user adaptive music recommendation scheme based on semantic web technologies that facilitates the development of domain knowledge and a rule set. Based on this discussion, we describe our Context-based Music Recommendation (COMUS) ontology for modeling the user’s musical preferences and contexts, and supporting reasoning about the user’s desired emotions and preferences. Basically, COMUS defines an upper music ontology that captures concepts on the general properties of music such as titles, artists and genres. In addition, it provides functionality for adding domain-specific ontologies, such as music features, moods and situations, in a hierarchical manner, for extensibility. Using this context ontology, we believe that logical reasoning rules can be inferred based on high-level (implicit) knowledge such as situations from low-level (explicit) knowledge. As an innovation, our ontology can express detailed and complicated relations among music clips, moods and situations, which enables users to find appropriate music. We present some of the experiments we performed as a case-study for music recommendation.",
"title": ""
},
{
"docid": "498c217fb910a5b4ca6bcdc83f98c11b",
"text": "Theodor Wilhelm Engelmann (1843–1909), who had a creative life in music, muscle physiology, and microbiology, developed a sensitive method for tracing the photosynthetic oxygen production of unicellular plants by means of bacterial aerotaxis (chemotaxis). He discovered the absorption spectrum of bacteriopurpurin (bacteriochlorophyll a) and the scotophobic response, photokinesis, and photosynthesis of purple bacteria.",
"title": ""
},
{
"docid": "6886849300b597fdb179162744b40ee2",
"text": "This paper argues that the dominant study of the form and structure of games – their poetics – should be complemented by the analysis of their aesthetics (as understood by modern cultural theory): how gamers use their games, what aspects they enjoy and what kinds of pleasures they experience by playing them. The paper outlines a possible aesthetic theory of games based on different aspects of pleasure: the psychoanalytical, the social and the physical form of pleasure.",
"title": ""
},
{
"docid": "3bfba79edbd6816a2ad58673eefc4195",
"text": "Typical experiments in psychological and neurophysiological settings often require the accurate control of multiple input and output signals. These signals are often generated or recorded via computer software and/or external dedicated hardware. Dedicated hardware is usually very expensive and requires additional software to control its behavior. In the present article, I present some accuracy tests on a low-cost and open-source I/O board (Arduino family) that may be useful in many lab environments. One of the strengths of Arduinos is the possibility they afford to load the experimental script on the board's memory and let it run without interfacing with computers or external software, thus granting complete independence, portability, and accuracy. Furthermore, a large community has arisen around the Arduino idea and offers many hardware add-ons and hundreds of free scripts for different projects. Accuracy tests show that Arduino boards may be an inexpensive tool for many psychological and neurophysiological labs.",
"title": ""
},
{
"docid": "6a1f1345a390ff886c95a57519535c40",
"text": "BACKGROUND\nThe goal of this pilot study was to evaluate the effects of the cognitive-restructuring technique 'lucid dreaming treatment' (LDT) on chronic nightmares. Becoming lucid (realizing that one is dreaming) during a nightmare allows one to alter the nightmare storyline during the nightmare itself.\n\n\nMETHODS\nAfter having filled out a sleep and a posttraumatic stress disorder questionnaire, 23 nightmare sufferers were randomly divided into 3 groups; 8 participants received one 2-hour individual LDT session, 8 participants received one 2-hour group LDT session, and 7 participants were placed on the waiting list. LDT consisted of exposure, mastery, and lucidity exercises. Participants filled out the same questionnaires 12 weeks after the intervention (follow-up).\n\n\nRESULTS\nAt follow-up the nightmare frequency of both treatment groups had decreased. There were no significant changes in sleep quality and posttraumatic stress disorder symptom severity. Lucidity was not necessary for a reduction in nightmare frequency.\n\n\nCONCLUSIONS\nLDT seems effective in reducing nightmare frequency, although the primary therapeutic component (i.e. exposure, mastery, or lucidity) remains unclear.",
"title": ""
},
{
"docid": "e8c6cdc70be62c6da150b48ba69c0541",
"text": "Stress granules and processing bodies are related mRNA-containing granules implicated in controlling mRNA translation and decay. A genomic screen identifies numerous factors affecting granule formation, including proteins involved in O-GlcNAc modifications. These results highlight the importance of post-translational modifications in translational control and mRNP granule formation.",
"title": ""
},
{
"docid": "d486fca984c9cf930a4d1b4367949016",
"text": "In this paper, we present a generative model to generate a natural language sentence describing a table region, e.g., a row. The model maps a row from a table to a continuous vector and then generates a natural language sentence by leveraging the semantics of a table. To deal with rare words appearing in a table, we develop a flexible copying mechanism that selectively replicates contents from the table in the output sequence. Extensive experiments demonstrate the accuracy of the model and the power of the copying mechanism. On two synthetic datasets, WIKIBIO and SIMPLEQUESTIONS, our model improves the current state-of-the-art BLEU-4 score from 34.70 to 40.26 and from 33.32 to 39.12, respectively. Furthermore, we introduce an open-domain dataset WIKITABLETEXT including 13,318 explanatory sentences for 4,962 tables. Our model achieves a BLEU-4 score of 38.23, which outperforms template based and language model based approaches.",
"title": ""
},
{
"docid": "50d42d832a0cd04becdaa26cc33a9782",
"text": "The performance of Fingerprint recognition system depends on minutiae which are extracted from raw fingerprint image. Often the raw fingerprint image captured from a scanner may not be of good quality, which leads to inaccurate extraction of minutiae. Hence it is essential to preprocess the fingerprint image before extracting the reliable minutiae for matching of two fingerprint images. Image enhancement technique followed by minutiae extraction completes the fingerprint recognition process. Fingerprint recognition process with a matcher constitutes Fingerprint recognition system ASIC implementation of image enhancement technique for fingerprint recognition process using Cadence tool is proposed. Further, the result obtained from hardware design is compared with that of software using MatLab tool.",
"title": ""
},
{
"docid": "d3b0a831715bd2f2de9d94811bdd47e7",
"text": "Aspect Term Extraction (ATE) identifies opinionated aspect terms in texts and is one of the tasks in the SemEval Aspect Based Sentiment Analysis (ABSA) contest. The small amount of available datasets for supervised ATE and the costly human annotation for aspect term labelling give rise to the need for unsupervised ATE. In this paper, we introduce an architecture that achieves top-ranking performance for supervised ATE. Moreover, it can be used efficiently as feature extractor and classifier for unsupervised ATE. Our second contribution is a method to automatically construct datasets for ATE. We train a classifier on our automatically labelled datasets and evaluate it on the human annotated SemEval ABSA test sets. Compared to a strong rule-based baseline, we obtain a dramatically higher F-score and attain precision values above 80%. Our unsupervised method beats the supervised ABSA baseline from SemEval, while preserving high precision scores.",
"title": ""
},
{
"docid": "25cbc3f8f9ecbeb89c2c49c044e61c2a",
"text": "This study investigated lying behavior and the behavior of people who are deceived by using a deception game (Gneezy, 2005) in both anonymity and face-to-face treatments. Subjects consist of students and non-students (citizens) to investigate whether lying behavior is depended on socioeconomic backgrounds. To explore how liars feel about lying, we give senders a chance to confess their behaviors to their counter partner for the guilty aversion of lying. The following results are obtained: i) a frequency of lying behavior for students is significantly higher than that for non-students at a payoff in the anonymity treatment, but that is not significantly difference between the anonymity and face-to-face treatments; ii) lying behavior is not influenced by gender; iii) a frequency of confession is higher in the face-to-face treatment than in the anonymity treatment; and iv) the receivers who are deceived are more likely to believe a sender’s message to be true in the anonymity treatment. This study implies that the existence of the partner prompts liars to confess their behavior because they may feel remorse or guilt.",
"title": ""
},
{
"docid": "7832707feef1e81c3a01e974c37a960b",
"text": "Most current commercial automated fingerprint-authentication systems on the market are based on the extraction of the fingerprint minutiae, and on medium resolution (500 dpi) scanners. Sensor manufacturers tend to reduce the sensing area in order to adapt it to low-power mobile hand-held communication systems and to lower the cost of their devices. An interesting alternative is designing a novel fingerprintauthentication system capable of dealing with an image from a small, high resolution (1000 dpi) sensor area based on combined level 2 (minutiae) and level 3 (sweat pores) feature extraction. In this paper, we propose a new strategy and implementation of a series of techniques for automatic level 2 and level 3 feature extraction in fragmentary fingerprint comparison. The main challenge in achieving high reliability while using a small portion of a fingerprint for matching is that there may not be a sufficient number of minutiae but the uniqueness of the pore configurations provides a powerful means to compensate for this insufficiency. A pilot study performed to test the presented approach confirms the efficacy of using pores in addition to the traditionally used minutiae in fragmentary fingerprint comparison.",
"title": ""
},
{
"docid": "7ca62c2da424c826744bca7196f07def",
"text": "Accurately answering a question about a given image requires combining observations with general knowledge. While this is effortless for humans, reasoning with general knowledge remains an algorithmic challenge. To advance research in this direction a novel ‘fact-based’ visual question answering (FVQA) task has been introduced recently along with a large set of curated facts which link two entities, i.e., two possible answers, via a relation. Given a question-image pair, deep network techniques have been employed to successively reduce the large set of facts until one of the two entities of the final remaining fact is predicted as the answer. We observe that a successive process which considers one fact at a time to form a local decision is sub-optimal. Instead, we develop an entity graph and use a graph convolutional network to ‘reason’ about the correct answer by jointly considering all entities. We show on the challenging FVQA dataset that this leads to an improvement in accuracy of around 7% compared to the state of the art.",
"title": ""
},
{
"docid": "e539354155acf83cf9d93d42f6dc8a71",
"text": "Performance in intense exercise events, such as Olympic rowing, swimming, kayak, track running and track cycling events, involves energy contribution from aerobic and anaerobic sources. As aerobic energy supply dominates the total energy requirements after ∼75s of near maximal effort, and has the greatest potential for improvement with training, the majority of training for these events is generally aimed at increasing aerobic metabolic capacity. A short-term period (six to eight sessions over 2-4 weeks) of high-intensity interval training (consisting of repeated exercise bouts performed close to or well above the maximal oxygen uptake intensity, interspersed with low-intensity exercise or complete rest) can elicit increases in intense exercise performance of 2-4% in well-trained athletes. The influence of high-volume training is less discussed, but its importance should not be downplayed, as high-volume training also induces important metabolic adaptations. While the metabolic adaptations that occur with high-volume training and high-intensity training show considerable overlap, the molecular events that signal for these adaptations may be different. A polarized approach to training, whereby ∼75% of total training volume is performed at low intensities, and 10-15% is performed at very high intensities, has been suggested as an optimal training intensity distribution for elite athletes who perform intense exercise events.",
"title": ""
},
{
"docid": "f9765c97a101a163a486b18e270d67f5",
"text": "We present a formulation of deep learning that aims at producing a large margin classifier. The notion of margin, minimum distance to a decision boundary, has served as the foundation of several theoretically profound and empirically successful results for both classification and regression tasks. However, most large margin algorithms are applicable only to shallow models with a preset feature representation; and conventional margin methods for neural networks only enforce margin at the output layer. Such methods are therefore not well suited for deep networks. In this work, we propose a novel loss function to impose a margin on any chosen set of layers of a deep network (including input and hidden layers). Our formulation allows choosing any lp norm (p ≥ 1) on the metric measuring the margin. We demonstrate that the decision boundary obtained by our loss has nice properties compared to standard classification loss functions. Specifically, we show improved empirical results on the MNIST, CIFAR-10 and ImageNet datasets on multiple tasks: generalization from small training sets, corrupted labels, and robustness against adversarial perturbations. The resulting loss is general and complementary to existing data augmentation (such as random/adversarial input transform) and regularization techniques such as weight decay, dropout, and batch norm. 2",
"title": ""
},
{
"docid": "9f52ee95148490555c10f699678b640d",
"text": "Prior research indicates that Facebook usage predicts declines in subjective well-being over time. How does this come about? We examined this issue in 2 studies using experimental and field methods. In Study 1, cueing people in the laboratory to use Facebook passively (rather than actively) led to declines in affective well-being over time. Study 2 replicated these findings in the field using experience-sampling techniques. It also demonstrated how passive Facebook usage leads to declines in affective well-being: by increasing envy. Critically, the relationship between passive Facebook usage and changes in affective well-being remained significant when controlling for active Facebook use, non-Facebook online social network usage, and direct social interactions, highlighting the specificity of this result. These findings demonstrate that passive Facebook usage undermines affective well-being.",
"title": ""
},
{
"docid": "cc5126ea8a6f9ebca587970377966067",
"text": "In this paper reliability model of the converter valves in VSC-HVDC system is analyzed. The internal structure and functions of converter valve are presented. Taking the StakPak IGBT from ABB Semiconductors for example, the mathematical reliability model for converter valve and its sub-module is established. By means of calculation and analysis, the reliability indices of converter valve under various voltage classes and redundancy designs are obtained, and then optimal redundant scheme is chosen. KeywordsReliability Analysis; VSC-HVDC; Converter Valve",
"title": ""
},
{
"docid": "897efb599e554bf453a7b787c5874d48",
"text": "The Rampant growth of wireless technology and Mobile devices in this era is creating a great impact on our lives. Some early efforts have been made to combine and utilize both of these technologies in advancement of hospitality industry. This research work aims to automate the food ordering process in restaurant and also improve the dining experience of customers. In this paper we discuss about the design & implementation of automated food ordering system with real time customer feedback (AOS-RTF) for restaurants. This system, implements wireless data access to servers. The android application on user’s mobile will have all the menu details. The order details from customer’s mobile are wirelessly updated in central database and subsequently sent to kitchen and cashier respectively. The restaurant owner can manage the menu modifications easily. The wireless application on mobile devices provide a means of convenience, improving efficiency and accuracy for restaurants by saving time, reducing human errors and real-time customer feedback. This system successfully over comes the drawbacks in earlier PDA based food ordering system and is less expensive and more effective than the multi-touchable restaurant management systems.",
"title": ""
}
] |
scidocsrr
|
ba3b0555c1640c32c281df578e28e0ed
|
Comparative study for various DNA based steganography techniques with the essential conclusions about the future research
|
[
{
"docid": "66b909528a566662667a3d8c7c749bf4",
"text": "There exists a big demand for innovative secure electronic communications while the expertise level of attackers increases rapidly and that causes even bigger demands and needs for an extreme secure connection. An ideal security protocol should always be protecting the security of connections in many aspects, and leaves no trapdoor for the attackers. Nowadays, one of the popular cryptography protocols is hybrid cryptosystem that uses private and public key cryptography to change secret message. In available cryptography protocol attackers are always aware of transmission of sensitive data. Even non-interested attackers can get interested to break the ciphertext out of curiosity and challenge, when suddenly catches some scrambled data over the network. First of all, we try to explain the roles of innovative approaches in cryptography. After that we discuss about the disadvantages of public key cryptography to exchange secret key. Furthermore, DNA steganography is explained as an innovative paradigm to diminish the usage of public cryptography to exchange session key. In this protocol, session key between a sender and receiver is hidden by novel DNA data hiding technique. Consequently, the attackers are not aware of transmission of session key through unsecure channel. Finally, the strength point of the DNA steganography is discussed.",
"title": ""
}
] |
[
{
"docid": "f9eed4f99d70c51dc626a61724540d3c",
"text": "A soft-start circuit with soft-recovery function for DC-DC converters is presented in this paper. The soft-start strategy is based on a linearly ramped-up reference and an error amplifier with minimum selector implemented with a three-limb differential pair skillfully. The soft-recovery strategy is based on a compact clamp circuit. The ramp voltage would be clamped once the feedback voltage is detected lower than a threshold, which could control the output to be recovered slowly and linearly. A monolithic DC-DC buck converter with proposed circuit has been fabricated with a 0.5μm CMOS process for validation. The measurement result shows that the ramp-based soft-start and soft-recovery circuit have good performance and agree well with the theoretical analysis.",
"title": ""
},
{
"docid": "7e683f15580e77b1e207731bb73b8107",
"text": "The skeleton is essential for general shape representation. The commonly required properties of a skeletonization algorithm are that the extracted skeleton should be accurate; robust to noise, position and rotation; able to reconstruct the original object; and able to produce a connected skeleton in order to preserve its topological and hierarchical properties. However, the use of a discrete image presents a lot of problems that may in9uence the extraction of the skeleton. Moreover, most of the methods are memory-intensive and computationally intensive, and require a complex data structure. In this paper, we propose a fast, e;cient and accurate skeletonization method for the extraction of a well-connected Euclidean skeleton based on a signed sequential Euclidean distance map. A connectivity criterion is proposed, which can be used to determine whether a given pixel is a skeleton point independently. The criterion is based on a set of point pairs along the object boundary, which are the nearest contour points to the pixel under consideration and its 8 neighbors. Our proposed method generates a connected Euclidean skeleton with a single pixel width without requiring a linking algorithm or iteration process. Experiments show that the runtime of our algorithm is faster than the distance transformation and is linearly proportional to the number of pixels of an image. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "35dbef4cc4b8588d451008b8156f326f",
"text": "Raman spectroscopy is a powerful tool for studying the biochemical composition of tissues and cells in the human body. We describe the initial results of a feasibility study to design and build a miniature, fiber optic probe incorporated into a standard hypodermic needle. This probe is intended for use in optical biopsies of solid tissues to provide valuable information of disease type, such as in the lymphatic system, breast, or prostate, or of such tissue types as muscle, fat, or spinal, when identifying a critical injection site. The optical design and fabrication of this probe is described, and example spectra of various ex vivo samples are shown.",
"title": ""
},
{
"docid": "5374ed153eb37e5680f1500fea5b9dbe",
"text": "Social media have become dominant in everyday life during the last few years where users share their thoughts and experiences about their enjoyable events in posts. Most of these posts are related to different categories related to: activities, such as dancing, landscapes, such as beach, people, such as a selfie, and animals such as pets. While some of these posts become popular and get more attention, others are completely ignored. In order to address the desire of users to create popular posts, several researches have studied post popularity prediction. Existing works focus on predicting the popularity without considering the category type of the post. In this paper we propose category specific post popularity prediction using visual and textual content for action, scene, people and animal categories. In this way we aim to answer the question What makes a post belonging to a specific action, scene, people or animal category popular? To answer to this question we perform several experiments on a collection of 65K posts crawled from Instagram.",
"title": ""
},
{
"docid": "638cc32b94c4e44a1e185fdbdc6646f5",
"text": "Object detection and recognition is an important task in many computer vision applications. In this paper an Android application was developed using Eclipse IDE and OpenCV3 Library. This application is able to detect objects in an image that is loaded from the mobile gallery, based on its color, shape, or local features. The image is processed in the HSV color domain for better color detection. Circular shapes are detected using Circular Hough Transform and other shapes are detected using Douglas-Peucker algorithm. BRISK (binary robust invariant scalable keypoints) local features were applied in the developed Android application for matching an object image in another scene image. The steps of the proposed detection algorithms are described, and the interfaces of the application are illustrated. The application is ported and tested on Galaxy S3, S6, and Note1 Smartphones. Based on the experimental results, the application is capable of detecting eleven different colors, detecting two dimensional geometrical shapes including circles, rectangles, triangles, and squares, and correctly match local features of object and scene images for different conditions. The application could be used as a standalone application, or as a part of another application such as Robot systems, traffic systems, e-learning applications, information retrieval and many others.",
"title": ""
},
{
"docid": "a180735616ded05900cda77be19fc787",
"text": "Economically sustainable software systems must be able to cost-effectively evolve in response to changes in their environment, their usage profile, and business demands. However, in many software development projects, sustainability is treated as an afterthought, as developers are driven by time-to-market pressure and are often not educated to apply sustainability-improving techniques. While software engineering research and practice has suggested a large amount of such techniques, a holistic overview is missing and the effectiveness of individual techniques is often not sufficiently validated. On this behalf we created a catalog of “software sustainability guidelines” to support project managers, software architects, and developers during system design, development, operation, and maintenance. This paper describes how we derived these guidelines and how we applied selected techniques from them in two industrial case studies. We report several lessons learned about sustainable software development.",
"title": ""
},
{
"docid": "f175bfcd43f1c11c6b538022e2db1281",
"text": "The D-AMP methodology, recently proposed by Metzler, Maleki, and Baraniuk, allows one to plug in sophisticated denoisers like BM3D into the AMP algorithm to achieve state-of-the-art compressive image recovery. But AMP diverges with small deviations from the i.i.d.-Gaussian assumption on the measurement matrix. Recently, the VAMP algorithm has been proposed to fix this problem. In this work, we show that the benefits of VAMP extend to D-VAMP. Consider the problem of recovering a (vectorized) image x0 ∈ R from compressive (i.e., M ≪ N ) noisy linear measurements y = Φx0 +w ∈ R M , (1) known as “compressive imaging.” The “sparse” approach to this problem exploits sparsity in the coefficients v0 , Ψx0 ∈ R N of an orthonormal wavelet transform Ψ. The idea is to rewrite (1) as y = Av0 +w for A , ΦΨ , (2) recover an estimate v̂ of v0 from y, and then construct the image estimate as x̂ = Ψ v̂. Although many algorithms have been proposed for sparse recovery of v0, a notable one is the approximate message passing (AMP) algorithm from [1]. It is computationally efficient (i.e., one multiplication by A and A per iteration and relatively few iterations) and its performance, when M and N are large and Φ is zero-mean i.i.d. Gaussian, is rigorously characterized by a scalar state evolution. A variant called “denoising-based AMP” (D-AMP) was recently proposed [2] for direct recovery of x0 from (1). It exploits the fact that, at iteration t, AMP constructs a pseudo-measurement of the form v0 + N (0, σ t I) with known σ t , which is amenable to any image denoising algorithm. By plugging in a state-of-the-art image denoiser like BM3D [3], D-AMP yields state-of-the-art compressive imaging. AMP and D-AMP, however, have a serious weakness: they diverge under small deviations from the zero-mean i.i.d. Gaussian assumption on Φ, such as non-zero mean or mild ill-conditioning. A robust alternative called “vector AMP” (VAMP) was recently proposed [4]. VAMP has similar complexity to AMP and a rigorous state evolution November 7, 2016 DRAFT",
"title": ""
},
{
"docid": "29ce9730d55b55b84e195983a8506e5c",
"text": "In situ Raman spectroscopy is an extremely valuable technique for investigating fundamental reactions that occur inside lithium rechargeable batteries. However, specialized in situ Raman spectroelectrochemical cells must be constructed to perform these experiments. These cells are often quite different from the cells used in normal electrochemical investigations. More importantly, the number of cells is usually limited by construction costs; thus, routine usage of in situ Raman spectroscopy is hampered for most laboratories. This paper describes a modification to industrially available coin cells that facilitates routine in situ Raman spectroelectrochemical measurements of lithium batteries. To test this strategy, in situ Raman spectroelectrochemical measurements are performed on Li//V2O5 cells. Various phases of Li(x)V2O5 could be identified in the modified coin cells with Raman spectroscopy, and the electrochemical cycling performance between in situ and unmodified cells is nearly identical.",
"title": ""
},
{
"docid": "501760c68ed75ed288749e9b4068234f",
"text": "This research investigated impulse buying as resulting from the depletion of a common—but limited—resource that governs self-control. In three investigations, participants’ self-regulatory resources were depleted or not; later, impulsive spending responses were measured. Participants whose resources were depleted, relative to participants whose resources were not depleted, felt stronger urges to buy, were willing to spend more, and actually did spend more money in unanticipated buying situations. Participants having depleted resources reported being influenced equally by affective and cognitive factors and purchased products that were high on each factor at equal rates. Hence, self-regulatory resource availability predicts whether people can resist impulse buying temptations.",
"title": ""
},
{
"docid": "68b2608c91525f3147f74b41612a9064",
"text": "Protective effects of sweet orange (Citrus sinensis) peel and their bioactive compounds on oxidative stress were investigated. According to HPLC-DAD and HPLC-MS/MS analysis, hesperidin (HD), hesperetin (HT), nobiletin (NT), and tangeretin (TT) were present in water extracts of sweet orange peel (WESP). The cytotoxic effect in 0.2mM t-BHP-induced HepG2 cells was inhibited by WESP and their bioactive compounds. The protective effect of WESP and their bioactive compounds in 0.2mM t-BHP-induced HepG2 cells may be associated with positive regulation of GSH levels and antioxidant enzymes, decrease in ROS formation and TBARS generation, increase in the mitochondria membrane potential and Bcl-2/Bax ratio, as well as decrease in caspase-3 activation. Overall, WESP displayed a significant cytoprotective effect against oxidative stress, which may be most likely because of the phenolics-related bioactive compounds in WESP, leading to maintenance of the normal redox status of cells.",
"title": ""
},
{
"docid": "f8821f651731943ce1652bc8a1d2c0d6",
"text": "business units and thus not even practiced in a cohesive, coherent manner. In the worst cases, busy business unit executives trade roving bands of developers like Pokémon cards in a fifth-grade classroom (in an attempt to get ahead). Suffice it to say, none of this is good. The disconnect between security and development has ultimately produced software development efforts that lack any sort of contemporary understanding of technical security risks. Today's complex and highly connected computing environments trigger myriad security concerns, so by blowing off the idea of security entirely, software builders virtually guarantee that their creations will have way too many security weaknesses that could—and should—have been avoided. This article presents some recommendations for solving this problem. Our approach is born out of experience in two diverse fields: software security and information security. Central among our recommendations is the notion of using the knowledge inherent in information security organizations to enhance secure software development efforts. Don't stand so close to me Best practices in software security include a manageable number of simple activities that should be applied throughout any software development process (see Figure 1). These lightweight activities should start at the earliest stages of software development and then continue throughout the development process and into deployment and operations. Although an increasing number of software shops and individual developers are adopting the software security touchpoints we describe here as their own, they often lack the requisite security domain knowledge required to do so. This critical knowledge arises from years of observing system intrusions, dealing with malicious hackers, suffering the consequences of software vulnera-bilities, and so on. Put in this position , even the best-intended development efforts can fail to take into account real-world attacks previously observed on similar application architectures. Although recent books 1,2 are starting to turn this knowledge gap around, the science of attack is a novel one. Information security staff—in particular, incident handlers and vulnerability/patch specialists— have spent years responding to attacks against real systems and thinking about the vulnerabilities that spawned them. In many cases, they've studied software vulnerabili-ties and their resulting attack profiles in minute detail. However, few information security professionals are software developers (at least, on a full-time basis), and their solution sets tend to be limited to reactive techniques such as installing software patches, shoring up firewalls, updating intrusion detection signature databases, and the like. It's very rare to find information security …",
"title": ""
},
{
"docid": "d4d0818e22b736f04acc53cdfcebb2f8",
"text": "Forest and rural fires are one of the main causes of environmental degradation in Mediterranean countries. Existing fire detection systems only focus on detection, but not on the verification of the fire. However, almost all of them are just simulations, and very few implementations can be found. Besides, the systems in the literature lack scalability. In this paper we show all the steps followed to perform the design, research and development of a wireless multisensor network which mixes sensors with IP cameras in a wireless network in order to detect and verify fire in rural and forest areas of Spain. We have studied how many cameras, sensors and access points are needed to cover a rural or forest area, and the scalability of the system. We have developed a multisensor and when it detects a fire, it sends a sensor alarm through the wireless network to a central server. The central server selects the closest wireless cameras to the multisensor, based on a software application, which are rotated to the sensor that raised the alarm, and sends them a message in order to receive real-time images from the zone. The camera lets the fire fighters corroborate the existence of a fire and avoid false alarms. In this paper, we show the test performance given by a test bench formed by four wireless IP cameras in several situations and the energy consumed when they are transmitting. Moreover, we study the energy consumed by each device when the system is set up. The wireless sensor network could be connected to Internet through a gateway and the images of the cameras could be seen from any part of the world.",
"title": ""
},
{
"docid": "7aca3e7f9409fa1381a309d304eb898d",
"text": "The Internet of things (IoT) is composed of billions of sensing devices that are subject to threats stemming from increasing reliance on communications technologies. A Trust-Based Secure Routing (TBSR) scheme using the traceback approach is proposed to improve the security of data routing and maximize the use of available energy in Energy-Harvesting Wireless Sensor Networks (EHWSNs). The main contributions of a TBSR are (a) the source nodes send data and notification to sinks through disjoint paths, separately; in such a mechanism, the data and notification can be verified independently to ensure their security. (b) Furthermore, the data and notification adopt a dynamic probability of marking and logging approach during the routing. Therefore, when attacked, the network will adopt the traceback approach to locate and clear malicious nodes to ensure security. The probability of marking is determined based on the level of battery remaining; when nodes harvest more energy, the probability of marking is higher, which can improve network security. Because if the probability of marking is higher, the number of marked nodes on the data packet routing path will be more, and the sink will be more likely to trace back the data packet routing path and find malicious nodes according to this notification. When data packets are routed again, they tend to bypass these malicious nodes, which make the success rate of routing higher and lead to improved network security. When the battery level is low, the probability of marking will be decreased, which is able to save energy. For logging, when the battery level is high, the network adopts a larger probability of marking and smaller probability of logging to transmit notification to the sink, which can reserve enough storage space to meet the storage demand for the period of the battery on low level; when the battery level is low, increasing the probability of logging can reduce energy consumption. After the level of battery remaining is high enough, nodes then send the notification which was logged before to the sink. Compared with past solutions, our results indicate that the performance of the TBSR scheme has been improved comprehensively; it can effectively increase the quantity of notification received by the sink by 20%, increase energy efficiency by 11%, reduce the maximum storage capacity needed by nodes by 33.3% and improve the success rate of routing by approximately 16.30%.",
"title": ""
},
{
"docid": "1db6ea040880ceeb57737a5054206127",
"text": "Several studies regarding security testing for corporate environments, networks, and systems were developed in the past years. Therefore, to understand how methodologies and tools for security testing have evolved is an important task. One of the reasons for this evolution is due to penetration test, also known as Pentest. The main objective of this work is to provide an overview on Pentest, showing its application scenarios, models, methodologies, and tools from published papers. Thereby, this work may help researchers and people that work with security to understand the aspects and existing solutions related to Pentest. A systematic mapping study was conducted, with an initial gathering of 1145 papers, represented by 1090 distinct papers that have been evaluated. At the end, 54 primary studies were selected to be analyzed in a quantitative and qualitative way. As a result, we classified the tools and models that are used on Pentest. We also show the main scenarios in which these tools and methodologies are applied to. Finally, we present some open issues and research opportunities on Pentest.",
"title": ""
},
{
"docid": "8a679c93185332398c5261ddcfe81e84",
"text": "We discuss the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of an infinite-horizon discounted Markov chain, using a function approximator involving linear combinations of fixed basis functions. The algorithm we analyze performs on-line updating of a parameter vector during a single endless trajectory of an ergodic Markov chain with a finite or infinite state space. We present a proof of convergence (with probability 1), a characterization of the limit of convergence, and a bound on the resulting approximation error. In addition to proving new and stronger results than those previously available, our analysis is based on a new line of reasoning that provides new intuition about the dynamics of temporal-difference learning. Finally, we prove that on-line updates, based on entire trajectories of the Markov chain, are in a certain sense necessary for convergence. This fact reconciles positive and negative results that have been discussed in the literature, regarding the soundness of temporal-difference learning.",
"title": ""
},
{
"docid": "34690f455f9e539b06006f30dd3e512b",
"text": "Disaster relief operations rely on the rapid deployment of wireless network architectures to provide emergency communications. Future emergency networks will consist typically of terrestrial, portable base stations and base stations on-board low altitude platforms (LAPs). The effectiveness of network deployment will depend on strategically chosen station positions. In this paper a method is presented for calculating the optimal proportion of the two station types and their optimal placement. Random scenarios and a real example from Hurricane Katrina are used for evaluation. The results confirm the strength of LAPs in terms of high bandwidth utilisation, achieved by their ability to cover wide areas, their portability and adaptability to height. When LAPs are utilized, the total required number of base stations to cover a desired area is generally lower. For large scale disasters in particular, this leads to shorter response times and the requirement of fewer resources. This goal can be achieved more easily if algorithms such as the one presented in this paper are used.",
"title": ""
},
{
"docid": "7313ab8f065b8cc167aa2d4cd999eae3",
"text": "LossCalcTM version 2.0 is the Moody's KMV model to predict loss given default (LGD) or (1 recovery rate). Lenders and investors use LGD to estimate future credit losses. LossCalc is a robust and validated model of LGD for loans, bonds, and preferred stocks for the US, Canada, the UK, Continental Europe, Asia, and Latin America. It projects LGD for defaults occurring immediately and for defaults that may occur in one year. LossCalc is a statistical model that incorporates information at different levels: collateral, instrument, firm, industry, country, and the macroeconomy to predict LGD. It significantly improves on the use of historical recovery averages to predict LGD, helping institutions to better price and manage credit risk. LossCalc is built on a global dataset of 3,026 recovery observations for loans, bonds, and preferred stock from 1981-2004. This dataset includes over 1,424 defaults of both public and private firms—both rated and unrated instruments—in all industries. LossCalc will help institutions better manage their credit risk and can play a critical role in meeting the Basel II requirements on advanced Internal Ratings Based Approach. This paper describes Moody's KMV LossCalc, its predictive factors, the modeling approach, and its out of-time and out of-sample model validation. AUTHORS Greg M. Gupton Roger M. Stein",
"title": ""
},
{
"docid": "12bdec4e6f70a7fe2bd4c750752287c3",
"text": "Rapid growth in the Internet of Things (IoT) has resulted in a massive growth of data generated by these devices and sensors put on the Internet. Physical-cyber-social (PCS) big data consist of this IoT data, complemented by relevant Web-based and social data of various modalities. Smart data is about exploiting this PCS big data to get deep insights and make it actionable, and making it possible to facilitate building intelligent systems and applications. This article discusses key AI research in semantic computing, cognitive computing, and perceptual computing. Their synergistic use is expected to power future progress in building intelligent systems and applications for rapidly expanding markets in multiple industries. Over the next two years, this column on IoT will explore many challenges and technologies on intelligent use and applications of IoT data.",
"title": ""
},
{
"docid": "f530b8b9fc2565687ccc28ba6a3a72ca",
"text": "Design of an electric machine such as the axial flux permanent magnet synchronous motor (AFPMSM) requires a 3-D finite-element method (FEM) analysis. The AFPMSM with a 3-D FEM model involves too much time and effort to analyze. To deal with this problem, we apply a surrogate assisted multi-objective optimization (SAMOO) algorithm that can realize an accurate and well-distributed Pareto front set with a few number of function calls, and considers various design variables in the motor design process. The superior performance of the SAMOO is verified by comparing it with conventional multi-objective optimization algorithms in a test function. Finally, the optimal design result of the AFPMSM for the electric bicycle is obtained by using the SAMOO algorithm.",
"title": ""
},
{
"docid": "fedcb2bd51b9fd147681ae23e03c7336",
"text": "Epidemiological studies have revealed the important role that foodstuffs of vegetable origin have to play in the prevention of numerous illnesses. The natural antioxidants present in such foodstuffs, among which the fl avonoids are widely present, may be responsible for such an activity. Flavonoids are compounds that are low in molecular weight and widely distributed throughout the vegetable kingdom. They may be of great utility in states of accute or chronic diarrhoea through the inhibition of intestinal secretion and motility, and may also be benefi cial in the reduction of chronic infl ammatory damage in the intestine, by affording protection against oxidative stress and by preserving mucosal function. For this reason, the use of these agents is recommended in the treatment of infl ammatory bowel disease, in which various factors are involved in extreme immunological reactions, which lead to chronic intestinal infl ammation.",
"title": ""
}
] |
scidocsrr
|
943b6a72a22a46e6175f0db92c920e72
|
Popularity and Quality in Social News Aggregators: A Study of Reddit and Hacker News
|
[
{
"docid": "c77fad43abe34ecb0a451a3b0b5d684e",
"text": "Search engine click logs provide an invaluable source of relevance information, but this information is biased. A key source of bias is presentation order: the probability of click is influenced by a document's position in the results page. This paper focuses on explaining that bias, modelling how probability of click depends on position. We propose four simple hypotheses about how position bias might arise. We carry out a large data-gathering effort, where we perturb the ranking of a major search engine, to see how clicks are affected. We then explore which of the four hypotheses best explains the real-world position effects, and compare these to a simple logistic regression model. The data are not well explained by simple position models, where some users click indiscriminately on rank 1 or there is a simple decay of attention over ranks. A â cascade' model, where users view results from top to bottom and leave as soon as they see a worthwhile document, is our best explanation for position bias in early ranks",
"title": ""
},
{
"docid": "957170b015e5acd4ab7ce076f5a4c900",
"text": "On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that initially, breadth, rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.",
"title": ""
},
{
"docid": "437d9a2146e05be85173b14176e4327c",
"text": "Can a system of distributed moderation quickly and consistently separate high and low quality comments in an online conversation? Analysis of the site Slashdot.org suggests that the answer is a qualified yes, but that important challenges remain for designers of such systems. Thousands of users act as moderators. Final scores for comments are reasonably dispersed and the community generally agrees that moderations are fair. On the other hand, much of a conversation can pass before the best and worst comments are identified. Of those moderations that were judged unfair, only about half were subsequently counterbalanced by a moderation in the other direction. And comments with low scores, not at top-level, or posted late in a conversation were more likely to be overlooked by moderators.",
"title": ""
},
{
"docid": "b5a809969347e24eb0192c04ef6dd21f",
"text": "News articles are extremely time sensitive by nature. There is also intense competition among news items to propagate as widely as possible. Hence, the task of predicting the popularity of news items on the social web is both interesting and challenging. Prior research has dealt with predicting eventual online popularity based on early popularity. It is most desirable, however, to predict the popularity of items prior to their release, fostering the possibility of appropriate decision making to modify an article and the manner of its publication. In this paper, we construct a multi-dimensional feature space derived from properties of an article and evaluate the efficacy of these features to serve as predictors of online popularity. We examine both regression and classification algorithms and demonstrate that despite randomness in human behavior, it is possible to predict ranges of popularity on twitter with an overall 84% accuracy. Our study also serves to illustrate the differences between traditionally prominent sources and those immensely popular on the social web.",
"title": ""
}
] |
[
{
"docid": "390f92430582d13bc2b22a9047ea01a6",
"text": "This paper considers a proportional hazards model, which allows one to examine the extent to which covariates interact nonlinearly with an exposure variable, for analysis of lifetime data. A local partial-likelihood technique is proposed to estimate nonlinear interactions. Asymptotic normality of the proposed estimator is established. The baseline hazard function, the bias and the variance of the local likelihood estimator are consistently estimated. In addition, a one-step local partial-likelihood estimator is presented to facilitate the computation of the proposed procedure and is demonstrated to be as efficient as the fully iterated local partial-likelihood estimator. Furthermore, a penalized local likelihood estimator is proposed to select important risk variables in the model. Numerical examples are used to illustrate the effectiveness of the proposed procedures.",
"title": ""
},
{
"docid": "5c90f5a934a4d936257467a14a058925",
"text": "We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the prior distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex",
"title": ""
},
{
"docid": "e226452a288c3067ef8ee613f0b64090",
"text": "Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning, and learning abstractions that are more useful to new tasks. There has been a surge in interest in discrete latent variable models, however, despite several recent improvements, the training of discrete latent variable models has remained challenging and their performance has mostly failed to match their continuous counterparts. Recent work on vector quantized autoencoders (VQVAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete bottleneck with EM helps us achieve better image generation results on CIFAR-10, and together with knowledge distillation, allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference.",
"title": ""
},
{
"docid": "6eed03674521ecf9a558ab0059fc167f",
"text": "University professors traditionally struggle to incorporate software testing into their course curriculum. Worries include double-grading for correctness of both source and test code and finding time to teach testing as a topic. Test-driven development (TDD) has been suggested as a possible solution to improve student software testing skills and to realize the benefits of testing. According to most existing studies, TDD improves software quality and student productivity. This paper surveys the current state of TDD experiments conducted exclusively at universities. Similar surveys compare experiments in both the classroom and industry, but none have focused strictly on academia.",
"title": ""
},
{
"docid": "0232c4cfec6d4ac0339104c563506245",
"text": "We propose Multi-Task Learning with Low Rank Attribute Embedding (MTL-LORAE) to address the problem of person re-identification on multi-cameras. Re-identifications on different cameras are considered as related tasks, which allows the shared information among different tasks to be explored to improve the re-identification accuracy. The MTL-LORAE framework integrates low-level features with mid-level attributes as the descriptions for persons. To improve the accuracy of such description, we introduce the low-rank attribute embedding, which maps original binary attributes into a continuous space utilizing the correlative relationship between each pair of attributes. In this way, inaccurate attributes are rectified and missing attributes are recovered. The resulting objective function is constructed with an attribute embedding error and a quadratic loss concerning class labels. It is solved by an alternating optimization strategy. The proposed MTL-LORAE is tested on four datasets and is validated to outperform the existing methods with significant margins.",
"title": ""
},
{
"docid": "7973cb32f19b61b0cc88671e4939e32b",
"text": "Trolling behaviors are extremely diverse, varying by context, tactics, motivations, and impact. Definitions, perceptions of, and reactions to online trolling behaviors vary. Since not all trolling is equal or deviant, managing these behaviors requires context sensitive strategies. This paper describes appropriate responses to various acts of trolling in context, based on perceptions of college students in North America. In addition to strategies for dealing with deviant trolling, this paper illustrates the complexity of dealing with socially and politically motivated trolling.",
"title": ""
},
{
"docid": "72c0fecdbcc27b6af98373dc3c03333b",
"text": "The amino acid sequence of the heavy chain of Bombyx mori silk fibroin was derived from the gene sequence. The 5,263-residue (391-kDa) polypeptide chain comprises 12 low-complexity \"crystalline\" domains made up of Gly-X repeats and covering 94% of the sequence; X is Ala in 65%, Ser in 23%, and Tyr in 9% of the repeats. The remainder includes a nonrepetitive 151-residue header sequence, 11 nearly identical copies of a 43-residue spacer sequence, and a 58-residue C-terminal sequence. The header sequence is homologous to the N-terminal sequence of other fibroins with a completely different crystalline region. In Bombyx mori, each crystalline domain is made up of subdomains of approximately 70 residues, which in most cases begin with repeats of the GAGAGS hexapeptide and terminate with the GAAS tetrapeptide. Within the subdomains, the Gly-X alternance is strict, which strongly supports the classic Pauling-Corey model, in which beta-sheets pack on each other in alternating layers of Gly/Gly and X/X contacts. When fitting the actual sequence to that model, we propose that each subdomain forms a beta-strand and each crystalline domain a two-layered beta-sandwich, and we suggest that the beta-sheets may be parallel, rather than antiparallel, as has been assumed up to now.",
"title": ""
},
{
"docid": "1c19d0b156673e70544fe93154f1ae33",
"text": "Active learning is a supervised machine learning technique in which the learner is in control of the data used for learning. That control is utilized by the learner to ask an oracle, typically a human with extensive knowledge of the domain at hand, about the classes of the instances for which the model learned so far makes unreliable predictions. The active learning process takes as input a set of labeled examples, as well as a larger set of unlabeled examples, and produces a classifier and a relatively small set of newly labeled data. The overall goal is to create as good a classifier as possible, without having to mark-up and supply the learner with more data than necessary. The learning process aims at keeping the human annotation effort to a minimum, only asking for advice where the training utility of the result of such a query is high. Active learning has been successfully applied to a number of natural language processing tasks, such as, information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. This report is a literature survey of active learning from the perspective of natural language processing.",
"title": ""
},
{
"docid": "830abfc28745f469cd24bb730111afcb",
"text": "User interface (UI) is point of interaction between user and computer software. The success and failure of a software application depends on User Interface Design (UID). Possibility of using a software, easily using and learning are issues influenced by UID. The UI is significant in designing of educational software (e-Learning). Principles and concepts of learning should be considered in addition to UID principles in UID for e-learning. In this regard, to specify the logical relationship between education, learning, UID and multimedia at first we readdress the issues raised in previous studies. It is followed by examining the principle concepts of e-learning and UID. Then, we will see how UID contributes to e-learning through the educational software built by authors. Also we show the way of using UI to improve learning and motivating the learners and to improve the time efficiency of using e-learning software. Keywords—e-Learning, User Interface Design, Self learning, Educational Multimedia",
"title": ""
},
{
"docid": "8848ddd97501ff8aa5e571852e7fb447",
"text": "Sensor network nodes exhibit characteristics of both embedded systems and general-purpose systems. They must use little energy and be robust to environmental conditions, while also providing common services that make it easy to write applications. In TinyOS, the current state of the art in sensor node operating systems, reusable components implement common services, but each node runs a single statically-linked system image, making it hard to run multiple applications or incrementally update applications. We present SOS, a new operating system for mote-class sensor nodes that takes a more dynamic point on the design spectrum. SOS consists of dynamically-loaded modules and a common kernel, which implements messaging, dynamic memory, and module loading and unloading, among other services. Modules are not processes: they are scheduled cooperatively and there is no memory protection. Nevertheless, the system protects against common module bugs using techniques such as typed entry points, watchdog timers, and primitive resource garbage collection. Individual modules can be added and removed with minimal system interruption. We describe SOS's design and implementation, discuss tradeoffs, and compare it with TinyOS and with the Maté virtual machine. Our evaluation shows that despite the dynamic nature of SOS and its higher-level kernel interface, its long term total usage nearly identical to that of systems such as Matè and TinyOS.",
"title": ""
},
{
"docid": "a934b69f281d0bb693982fbc48a4c677",
"text": "We investigate the impact of preextracting and tokenizing bigram collocations on topic models. Using extensive experiments on four different corpora, we show that incorporating bigram collocations in the document representation creates more parsimonious models and improves topic coherence. We point out some problems in interpreting test likelihood and test perplexity to compare model fit, and suggest an alternate measure that penalizes model complexity. We show how the Akaike information criterion is a more appropriate measure, which suggests that using a modest number (up to 1000) of top-ranked bigrams is the optimal topic modelling configuration. Using these 1000 bigrams also results in improved topic quality over unigram tokenization. Further increases in topic quality can be achieved by using up to 10,000 bigrams, but this is at the cost of a more complex model. We also show that multiword (bigram and longer) named entities give consistent results, indicating that they should be represented as single tokens. This is the first work to explicitly study the effect of n-gram tokenization on LDA topic models, and the first work to make empirical recommendations to topic modelling practitioners, challenging the standard practice of unigram-based tokenization.",
"title": ""
},
{
"docid": "df2be33740334d9e9db5d9f2911153ed",
"text": "Mobile devices such as smartphones and tablets offer great new possibilities for the creation of 3D games and virtual reality environments. However, interaction with objects in these virtual worlds is often difficult -- for example due to the devices' small form factor. In this paper, we define different 3D visualization concepts and evaluate related interactions such as navigation and selection of objects. Detailed experiments with a smartphone and a tablet illustrate the advantages and disadvantages of the various 3D visualization concepts. Our results provide new insight with respect to interaction and highlight important aspects for the design of interactive virtual environments on mobile devices and related applications -- especially for mobile 3D gaming.",
"title": ""
},
{
"docid": "8efee8d7c3bf229fa5936209c43a7cff",
"text": "This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers.",
"title": ""
},
{
"docid": "87a04076b2137b67d6f04172e7def48b",
"text": "An architecture for low-noise spatial cancellation of co-channel interferer (CCI) at RF in a digital beamforming (DBF)/MIMO receiver (RX) array is presented. The proposed RF cancellation can attenuate CCI prior to the ADC in a DBF/MIMO RX array while preserving a field-of-view (FoV) in each array element, enabling subsequent DSP for multi-beamforming. A novel hybrid-coupler/polyphase-filter based input coupling scheme that simplifies spatial selection of CCI and enables low-noise cancellation is described. A 4-element 10GHz prototype is implemented in 65nm CMOS that achieves >20dB spatial cancellation of CCI while adding <;1.5dB output noise.",
"title": ""
},
{
"docid": "324bbe1712342fcdbc29abfbebfaf29c",
"text": "Non-interactive zero-knowledge proofs are a powerful cryptographic primitive used in privacypreserving protocols. We design and build C∅C∅, the first system enabling developers to build efficient, composable, non-interactive zero-knowledge proofs for generic, user-defined statements. C∅C∅ extends state-of-the-art SNARK constructions by applying known strengthening transformations to yield UC-composable zero-knowledge proofs suitable for modular use in larger cryptographic protocols. To attain fast practical performance, C∅C∅ includes a library of several “SNARK-friendly” cryptographic primitives. These primitives are used in the strengthening transformations in order to reduce the overhead of achieving composable security. Our open-source library of optimized arithmetic circuits for these functions are up to 40× more efficient than standard implementations and are thus of independent interest for use in other NIZK projects. Finally, we evaluate C∅C∅ on applications such as anonymous credentials, private smart contracts, and nonoutsourceable proof-of-work puzzles and demonstrate 5× to 8× speedup in these application settings compared to naive implementations.",
"title": ""
},
{
"docid": "5f9cd16a420b2f6b04e504d2b2dae111",
"text": "This paper addresses on-chip solar energy harvesting and proposes a circuit that can be employed to generate high voltages from integrated photodiodes. The proposed circuit uses a switched-inductor approach to avoid stacking photodiodes to generate high voltages. The effect of parasitic photodiodes present in integrated circuits (ICs) is addressed and a solution to minimize their impact is presented. The proposed circuit employs two switch transistors and two off-chip components: an inductor and a capacitor. A theoretical analysis of a switched-inductor dc-dc converter is carried out and a mathematical model of the energy harvester is developed. Measurements taken from a fabricated IC are presented and shown to be in good agreement with hardware measurements. Measurement results show that voltages of up to 2.81 V (depending on illumination and loading conditions) can be generated from a single integrated photodiode. The energy harvester circuit achieves a maximum conversion efficiency of 59%.",
"title": ""
},
{
"docid": "5859379f3c4c5a7186c9dc8c85e1e384",
"text": "Purpose – Investigate the use of two imaging-based methods – coded pattern projection and laser-based triangulation – to generate 3D models as input to a rapid prototyping pipeline. Design/methodology/approach – Discusses structured lighting technologies as suitable imaging-based methods. Two approaches, coded-pattern projection and laser-based triangulation, are specifically identified and discussed in detail. Two commercial systems are used to generate experimental results. These systems include the Genex Technologies 3D FaceCam and the Integrated Vision Products Ranger System. Findings – Presents 3D reconstructions of objects from each of the commercial systems. Research limitations/implications – Provides background in imaging-based methods for 3D data collection and model generation. A practical limitation is that imaging-based systems do not currently meet accuracy requirements, but continued improvements in imaging systems will minimize this limitation. Practical implications – Imaging-based approaches to 3D model generation offer potential to increase scanning time and reduce scanning complexity. Originality/value – Introduces imaging-based concepts to the rapid prototyping pipeline.",
"title": ""
},
{
"docid": "9d867cf4f8e5456e3b01c0768bd1dfaa",
"text": "This paper introduces a Projected Principal Component Analysis (Projected-PCA), which employees principal component analysis to the projected (smoothed) data matrix onto a given linear space spanned by covariates. When it applies to high-dimensional factor analysis, the projection removes noise components. We show that the unobserved latent factors can be more accurately estimated than the conventional PCA if the projection is genuine, or more precisely, when the factor loading matrices are related to the projected linear space. When the dimensionality is large, the factors can be estimated accurately even when the sample size is finite. We propose a flexible semi-parametric factor model, which decomposes the factor loading matrix into the component that can be explained by subject-specific covariates and the orthogonal residual component. The covariates' effects on the factor loadings are further modeled by the additive model via sieve approximations. By using the newly proposed Projected-PCA, the rates of convergence of the smooth factor loading matrices are obtained, which are much faster than those of the conventional factor analysis. The convergence is achieved even when the sample size is finite and is particularly appealing in the high-dimension-low-sample-size situation. This leads us to developing nonparametric tests on whether observed covariates have explaining powers on the loadings and whether they fully explain the loadings. The proposed method is illustrated by both simulated data and the returns of the components of the S&P 500 index.",
"title": ""
},
{
"docid": "21e9a263934e09654d3b5500fb39e362",
"text": "BACKGROUND\nOlder people complain of difficulties in recalling telephone numbers and being able to dial them in the correct order. This study examined the developmental trend of verbal forward digit span across adulthood and aging in a Spanish population, as an index of one of the components of Baddeley’s working memory model—the phonological loop—, which illustrates these two aspects.\n\n\nMETHOD\nA verbal digit span was administered to an incidental sample of 987 participants ranging from 35 to 90 years old. The maximum length was defined that participants could recall of at least two out of three series in the same order as presented with no errors. Demographic variables of gender and educational level were also examined.\n\n\nRESULTS\nThe ANOVA showed that the three main factors (age group, gender and educational level) were significant, but none of the interactions was. Verbal forward digit span decreases during the lifespan, but gender and educational level affect it slightly.\n\n\nCONCLUSION\nPhonological loop is affected by age. The verbal forward digit span in this study is generally lower than the one reported in other studies.",
"title": ""
},
{
"docid": "40a0e4f114b066ef7c090517a6befad5",
"text": "Utility asset managers and engineers are concerned about the life and reliability of their power transformers which depends on the continued life of the paper insulation. The ageing rate of the paper is affected by water, oxygen and acids. Traditionally, the ageing rate of paper has been studied in sealed vessels however this approach does not allow the possibility to assess the affect of oxygen on paper with different water content. The ageing rate of paper has been studied for dry paper in air (excess oxygen). In these experiments we studied the ageing rate of Kraft and thermally upgraded Kraft paper in medium and high oxygen with varying water content. Furthermore, the oxygen content of the oil in sealed vessels is low which represents only sealed transformers. The ageing rate of the paper has not been determined for free breathing transformers with medium or high oxygen content and for different wetness of paper. In these ageing experiments the water and oxygen content was controlled using a special test rig to compare the ageing rate to previous work and to determine the ageing effect of paper by combining temperature, water content of paper and oxygen content of the oil. We found that the ageing rate of paper with the same water content increased with oxygen content in the oil. Hence, new life curves were developed based on the water content of the paper and the oxygen content of the oil.",
"title": ""
}
] |
scidocsrr
|
ec74bf2fedc7fd1ae83658c9d7d0dc61
|
A field study of API learning obstacles
|
[
{
"docid": "639ef3a979e916a6e38b32243235b73a",
"text": "Little is known about the specific kinds of questions programmers ask when evolving a code base and how well existing tools support those questions. To better support the activity of programming, answers are needed to three broad research questions: 1) What does a programmer need to know about a code base when evolving a software system? 2) How does a programmer go about finding that information? 3) How well do existing tools support programmers in answering those questions? We undertook two qualitative studies of programmers performing change tasks to provide answers to these questions. In this paper, we report on an analysis of the data from these two user studies. This paper makes three key contributions. The first contribution is a catalog of 44 types of questions programmers ask during software evolution tasks. The second contribution is a description of the observed behavior around answering those questions. The third contribution is a description of how existing deployed and proposed tools do, and do not, support answering programmers' questions.",
"title": ""
}
] |
[
{
"docid": "74eb19a956a8910fbfd50090fb04946c",
"text": "In this paper, we explore student dropout behavior in Massive Open Online Courses(MOOC). We use as a case study a recent Coursera class from which we develop a survival model that allows us to measure the influence of factors extracted from that data on student dropout rate. Specifically we explore factors related to student behavior and social positioning within discussion forums using standard social network analytic techniques. The analysis reveals several significant predictors of dropout.",
"title": ""
},
{
"docid": "6f31b0ba60dccb6f1c4ac3e4161f8a44",
"text": "In this work, we propose an alternative solution for parallel wave generation by WaveNet. In contrast to parallel WaveNet (Oord et al., 2018), we distill a Gaussian inverse autoregressive flow from the autoregressive WaveNet by minimizing a novel regularized KL divergence between their highly-peaked output distributions. Our method computes the KL divergence in closed-form, which simplifies the training algorithm and provides very efficient distillation. In addition, we propose the first text-to-wave neural architecture for speech synthesis, which is fully convolutional and enables fast end-to-end training from scratch. It significantly outperforms the previous pipeline that connects a text-to-spectrogram model to a separately trained WaveNet (Ping et al., 2018). We also successfully distill a parallel waveform synthesizer conditioned on the hidden representation in this end-to-end model. 2",
"title": ""
},
{
"docid": "289849c6cb55ed61d28c8fe5132fedde",
"text": "An adaptive multilevel wavelet collocation method for solving multi-dimensional elliptic problems with localized structures is described. The method is based on multi-dimensional second generation wavelets, and is an extension of the dynamically adaptive second generation wavelet collocation method for evolution problems [Int. J. Comp. Fluid Dyn. 17 (2003) 151]. Wavelet decomposition is used for grid adaptation and interpolation, while a hierarchical finite difference scheme, which takes advantage of wavelet multilevel decomposition, is used for derivative calculations. The multilevel structure of the wavelet approximation provides a natural way to obtain the solution on a near optimal grid. In order to accelerate the convergence of the solver, an iterative procedure analogous to the multigrid algorithm is developed. The overall computational complexity of the solver is O(N ), where N is the number of adapted grid points. The accuracy and computational efficiency of the method are demonstrated for the solution of twoand three-dimensional elliptic test problems.",
"title": ""
},
{
"docid": "a51803d5c0753f64f5216d2cc225d172",
"text": "Twitter is a free social networking and micro-blogging service that enables its millions of users to send and read each other's \"tweets,\" or short, 140-character messages. The service has more than 190 million registered users and processes about 55 million tweets per day. Useful information about news and geopolitical events lies embedded in the Twitter stream, which embodies, in the aggregate, Twitter users' perspectives and reactions to current events. By virtue of sheer volume, content embedded in the Twitter stream may be useful for tracking or even forecasting behavior if it can be extracted in an efficient manner. In this study, we examine the use of information embedded in the Twitter stream to (1) track rapidly-evolving public sentiment with respect to H1N1 or swine flu, and (2) track and measure actual disease activity. We also show that Twitter can be used as a measure of public interest or concern about health-related events. Our results show that estimates of influenza-like illness derived from Twitter chatter accurately track reported disease levels.",
"title": ""
},
{
"docid": "a7bd7a5b7d79ce8c5691abfdcecfeec7",
"text": "We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.",
"title": ""
},
{
"docid": "353500d18d56c0bf6dc13627b0517f41",
"text": "In order to accelerate the learning process in high dimensional reinforcement learning problems, TD methods such as Q-learning and Sarsa are usually combined with eligibility traces. The recently introduced DQN (Deep Q-Network) algorithm, which is a combination of Q-learning with a deep neural network, has achieved good performance on several games in the Atari 2600 domain. However, the DQN training is very slow and requires too many time steps to converge. In this paper, we use the eligibility traces mechanism and propose the deep Q(λ) network algorithm. The proposed method provides faster learning in comparison with the DQN method. Empirical results on a range of games show that the deep Q(λ) network significantly reduces learning time.",
"title": ""
},
{
"docid": "6fe413cf75a694217c30a9ef79fab589",
"text": "Zusammenfassung) Biometrics have been used for secure identification and authentication for more than two decades since biometric data is unique, non-transferable, unforgettable, and always with us. Recently, biometrics has pervaded other aspects of security applications that can be listed under the topic of “Biometric Cryptosystems”. Although the security of some of these systems is questionable when they are utilized alone, integration with other technologies such as digital signatures or Identity Based Encryption (IBE) schemes results in cryptographically secure applications of biometrics. It is exactly this field of biometric cryptosystems that we focused in this thesis. In particular, our goal is to design cryptographic protocols for biometrics in the framework of a realistic security model with a security reduction. Our protocols are designed for biometric based encryption, signature and remote authentication. We first analyze the recently introduced biometric remote authentication schemes designed according to the security model of Bringer et al.. In this model, we show that one can improve the database storage cost significantly by designing a new architecture, which is a two-factor authentication protocol. This construction is also secure against the new attacks we present, which disprove the claimed security of remote authentication schemes, in particular the ones requiring a secure sketch. Thus, we introduce a new notion called “Weak-identity Privacy” and propose a new construction by combining cancelable biometrics and distributed remote authentication in order to obtain a highly secure biometric authentication system. We continue our research on biometric remote authentication by analyzing the security issues of multi-factor biometric authentication (MFBA). We formally describe the security model for MFBA that captures simultaneous attacks against these systems and define the notion of user privacy, where the goal of the adversary is to impersonate a client to the server. We design a new protocol by combining bipartite biotokens, homomorphic encryption and zero-knowledge proofs and provide a security reduction to achieve user privacy. The main difference of this MFBA protocol is that the server-side computations are performed in the encrypted domain but without requiring a decryption key for the authentication decision of the server. Thus, leakage of the secret key of any system component does not affect the security of the scheme as opposed to the current biometric systems involving crypto-",
"title": ""
},
{
"docid": "ccd356a943f19024478c42b5db191293",
"text": "This paper discusses the relationship between concepts of narrative, patterns of interaction within computer games constituting gameplay gestalts, and the relationship between narrative and the gameplay gestalt. The repetitive patterning involved in gameplay gestalt formation is found to undermine deep narrative immersion. The creation of stronger forms of interactive narrative in games requires the resolution of this confl ict. The paper goes on to describe the Purgatory Engine, a game engine based upon more fundamentally dramatic forms of gameplay and interaction, supporting a new game genre referred to as the fi rst-person actor. The fi rst-person actor does not involve a repetitive gestalt mode of gameplay, but defi nes gameplay in terms of character development and dramatic interaction.",
"title": ""
},
{
"docid": "e5667a65bc628b93a1d5b0e37bfb8694",
"text": "The problem of determining whether an object is in motion, irrespective of camera motion, is far from being solved. We address this challenging task by learning motion patterns in videos. The core of our approach is a fully convolutional network, which is learned entirely from synthetic video sequences, and their ground-truth optical flow and motion segmentation. This encoder-decoder style architecture first learns a coarse representation of the optical flow field features, and then refines it iteratively to produce motion labels at the original high-resolution. We further improve this labeling with an objectness map and a conditional random field, to account for errors in optical flow, and also to focus on moving things rather than stuff. The output label of each pixel denotes whether it has undergone independent motion, i.e., irrespective of camera motion. We demonstrate the benefits of this learning framework on the moving object segmentation task, where the goal is to segment all objects in motion. Our approach outperforms the top method on the recently released DAVIS benchmark dataset, comprising real-world sequences, by 5.6%. We also evaluate on the Berkeley motion segmentation database, achieving state-of-the-art results.",
"title": ""
},
{
"docid": "12363093cb0441e0817d4c92ab88e7fb",
"text": "Imperforate hymen, a condition in which the hymen has no aperture, usually occurs congenitally, secondary to failure of development of a lumen. A case of a documented simulated \"acquired\" imperforate hymen is presented in this article. The patient, a 5-year-old girl, was the victim of sexual abuse. Initial examination showed tears, scars, and distortion of the hymen, laceration of the perineal body, and loss of normal anal tone. Follow-up evaluations over the next year showed progressive healing. By 7 months after the injury, the hymen was replaced by a thick, opaque scar with no orifice. Patients with an apparent imperforate hymen require a sensitive interview and careful visual inspection of the genital and anal areas to delineate signs of injury. The finding of an apparent imperforate hymen on physical examination does not eliminate the possibility of antecedent vaginal penetration and sexual abuse.",
"title": ""
},
{
"docid": "81b8c8490d47eea2b73b1a368d17d4b2",
"text": "With the emergence of online social networks, the social network-based recommendation approach is popularly used. The major benefit of this approach is the ability of dealing with the problems with cold-start users. In addition to social networks, user trust information also plays an important role to obtain reliable recommendations. Although matrix factorization (MF) becomes dominant in recommender systems, the recommendation largely relies on the initialization of the user and item latent feature vectors. Aiming at addressing these challenges, we develop a novel trust-based approach for recommendation in social networks. In particular, we attempt to leverage deep learning to determinate the initialization in MF for trust-aware social recommendations and to differentiate the community effect in user’s trusted friendships. A two-phase recommendation process is proposed to utilize deep learning in initialization and to synthesize the users’ interests and their trusted friends’ interests together with the impact of community effect for recommendations. We perform extensive experiments on real-world social network data to demonstrate the accuracy and effectiveness of our proposed approach in comparison with other state-of-the-art methods.",
"title": ""
},
{
"docid": "893a8c073b8bd935fbea419c0f3e0b17",
"text": "Computing as a service model in cloud has encouraged High Performance Computing to reach out to wider scientific and industrial community. Many small and medium scale HPC users are exploring Infrastructure cloud as a possible platform to run their applications. However, there are gaps between the characteristic traits of an HPC application and existing cloud scheduling algorithms. In this paper, we propose an HPC-aware scheduler and implement it atop Open Stack scheduler. In particular, we introduce topology awareness and consideration for homogeneity while allocating VMs. We demonstrate the benefits of these techniques by evaluating them on a cloud setup on Open Cirrus test-bed.",
"title": ""
},
{
"docid": "0879399fcb38c103a0e574d6d9010215",
"text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.",
"title": ""
},
{
"docid": "1f4d29037bdb9da92843ca6ce4ab592d",
"text": "Utilizing cumulative correlation information already existing in an evolutionary process, this paper proposes a predictive approach to the reproduction mechanism of new individuals for differential evolution (DE) algorithms. DE uses a distributed model (DM) to generate new individuals, which is relatively explorative, whilst evolution strategy (ES) uses a centralized model (CM) to generate offspring, which through adaptation retains a convergence momentum. This paper adopts a key feature in the CM of a covariance matrix adaptation ES, the cumulatively learned evolution path (EP), to formulate a new evolutionary algorithm (EA) framework, termed DEEP, standing for DE with an EP. Without mechanistically combining two CM and DM based algorithms together, the DEEP framework offers advantages of both a DM and a CM and hence substantially enhances performance. Under this architecture, a self-adaptation mechanism can be built inherently in a DEEP algorithm, easing the task of predetermining algorithm control parameters. Two DEEP variants are developed and illustrated in the paper. Experiments on the CEC'13 test suites and two practical problems demonstrate that the DEEP algorithms offer promising results, compared with the original DEs and other relevant state-of-the-art EAs.",
"title": ""
},
{
"docid": "09c7331d77c5a9a2812df90e6e9256ea",
"text": "We present a technique for approximating a light probe image as a constellation of light sources based on a median cut algorithm. The algorithm is efficient, simple to implement, and can realistically represent a complex lighting environment with as few as 64 point light sources.",
"title": ""
},
{
"docid": "79351983ed6ba7bd3400b1a08c458fde",
"text": "The intranuclear location of genomic loci and the dynamics of these loci are important parameters for understanding the spatial and temporal regulation of gene expression. Recently it has proven possible to visualize endogenous genomic loci in live cells by the use of transcription activator-like effectors (TALEs), as well as modified versions of the bacterial immunity clustered regularly interspersed short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) system. Here we report the design of multicolor versions of CRISPR using catalytically inactive Cas9 endonuclease (dCas9) from three bacterial orthologs. Each pair of dCas9-fluorescent proteins and cognate single-guide RNAs (sgRNAs) efficiently labeled several target loci in live human cells. Using pairs of differently colored dCas9-sgRNAs, it was possible to determine the intranuclear distance between loci on different chromosomes. In addition, the fluorescence spatial resolution between two loci on the same chromosome could be determined and related to the linear distance between them on the chromosome's physical map, thereby permitting assessment of the DNA compaction of such regions in a live cell.",
"title": ""
},
{
"docid": "a928aa788221fc7f9a13d05a9d36badf",
"text": "Segment routing is an emerging traffic engineering technique relying on Multi-protocol Label-Switched (MPLS) label stacking to steer traffic using the source-routing paradigm. Traffic flows are enforced through a given path by applying a specifically designed stack of labels (i.e., the segment list). Each packet is then forwarded along the shortest path toward the network element represented by the top label. Unlike traditional MPLS networks, segment routing maintains a per-flow state only at the ingress node; no signaling protocol is required to establish new flows or change the routing of active flows. Thus, control plane scalability is greatly improved. Several segment routing use cases have recently been proposed. As an example, it can be effectively used to dynamically steer traffic flows on paths characterized by low latency values. However, this may suffer from some potential issues. Indeed, deployed MPLS equipment typically supports a limited number of stacked labels. Therefore, it is important to define the proper procedures to minimize the required segment list depth. This work is focused on two relevant segment routing use cases: dynamic traffic recovery and traffic engineering in multi-domain networks. Indeed, in both use cases, the utilization of segment routing can significantly simplify the network operation with respect to traditional Internet Protocol (IP)/MPLS procedures. Thus, two original procedures based on segment routing are proposed for the aforementioned use cases. Both procedures are evaluated including a simulative analysis of the segment list depth. Moreover, an experimental demonstration is performed in a multi-layer test bed exploiting a software-defined-networking-based implementation of segment routing.",
"title": ""
},
{
"docid": "4c165c15a3c6f069f702a54d0dab093c",
"text": "We propose a simple method for improving the security of hashed passwords: the maintenance of additional ``honeywords'' (false passwords) associated with each user's account. An adversary who steals a file of hashed passwords and inverts the hash function cannot tell if he has found the password or a honeyword. The attempted use of a honeyword for login sets off an alarm. An auxiliary server (the ``honeychecker'') can distinguish the user password from honeywords for the login routine, and will set off an alarm if a honeyword is submitted.",
"title": ""
},
{
"docid": "afe1be9e13ca6e2af2c5177809e7c893",
"text": "Scar evaluation and revision techniques are chief among the most important skills in the facial plastic and reconstructive surgeon’s armamentarium. Often minimized in importance, these techniques depend as much on a thorough understanding of facial anatomy and aesthetics, advanced principles of wound healing, and an appreciation of the overshadowing psychological trauma as they do on thorough technical analysis and execution [1,2]. Scar revision is unique in the spectrum of facial plastic and reconstructive surgery because the initial traumatic event and its immediate treatment usually cannot be controlled. Patients who are candidates for scar revision procedures often present after significant loss of regional tissue, injury that crosses anatomically distinct facial aesthetic units, wound closure by personnel less experienced in plastic surgical technique, and poor post injury wound management [3,4]. While no scar can be removed completely, plastic surgeons can often improve the appearance of a scar, making it less obvious through the injection or application of certain steroid medications or through surgical procedures known as scar revisions.There are many variables affect the severity of scarring, including the size and depth of the wound, blood supply to the area, the thickness and color of your skin, and the direction of the scar [5,6].",
"title": ""
}
] |
scidocsrr
|
670794c7489e23ba6a16301cfeb0dbb6
|
Structured Dialogue Policy with Graph Neural Networks
|
[
{
"docid": "f7bdf07ef7a45c3e261e4631743c1882",
"text": "Deep reinforcement learning (RL) methods have significant potential for dialogue policy optimisation. However, they suffer from a poor performance in the early stages of learning. This is especially problematic for on-line learning with real users. Two approaches are introduced to tackle this problem. Firstly, to speed up the learning process, two sampleefficient neural networks algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actorcritic with experience replay (eNACER) are presented. For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up the convergence. Both models employ off-policy learning with experience replay to improve sampleefficiency. Secondly, to mitigate the cold start issue, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning. Combining these two approaches, we demonstrate a practical approach to learning deep RLbased dialogue policies and demonstrate their effectiveness in a task-oriented information seeking domain.",
"title": ""
},
{
"docid": "6a19410817766b052a2054b2cb3efe42",
"text": "Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan—where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (‘bots’), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.",
"title": ""
},
{
"docid": "925e86fe893f41794a747267608b20e1",
"text": "Dialogue assistants are rapidly becoming an indispensable daily aid. To avoid the significant effort needed to hand-craft the required dialogue flow, the Dialogue Management (DM) module can be cast as a continuous Markov Decision Process (MDP) and trained through Reinforcement Learning (RL). Several RL models have been investigated over recent years. However, the lack of a common benchmarking framework makes it difficult to perform a fair comparison between different models and their capability to generalise to different environments. Therefore, this paper proposes a set of challenging simulated environments for dialogue model development and evaluation. To provide some baselines, we investigate a number of representative parametric algorithms, namely deep reinforcement learning algorithms DQN, A2C and Natural Actor-Critic and compare them to a non-parametric model, GP-SARSA. Both the environments and policy models are implemented using the publicly available PyDial toolkit and released on-line, in order to establish a testbed framework for further experiments and to facilitate experimental reproducibility.",
"title": ""
},
{
"docid": "9d4c04d810e3c0f2211546c6da0e3f8d",
"text": "In this paper, we propose to use deep policy networks which are trained with an advantage actor-critic method for statistically optimised dialogue systems. First, we show that, on summary state and action spaces, deep Reinforcement Learning (RL) outperforms Gaussian Processes methods. Summary state and action spaces lead to good performance but require pre-engineering effort, RL knowledge, and domain expertise. In order to remove the need to define such summary spaces, we show that deep RL can also be trained efficiently on the original state and action spaces. Dialogue systems based on partially observable Markov decision processes are known to require many dialogues to train, which makes them unappealing for practical deployment. We show that a deep RL method based on an actor-critic architecture can exploit a small amount of data very efficiently. Indeed, with only a few hundred dialogues collected with a handcrafted policy, the actorcritic deep learner is considerably bootstrapped from a combination of supervised and batch RL. In addition, convergence to an optimal policy is significantly sped up compared to other deep RL methods initialized on the data with batch RL. All experiments are performed on a restaurant domain derived from the Dialogue State Tracking Challenge 2 (DSTC2) dataset.",
"title": ""
}
] |
[
{
"docid": "bf650f61037d232f6773ac2516f76017",
"text": "In this paper we present Semantic Stixels, a novel vision-based scene model geared towards automated driving. Our model jointly infers the geometric and semantic layout of a scene and provides a compact yet rich abstraction of both cues using Stixels as primitive elements. Geometric information is incorporated into our model in terms of pixel-level disparity maps derived from stereo vision. For semantics, we leverage a modern deep learning-based scene labeling approach that provides an object class label for each pixel. Our experiments involve an in-depth analysis and a comprehensive assessment of the constituent parts of our approach using three public benchmark datasets. We evaluate the geometric and semantic accuracy of our model and analyze the underlying run-times and the complexity of the obtained representation. Our results indicate that the joint treatment of both cues on the Semantic Stixel level yields a highly compact environment representation while maintaining an accuracy comparable to the two individual pixel-level input data sources. Moreover, our framework compares favorably to related approaches in terms of computational costs and operates in real-time.",
"title": ""
},
{
"docid": "903b6a31c0d55fff97f0dae0f7eaff8b",
"text": "The major parameters of transformers used in wireless Power Transfer (WPT) are self inductance, mutual inductance and coefficient of coupling. Due to unique and complex nature of these transformers, determining these parameters by calculation has limitations which however can be overcome by use of Finite Element Analysis (FEA). This paper presents and compares the parameters of a circular and rectangular coil transformer modelled under different conditions using Ansys Maxwell software. An introduction to WPT system and its transformer model is also presented.",
"title": ""
},
{
"docid": "f0c25bb609bc6946b558bcd0ccdaee22",
"text": "A biologically motivated computational model of bottom-up visual selective attention was used to examine the degree to which stimulus salience guides the allocation of attention. Human eye movements were recorded while participants viewed a series of digitized images of complex natural and artificial scenes. Stimulus dependence of attention, as measured by the correlation between computed stimulus salience and fixation locations, was found to be significantly greater than that expected by chance alone and furthermore was greatest for eye movements that immediately follow stimulus onset. The ability to guide attention of three modeled stimulus features (color, intensity and orientation) was examined and found to vary with image type. Additionally, the effect of the drop in visual sensitivity as a function of eccentricity on stimulus salience was examined, modeled, and shown to be an important determiner of attentional allocation. Overall, the results indicate that stimulus-driven, bottom-up mechanisms contribute significantly to attentional guidance under natural viewing conditions.",
"title": ""
},
{
"docid": "e364a2ac82f42c87f88b6ed508dc0d8e",
"text": "In order to work well, many computer vision algorithms require that their parameters be adjusted according to the image noise level, making it an important quantity to estimate. We show how to estimate an upper bound on the noise level from a single image based on a piecewise smooth image prior model and measured CCD camera response functions. We also learn the space of noise level functions how noise level changes with respect to brightness and use Bayesian MAP inference to infer the noise level function from a single image. We illustrate the utility of this noise estimation for two algorithms: edge detection and featurepreserving smoothing through bilateral filtering. For a variety of different noise levels, we obtain good results for both these algorithms with no user-specified inputs.",
"title": ""
},
{
"docid": "137287318bc2a50feeb026add3f58a43",
"text": "BACKGROUND\nThe use of bioactive proteins, such as rhBMP-2, may improve bone regeneration in oral and maxillofacial surgery.\n\n\nPURPOSE\nAnalyze the effect of using bioactive proteins for bone regeneration in implant-based rehabilitation.\n\n\nMATERIALS AND METHODS\nSeven databases were screened. Only clinical trials that evaluated the use of heterologous sources of bioactive proteins for bone formation prior to implant-based rehabilitation were included. Statistical analyses were carried out using a random-effects model by comparing the standardized mean difference between groups for bone formation, and risk ratio for implant survival (P ≤ .05).\n\n\nRESULTS\nSeventeen studies were included in the qualitative analysis, and 16 in the meta-analysis. For sinus floor augmentation, bone grafts showed higher amounts of residual bone graft particles than bioactive treatments (P ≤ .05). While for alveolar ridge augmentation bioactive treatments showed a higher level of bone formation than control groups (P ≤ .05). At 3 years of follow-up, no statistically significant differences were observed for implant survival (P > .05).\n\n\nCONCLUSIONS\nBioactive proteins may improve bone formation in alveolar ridge augmentation, and reduce residual bone grafts in sinus floor augmentation. Further studies are needed to evaluate the long-term effect of using bioactive treatments for implant-based rehabilitation.",
"title": ""
},
{
"docid": "72c79b86a91f7c8453cd6075314a6b4d",
"text": "This talk aims to introduce LATEX users to XSL-FO. It does not attempt to give an exhaustive view of XSL-FO, but allows a LATEX user to get started. We show the common and different points between these two approaches of word processing.",
"title": ""
},
{
"docid": "cb381ae2d80b62e9b78f2da5ccfcd5b7",
"text": "Human adults attribute character traits to faces readily and with high consensus. In two experiments investigating the development of face-to-trait inference, adults and children ages 3 through 10 attributed trustworthiness, dominance, and competence to pairs of faces. In Experiment 1, the attributions of 3- to 4-year-olds converged with those of adults, and 5- to 6-year-olds' attributions were at adult levels of consistency. Children ages 3 and above consistently attributed the basic mean/nice evaluation not only to faces varying in trustworthiness (Experiment 1) but also to faces varying in dominance and competence (Experiment 2). This research suggests that the predisposition to judge others using scant facial information appears in adultlike forms early in childhood and does not require prolonged social experience.",
"title": ""
},
{
"docid": "fbce9042585954b38f79be6b024759f5",
"text": "When making choices in software projects, engineers and other stakeholders engage in decision making that involves uncertain future outcomes. Research in psychology, behavioral economics and neuroscience has questioned many of the classical assumptions of how such decisions are made. This literature review aims to characterize the assumptions that underpin the study of these decisions in Software Engineering. We identify empirical research on this subject and analyze how the role of time has been characterized in the study of decision making in SE. The literature review aims to support the development of descriptive frameworks for empirical studies of intertemporal decision making in practice.",
"title": ""
},
{
"docid": "db911364f218bf795fee94681582d7df",
"text": "We describe an approach to quantifying the impact of network latency on interactive response and show that the adequacy of thin-client computing is highly variable and depend on both the application and available network quality. If near ideal network conditions (low latency and high bandwidth) can be guaranteed, thin clients offer a good computing experience. As network quality degrades, interactive performance suffers. It is latency - not bandwidth -that is the greater challenge. Tightly coupled tasks such as graphics editing suffer more than loosely coupled tasks such as Web browsing. The combination of worst anticipated network quality and most tightly coupled tasks determine whether a thin-client approach is satisfactory for an organization.",
"title": ""
},
{
"docid": "86dae0e1ca1593b82978c58a573e4688",
"text": "We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or handengineered mention detector. The key idea is to directly consider all spans in a document as potential mentions and learn distributions over possible antecedents for each. The model computes span embeddings that combine context-dependent boundary representations with a headfinding attention mechanism. It is trained to maximize the marginal likelihood of gold antecedent spans from coreference clusters and is factored to enable aggressive pruning of potential mentions. Experiments demonstrate state-of-the-art performance, with a gain of 1.5 F1 on the OntoNotes benchmark and by 3.1 F1 using a 5-model ensemble, despite the fact that this is the first approach to be successfully trained with no external resources.",
"title": ""
},
{
"docid": "e018139a38e5b1b3a3299626dd2c5295",
"text": "The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform ∫ Rd K(x, y)g(y)dy at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most O ( r2 N d p logN + ( βr d p + α ) log p ) time using p processes. This parallel algorithm was then instantiated in the form of the opensource DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a 3D generalized Radon transform were respectively observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. These experiments at least partially support the theoretical argument that, given p = O(Nd) processes, the running-time of the parallel algorithm is O ( (r2 + βr + α) logN ) .",
"title": ""
},
{
"docid": "acbf633cbf612cd0d203d9c191a156da",
"text": "In this work an efficient parallel implementation of the Chirp Scaling Algorithm for Synthetic Aperture Radar processing is presented. The architecture selected for the implementation is the general purpose graphic processing unit, as it is well suited for scientific applications and real-time implementation of algorithms. The analysis of a first implementation led to several improvements which resulted in an important speed-up. Details of the issues found are explained, and the performance improvement of their correction explicitly shown.",
"title": ""
},
{
"docid": "b43118e150870aab96af1a7b32515202",
"text": "Algorithm visualization (AV) technology graphically illustrates how algorithms work. Despite the intuitive appeal of the technology, it has failed to catch on in mainstream computer science education. Some have attributed this failure to the mixed results of experimental studies designed to substantiate AV technology’s educational effectiveness. However, while several integrative reviews of AV technology have appeared, none has focused specifically on the software’s effectiveness by analyzing this body of experimental studies as a whole. In order to better understand the effectiveness of AV technology, we present a systematic metastudy of 24 experimental studies. We pursue two separate analyses: an analysis of independent variables, in which we tie each study to a particular guiding learning theory in an attempt to determine which guiding theory has had the most predictive success; and an analysis of dependent variables, which enables us to determine which measurement techniques have been most sensitive to the learning benefits of AV technology. Our most significant finding is that how students use AV technology has a greater impact on effectiveness than what AV technology shows them. Based on our findings, we formulate an agenda for future research into AV effectiveness. A META-STUDY OF ALGORITHM VISUALIZATION EFFECTIVENESS 3",
"title": ""
},
{
"docid": "a697f85ad09699ddb38994bd69b11103",
"text": "We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.",
"title": ""
},
{
"docid": "542683765586010b828af95c7a109fdc",
"text": "This paper suggests asymmetric stator teeth design to reduce torque ripple and back EMF Total Harmonic Distortion(THD) for Interior Permanent Magnet Synchronous Machine(IPMSM). IPMSM which has 8 poles, 12 slots is analyzed in this study. From changing design parameter in stator structure, 8 comparison models are analyzed. Analysis of proposed method is carried out using Finite Element Method(FEM). Suggested method has advantage to reduce torque ripple and back electromotive force(EMF) harmonics without average torque decrease. Comparison between reference model and comparison models applying proposed method proceeds to verify advantage of this method.",
"title": ""
},
{
"docid": "49f132862ca2c4a07d6233e8101a87ff",
"text": "Genetic data as a category of personal data creates a number of challenges to the traditional understanding of personal data and the rules regarding personal data processing. Although the peculiarities of and heightened risks regarding genetic data processing were recognized long before the data protection reform in the EU, the General Data Protection Regulation (GDPR) seems to pay no regard to this. Furthermore, the GDPR will create more legal grounds for (sensitive) personal data (incl. genetic data) processing whilst restricting data subjects’ means of control over their personal data. One of the reasons for this is that, amongst other aims, the personal data reform served to promote big data business in the EU. The substantive clauses of the GDPR concerning big data, however, do not differentiate between the types of personal data being processed. Hence, like all other categories of personal data, genetic data is subject to the big data clauses of the GDPR as well; thus leading to the question whether the GDPR is creating a pathway for ‘big genetic data’. This paper aims to analyse the implications that the role of the GDPR as a big data enabler bears on genetic data processing and the respective rights of the data",
"title": ""
},
{
"docid": "1a7eed6c41824906f947aecbfb4a4a19",
"text": "QoS routing is an important research issue in wireless sensor networks (WSNs), especially for mission-critical monitoring and surveillance systems which requires timely and reliable data delivery. Existing work exploits multipath routing to guarantee both reliability and delay QoS constraints in WSNs. However, the multipath routing approach suffers from a significant energy cost. In this work, we exploit the geographic opportunistic routing (GOR) for QoS provisioning with both end-to-end reliability and delay constraints in WSNs. Existing GOR protocols are not efficient for QoS provisioning in WSNs, in terms of the energy efficiency and computation delay at each hop. To improve the efficiency of QoS routing in WSNs, we define the problem of efficient GOR for multiconstrained QoS provisioning in WSNs, which can be formulated as a multiobjective multiconstraint optimization problem. Based on the analysis and observations of different routing metrics in GOR, we then propose an Efficient QoS-aware GOR (EQGOR) protocol for QoS provisioning in WSNs. EQGOR selects and prioritizes the forwarding candidate set in an efficient manner, which is suitable for WSNs in respect of energy efficiency, latency, and time complexity. We comprehensively evaluate EQGOR by comparing it with the multipath routing approach and other baseline protocols through ns-2 simulation and evaluate its time complexity through measurement on the MicaZ node. Evaluation results demonstrate the effectiveness of the GOR approach for QoS provisioning in WSNs. EQGOR significantly improves both the end-to-end energy efficiency and latency, and it is characterized by the low time complexity.",
"title": ""
},
{
"docid": "38d04471b8166ef7a0955881db67f494",
"text": "Changes in educational thinking and in medical program accreditation provide an opportunity to reconsider approaches to undergraduate medical education. Current developments in competency-based medical education (CBME), in particular, present both possibilities and challenges for undergraduate programs. CBME does not specify particular learning strategies or formats, but rather provides a clear description of intended outcomes. This approach has the potential to yield authentic curricula for medical practice and to provide a seamless linkage between all stages of lifelong learning. At the same time, the implementation of CBME in undergraduate education poses challenges for curriculum design, student assessment practices, teacher preparation, and systemic institutional change, all of which have implications for student learning. Some of the challenges of CBME are similar to those that can arise in the implementation of any integrated program, while others are specific to the adoption of outcome frameworks as an organizing principle for curriculum design. This article reviews a number of issues raised by CBME in the context of undergraduate programs and provides examples of best practices that might help to address these issues.",
"title": ""
},
{
"docid": "6be88914654c736c8e1575aeb37532a3",
"text": "Coding EMRs with diagnosis and procedure codes is an indispensable task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. While coding errors could lead to more patient-side financial burden and mis-interpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. In this paper, we present a new neural network architecture that combines ideas from few-shot learning matching networks, multi-label loss functions, and convolutional neural networks for text classification to significantly outperform other state-of-the-art models. Our evaluations are conducted using a well known deidentified EMR dataset (MIMIC) with a variety of multi-label performance measures.",
"title": ""
},
{
"docid": "8292d5c1e13042aa42f1efb60058ef96",
"text": "The epithelial-to-mesenchymal transition (EMT) is a vital control point in metastatic breast cancer (MBC). TWIST1, SNAIL1, SLUG, and ZEB1, as key EMT-inducing transcription factors (EMT-TFs), are involved in MBC through different signaling cascades. This updated meta-analysis was conducted to assess the correlation between the expression of EMT-TFs and prognostic value in MBC patients. A total of 3,218 MBC patients from fourteen eligible studies were evaluated. The pooled hazard ratios (HR) for EMT-TFs suggested that high EMT-TF expression was significantly associated with poor prognosis in MBC patients (HRs = 1.72; 95% confidence intervals (CIs) = 1.53-1.93; P = 0.001). In addition, the overexpression of SLUG was the most impactful on the risk of MBC compared with TWIST1 and SNAIL1, which sponsored fixed models. Strikingly, the increased risk of MBC was less associated with ZEB1 expression. However, the EMT-TF expression levels significantly increased the risk of MBC in the Asian population (HR = 2.11, 95% CI = 1.70-2.62) without any publication bias (t = 1.70, P = 0.11). These findings suggest that the overexpression of potentially TWIST1, SNAIL1 and especially SLUG play a key role in the aggregation of MBC treatment as well as in the improvement of follow-up plans in Asian MBC patients.",
"title": ""
}
] |
scidocsrr
|
44259203b55988aa10dae885672cd1a7
|
Comparative PSCAD and Matlab/Simulink simulation models of power losses for SiC MOSFET and Si IGBT devices
|
[
{
"docid": "86ac69a113d41fe7e0914c2ab2c9c700",
"text": "A 6.5kV 25A dual IGBT module is customized and packaged specially for high voltage low current application like solid state transformer and its characteristics and losses have been tested under the low current operation and compared with 10kV SiC MOSFET. Based on the test results, the switching losses under different frequencies in a 20kVA Solid-State Transformer (SST) has been calculated for both devices. The result shows 10kV SiC MOSFET has 7–10 times higher switching frequency capability than 6.5kV Si IGBT in the SST application.",
"title": ""
}
] |
[
{
"docid": "ab3c82329efd192ab4ebc1fdeafd00f2",
"text": "As one of the oldest and most influential foreign language pedagogical journals, The Modern Language Journal (MLJ) offers valuable insights into how technological advances have affected language teaching and learning at various points in history. The present article will review the proposed pedagogical use of technological resources by means of a critical analysis of articles published in the MLJ since its first edition in 1916. The assessment of how previous technical capabilities have been implemented for pedagogical purposes represents a necessary background for the assessment of the pedagogical potential of present-day technologies. In this article I argue that, whereas most “new technologies” (radio, television, VCR, computers) may have been revolutionary in the overall context of human interaction, it is not clear that they have achieved equal degrees of pedagogical benefit in the realm of second language teaching. I further claim that the pedagogical effectiveness of different technologies is related to four major questions: (a) Is increased technological sophistication correlated to increased pedagogical effectiveness? (b) Which technical attributes specific to new technologies can be profitably exploited for pedagogical purposes? (c) How can new technologies be successfully integrated into the curriculum? and (d) Do new technologies provide for an efficient use of human and material resources?",
"title": ""
},
{
"docid": "a12acd38f518c3dc07dcea3205fcdb3e",
"text": "Domain adaptation is an important tool to transfer knowledge about a task (e.g. classification) learned in a source domain to a second, or target domain. Current approaches assume that task-relevant target-domain data is available during training. We demonstrate how to perform domain adaptation when no such task-relevant target-domain data is available. To tackle this issue, we propose zero-shot deep domain adaptation (ZDDA), which uses privileged information from taskirrelevant dual-domain pairs. ZDDA learns a source-domain representation which is not only tailored for the task of interest but also close to the target-domain representation. Therefore, the source-domain task of interest solution (e.g. a classifier for classification tasks) which is jointly trained with the source-domain representation can be applicable to both the source and target representations. Using the MNIST, FashionMNIST, NIST, EMNIST, and SUN RGB-D datasets, we show that ZDDA can perform domain adaptation in classification tasks without access to task-relevant target-domain training data. We also extend ZDDA to perform sensor fusion in the SUN RGB-D scene classification task by simulating task-relevant target-domain representations with task-relevant source-domain data. To the best of our knowledge, ZDDA is the first domain adaptation and sensor fusion method which requires no taskrelevant target-domain data. The underlying principle is not particular to computer vision data, but should be extensible to other domains.",
"title": ""
},
{
"docid": "0f555a4c2415b6a5995905f1594871d4",
"text": "With the ultimate intent of improving the quality of life, identification of human's affective states on the collected electroencephalogram (EEG) has attracted lots of attention recently. In this domain, the existing methods usually use only a few labeled samples to classify affective states consisting of over thousands of features. Therefore, important information may not be well utilized and performance is lowered due to the randomness caused by the small sample problem. However, this issue has rarely been discussed in the previous studies. Besides, many EEG channels are irrelevant to the specific learning tasks, which introduce lots of noise to the systems and further lower the performance in the recognition of affective states. To address these two challenges, in this paper, we propose a novel Deep Belief Networks (DBN) based model for affective state recognition from EEG signals. Specifically, signals from each EEG channel are firstly processed with a DBN for effectively extracting critical information from the over thousands of features. The extracted low dimensional characteristics are then utilized in the learning to avoid the small sample problem. For the noisy channel problem, a novel stimulus-response model is proposed. The optimal channel set is obtained according to the response rate of each channel. Finally, a supervised Restricted Boltzmann Machine (RBM) is applied on the combined low dimensional characteristics from the optimal EEG channels. To evaluate the performance of the proposed Supervised DBN based Affective State Recognition (SDA) model, we implement it on the Deap Dataset and compare it with five baselines. Extensive experimental results show that the proposed algorithm can successfully handle the aforementioned two challenges and significantly outperform the baselines by 11.5% to 24.4%, which validates the effectiveness of the proposed algorithm in the task of affective state recognition.",
"title": ""
},
{
"docid": "012d69ddc3410c85d265be54ae07767f",
"text": "The family of intelligent IPS-drivers was invented to drive high power IGBT modules with blocking voltages up to 6,500V. They may be used use in industrial drives, power supplies, transportation, renewable energies as well as induction heating applications. The IPS- drivers decrease the switching losses and offer a reliable protection for high power IGBT modules. Varying the software enables an easy adaptation to specific applications. The main features of the IPS-drivers are: variable gate ON and OFF resistors, advanced desaturation and di/dt protections, active feedback clamping, high peak output current and output power, short signal transition times, and multiple soft shut down.",
"title": ""
},
{
"docid": "110742230132649f178d2fa99c8ffade",
"text": "Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performances, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.",
"title": ""
},
{
"docid": "e3b1e52066d20e7c92e936cdb72cc32b",
"text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining of the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.",
"title": ""
},
{
"docid": "83a1a1a87dccee17530348c0213a2c5d",
"text": "Network Design: The architecture of GeoConGAN is based on the CycleGAN [13], i.e. we train two conditional generator and two discriminator networks for synthetic and real images, respectively. Recently, also methods using only one generator and discriminator for enrichment of synthetic images from unpaired data have been proposed. Shrivastava et al. [9] and Liu et al. [5] both employ an L1 loss between the conditional synthetic input and the generated output (in addition to the common discriminator loss) due to the lack of image pairs. This loss forces the generated image to be similar to the synthetic image in all aspects, i.e. it might hinder the generator in producing realistic outputs if the synthetic data is not already close. Instead, we decided to use the combination of cycle-consistency and geometric consistency loss to enable the generator networks to move farther from the synthetic data thus approaching the distribution of real world data more closely while preserving the pose of the hand. Our GeoConGAN contains ResNet generator and Least Squares PatchGAN discriminator networks. Training Details: We train GeoConGAN in Tensorflow [1] for 20,000 iterations with a batch size of 8. We initialize the Adam optimizer [4] with a learning rate of 0.0002, β1 = 0.5, and β2 = 0.999.",
"title": ""
},
{
"docid": "59b64075583ebae3fdcef92cdac328e9",
"text": "on an earlier version are acknowledged with gratitude. 1 North (1981, p. 166) comes close to linking the institutional changes of the late eighteenth century with the Industrial 1 Revolution when he maintains that it was explained by \" a combination of better-specified and enforced property rights and increasingly efficient and expanding markets. \" North and Weingast (1989, p. 831) are more prudent and wonder if arguing that without the Glorious Revolution the British economy would have followed a very different path and would not have experienced an Industrial Revolution would be \" claiming too much. \" Introduction. The new institutional economics has, so far, had little to say about the Industrial",
"title": ""
},
{
"docid": "7cd091555dd870cc1a71a4318bb5ff8d",
"text": "This paper presents the design and simulation of a wideband, medium gain, light weight, wide bandwidth pyramidal horn antenna feed for microwave applications. The horn was designed using approximation method to calculate the gain in mat lab and simulated using CST microwave studio. The proposed antenna operates within 1-2 GHz (L-band). The horn is supported by a rectangular wave guide. It is linearly polarized and shows wide bandwidth with a gain of 15.3dB. The horn is excited with the monopole which is loaded with various top hat loading such as rectangular disc, circular disc, annular disc, L-type, T-type, Cone shape, U-shaped plates etc. and checked their performances for return loss as well as bandwidth. The circular disc and annular ring gives the low return loss and wide bandwidth as well as low VSWR. The annular ring gave good VSWR and return loss compared to the circular disc. The far field radiation pattern is obtained as well as Efield & H-field analysis for L-band pyramidal horn has been observed, simulated and optimized using CST Microwave Studio. The simulation results show that the pyramidal horn structure exhibits low VSWR as well as good radiation pattern over L-band.",
"title": ""
},
{
"docid": "63a5292e2314ffc9167ec4a9be1e1427",
"text": "Distributed Artificial Intelligence (DAI) has existed as a s ubfield of AI for less than two decades. DAI is concerned with systems that consist of multiple indep endent entities that interact in a domain. Traditionally, DAI has been divided into two sub-disciplin es: Distributed Problem Solving (DPS) focuses on the information management aspects of systems with sever al branches working together towards a common goal; Multiagent Systems (MAS) deals with behavior m anagement in collections of several independent entities, or agents. This survey of MAS is inten ded to serve as an introduction to the field and as an organizational framework. A series of general mult iagent scenarios are presented. For each scenario, the issues that arise are described along with a sa mpling of the techniques that exist to deal with them. The presented techniques are not exhaustive, but they highlight how multiagent systems can be and have been used to build complex systems. When options exi st, the techniques presented are biased towards machine learning approaches. Additional opportun ities for applying machine learning to MAS are highlighted and robotic soccer is presented as an approp riate test-bed for MAS. This survey does not focus exclusively on robotic systems since much of the prior research in non-robotic MAS applies to robotic systems as well. However, several robotic MAS, incl uding all of those presented in this issue, are discussed.",
"title": ""
},
{
"docid": "2693030e6575cb7faec59aaec6387e2c",
"text": "Human Resource (HR) applications can be used to provide fair and consistent decisions, and to improve the effectiveness of decision making processes. Besides that, among the challenge for HR professionals is to manage organization talents, especially to ensure the right person for the right job at the right time. For that reason, in this article, we attempt to describe the potential to implement one of the talent management tasks i.e. identifying existing talent by predicting their performance as one of HR application for talent management. This study suggests the potential HR system architecture for talent forecasting by using past experience knowledge known as Knowledge Discovery in Database (KDD) or Data Mining. This article consists of three main parts; the first part deals with the overview of HR applications, the prediction techniques and application, the general view of Data mining and the basic concept of talent management in HRM. The second part is to understand the use of Data Mining technique in order to solve one of the talent management tasks, and the third part is to propose the potential HR system architecture for talent forecasting. Keywords—HR Application, Knowledge Discovery in Database (KDD), Talent Forecasting.",
"title": ""
},
{
"docid": "5350af2d42f9321338e63666dcd42343",
"text": "Robot-aided physical therapy should encourage subject's voluntary participation to achieve rapid motor function recovery. In order to enhance subject's cooperation during training sessions, the robot should allow deviation in the prescribed path depending on the subject's modified limb motions subsequent to the disability. In the present work, an interactive training paradigm based on the impedance control was developed for a lightweight intrinsically compliant parallel ankle rehabilitation robot. The parallel ankle robot is powered by pneumatic muscle actuators (PMAs). The proposed training paradigm allows the patients to modify the robot imposed motions according to their own level of disability. The parallel robot was operated in four training modes namely position control, zero-impedance control, nonzero-impedance control with high compliance, and nonzero-impedance control with low compliance to evaluate the performance of proposed control scheme. The impedance control scheme was evaluated on 10 neurologically intact subjects. The experimental results show that an increase in robotic compliance encouraged subjects to participate more actively in the training process. This work advances the current state of the art in the compliant actuation of parallel ankle rehabilitation robots in the context of interactive training.",
"title": ""
},
{
"docid": "9e9be149fc44552b6ac9eb2d90d4a4ba",
"text": "In this work, a level set energy for segmenting the lungs from digital Posterior-Anterior (PA) chest x-ray images is presented. The primary challenge in using active contours for lung segmentation is local minima due to shading effects and presence of strong edges due to the rib cage and clavicle. We have used the availability of good contrast at the lung boundaries to extract a multi-scale set of edge/corner feature points and drive our active contour model using these features. We found these features when supplemented with a simple region based data term and a shape term based on the average lung shape, able to handle the above local minima issues. The algorithm was tested on 1130 clinical images, giving promising results.",
"title": ""
},
{
"docid": "1eb7b1b8fd3284524c0aac5e86fbf947",
"text": "The implementation of a computer game for learning about geography by primary school students is the focus of this article. Researchers designed and developed a three-dimensional educational computer game. Twenty four students in fourth and fifth grades in a private school in Ankara, Turkey learnt about world continents and countries through this game for three weeks. The effects of the game environment on students’ achievement and motivation and related implementation issues were examined through both quantitative and qualitative methods. An analysis of pre and post achievement tests showed that students made significant learning gains by participating in the game-based learning environment. When comparing their motivations while learning in the game-based learning environment and in their traditional school environment, it was found that students demonstrated statistically significant higher intrinsic motivations and statistically significant lower extrinsic motivations learning in the game-based environment. In addition, they had decreased focus on getting grades and they were more independent while participating in the game-based activities. These positive effects on learning and motivation, and the positive attitudes of students and teachers suggest that computer games can be used as an ICT tool in formal learning environments to support students in effective geography learning. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "669e395f80b4cac7b1765be0e8afd2db",
"text": "Content Security Policy (CSP) is a browser security mechanism that aims to protect websites from content injection attacks. To adopt CSP, website developers need to manually compile a list of allowed content sources. Nearly all websites require modifications to comply with CSP’s default behavior, which blocks inline scripts and the use of the eval() function. Alternatively, websites could adopt a policy that allows the use of this unsafe functionality, but this opens up potential attack vectors. In this paper, our measurements on a large corpus of web applications provide a key insight on the amount of efforts web developers required to adapt to CSP. Our results also identified errors in CSP policies that are set by website developers on their websites. To address these issues and make adoption of CSP easier and error free, we implemented UserCSP a tool as a Firefox extension. The UserCSP uses dynamic analysis to automatically infer CSP policies, facilitates testing, and gives savvy users the authority to enforce client-side policies on websites.",
"title": ""
},
{
"docid": "c009c5cf0e85081f71247815d0a1ae29",
"text": "This paper describes a low-power receiver front-end in a bidirectional near-ground source-series terminated (SST) interface implemented in a 40-nm CMOS process, which supports low-common mode differential NRZ signaling up to 16-Gb/s data rates. The high-speed operation is enabled by utilizing a common-gate amplifier stage with replica transconductance impedance calibration that accurately terminates the channel in the presence of receiver input loading. The near-ground low-impedance receiver also incorporates common-mode gain cancellation and in-situ equalization calibration to achieve reliable data reception at 16 Gb/s with better than 0.4 mW/Gb/s power efficiency over a memory link with more than 15 dB loss at the Nyquist frequency.",
"title": ""
},
{
"docid": "38cb7fa09dc3d350971ffd43087d372c",
"text": "Objectives. The purpose of this study was to describe changes in critical thinking ability and disposition over a 4-year Doctor of Pharmacy curriculum. Methods. Two standardized tests, the California Critical Thinking Skills Test (CCTST) and California Critical Thinking Dispositions Inventory (CCTDI) were used to follow the development of critical thinking ability and disposition during a 4-year professional pharmacy program. The tests were given to all pharmacy students admitted to the PharmD program at the College of Pharmacy of North Dakota State University (NDSU) on the first day of classes, beginning in 1997, and repeated late in the spring semester each year thereafter. Results. Increases in CCTST scores were noted as students progressed through each year of the curriculum, with a 14% total increase by graduation (P< 0.001). That the increase was from a testing effect is unlikely because students who took a different version at graduation scored no differently than students who took the original version. There was no increase in CCTDI score. Conclusion. The generic critical thinking ability of pharmacy students at NDSU’s College of Pharmacy appeared to increase over the course of the program, while their motivation to think critically did not appear to increase.",
"title": ""
},
{
"docid": "68f01bee5a0e228a0cd89446b958502a",
"text": "Gradient Boosting for Conditional Random Fields Report Title In this paper, we present a gradient boosting algorithm for tree-shaped conditional random fields (CRF). Conditional random fields are an important class of models for accurate structured prediction, but effective design of the feature functions is a major challenge when applying CRF models to real world data. Gradient boosting, which can induce and select functions, is a natural candidate solution for the problem. However, it is non-trivial to derive gradient boosting algorithms for CRFs, due to the dense Hessian matrices introduced by variable dependencies. We address this challenge by deriving a Markov Chain mixing rate bound to quantify the dependencies, and introduce a gradient boosting algorithm that iteratively optimizes an adaptive upper bound of the objective function. The resulting algorithm induces and selects features for CRFs via functional space optimization, with provable convergence guarantees. Experimental results on three real world datasets demonstrate that the mixing rate based upper bound is effective for training CRFs with non-linear potentials. 2",
"title": ""
},
{
"docid": "87f05972a93b2b432d0dad6d55e97502",
"text": "The daunting volumes of community-contributed media contents on the Internet have become one of the primary sources for online advertising. However, conventional advertising treats image and video advertising as general text advertising by displaying relevant ads based on the contents of the Web page, without considering the inherent characteristics of visual contents. This article presents a contextual advertising system driven by images, which automatically associates relevant ads with an image rather than the entire text in a Web page and seamlessly inserts the ads in the nonintrusive areas within each individual image. The proposed system, called ImageSense, supports scalable advertising of, from root to node, Web sites, pages, and images. In ImageSense, the ads are selected based on not only textual relevance but also visual similarity, so that the ads yield contextual relevance to both the text in the Web page and the image content. The ad insertion positions are detected based on image salience, as well as face and text detection, to minimize intrusiveness to the user. We evaluate ImageSense on a large-scale real-world images and Web pages, and demonstrate the effectiveness of ImageSense for online image advertising.",
"title": ""
}
] |
scidocsrr
|
886d389bb3092565d22c2a426c291f05
|
A Review of the Effects of Physical Activity and Exercise on Cognitive and Brain Functions in Older Adults
|
[
{
"docid": "af7803b0061e75659f718d56ba9715b3",
"text": "An emerging body of multidisciplinary literature has documented the beneficial influence of physical activity engendered through aerobic exercise on selective aspects of brain function. Human and non-human animal studies have shown that aerobic exercise can improve a number of aspects of cognition and performance. Lack of physical activity, particularly among children in the developed world, is one of the major causes of obesity. Exercise might not only help to improve their physical health, but might also improve their academic performance. This article examines the positive effects of aerobic physical activity on cognition and brain function, at the molecular, cellular, systems and behavioural levels. A growing number of studies support the idea that physical exercise is a lifestyle factor that might lead to increased physical and mental health throughout life.",
"title": ""
}
] |
[
{
"docid": "eb3e81bc96c21bd54d0498b21ef08c09",
"text": "Advanced travel information and warning, if provided accurately, can help road users avoid traffic congestion through dynamic route planning and behavior change. It also enables traffic control centres mitigate the impact of congestion by activating Intelligent Transport System (ITS) proactively. Deep learning has become increasingly popular in recent years, following a surge of innovative GPU technology, high-resolution, big datasets and thriving machine learning algorithms. However, there are few examples exploiting this emerging technology to develop applications for traffic prediction. This is largely due to the difficulty in capturing random, seasonal, non-linear, and spatio-temporal correlated nature of traffic data. In this paper, we propose a data-driven modelling approach with a novel hierarchical D-CLSTM-t deep learning model for short-term traffic speed prediction, a framework combined with convolutional neural network (CNN) and long short-term memory (LSTM) models. A deep CNN model is employed to learn the spatio-temporal traffic patterns of the input graphs, which are then fed into a deep LSTM model for sequence learning. To capture traffic seasonal variations, time of the day and day of the week indicators are fused with trained features. The model is trained end-to-end to predict travel speed in 15 to 90 minutes in the future. We compare the model performance against other baseline models including CNN, LGBM, LSTM, and traditional speed-flow curves. Experiment results show that the D-CLSTM-t outperforms other models considerably. Model tests show that speed upstream also responds sensibly to a sudden accident occurring downstream. Our D-CLSTM-t model framework is also highly scalable for future extension such as for network-wide traffic prediction, which can also be improved by including additional features such as weather, long term seasonality and accident information.",
"title": ""
},
{
"docid": "9824a6ec0809cefdec77a52170670d17",
"text": "The use of planar fluidic devices for performing small-volume chemistry was first proposed by analytical chemists, who coined the term “miniaturized total chemical analysis systems” ( TAS) for this concept. More recently, the TAS field has begun to encompass other areas of chemistry and biology. To reflect this expanded scope, the broader terms “microfluidics” and “lab-on-a-chip” are now often used in addition to TAS. Most microfluidics researchers rely on micromachining technologies at least to some extent to produce microflow systems based on interconnected micrometer-dimensioned channels. As members of the microelectromechanical systems (MEMS) community know, however, one can do more with these techniques. It is possible to impart higher levels of functionality by making features in different materials and at different levels within a microfluidic device. Increasingly, researchers have considered how to integrate electrical or electrochemical function into chips for purposes as diverse as heating, temperature sensing, electrochemical detection, and pumping. MEMS processes applied to new materials have also resulted in new approaches for fabrication of microchannels. This review paper explores these and other developments that have emerged from the increasing interaction between the MEMS and microfluidics worlds.",
"title": ""
},
{
"docid": "d63609f3850ceb80945ab72b242fcfe3",
"text": "Code review is the manual assessment of source code by humans, mainly intended to identify defects and quality problems. Modern Code Review (MCR), a lightweight variant of the code inspections investigated since the 1970s, prevails today both in industry and open-source software (OSS) systems. The objective of this paper is to increase our understanding of the practical benefits that the MCR process produces on reviewed source code. To that end, we empirically explore the problems fixed through MCR in OSS systems. We manually classified over 1,400 changes taking place in reviewed code from two OSS projects into a validated categorization scheme. Surprisingly, results show that the types of changes due to the MCR process in OSS are strikingly similar to those in the industry and academic systems from literature, featuring the similar 75:25 ratio of maintainability-related to functional problems. We also reveal that 7–35% of review comments are discarded and that 10–22% of the changes are not triggered by an explicit review comment. Patterns emerged in the review data; we investigated them revealing the technical factors that influence the number of changes due to the MCR process. We found that bug-fixing tasks lead to fewer changes and tasks with more altered files and a higher code churn have more changes. Contrary to intuition, the person of the reviewer had no impact on the number of changes.",
"title": ""
},
{
"docid": "81312e4811dfce560ced2e2840953e59",
"text": "A method for automatically assessing the quality of retinal images is presented. It is based on the idea that images of good quality possess some common features that should help define a model of what a good ophthalmic image is. The proposed features are the histogram of the edge magnitude distribution in the image as well as the local histograms of pixel gray-scale values. Histogram matching functions are proposed and experiments show that these features help discriminate between good and bad images.",
"title": ""
},
{
"docid": "3b28914f2786e959d55bd93bd9556d34",
"text": "In today‘s world the word automation seems to be very common but not practically applied to our country at an extent till yet. This breakthrough prospective can be a major part of the smart city. Most of the home automation system available in market lacks certain important features and they are object dependent and expensive. In this paper an internet of thing enabled, cost-effective home automation is developed. Unlike the conventional approach, this system make use NodeRed which is a open source tool for building internet of thing and uses visual programming through nodes to perform certain task. This system can control the outlets, various things inside the home from anywhere in the world. Every part of the home will be equipped with wireless sensor technology that will log the data to the webserver. These WSN technology will be interconnected through MQTT (Telemetry Transport) protocol which is a publish and subscribe tool to establish the communication between different devices. The third feature can revolutionize everything that is notification. User of the home will get notified through email or twitter if any uncertainty is happen. Webserver will access the home automation system through Ngrok which is a secure introspectable webhook development tool. This whole concept make it advance home automation through internet of things.",
"title": ""
},
{
"docid": "8b058101eea74a417ce53ccf4d6eaa4b",
"text": "ÐThe purpose of the architecture evaluation of a software system is to analyze the architecture to identify potential risks and to verify that the quality requirements have been addressed in the design. This survey shows the state of the research at this moment, in this domain, by presenting and discussing eight of the most representative architecture analysis methods. The selection of the studied methods tries to cover as many particular views of objective reflections as possible to be derived from the general goal. The role of the discussion is to offer guidelines related to the use of the most suitable method for an architecture assessment process. We will concentrate on discovering similarities and differences between these eight available methods by making classifications, comparisons and appropriateness studies. Index TermsÐSoftware architecture, analysis techniques and methods, quality attributes, scenarios.",
"title": ""
},
{
"docid": "902ca8c9a7cd8384143654ee302eca82",
"text": "The Paper presents the outlines of the Field Programmable Gate Array (FPGA) implementation of Real Time speech enhancement by Spectral Subtraction of acoustic noise using Dynamic Moving Average Method. It describes an stand alone algorithm for Speech Enhancement and presents a architecture for the implementation. The traditional Spectral Subtraction method can only suppress stationary acoustic noise from speech by subtracting the spectral noise bias calculated during non-speech activity, while adding the unique option of dynamic moving averaging to it, it can now periodically upgrade the estimation and cope up with changes in noise level. Signal to Noise Ratio (SNR) has been tested at different noisy environment and the improvement in SNR certifies the effectiveness of the algorithm. The FPGA implementation presented in this paper, works on streaming speech signals and can be used in factories, bus terminals, Cellular Phones, or in outdoor conferences where a large number of people have gathered. The Table in the Experimental Result section consolidates our claim of optimum resouce usage.",
"title": ""
},
{
"docid": "b02782c0ce9512a0c1084bcb96a01636",
"text": "OBJECTIVE\nRecently, public attention has focused on the possibility that social networking sites such as MySpace and Facebook are being widely used to sexually solicit underage youth, consequently increasing their vulnerability to sexual victimization. Beyond anecdotal accounts, however, whether victimization is more commonly reported in social networking sites is unknown.\n\n\nPARTICIPANTS AND METHODS\nThe Growing up With Media Survey is a national cross-sectional online survey of 1588 youth. Participants were 10- to 15-year-old youth who have used the Internet at least once in the last 6 months. The main outcome measures were unwanted sexual solicitation on the Internet, defined as unwanted requests to talk about sex, provide personal sexual information, and do something sexual, and Internet harassment, defined as rude or mean comments, or spreading of rumors.\n\n\nRESULTS\nFifteen percent of all of the youth reported an unwanted sexual solicitation online in the last year; 4% reported an incident on a social networking site specifically. Thirty-three percent reported an online harassment in the last year; 9% reported an incident on a social networking site specifically. Among targeted youth, solicitations were more commonly reported via instant messaging (43%) and in chat rooms (32%), and harassment was more commonly reported in instant messaging (55%) than through social networking sites (27% and 28%, respectively).\n\n\nCONCLUSIONS\nBroad claims of victimization risk, at least defined as unwanted sexual solicitation or harassment, associated with social networking sites do not seem justified. Prevention efforts may have a greater impact if they focus on the psychosocial problems of youth instead of a specific Internet application, including funding for online youth outreach programs, school antibullying programs, and online mental health services.",
"title": ""
},
{
"docid": "781fbf087201e480899f8bfb7e0e1838",
"text": "The term \"Ehlers-Danlos syndrome\" (EDS) groups together an increasing number of heritable connective tissue disorders mainly featuring joint hypermobility and related complications, dermal dysplasia with abnormal skin texture and repair, and variable range of the hollow organ and vascular dysfunctions. Although the nervous system is not considered a primary target of the underlying molecular defect, recently, increasing attention has been posed on neurological manifestations of EDSs, such as musculoskeletal pain, fatigue, headache, muscle weakness and paresthesias. Here, a comprehensive overview of neurological findings of these conditions is presented primarily intended for the clinical neurologist. Features are organized under various subheadings, including pain, fatigue, headache, stroke and cerebrovascular disease, brain and spine structural anomalies, epilepsy, muscular findings, neuropathy and developmental features. The emerging picture defines a wide spectrum of neurological manifestations that are unexpectedly common and potentially disabling. Their evaluation and correct interpretation by the clinical neurologist is crucial for avoiding superfluous investigations, wrong therapies, and inappropriate referral. A set of basic tools for patient's recognition is offered for raising awareness among neurologists on this underdiagnosed group of hereditary disorders.",
"title": ""
},
{
"docid": "050c60c23b15c92da6c2cec6213b68e3",
"text": "In this paper, the human brainstorming process is modeled, based on which two versions of Brain Storm Optimization (BSO) algorithm are introduced. Simulation results show that both BSO algorithms perform reasonably well on ten benchmark functions, which validates the effectiveness and usefulness of the proposed BSO algorithms. Simulation results also show that one of the BSO algorithms, BSO-II, performs better than the other BSO algorithm, BSO-I, in general. Furthermore, average inter-cluster distance Dc and inter-cluster diversity De are defined, which can be used to measure and monitor the distribution of cluster centroids and information entropy of the population over iterations. Simulation results illustrate that further improvement could be achieved by taking advantage of information revealed by Dc and or De, which points at one direction for future research on BSO algorithms. DOI: 10.4018/jsir.2011100103 36 International Journal of Swarm Intelligence Research, 2(4), 35-62, October-December 2011 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. a lot of evolutionary algorithms out there in the literature. The most popular evolutionary algorithms are evolutionary programming (Fogel, 1962), genetic algorithm (Holland, 1975), evolution strategy (Rechenberg, 1973), and genetic programming (Koza, 1992), which were inspired by biological evolution. In evolutionary algorithms, population of individuals survives into the next iteration. Which individual has higher probability to survive is proportional to its fitness value according to some evaluation function. The survived individuals are then updated by utilizing evolutionary operators such as crossover operator and mutation operator, etc. In evolutionary programming and evolution strategy, only the mutation operation is employed, while in genetic algorithms and genetic programming, both the mutation operation and crossover operation are employed. The optimization problems to be optimized by evolutionary algorithms do not need to be mathematically represented as continuous and differentiable functions, they can be represented in any form. Only requirement for representing optimization problems is that each individual can be evaluated as a value called fitness value. Therefore, evolutionary algorithms can be applied to solve more general optimization problems, especially those that are very difficult, if not impossible, for traditional hill-climbing algorithms to solve. Recently, another kind of algorithms, called swarm intelligence algorithms, is attracting more and more attentions from researchers. Swarm intelligence algorithms are usually nature-inspired optimization algorithms instead of evolution-inspired optimization algorithms such as evolutionary algorithms. Similar to evolutionary algorithms, a swarm intelligence algorithm is also a population-based optimization algorithm. Different from the evolutionary algorithms, each individual in a swarm intelligence algorithm represents a simple object such as ant, bird, fish, etc. So far, a lot of swarm intelligence algorithms have been proposed and studied. 
Among them are particle swarm optimization (PSO) (Eberhart & Shi, 2007; Shi & Eberhart, 1998), ant colony optimization algorithm (ACO) (Dorigo, Maniezzo, & Colorni, 1996), bacterial foraging optimization algorithm (BFO) (Passino, 2010), firefly optimization algorithm (FFO) (Yang, 2008), bee colony optimization algorithm (BCO) (Tovey, 2004), artificial immune system (AIS) (de Castro & Von Zuben, 1999), fish school search optimization algorithm (FSO) (Bastos-Filho, De Lima Neto, Lins, Nascimento, & Lima, 2008), shuffled frog-leaping algorithm (SFL) (Eusuff & Lansey, 2006), and intelligent water drops algorithm (IWD) (Shah-Hosseini, 2009), to just name a few. In a swarm intelligence algorithm, an individual represents a simple object such as birds in PSO, ants in ACO, and bacteria in BFO. These simple objects cooperate and compete among themselves to have a high tendency to move toward better and better search areas. As a consequence, it is the collective behavior of all individuals that makes a swarm intelligence algorithm effective in problem optimization. For example, in PSO, each particle (individual) is associated with a velocity. The velocity of each particle is dynamically updated according to its own historical best performance and its companions' historical best performance. All the particles in the PSO population fly through the solution space in the hope that particles will fly towards better and better search areas with high probability. Mathematically, the updating process of the population of individuals over iterations can be viewed as a mapping process from one population of individuals to another population of individuals from one iteration to the next, which can be represented as Pt+1 = f(Pt), where Pt is the population of individuals at iteration t and f() is the mapping function. Each evolutionary algorithm or swarm intelligence algorithm has a different mapping function. Through the mapping function, we expect that the population of individuals will update to better and better solutions over iterations. Therefore mapping functions should possess the property of convergence. For nonlinear and complicated problems, mapping functions more",
"title": ""
},
{
"docid": "cdd43b3baa9849441817b5f31d7cb0e0",
"text": "Traffic light control systems are widely used to monitor and control the flow of automobiles through the junction of many roads. They aim to realize smooth motion of cars in the transportation routes. However, the synchronization of multiple traffic light systems at adjacent intersections is a complicated problem given the various parameters involved. Conventional systems do not handle variable flows approaching the junctions. In addition, the mutual interference between adjacent traffic light systems, the disparity of cars flow with time, the accidents, the passage of emergency vehicles, and the pedestrian crossing are not implemented in the existing traffic system. This leads to traffic jam and congestion. We propose a system based on PIC microcontroller that evaluates the traffic density using IR sensors and accomplishes dynamic timing slots with different levels. Moreover, a portable controller device is designed to solve the problem of emergency vehicles stuck in the overcrowded roads.",
"title": ""
},
{
"docid": "1bfcb3fd0199ebf0727488b1652b8a04",
"text": "Filter-bank multicarrier (FBMC) transmission system was proposed as an alternative approach to orthogonal frequency division multiplexing (OFDM) system since it has a higher spectral efficiency. One of the characteristics of FBMC is that the demodulated transmitted symbols are accompanied by interference terms caused by the neighboring transmitted data in time-frequency domain. The presence of this interference is an issue for some multiple-input multiple-output (MIMO) schemes and until today their combination with FBMC remains an open problem. We can cite, among these techniques, the Alamouti scheme and the maximum likelihood detection (MLD) with spatial multiplexing (SM). In this paper, we shall propose a new FBMC scheme and transmission strategy in order to avoid this interference term. This proposed scheme (called FFT-FBMC) transforms the FBMC system into an equivalent system formulated as OFDM regardless of some residual interference. Thus, any OFDM transmission technique can be performed straightforwardly to the proposed FBMC scheme with a corresponding complexity growth compared to the classical FBMC. First, we will develop the FFT-FBMC in the case of single-input single-output (SISO) configuration. Then, we extend its application to SM-MIMO configuration with MLD and Alamouti coding scheme. Simulation results show that FFT-FBMC can almost reach the OFDM performance, but it remains slightly outperformed by OFDM.",
"title": ""
},
{
"docid": "7ca7bca5a704681e8b8c7d213c6ad990",
"text": "Three experiments in naming Chinese characters are presented here to address the relationships between character frequency, consistency, and regularity effects in Chinese character naming. Significant interactions between character consistency and frequency were found across the three experiments, regardless of whether the phonetic radical of the phonogram is a legitimate character in its own right or not. These findings suggest that the phonological information embedded in Chinese characters has an influence upon the naming process of Chinese characters. Furthermore, phonetic radicals exist as computation units mainly because they are structures occurring systematically within Chinese characters, not because they can function as recognized, freestanding characters. On the other hand, the significant interaction between regularity and consistency found in the first experiment suggests that these two factors affect Chinese character naming in different ways. These findings are accounted for within interactive activation frameworks and a connectionist model.",
"title": ""
},
{
"docid": "fb5bfea8df55ad373f3480a2c54a6a88",
"text": "Teaching reading comprehension in K – 12 faces a number of challenges. Among them are identifying the portions of a text that are difficult for a student, comprehending major critical ideas, and understanding context-dependent polysemous words. We present a simple, unsupervised but robust and accurate syntactic method for achieving the first objective and a modified hierarchical lexical method for the second objective. Focusing on pinpointing troublesome sentences instead of the overall readability and on concepts central to a reading, we believe these methods will greatly facilitate efforts to help students improve reading skills.",
"title": ""
},
{
"docid": "4c1a0453ff1a2f54599c65ef073019e5",
"text": "Job matching which benefit job seekers, employees and employers is very important today. In this work, a deep neural network model is proposed to predict an employee's future career details, which includes position name, salary and company scale based on the online resume data. Like most NLP tasks, the input features are multi-field, non-sparse, discrete and categorical, while their dependencies are mostly unknown. Previous works were mostly focused on engineering, which resulted in a large feature space and heavy computation. To solve this task, we use embedding layers to explore feature interactions and merge two automatically learned features extracted from the resumes. Experimental results on over 70,000 real-word online resumes show that our model outperforms shallow models, like SVM and Random Forests, in effectiveness and accuracy.",
"title": ""
},
{
"docid": "6576e5e58ea3298889a4ae27c86a49c9",
"text": "Jordan Baker instinctively avoided clever shrewd men . . . because she felt safer on a plane where any divergence from a code would be thought impossible. She was incurably dishonest. She wasn’t able to endure being at a disadvantage, and given this unwillingness I suppose she had begun dealing in subterfuges when she was very young in order to keep that cool insolent smile turned to the world and yet satisfy the demands of her hard jaunty body. --F. Scott Fitzgerald, The Great Gatsby (63)",
"title": ""
},
{
"docid": "60a655d6b6d79f55151e871d2f0d4d34",
"text": "The clinical characteristics of drug hypersensitivity reactions are very heterogeneous as drugs can actually elicit all types of immune reactions. The majority of allergic reactions involve either drug-specific IgE or T cells. Their stimulation leads to quite distinct immune responses, which are classified according to Gell and Coombs. Here, an extension of this subclassification, which considers the distinct T-cell functions and immunopathologies, is presented. These subclassifications are clinically useful, as they require different treatment and diagnostic steps. Copyright © 2007 S. Karger AG, Basel",
"title": ""
},
{
"docid": "bd2fcdd0b7139bf719f1ec7ffb4fe5d5",
"text": "Much is known on how facial expressions of emotion are produced, including which individual muscles are most active in each expression. Yet, little is known on how this information is interpreted by the human visual system. This paper presents a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition degrades when the resolution of the image (i.e., number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the confusion tables demonstrates that each expression of emotion is consistently confused by a small set of alternatives and that the confusion is not symmetric, i.e., misclassifying emotion a as b does not imply we will mistake b for a. This asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the asymmetry patterns are the same. We discuss the implications of these results for current models of face perception.",
"title": ""
},
{
"docid": "a64b0763172d2141337bbccb9407fe8a",
"text": "UNLABELLED\nType B malleolar fractures (AO/ASIF classification) are usually stable ankle joint fractures. Nonetheless, some show a residual instability after internal fixation requiring further stabilization. How often does such a situation occur and can these unstable fractures be recognized beforehand?From 1995 to 1997, 111 malleolar fractures (three type A, 90 type B, 18 type C) were operated on. Seventeen out of 90 patients (19%) with a type B fracture showed residual instability after internal fixation (one unilateral, four bimalleolar and 12 trimalleolar fractures). Five of these patients showed a dislocation in the sagittal plane (anteroposterior) clinically or on the radiographs, five a dislocation in the coronal plane with dislocation of the tibia on the medial aspect of the ankle joint, and four an incongruency on the medial aspect of the joint. In three cases, no preoperative abnormality indicating instability was found. The fractures were all fixed using an additional positioning screw. In 11 patients, the positioning screw was removed after 8-12 weeks, in six patients removal was performed after 1 year along with removal of the plate. All 17 patients were reviewed 1 year after internal fixation, 16/17 showed a good or excellent result with identical or only minor impairment of range of motion of the ankle joint.\n\n\nCONCLUSION\nUnstable ankle joints after internal fixation of type B malleolar fractures exist. Residual instability most often occurs after trimalleolar fractures with initial joint dislocation. Treatment with an additional positioning screw generally produced a satisfactory result.",
"title": ""
},
{
"docid": "a40c00b1dc4a8d795072e0a8cec09d7a",
"text": "Summary form only given. Most of current job scheduling systems for supercomputers and clusters provide batch queuing support. With the development of metacomputing and grid computing, users require resources managed by multiple local job schedulers. Advance reservations are becoming essential for job scheduling systems to be utilized within a large-scale computing environment with geographically distributed resources. COSY is a lightweight implementation of such a local job scheduler with support for both queue scheduling and advance reservations. COSY queue scheduling utilizes the FCFS algorithm with backfilling mechanisms and priority management. Advance reservations with COSY can provide effective QoS support for exact start time and latest completion time. Scheduling polices are defined to reject reservations with too short notice time so that there is no start time advantage to making a reservation over submitting to a queue. Further experimental results show that as a larger percentage of reservation requests are involved, a longer mandatory shortest notice time for advance reservations must be applied in order not to sacrifice queue scheduling efficiency.",
"title": ""
}
] |
scidocsrr
|
6b25019cb01e2696faa57ce8d67c8d61
|
Early Prediction of Diabetes Complications from Electronic Health Records: A Multi-Task Survival Analysis Approach
|
[
{
"docid": "02e3ce674a40204d830f12164215cfbd",
"text": "Multi-Task Learning (MTL) is a learning paradigm in machine learning and its aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give a survey for MTL. First, we classify different MTL algorithms into several categories: feature learning approach, low-rank approach, task clustering approach, task relation learning approach, dirty approach, multi-level approach and deep learning approach. In order to compare different approaches, we discuss the characteristics of each approach. In order to improve the performance of learning tasks further, MTL can be combined with other learning paradigms including semi-supervised learning, active learning, reinforcement learning, multi-view learning and graphical models. When the number of tasks is large or the data dimensionality is high, batch MTL models are difficult to handle this situation and online, parallel and distributed MTL models as well as feature hashing are reviewed to reveal the computational and storage advantages. Many real-world applications use MTL to boost their performance and we introduce some representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.",
"title": ""
}
] |
[
{
"docid": "893fe4d696f782dadb7be2b2db40550f",
"text": "Compared to the categorical approach that represents affective states as several discrete classes (e.g., positive and negative), the dimensional approach represents affective states as continuous numerical values on multiple dimensions, such as the valence-arousal (VA) space, thus allowing for more fine-grained sentiment analysis. In building dimensional sentiment applications, affective lexicons with valence-arousal ratings are useful resources but are still very rare. Therefore, this study proposes a weighted graph model that considers both the relations of multiple nodes and their similarities as weights to automatically determine the VA ratings of affective words. Experiments on both English and Chinese affective lexicons show that the proposed method yielded a smaller error rate on VA prediction than the linear regression, kernel method, and pagerank algorithm used in previous studies.",
"title": ""
},
{
"docid": "9b160fe780000fa624f45a8edd699ba6",
"text": "In this paper, to solve the problems of hexapod robot's foot frequent collisions with hard ground impact and other issues, we establish the foot-modeled and its parameters identification in the geological environment, and compile Fortran language which is based on the foot-modeled with the secondary development program by the ADAMS dynamic link and the overall hexagonal hexapod robot dynamics simulates in co-simulation of MATLAB and ADAMS. Through the analysis of the simulation results, the correctness and universality of the foot-modeled and the geological environment is verified.",
"title": ""
},
{
"docid": "8640245c98b1b39b0cc2e4e0466c8a0e",
"text": "Object detection and semantic segmentation are two strongly correlated tasks, yet typically solved separately or sequentially with substantially different techniques. Motivated by the complementary effect observed from the typical failure cases of the two tasks, we propose a unified framework for joint object detection and semantic segmentation. By enforcing the consistency between final detection and segmentation results, our unified framework can effectively leverage the advantages of leading techniques for these two tasks. Furthermore, both local and global context information are integrated into the framework to better distinguish the ambiguous samples. By jointly optimizing the model parameters for all the components, the relative importance of different component is automatically learned for each category to guarantee the overall performance. Extensive experiments on the PASCAL VOC 2010 and 2012 datasets demonstrate encouraging performance of the proposed unified framework for both object detection and semantic segmentation tasks.",
"title": ""
},
{
"docid": "2a388893c88a9cdf44ed5ace584fbad7",
"text": "Bayesian network (BN) classifiers with powerful reasoning capabilities have been increasingly utilized to detect intrusion with reasonable accuracy and efficiency. However, existing BN classifiers for intrusion detection suffer two problems. First, such BN classifiers are often trained from data using heuristic methods that usually select suboptimal models. Second, the classifiers are trained using very large datasets which may be time consuming to obtain in practice. When the size of training dataset is small, the performance of a single BN classifier is significantly reduced due to its inability to represent the whole probability distribution. To alleviate these problems, we build a Bayesian classifier by Bayesian Model Averaging(BMA) over the k-best BN classifiers, called Bayesian Network Model Averaging (BNMA) classifier. We train and evaluate BNMA classifier on the NSL-KDD dataset, which is less redundant, thus more judicial than the commonly used KDD Cup 99 dataset. We show that the BNMA classifier performs significantly better in terms of detection accuracy than the Naive Bayes classifier and the BN classifier built with heuristic method. We also show that the BNMA classifier trained using a smaller dataset outperforms two other classifiers trained using a larger dataset. This also implies that the BNMA is beneficial in accelerating the detection process due to its less dependance on the potentially prolonged process of collecting large training datasets.",
"title": ""
},
{
"docid": "abdc80a5e567ded6d20b9a00ce1030f7",
"text": "OBJECTIVE\nThere is increasing recognition that autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD) are associated with significant costs and burdens. However, research on their impact has focused mostly on the caregivers of young children; few studies have examined caregiver burden as children transition into adolescence and young adulthood, and no one has compared the impact of ASD to other neurodevelopmental disorders (e.g., ADHD).\n\n\nMETHOD\nWe conducted an observational study of 192 families caring for a young person (aged 14 to 24 years) with a childhood diagnosis of ASD or ADHD (n = 101 and n = 91, respectively) in the United Kingdom. A modified stress-appraisal model was used to investigate the correlates of caregiver burden as a function of family background (parental education), primary stressors (symptoms), primary appraisal (need), and resources (use of services).\n\n\nRESULTS\nBoth disorders were associated with a high level of caregiver burden, but it was significantly greater in ASD. In both groups, caregiver burden was mainly explained by the affected young person's unmet need. Domains of unmet need most associated with caregiver burden in both groups included depression/anxiety and inappropriate behavior. Specific to ASD were significant associations between burden and unmet needs in domains such as social relationships and major mental health problems.\n\n\nCONCLUSIONS\nAdolescence and young adulthood are associated with high levels of caregiver burden in both disorders; in ASD, the level is comparable to that reported by persons caring for individuals with a brain injury. Interventions are required to reduce caregiver burden in this population.",
"title": ""
},
{
"docid": "c388c22f5d97fc172187ba1fd352cef0",
"text": "Analysis of a driver's head behavior is an integral part of a driver monitoring system. In particular, the head pose and dynamics are strong indicators of a driver's focus of attention. Many existing state-of-the-art head dynamic analyzers are, however, limited to single-camera perspectives, which are susceptible to occlusion of facial features from spatially large head movements away from the frontal pose. Nonfrontal glances away from the road ahead, however, are of special interest since interesting events, which are critical to driver safety, occur during those times. In this paper, we present a distributed camera framework for head movement analysis, with emphasis on the ability to robustly and continuously operate even during large head movements. The proposed system tracks facial features and analyzes their geometric configuration to estimate the head pose using a 3-D model. We present two such solutions that additionally exploit the constraints that are present in a driving context and video data to improve tracking accuracy and computation time. Furthermore, we conduct a thorough comparative study with different camera configurations. For experimental evaluations, we collected a novel head pose data set from naturalistic on-road driving in urban streets and freeways, with particular emphasis on events inducing spatially large head movements (e.g., merge and lane change). Our analyses show promising results.",
"title": ""
},
{
"docid": "6e4dcb451292cc38cb72300a24135c1b",
"text": "This survey gives state-of-the-art of genetic algorithm (GA) based clustering techniques. Clustering is a fundamental and widely applied method in understanding and exploring a data set. Interest in clustering has increased recently due to the emergence of several new areas of applications including data mining, bioinformatics, web use data analysis, image analysis etc. To enhance the performance of clustering algorithms, Genetic Algorithms (GAs) is applied to the clustering algorithm. GAs are the best-known evolutionary techniques. The capability of GAs is applied to evolve the proper number of clusters and to provide appropriate clustering. This paper present some existing GA based clustering algorithms and their application to different problems and domains.",
"title": ""
},
{
"docid": "874dd5c2b3b3edc0d13aac33b60da21f",
"text": "Firefighters suffer a variety of life-threatening risks, including line-of-duty deaths, injuries, and exposures to hazardous substances. Support for reducing these risks is important. We built a partially occluded object reconstruction method on augmented reality glasses for first responders. We used a deep learning based on conditional generative adversarial networks to train associations between the various images of flammable and hazardous objects and their partially occluded counterparts. Our system then reconstructed an image of a new flammable object. Finally, the reconstructed image was superimposed on the input image to provide \"transparency\". The system imitates human learning about the laws of physics through experience by learning the shape of flammable objects and the flame characteristics.",
"title": ""
},
{
"docid": "666f38debffefa5d2bbe289c2c4ac68d",
"text": "Android's graphical authentication mechanism requires users to unlock their devices by \"drawing\" a pattern that connects a sequence of contact points arranged in a 3x3 grid. Prior studies demonstrated that human-generated 3x3 patterns are weak (CCS'13); large portions can be trivially guessed with sufficient training. An obvious solution would be to increase the grid size to increase the complexity of chosen patterns. In this paper we ask the question: Does increasing the grid size increase the security of human-generated patterns? We conducted two large studies to answer this question, and our analysis shows that for both 3x3 and 4x4 patterns, there is a high incidence of repeated patterns and symmetric pairs (patterns that derive from others based on a sequence of flips and rotations), and many 4x4 patterns are expanded versions of 3x3 patterns. Leveraging this information, we developed an advanced guessing algorithm and used it to quantified the strength of the patterns using the partial guessing entropy. We find that guessing the first 20% (G0.2) of patterns for both 3x3 and 4x4 can be done as efficiently as guessing a random 2-digit PIN. While guessing larger portions of 4x4 patterns (G0.5) requires 2-bits more entropy than guessing the same ratio of 3x3 patterns, it remains on the order of cracking random 3-digit PINs. Of the patterns tested, our guessing algorithm successful cracks 15% of 3x3 patterns within 20 guesses (a typical phone lockout) and 19% of 4x4 patterns within 20 guesses; however, after 50,000 guesses, we correctly guess 95.9% of 3x3 patterns but only 66.7% of 4x4 patterns. While there may be some benefit to expanding the grid size to 4x4, we argue the majority of patterns chosen by users will remain trivially guessable and insecure against broad guessing attacks.",
"title": ""
},
{
"docid": "6c89c95f3fcc3c0f1da3f4ae16e0475e",
"text": "Understanding search queries is a hard problem as it involves dealing with “word salad” text ubiquitously issued by users. However, if a query resembles a well-formed question, a natural language processing pipeline is able to perform more accurate interpretation, thus reducing downstream compounding errors. Hence, identifying whether or not a query is well formed can enhance query understanding. Here, we introduce a new task of identifying a well-formed natural language question. We construct and release a dataset of 25,100 publicly available questions classified into well-formed and non-wellformed categories and report an accuracy of 70.7% on the test set. We also show that our classifier can be used to improve the performance of neural sequence-to-sequence models for generating questions for reading comprehension.",
"title": ""
},
{
"docid": "419116a3660f1c1f7127de31f311bd1e",
"text": "Unlike dimensionality reduction (DR) tools for single-view data, e.g., principal component analysis (PCA), canonical correlation analysis (CCA) and generalized CCA (GCCA) are able to integrate information from multiple feature spaces of data. This is critical in multi-modal data fusion and analytics, where samples from a single view may not be enough for meaningful DR. In this work, we focus on a popular formulation of GCCA, namely, MAX-VAR GCCA. The classic MAX-VAR problem is optimally solvable via eigen-decomposition, but this solution has serious scalability issues. In addition, how to impose regularizers on the sought canonical components was unclear - while structure-promoting regularizers are often desired in practice. We propose an algorithm that can easily handle datasets whose sample and feature dimensions are both large by exploiting data sparsity. The algorithm is also flexible in incorporating regularizers on the canonical components. Convergence properties of the proposed algorithm are carefully analyzed. Numerical experiments are presented to showcase its effectiveness.",
"title": ""
},
{
"docid": "f1910095f08fc72f81c39cc01890c474",
"text": "In today’s competitive business environment, there is a strong need for businesses to collect, monitor, and analyze user-generated data on their own and on their competitors’ social media sites, such as Facebook, Twitter, and blogs. To achieve a competitive advantage, it is often necessary to listen to and understand what customers are saying about competitors’ products and services. Current social media analytics frameworks do not provide benchmarks that allow businesses to compare customer sentiment on social media to easily understand where businesses are doing well and where they need to improve. In this paper, we present a social media competitive analytics framework with sentiment benchmarks that can be used to glean industry-specific marketing intelligence. Based on the idea of the proposed framework, new social media competitive analytics with sentiment benchmarks can be developed to enhance marketing intelligence and to identify specific actionable areas in which businesses are leading and lagging to further improve their customers’ experience using customer opinions gleaned from social media. Guided by the proposed framework, an innovative business-driven social media competitive analytics tool named VOZIQ is developed. We use VOZIQ to analyze tweets associated with five large retail sector companies and to generate meaningful business insight reports. 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3f7d77aafcc5c256394bb97e0b1fdc77",
"text": "Ischiofemoral impingement (IFI) is the entrapment of the quadratus femoris muscle (QFM) between the trochanter minor of the femur and the ischium-hamstring tendon. Patients with IFI generally present with hip pain, which may radiate toward the knee. Although there is no specific diagnostic clinical test for this disorder, the presence of QFM edema/fatty replacement and narrowing of the ischiofemoral space and the quadratus femoris space on magnetic resonance imaging (MRI) are suggestive of IFI. The optimal treatment strategy of this syndrome remains obscure. Patients may benefit from a conservative treatment regimen that includes rest, activity restriction, nonsteroidal anti-inflammatory drugs, and rehabilitation procedures, just as with other impingement syndromes. Herein we report an 11-year-old girl with IFI who was successfully treated conservatively. To our knowledge, our case is the youngest patient reported in the English literature. MRI remains an important tool in the diagnosis of IFI, and radiologists should be aware of the specific features of this entity.",
"title": ""
},
{
"docid": "6209ab862101c29f8fdf302bf33684bb",
"text": "In multimedia applications, the text and image components in a web document form a pairwise constraint that potentially indicates the same semantic concept. This paper studies cross-modal learning via the pairwise constraint and aims to find the common structure hidden in different modalities. We first propose a compound regularization framework to address the pairwise constraint, which can be used as a general platform for developing cross-modal algorithms. For unsupervised learning, we propose a multi-modal subspace clustering method to learn a common structure for different modalities. For supervised learning, to reduce the semantic gap and the outliers in pairwise constraints, we propose a cross-modal matching method based on compound ℓ21 regularization. Extensive experiments demonstrate the benefits of joint text and image modeling with semantically induced pairwise constraints, and they show that the proposed cross-modal methods can further reduce the semantic gap between different modalities and improve the clustering/matching accuracy.",
"title": ""
},
{
"docid": "1705d7b788adc7f7f02fc2a8ffa2fb46",
"text": "Early school withdrawal, commonly referred to as dropout, is associated with a plethora of negative outcomes for students, schools, and society. Student engagement, however, presents as a promising theoretical model and cornerstone of school completion interventions. The purpose of the present study was to validate the Student Engagement Instrument-Elementary Version (SEI-E). The psychometric properties of this measure were assessed based on the responses of an ethnically diverse sample of 1,943 students from an urban locale. Exploratory and confirmatory factor analyses indicated that the 4-factor model of student engagement provided the best fit for the current data, which is divergent from previous SEI studies suggesting 5- and 6-factor models. Discussion and implications of these findings are presented in the context of student engagement and dropout prevention.",
"title": ""
},
{
"docid": "a0f46c67118b2efec2bce2ecd96d11d6",
"text": "This paper describes the implementation of a service to identify and geo-locate real world events that may be present as social activity signals in two different social networks. Specifically, we focus on content shared by users on Twitter and Instagram in order to design a system capable of fusing data across multiple networks. Past work has demonstrated that it is indeed possible to detect physical events using various social network platforms. However, many of these signals need corroboration in order to handle events that lack proper support within a single network. We leverage this insight to design an unsupervised approach that can correlate event signals across multiple social networks. Our algorithm can detect events and identify the location of the event occurrence. We evaluate our algorithm using both simulations and real world datasets collected using Twitter and Instagram. The results indicate that our algorithm significantly improves false positive elimination and attains high precision compared to baseline methods on real world datasets.",
"title": ""
},
{
"docid": "62905338d0cbdd0e5ed4b45ebe885193",
"text": "We used functional magnetic resonance imaging to investigate neural processes when music gains reward value the first time it is heard. The degree of activity in the mesolimbic striatal regions, especially the nucleus accumbens, during music listening was the best predictor of the amount listeners were willing to spend on previously unheard music in an auction paradigm. Importantly, the auditory cortices, amygdala, and ventromedial prefrontal regions showed increased activity during listening conditions requiring valuation, but did not predict reward value, which was instead predicted by increasing functional connectivity of these regions with the nucleus accumbens as the reward value increased. Thus, aesthetic rewards arise from the interaction between mesolimbic reward circuitry and cortical networks involved in perceptual analysis and valuation.",
"title": ""
},
{
"docid": "fdaf6fd66dd44b8c2c0f21d19781d9e1",
"text": "Continuous evolution in process and in equipment in arc welding improves compatibility in energy domain. It helps optimize resources such as energy, materials, labor etc, and yield desired productivity and improved quality engineering goals. Modern inverter technology provides multitude of benefits virtually to each entity associated with the process. Efficiency, settling time of output variables and compactness and light weight are major pre-requisites of arc welding equipment. They are all achieved if switching frequency of inverter is kept high. Generally, higher switching frequency is achieved through soft switching. This paper elaborates that input-material-sensitive hard-switched topology is functionally superior to soft-switched topologies and generate superior design for manufacturability issues. It achieves power density ideal for welding power range.",
"title": ""
},
{
"docid": "8750fc51d19bbf0cbae2830638f492fd",
"text": "Smartphones are increasingly becoming an ordinary part of our daily lives. With their remarkable capacity, applications used in these devices are extremely varied. In terms of language teaching, the use of these applications has opened new windows of opportunity, innovatively shaping the way instructors teach and students learn. This 4 week-long study aimed to investigate the effectiveness of a mobile application on teaching 40 figurative idioms from the Michigan Corpus of Academic Spoken English (MICASE) corpus compared to traditional activities. Quasi-experimental research design with pretest and posttest was employed to determine the differences between the scores of the control (n=25) and the experimental group (n=25) formed with convenience sampling. Results indicate that participants in the experimental group performed significantly better in the posttest, demonstrating the effectiveness of the mobile application used in this study on learning idioms. The study also provides recommendations towards the use of mobile applications in teaching vocabulary.",
"title": ""
}
] |
scidocsrr
|
7eb1e698c5e83b70ad71a3ae466cdf8e
|
Multi-Objective Model Selection via Racing
|
[
{
"docid": "e494f926c9b2866d2c74032d200e4d0a",
"text": "This chapter describes a new algorithm for training Support Vector Machines: Sequential Minimal Optimization, or SMO. Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because large matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while a standard projected conjugate gradient (PCG) chunking algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. For the MNIST database, SMO is as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be more than 1000 times faster than the PCG chunking algorithm.",
"title": ""
}
] |
[
{
"docid": "76ce7807d5afcb5fb5e1d4bf65d01489",
"text": "Tile antiradical activities of various antioxidants were determined using the free radical, 2.2-Diphenyl-l-pict3,1hydrazyl (DPPI-I°). In its radical form, DPPI-I ° has an absorption band at 515 nm which disappears upon reduction by an antiradical compound. Twenty compounds were reacted with the DPPI-I ° and shown to follow one of three possible reaction kinetic types. Ascorbie acid, isoascorbic acid and isoeugenol reacted quickly with the DPPI-I ° reaching a steady state immediately. Rosmarinic acid and 6-tocopherol reacted a little slower and reached a steady state within 30 rain. The remaining compounds reacted more progressively with the DPPH ° reaching a steady state from I to 6 h. Caffeic acid, gentisic acid and gallic acid showed the highest antiradical activities with a stoichiometo, of 4 to 6 reduced DPPH ° molecules pet\" molecule of antioxidant. Vanillin, phenol, y-resort3'lic acid and vanillic acid were found to be poor antiradical compounds. The stoichiometry, for the other 13 phenolic compounds varied from one to three reduced DPPH ° molecules pet\" molecule of antioxidant. Possible mechanisms are proposed to explain the e.werimental results.",
"title": ""
},
{
"docid": "b73f4816e11353d1f7cbf8862dd90de3",
"text": "We propose using relaxed deep supervision (RDS) within convolutional neural networks for edge detection. The conventional deep supervision utilizes the general groundtruth to guide intermediate predictions. Instead, we build hierarchical supervisory signals with additional relaxed labels to consider the diversities in deep neural networks. We begin by capturing the relaxed labels from simple detectors (e.g. Canny). Then we merge them with the general groundtruth to generate the RDS. Finally we employ the RDS to supervise the edge network following a coarse-to-fine paradigm. These relaxed labels can be seen as some false positives that are difficult to be classified. Weconsider these false positives in the supervision, and are able to achieve high performance for better edge detection. Wecompensate for the lack of training images by capturing coarse edge annotations from a large dataset of image segmentations to pretrain the model. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on the well-known BSDS500 dataset (ODS F-score of .792) and obtains superior cross-dataset generalization results on NYUD dataset.",
"title": ""
},
{
"docid": "3b2607bda35e535c2c4410e4c2b21a4f",
"text": "There has been recent interest in designing systems that use the tongue as an input interface. Prior work however either require surgical procedures or in-mouth sensor placements. In this paper, we introduce TongueSee, a non-intrusive tongue machine interface that can recognize a rich set of tongue gestures using electromyography (EMG) signals from the surface of the skin. We demonstrate the feasibility and robustness of TongueSee with experimental studies to classify six tongue gestures across eight participants. TongueSee achieves a classification accuracy of 94.17% and a false positive probability of 0.000358 per second using three-protrusion preamble design.",
"title": ""
},
{
"docid": "c2ed9f4fa8059b70387505225d5d7c21",
"text": "Accurate positioning systems can be realized via ultra-wideband signals due to their high time resolution. In this article, position estimation is studied for UWB systems. After a brief introduction to UWB signals and their positioning applications, two-step positioning systems are investigated from a UWB perspective. It is observed that time-based positioning is well suited for UWB systems. Then time-based UWB ranging is studied in detail, and the main challenges, theoretical limits, and range estimation algorithms are presented. Performance of some practical time-based ranging algorithms is investigated and compared against the maximum likelihood estimator and the theoretical limits. The trade-off between complexity and accuracy is observed.",
"title": ""
},
{
"docid": "a23f733b0261099012ddaf4ea12e9e8e",
"text": "This study analyzes the political agenda of the European Parliament (EP) plenary, how it has evolved over time, and the manner in which Members of the European Parliament (MEPs) have reacted to external and internal stimuli when making plenary speeches. To unveil the plenary agenda and detect latent themes in legislative speeches over time, MEP speech content is analyzed using a new dynamic topic modeling method based on two layers of Non-negative Matrix Factorization (NMF). This method is applied to a new corpus of all English language legislative speeches in the EP plenary from the period 1999-2014. Our findings suggest that two-layer NMF is a valuable alternative to existing dynamic topic modeling approaches found in the literature, and can unveil niche topics and associated vocabularies not captured by existing methods. Substantively, our findings suggest that the political agenda of the EP evolves significantly over time and reacts to exogenous events such as EU Treaty referenda and the emergence of the Euro-crisis. MEP contributions to the plenary agenda are also found to be impacted upon by voting behaviour and the committee structure of the Parliament. ∗Insight Centre for Data Analytics & School of Computer Science, University College Dublin, Ireland (derek.greene@ucd.ie) †School of Politics & International Relations, University College Dublin, Ireland (james.cross@ucd.ie).",
"title": ""
},
{
"docid": "61a50d735f6cc037f8c383fc29365f9a",
"text": "Traffic sign detection is a technology by which a vehicle is able to recognize the different traffic signs located on the road and used to regulate the traffic. Traffic signs are detected by analyzes color information contained on the images having capability of detection and recognition of traffic signs even with bad visual artifacts those originate from different conditions. The feature based method is intended for traffic sign detection, in this method two sets of features are to be detected in the reference and sensed images, identifying key points in the images and match among those points to find the similarity, the SURF descriptor is used for key points and point matching. After detecting the shape of the traffic sign the optical character recognition (OCR) method is used to recognize the character present in the detected shape. A technique, based on Maximally Stable Extremal Regions (MSER) region and canny edge detector is supervised for character recognition in traffic sign detection.",
"title": ""
},
{
"docid": "203e785e24430d4b0c9c1c1b13d2a254",
"text": "The impact of cardiovascular disease was compared in non-diabetics and diabetics in the Framingham cohort. In the first 20 years of the study about 6% of the women and 8% of the men were diagnosed as diabetics. The incidence of cardiovascular disease among diabetic men was twice that among non-diabetic men. Among diabetic women the incidence of cardiovascular disease was three times that among non-diabetic women. Judging from a comparison of standardized coefficients for the regression of incidence of cardiovascular disease on specified risk factors, there is no indication that the relationship of risk factors to the subsequent development of cardiovascular disease is different for diabetics and non-diabetics. This study suggests that the role of diabetes as a cardiovascular risk factor does not derive from an altered ability to contend with known risk factors.",
"title": ""
},
{
"docid": "1c6cfa3ca676a8ee8b6ceef6c992312b",
"text": "The paper presents some of the results obtained by studying Petri nets’ capability for modeling and analysis of Supply Chain performances. It is well known that the absence of coordination in Supply Chain management causes the so-called Bullwhip Effect, in which fluctuations in orders increase as they move up the chain. A simple three-stage supply chain with one player at each stage – a retailer, a wholesaler and a manufacturer – is considered. The model of the chain is developed using a timed, hierarchical coloured Petri Net. Simulation and performance analysis have been performed applying software package CPN Tools.",
"title": ""
},
{
"docid": "733ddc5a642327364c2bccb6b1258fac",
"text": "Human memory is unquestionably a vital cognitive ability but one that can often be unreliable. External memory aids such as diaries, photos, alarms and calendars are often employed to assist in remembering important events in our past and future. The recent trend for lifelogging, continuously documenting ones life through wearable sensors and cameras, presents a clear opportunity to augment human memory beyond simple reminders and actually improve its capacity to remember. This article surveys work from the fields of computer science and psychology to understand the potential for such augmentation, the technologies necessary for realising this opportunity and to investigate what the possible benefits and ethical pitfalls of using such technology might be.",
"title": ""
},
{
"docid": "87eafc3005bc936c0d6765285295f37e",
"text": "A microbial fuel cell (MFC) is a bioreactor that converts chemical energy in the chemical bonds in organic compounds to electrical energy through catalytic reactions of microorganisms under anaerobic conditions. It has been known for many years that it is possible to generate electricity directly by using bacteria to break down organic substrates. The recent energy crisis has reinvigorated interests in MFCs among academic researchers as a way to generate electric power or hydrogen from biomass without a net carbon emission into the ecosystem. MFCs can also be used in wastewater treatment facilities to break down organic matters. They have also been studied for applications as biosensors such as sensors for biological oxygen demand monitoring. Power output and Coulombic efficiency are significantly affected by the types of microbe in the anodic chamber of an MFC, configuration of the MFC and operating conditions. Currently, real-world applications of MFCs are limited because of their low power density level of several thousand mW/m2. Efforts are being made to improve the performance and reduce the construction and operating costs of MFCs. This article presents a critical review on the recent advances in MFC research with emphases on MFC configurations and performances.",
"title": ""
},
{
"docid": "5abe5696969eca4d19a55e3492af03a8",
"text": "In the era of big data, analyzing and extracting knowledge from large-scale data sets is a very interesting and challenging task. The application of standard data mining tools in such data sets is not straightforward. Hence, a new class of scalable mining method that embraces the huge storage and processing capacity of cloud platforms is required. In this work, we propose a novel distributed partitioning methodology for prototype reduction techniques in nearest neighbor classification. These methods aim at representing original training data sets as a reduced number of instances. Their main purposes are to speed up the classification process and reduce the storage requirements and sensitivity to noise of the nearest neighbor rule. However, the standard prototype reduction methods cannot cope with very large data sets. To overcome this limitation, we develop a MapReduce-based framework to distribute the functioning of these algorithms through a cluster of computing elements, proposing several algorithmic strategies to integrate multiple partial solutions (reduced sets of prototypes) into a single one. The proposed model enables prototype reduction algorithms to be applied over big data classification problems without significant accuracy loss. We test the speeding up capabilities of our model with data sets up to 5.7 millions of Email addresses: triguero@decsai.ugr.es (Isaac Triguero), dperalta@decsai.ugr.es (Daniel Peralta), jaume.bacardit@newcastle.ac.uk (Jaume Bacardit), sglopez@ujaen.es (Salvador Garćıa), herrera@decsai.ugr.es (Francisco Herrera) Preprint submitted to Neurocomputing March 3, 2014 instances. The results show that this model is a suitable tool to enhance the performance of the nearest neighbor classifier with big data.",
"title": ""
},
{
"docid": "6077f587e262eac1280da4c401603b3a",
"text": "The present research examined the effects of directed attention on speed of information transmission in the visual system. Ss judged the temporal order of 2 stimuli while directing attention toward 1 of the stimuli or away from both stimuli. Perception of temporal order was influenced by directed attention: Given equal onset times, the attended stimulus appeared to occur before the unattended stimulus. Direction of attention also influenced the perception of simultaneity. The findings support the notion that attention affects the speed of transmission of information in the visual system. To account for the pattern of temporal order and simultaneity judgments, a model is proposed in which the temporal profile of visual responses is affected by directed attention.",
"title": ""
},
{
"docid": "240bde45a9abbb69c442939d67a11e4f",
"text": "Big data analytics is the journey to turn data into insights for more informed business and operational decisions. As the chemical engineering community is collecting more data (volume) from different sources (variety), this journey becomes more challenging in terms of using the right data and the right tools (analytics) to make the right decisions in real time (velocity). This article highlights recent big data advancements in five industries, including chemicals, energy, semiconductors, pharmaceuticals, and food, and then discusses technical, platform, and culture challenges. To reach the next milestone in multiplying successes to the enterprise level, government, academia, and industry need to collaboratively focus on workforce development and innovation.",
"title": ""
},
{
"docid": "fb5a3c43655886c0387e63cd02fccd50",
"text": "Android is the most widely used smartphone OS with 82.8% market share in 2015 (IDC, 2015). It is therefore the most widely targeted system by malware authors. Researchers rely on dynamic analysis to extract malware behaviors and often use emulators to do so. However, using emulators lead to new issues. Malware may detect emulation and as a result it does not execute the payload to prevent the analysis. Dealing with virtual device evasion is a never-ending war and comes with a non-negligible computation cost (Lindorfer et al., 2014). To overcome this state of affairs, we propose a system that does not use virtual devices for analysing malware behavior. Glassbox is a functional prototype for the dynamic analysis of malware applications. It executes applications on real devices in a monitored and controlled environment. It is a fully automated system that installs, tests and extracts features from the application for further analysis. We present the architecture of the platform and we compare it with existing Android dynamic analysis platforms. Lastly, we evaluate the capacity of Glassbox to trigger application behaviors by measuring the average coverage of basic blocks on the AndroCoverage dataset (AndroCoverage, 2016). We show that it executes on average 13.52% more basic blocks than the Monkey program.",
"title": ""
},
{
"docid": "a6fd8b8506a933a7cc0530c6ccda03a8",
"text": "Native ecosystems are continuously being transformed mostly into agricultural lands. Simultaneously, a large proportion of fields are abandoned after some years of use. Without any intervention, altered landscapes usually show a slow reversion to native ecosystems, or to novel ecosystems. One of the main barriers to vegetation regeneration is poor propagule supply. Many restoration programs have already implemented the use of artificial perches in order to increase seed availability in open areas where bird dispersal is limited by the lack of trees. To evaluate the effectiveness of this practice, we performed a series of meta-analyses comparing the use of artificial perches versus control sites without perches. We found that setting-up artificial perches increases the abundance and richness of seeds that arrive in altered areas surrounding native ecosystems. Moreover, density of seedlings is also higher in open areas with artificial perches than in control sites without perches. Taken together, our results support the use of artificial perches to overcome the problem of poor seed availability in degraded fields, promoting and/or accelerating the restoration of vegetation in concordance with the surrounding landscape.",
"title": ""
},
{
"docid": "ff6a487e49d1fed033ad082ad7cd0524",
"text": "We present a novel technique for shadow removal based on an information theoretic approach to intrinsic image analysis. Our key observation is that any illumination change in the scene tends to increase the entropy of observed texture intensities. Similarly, the presence of texture in the scene increases the entropy of the illumination function. Consequently, we formulate the separation of an image into texture and illumination components as minimization of entropies of each component. We employ a non-parametric kernel-based quadratic entropy formulation, and present an efficient multi-scale iterative optimization algorithm for minimization of the resulting energy functional. Our technique may be employed either fully automatically, using a proposed learning based method for automatic initialization, or alternatively with small amount of user interaction. As we demonstrate, our method is particularly suitable for aerial images, which consist of either distinctive texture patterns, e.g. building facades, or soft shadows with large diffuse regions, e.g. cloud shadows.",
"title": ""
},
{
"docid": "8a30f829e308cb75164d1a076fa99390",
"text": "This paper proposes a planning method based on forward path generation and backward tracking algorithm for Automatic Parking Systems, especially suitable for backward parking situations. The algorithm is based on the steering property that backward moving trajectory coincides with the forward moving trajectory for the identical steering angle. The basic path planning is divided into two segments: a collision-free locating segment and an entering segment that considers the continuous steering angles for connecting the two paths. MATLAB simulations were conducted, along with experiments involving parallel and perpendicular situations.",
"title": ""
},
{
"docid": "bba4d637cf40e81ea89e61e875d3c425",
"text": "Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge for matching UAV and aerial images is in the large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method is comprised of a dense feature detection scheme, a one-to-many matching strategy and a global geometric verification scheme. The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. Those matches can be used to geo-register the whole UAV image block towards the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also comparable global accuracy as the reference images.",
"title": ""
},
{
"docid": "36ad496263674c6f0f8d250d73b230fe",
"text": "We rst review how wavelets may be used for multi-resolution image processing, describing the lter-bank implementation of the discrete wavelet transform (dwt) and how it may be extended via separable ltering for processing images and other multi-dimensional signals. We then show that the condition for inversion of the dwt (perfect reconstruction) forces many commonly used wavelets to be similar in shape, and that this shape produces severe shift variance (variation of dwt coeecient energy at any given scale with shift of the input signal). It is also shown that separable ltering with the dwt prevents the transform from providing directionally selective lters for diagonal image features. Complex wavelets can provide both shift invariance and good directional se-lectivity, with only modest increases in signal redundancy and computation load. However development of a complex wavelet transform (cwt) with perfect reconstruction and good lter characteristics has proved diicult until recently. We now propose the dual-tree cwt as a solution to this problem, yielding a transform with attractive properties for a range of signal and image processing applications, including motion estimation, denoising, texture analysis and synthesis, and object segmentation.",
"title": ""
},
{
"docid": "1bf2f9e48a67842412a3b32bb2dd3434",
"text": "Since Paul Broca, the relationship between mind and brain has been the central preoccupation of cognitive neuroscience. In the 19th century, recognition that mental faculties might be understood by observations of individuals with brain damage led to vigorous debates about the properties of mind. By the end of the First World War, neurologists had outlined basic frameworks for the neural organization of language, perception, and motor cognition. Geschwind revived these frameworks in the 1960s and by the 1980s, lesion studies had incorporated methods from experimental psychology, models from cognitive science, formalities from computational approaches, and early developments in structural brain imaging. Around the same time, functional neuroimaging entered the scene. Early xenon probes evolved to the present-day wonders of BOLD and perfusion imaging. In a quick two decades, driven by these technical advances, centers for cognitive neuroscience now dot the landscape, journals such as this one are thriving, and the annual meeting of the Society for Cognitive Neuroscience is overflowing. In these heady times, a group of young cognitive neuroscientists training at a center in which human lesion studies and functional neuroimaging are pursued with similar vigor inquire about the relative impact of these two methods on the field. Fellows and colleagues, in their article titled ‘‘Method matters: An empirical study of impact on cognitive neuroscience,’’ point out that the nature of the evidence derived from the two methods are different. Importantly, they have complementary strengths and weaknesses. A critical difference highlighted in their article is that functional imaging by necessity provides correlational data, whereas lesion studies can support necessity claims for a specific brain region in a particular function. The authors hypothesize that despite the obvious growth of functional imaging in the last decade or so, lesion studies would have a disproportionate impact on cognitive neuroscience because they offer the possibility of establishing a causal role for structure in behavior in a way that is difficult to establish using functional imaging. The authors did not confirm this hypothesis. Using bibliometric methods, they found that functional imaging studies were cited three times as often as lesion studies, in large part because imaging studies were more likely to be published in high-impact journals. Given the complementary nature of the evidence from both methods, they anticipated extensive cross-method references. However, they found a within-method bias to citations generally, and, furthermore, functional imaging articles cited lesion studies considerably less often than the converse. To confirm the trends indicated by Fellows and colleagues, I looked at the distribution of cognitive neuroscience methods in the abstracts accepted for the 2005 Annual Meeting of the Cognitive Neuroscience Society (see Figure 1). Imaging studies composed over a third of all abstracts, followed by electrophysiological studies, the bulk of which were event-related potential (ERP) and magnetoencephalogram (MEG) studies. Studies that used patient populations composed 16% of the abstracts. The patient studies were almost evenly split between those focused on understanding a disease (47%), such as autism or schizophrenia, and those in which structure–function relationships were a consideration (53%). 
These observations do not speak of the final impact of these studies, but they do point out the relative lack of patient-based studies, particularly those addressing basic cognitive neuroscience questions. Fellows and colleagues pose the following question: Despite the greater ‘‘in-principle’’ inferential strength of lesion than functional imaging studies, why in practice do they have less impact on the field? They suggest that sociologic and practical considerations, rather than scientific merit, might be at play. Here, I offer my speculations on the factors that contribute to the relative impact of these methods. These speculations are not intended to be comprehensive. Rather they are intended to begin conversations in response to the question posed by Fellows and colleagues. In my view, the disproportionate impact of functional imaging compared to lesion studies is driven by three factors: the appeal of novelty and technology, by ease of access to neural data, and, in a subtle way, to the pragmatics of hypothesis testing. First, novelty is intrinsically appealing. As a clinician, I often encounter patients requesting the latest medications, even when they are more expensive and not demonstrably better than older ones. As scions of the enlightenment, many of us believe in progress, and that things newer are generally things better. Lesion studies have been around for a century and a half. Any advances made now are likely to be incremental. By contrast, functional imaging is truly a new way to examine the University of Pennsylvania",
"title": ""
}
] |
scidocsrr
|
ac4d2a8d5dc71c2e4efd8ff6c53750be
|
Melody Extraction From Polyphonic Music Signals Using Pitch Contour Characteristics
|
[
{
"docid": "e8933b0afcd695e492d5ddd9f87aeb81",
"text": "This article proposes a method for the automatic transcription of the melody, bass line, and chords in polyphonic pop music. The method uses a frame-wise pitch-salience estimator as a feature extraction front-end. For the melody and bass-line transcription, this is followed by acoustic modeling of note events and musicological modeling of note transitions. The acoustic models include a model for the target notes (i.e., melody or bass notes) and a background model. The musicological model involves key estimation and note bigrams that determine probabilities for transitions between target notes. A transcription of the melody or the bass line is obtained using Viterbi search via the target and the background note models. The performance of the melody and the bass-line transcription is evaluated using approximately 8.5 hours of realistic polyphonic music. The chord transcription maps the pitch salience estimates to a pitch-class representation and uses trained chord models and chord-transition probabilities to produce a transcription consisting of major and minor triads. For chords, the evaluation material consists of the first eight Beatles albums. The method is computationally efficient and allows causal implementation, so it can process streaming audio. Transcription of music refers to the analysis of an acoustic music signal for producing a parametric representation of the signal. The representation may be a music score with a meticulous arrangement for each instrument or an approximate description of melody and chords in the piece, for example. The latter type of transcription is commonly used in commercial songbooks of pop music and is usually sufficient for musicians or music hobbyists to play the piece. On the other hand, more detailed transcriptions are often employed in classical music to preserve the exact arrangement of the composer.",
"title": ""
},
{
"docid": "b1422b2646f02a5a84a6a4b13f5ae7d8",
"text": "Two experiments examined the influence of timbre on auditory stream segregation. In experiment 1, listeners heard sequences of orchestral tones equated for pitch and loudness, and they rated how strongly the instruments segregated. Multidimensional scaling analyses of these ratings revealed that segregation was based on the static and dynamic acoustic attributes that influenced similarity judgements in a previous experiment (P Iverson & CL Krumhansl, 1993). In Experiment 2, listeners heard interleaved melodies and tried to recognize the melodies played by a target timbre. The results extended the findings of Experiment 1 to tones varying pitch. Auditory stream segregation appears to be influenced by gross differences in static spectra and by dynamic attributes, including attack duration and spectral flux. These findings support a gestalt explanation of stream segregation and provide evidence against peripheral channel model.",
"title": ""
}
] |
[
{
"docid": "bf524461eb7eec362103452ed7c7f552",
"text": "Over many years Jacques Mehler has provided us all with a wealth of surprising and complex results on both nature and nurture in language acquisition. He has shown that there are powerful and enduring effects of early (and even prenatal) experience on infant language perception, and also considerable prior knowledge that infants bring to the language acquisition task. He has shown strong age effects on second-language acquisition and its neural organization, and has also shown that profi ciency predicts cerebral organization for the second language. In his honor, we focus here on one of the problems he has addressed-the no tion of a critical period for language acquisition-and attempt to sort out the current state of the evidence. In recent years there has been much discussion about whether there is a critical, or sensitive, period for language acquisition. Two issues are implicit in this discussion: First, what would constitute evidence for a critical period, particularly in humans, where the time scale for develop ment is greater than that in the well-studied nonhuman cases, and where proficient behavioral outcomes might be achieved by more than one route? Second, what is the import of establishing, or failing to establish, such a critical period? What does this mean for our understanding of the computational and neural mechanisms underLying language acquisition? In this chapter we address these issues explicitly, by briefly reviewing the available evidence on a critical period for human language acquisi tion, and then by asking whether the evidence meets the expected criteria for critical or sensitive periods seen in other well-studied domains in hu man and nonhuman development. We conclude by stating what we think 482 Neuport, Bave/ier & Neville the outcome of this issue means (and does not mean) for our understand ing of language acquisition. What Is a Critical or Sensitive Period? Before beginning, we should state briefly what we (and others) mean by a critical or sensitive period. A critical or sensitive period for learning is shown when there is a relationship between the age (more technically, the developmental state of the organism) at which some crucial experience is presented to the organism and the amount of learning which results. ill most domains with critical or sensitive periods, the privileged time for learning occurs during early development, but this is not necessarily the case (ef. bonding in sheep, which occurs immediately surrounding partu rition). The important feature is …",
"title": ""
},
{
"docid": "612271aa8848349735422395a91ffe7b",
"text": "The contamination of groundwater by heavy metal, originating either from natural soil sources or from anthropogenic sources is a matter of utmost concern to the public health. Remediation of contaminated groundwater is of highest priority since billions of people all over the world use it for drinking purpose. In this paper, thirty five approaches for groundwater treatment have been reviewed and classified under three large categories viz chemical, biochemical/biological/biosorption and physico-chemical treatment processes. Comparison tables have been provided at the end of each process for a better understanding of each category. Selection of a suitable technology for contamination remediation at a particular site is one of the most challenging job due to extremely complex soil chemistry and aquifer characteristics and no thumb-rule can be suggested regarding this issue. In the past decade, iron based technologies, microbial remediation, biological sulphate reduction and various adsorbents played versatile and efficient remediation roles. Keeping the sustainability issues and environmental ethics in mind, the technologies encompassing natural chemistry, bioremediation and biosorption are recommended to be adopted in appropriate cases. In many places, two or more techniques can work synergistically for better results. Processes such as chelate extraction and chemical soil washings are advisable only for recovery of valuable metals in highly contaminated industrial sites depending on economical feasibility.",
"title": ""
},
{
"docid": "c7808ecbca4c5bf8e8093dce4d8f1ea7",
"text": "41 Abstract— This project deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80–100-mm pipelines in an indoor pipeline environment. The robot system consists of a Robot body, a control system, a CMOS camera, an accelerometer, a temperature sensor, a ZigBee module. The robot module will be designed with the help of CAD tool. The control system consists of Atmega16 micro controller and Atmel studio IDE. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to have grip of the pipe walls. Unique features of this robot are the caterpillar wheel, the four-bar mechanism supports the well grip of wall, a simple and easy user interface.",
"title": ""
},
{
"docid": "00669cc35f09b699e08fa7c8cc3701c8",
"text": "Want to get experience? Want to get any ideas to create new things in your life? Read interpolation of spatial data some theory for kriging now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.",
"title": ""
},
{
"docid": "61def8d760de928a8cae89f2699c51cf",
"text": "OBJECTIVES\nTo describe the development and validation of a cancer awareness questionnaire (CAQ) based on a literature review of previous studies, focusing on cancer awareness and prevention.\n\n\nMATERIALS AND METHODS\nA total of 388 Chinese undergraduate students in a private university in Kuala Lumpur, Malaysia, were recruited to evaluate the developed self-administered questionnaire. The CAQ consisted of four sections: awareness of cancer warning signs and screening tests; knowledge of cancer risk factors; barriers in seeking medical advice; and attitudes towards cancer and cancer prevention. The questionnaire was evaluated for construct validity using principal component analysis and internal consistency using Cronbach's alpha (α) coefficient. Test-retest reliability was assessed with a 10-14 days interval and measured using Pearson product-moment correlation.\n\n\nRESULTS\nThe initial 77-item CAQ was reduced to 63 items, with satisfactory construct validity, and a high total internal consistency (Cronbach's α=0.77). A total of 143 students completed the questionnaire for the test-retest reliability obtaining a correlation of 0.72 (p<0.001) overall.\n\n\nCONCLUSIONS\nThe CAQ could provide a reliable and valid measure that can be used to assess cancer awareness among local Chinese undergraduate students. However, further studies among students from different backgrounds (e.g. ethnicity) are required in order to facilitate the use of the cancer awareness questionnaire among all university students.",
"title": ""
},
{
"docid": "9d580c5b482a039b773d58714ee18ebb",
"text": "We develop a recurrent reinforcement learning (RRL) system that directly induces portfolio management policies from time series of asset pri ces and indicators, while accounting for transaction costs. The RRL approach le arns a direct mapping from indicator series to portfolio weights, bypassing the need to explicitly model the time series of price returns. The resulting polici es dynamically optimize the portfolio Sharpe ratio, while incorporating changing c onditions and transaction costs. A key problem with many portfolio optimization m ethods, including Markowitz, is discovering ”corner solutions” with weight c oncentrated on just a few assets. In a dynamic context, naive portfolio algorithm s can exhibit switching behavior, particularly when transaction costs are ignored . In this work, we extend the RRL approach to produce better diversified portfoli os and smoother asset allocations over time. The solutions we propose are to inclu de realistic transaction costs and to shrink portfolio weights toward the prior p tfolio. The methods are assessed on a global asset allocation problem consistin g of the Pacific, North America and Europe MSCI International Equity Indices.",
"title": ""
},
{
"docid": "09af9b0987537e54b7456fb36407ffe3",
"text": "The introduction of high-speed backplane transceivers inside FPGAs has addressed critical issues such as the ease in scalability of performance, high availability, flexible architectures, the use of standards, and rapid time to market. These have been crucial to address the ever-increasing demand for bandwidth in communication and storage systems [1-3], requiring novel techniques in receiver (RX) and clocking circuits.",
"title": ""
},
{
"docid": "5cc3ce9628b871d57f086268ae1510e0",
"text": "Unprecedented high volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit planning and operation of the future power system, and to help the customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using Deep Reinforcement Learning, a hybrid type of methods that combines Reinforcement Learning with Deep Learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, Deep Q-learning and Deep Policy Gradient, both of them being extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This highly-dimensional database includes information about photovoltaic power generation, electric vehicles as well as buildings appliances. Moreover, these on-line energy scheduling strategies could be used to provide realtime feedback to consumers to encourage more efficient use of electricity.",
"title": ""
},
{
"docid": "6c6afdefc918e6dfdb6bc5f5bb96cf45",
"text": "Due to the complexity and uncertainty of socioeconomic environments and cognitive diversity of group members, the cognitive information over alternatives provided by a decision organization consisting of several experts is usually uncertain and hesitant. Hesitant fuzzy preference relations provide a useful means to represent the hesitant cognitions of the decision organization over alternatives, which describe the possible degrees that one alternative is preferred to another by using a set of discrete values. However, in order to depict the cognitions over alternatives more comprehensively, besides the degrees that one alternative is preferred to another, the decision organization would give the degrees that the alternative is non-preferred to another, which may be a set of possible values. To effectively handle such common cases, in this paper, the dual hesitant fuzzy preference relation (DHFPR) is introduced and the methods for group decision making (GDM) with DHFPRs are investigated. Firstly, a new operator to aggregate dual hesitant fuzzy cognitive information is developed, which treats the membership and non-membership information fairly, and can generate more neutral results than the existing dual hesitant fuzzy aggregation operators. Since compatibility is a very effective tool to measure the consensus in GDM with preference relations, then two compatibility measures for DHFPRs are proposed. After that, the developed aggregation operator and compatibility measures are applied to GDM with DHFPRs and two GDM methods are designed, which can be applied to different decision making situations. Each GDM method involves a consensus improving model with respect to DHFPRs. The model in the first method reaches the desired consensus level by adjusting the group members’ preference values, and the model in the second method improves the group consensus level by modifying the weights of group members according to their contributions to the group decision, which maintains the group members’ original opinions and allows the group members not to compromise for reaching the desired consensus level. In actual applications, we may choose a proper method to solve the GDM problems with DHFPRs in light of the actual situation. Compared with the GDM methods with IVIFPRs, the proposed methods directly apply the original DHFPRs to decision making and do not need to transform them into the IVIFPRs, which can avoid the loss and distortion of original information, and thus can generate more precise decision results.",
"title": ""
},
{
"docid": "2ebf4b32598ba3cd74513f1bab8fe447",
"text": "Anti-N-methyl-D-aspartate receptor (NMDAR) encephalitis is an autoimmune disorder of the central nervous system (CNS). Its immunopathogenesis has been proposed to include early cerebrospinal fluid (CSF) lymphocytosis, subsequent CNS disease restriction and B cell mechanism predominance. There are limited data regarding T cell involvement in the disease. To contribute to the current knowledge, we investigated the complex system of chemokines and cytokines related to B and T cell functions in CSF and sera samples from anti-NMDAR encephalitis patients at different time-points of the disease. One patient in our study group had a long-persisting coma and underwent extraordinary immunosuppressive therapy. Twenty-seven paired CSF/serum samples were collected from nine patients during the follow-up period (median 12 months, range 1–26 months). The patient samples were stratified into three periods after the onset of the first disease symptom and compared with the controls. Modified Rankin score (mRS) defined the clinical status. The concentrations of the chemokines (C-X-C motif ligand (CXCL)10, CXCL8 and C-C motif ligand 2 (CCL2)) and the cytokines (interferon (IFN)γ, interleukin (IL)4, IL7, IL15, IL17A and tumour necrosis factor (TNF)α) were measured with Luminex multiple bead technology. The B cell-activating factor (BAFF) and CXCL13 concentrations were determined via enzyme-linked immunosorbent assay. We correlated the disease period with the mRS, pleocytosis and the levels of all of the investigated chemokines and cytokines. Non-parametric tests were used, a P value <0.05 was considered to be significant. The increased CXCL10 and CXCL13 CSF levels accompanied early-stage disease progression and pleocytosis. The CSF CXCL10 and CXCL13 levels were the highest in the most complicated patient. The CSF BAFF levels remained unchanged through the periods. In contrast, the CSF levels of T cell-related cytokines (INFγ, TNFα and IL17A) and IL15 were slightly increased at all of the periods examined. No dynamic changes in chemokine and cytokine levels were observed in the peripheral blood. Our data support the hypothesis that anti-NMDAR encephalitis is restricted to the CNS and that chemoattraction of immune cells dominates at its early stage. Furthermore, our findings raise the question of whether T cells are involved in this disease.",
"title": ""
},
{
"docid": "2baad8633f9a76199f205a7560fed30c",
"text": "Mobile Cloud Computing (MCC) has revolutionized the way in which mobile subscribers across the globe leverage services on the go. The mobile devices have evolved from mere devices that enabled voice calls only a few years back to smart devices that enable the user to access value added services anytime, anywhere. MCC integrates cloud computing into the mobile environment and overcomes obstacles related to performance (e.g. battery life, storage, and bandwidth), environment (e.g. heterogeneity, scalability, availability) and security (e.g. reliability and privacy).",
"title": ""
},
{
"docid": "9006586ffd85d5c2fb7611b3b0332519",
"text": "Systematic compositionality is the ability to recombine meaningful units with regular and predictable outcomes, and it’s seen as key to the human capacity for generalization in language. Recent work (Lake and Baroni, 2018) has studied systematic compositionality in modern seq2seq models using generalization to novel navigation instructions in a grounded environment as a probing tool. Lake and Baroni’s main experiment required the models to quickly bootstrap the meaning of new words. We extend this framework here to settings where the model needs only to recombine well-trained functional words (such as “around” and “right”) in novel contexts. Our findings confirm and strengthen the earlier ones: seq2seq models can be impressively good at generalizing to novel combinations of previously-seen input, but only when they receive extensive training on the specific pattern to be generalized (e.g., generalizing from many examples of “X around right” to “jump around right”), while failing when generalization requires novel application of compositional rules (e.g., inferring the meaning of “around right” from those of “right” and “around”).",
"title": ""
},
{
"docid": "1a66d5b6925bb30e5cadcdd23d43ef97",
"text": "The measurement of enterprise resource planning (ERP) systems success or effectiveness is critical to our understanding of the value and efficacy of ERP investment and managerial actions. Whether traditional information systems success models can be extended to investigating ERP systems success is yet to be investigated. This paper proposes a partial extension and respecification of the DeLone and MacLean model of IS success to ERP systems. The purpose of the present research is to re-examine the updated DeLone and McLean model [W. DeLone, E. McLean, The DeLone McLean model of information system success: a ten-year update, Journal of Management Information Systems 19 (4) (2003) 3–9] of ERP systems success. The updated DeLone and McLean model was applied to collect data from the questionnaires answered by 204 users of ERP systems at three high-tech firms in Taiwan. Finally, this study suggests that system quality, service quality, and information quality are most important successful factors. # 2007 Elsevier B.V. All rights reserved. www.elsevier.com/locate/compind Computers in Industry 58 (2007) 783–793",
"title": ""
},
{
"docid": "7e57c7abcd4bcb79d5f0fe8b6cd9a836",
"text": "Among the many viruses that are known to infect the human liver, hepatitis B virus (HBV) and hepatitis C virus (HCV) are unique because of their prodigious capacity to cause persistent infection, cirrhosis, and liver cancer. HBV and HCV are noncytopathic viruses and, thus, immunologically mediated events play an important role in the pathogenesis and outcome of these infections. The adaptive immune response mediates virtually all of the liver disease associated with viral hepatitis. However, it is becoming increasingly clear that antigen-nonspecific inflammatory cells exacerbate cytotoxic T lymphocyte (CTL)-induced immunopathology and that platelets enhance the accumulation of CTLs in the liver. Chronic hepatitis is characterized by an inefficient T cell response unable to completely clear HBV or HCV from the liver, which consequently sustains continuous cycles of low-level cell destruction. Over long periods of time, recurrent immune-mediated liver damage contributes to the development of cirrhosis and hepatocellular carcinoma.",
"title": ""
},
{
"docid": "1b8e90d78ca21fcaa5cca628cba4111a",
"text": "The Rutgers Master II-ND glove is a haptic interface designed for dextrous interactions with virtual environments. The glove provides force feedback up to 16 N each to the thumb, index, middle, and ring fingertips. It uses custom pneumatic actuators arranged in a direct-drive configuration in the palm. Unlike commercial haptic gloves, the direct-drive actuators make unnecessary cables and pulleys, resulting in a more compact and lighter structure. The force-feedback structure also serves as position measuring exoskeleton, by integrating noncontact Hall-effect and infrared sensors. The glove is connected to a haptic-control interface that reads its sensors and servos its actuators. The interface has pneumatic servovalves, signal conditioning electronics, A/D/A boards, power supply and an imbedded Pentium PC. This distributed computing assures much faster control bandwidth than would otherwise be possible. Communication with the host PC is done over an RS232 line. Comparative data with the CyberGrasp commercial haptic glove is presented.",
"title": ""
},
{
"docid": "8bfdf2be75d41df6fe4738231241c1a3",
"text": "In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone. We evaluate the performance of off-the-shelf singlevector and multi-sense vector models on a benchmark phrase similarity task and a novel task for word-sense discrimination. We find that single-sense vector models perform as well or better than multi-sense vector models despite arguably less clean elementary representations. Our findings furthermore show that simple composition functions such as pointwise addition are able to recover sense specific information from a single-sense vector model remark-",
"title": ""
},
{
"docid": "97c9d91709c98cd6dd803ffc9810d88f",
"text": "Relying entirely on an attention mechanism, the Transformer introduced by Vaswani et al. (2017) achieves state-of-the-art results for machine translation. In contrast to recurrent and convolutional neural networks, it does not explicitly model relative or absolute position information in its structure. Instead, it requires adding representations of absolute positions to its inputs. In this work we present an alternative approach, extending the self-attention mechanism to efficiently consider representations of the relative positions, or distances between sequence elements. On the WMT 2014 English-to-German and English-to-French translation tasks, this approach yields improvements of 1.3 BLEU and 0.3 BLEU over absolute position representations, respectively. Notably, we observe that combining relative and absolute position representations yields no further improvement in translation quality. We describe an efficient implementation of our method and cast it as an instance of relation-aware self-attention mechanisms that can generalize to arbitrary graphlabeled inputs.",
"title": ""
},
{
"docid": "8a33040d6464f7792b3eeee1e0760925",
"text": "We live in a data abundance era. Availability of large volume of diverse multimedia data streams (ranging from video, to tweets, to activity, and to PM2.5) can now be used to solve many critical societal problems. Causal modeling across multimedia data streams is essential to reap the potential of this data. However, effective frameworks combining formal abstract approaches with practical computational algorithms for causal inference from such data are needed to utilize available data from diverse sensors. We propose a causal modeling framework that builds on data-driven techniques while emphasizing and including the appropriate human knowledge in causal inference. We show that this formal framework can help in designing a causal model with a systematic approach that facilitates framing sharper scientific questions, incorporating expert's knowledge as causal assumptions, and evaluating the plausibility of these assumptions. We show the applicability of the framework in a an important Asthma management application using meteorological and pollution data streams.",
"title": ""
},
{
"docid": "13cb793ca9cdf926da86bb6fc630800a",
"text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.",
"title": ""
}
] |
scidocsrr
|
73e3e8d381b2ce9015d0c43fd6812524
|
Glassbox: Dynamic Analysis Platform for Malware Android Applications on Real Devices
|
[
{
"docid": "5184c27b7387a0cbedb1c3a393f797fa",
"text": "Emulator-based dynamic analysis has been widely deployed in Android application stores. While it has been proven effective in vetting applications on a large scale, it can be detected and evaded by recent Android malware strains that carry detection heuristics. Using such heuristics, an application can check the presence or contents of certain artifacts and infer the presence of emulators. However, there exists little work that systematically discovers those heuristics that would be eventually helpful to prevent malicious applications from bypassing emulator-based analysis. To cope with this challenge, we propose a framework called Morpheus that automatically generates such heuristics. Morpheus leverages our insight that an effective detection heuristic must exploit discrepancies observable by an application. To this end, Morpheus analyzes the application sandbox and retrieves observable artifacts from both Android emulators and real devices. Afterwards, Morpheus further analyzes the retrieved artifacts to extract and rank detection heuristics. The evaluation of our proof-of-concept implementation of Morpheus reveals more than 10,000 novel detection heuristics that can be utilized to detect existing emulator-based malware analysis tools. We also discuss the discrepancies in Android emulators and potential countermeasures.",
"title": ""
},
{
"docid": "35dda21bd1f2c06a446773b0bfff2dd7",
"text": "Mobile devices and their application marketplaces drive the entire economy of the today’s mobile landscape. Android platforms alone have produced staggering revenues, exceeding five billion USD, which has attracted cybercriminals and increased malware in Android markets at an alarming rate. To better understand this slew of threats, we present CopperDroid, an automatic VMI-based dynamic analysis system to reconstruct the behaviors of Android malware. The novelty of CopperDroid lies in its agnostic approach to identify interesting OSand high-level Android-specific behaviors. It reconstructs these behaviors by observing and dissecting system calls and, therefore, is resistant to the multitude of alterations the Android runtime is subjected to over its life-cycle. CopperDroid automatically and accurately reconstructs events of interest that describe, not only well-known process-OS interactions (e.g., file and process creation), but also complex intraand inter-process communications (e.g., SMS reception), whose semantics are typically contextualized through complex Android objects. Because CopperDroid’s reconstruction mechanisms are agnostic to the underlying action invocation methods, it is able to capture actions initiated both from Java and native code execution. CopperDroid’s analysis generates detailed behavioral profiles that abstract a large stream of low-level—often uninteresting—events into concise, high-level semantics, which are well-suited to provide insightful behavioral traits and open the possibility to further research directions. We carried out an extensive evaluation to assess the capabilities and performance of CopperDroid on more than 2,900 Android malware samples. Our experiments show that CopperDroid faithfully reconstructs OSand Android-specific behaviors. Additionally, we demonstrate how CopperDroid can be leveraged to disclose additional behaviors through the use of a simple, yet effective, app stimulation technique. Using this technique, we successfully triggered and disclosed additional behaviors on more than 60% of the analyzed malware samples. This qualitatively demonstrates the versatility of CopperDroid’s ability to improve dynamic-based code coverage.",
"title": ""
}
] |
[
{
"docid": "fdbf20917751369d7ffed07ecedc9722",
"text": "In order to evaluate the effect of static magnetic field (SMF) on morphological and physiological responses of soybean to water stress, plants were grown under well-watered (WW) and water-stress (WS) conditions. The adverse effects of WS given at different growth stages was found on growth, yield, and various physiological attributes, but WS at the flowering stage severely decreased all of above parameters in soybean. The result indicated that SMF pretreatment to the seeds significantly increased the plant growth attributes, biomass accumulation, and photosynthetic performance under both WW and WS conditions. Chlorophyll a fluorescence transient from SMF-treated plants gave a higher fluorescence yield at J–I–P phase. Photosynthetic pigments, efficiency of PSII, performance index based on absorption of light energy, photosynthesis, and nitrate reductase activity were also higher in plants emerged from SMF-pretreated seeds which resulted in an improved yield of soybean. Thus SMF pretreatment mitigated the adverse effects of water stress in soybean.",
"title": ""
},
{
"docid": "defbecacc15af7684a6f9722349f42e3",
"text": "We present a novel, unsupervised, and distance measure agnostic method for search space reduction in spell correction using neural character embeddings. The embeddings are learned by skip-gram word2vec training on sequences generated from dictionary words in a phonetic informationretentive manner. We report a very high performance in terms of both success rates and reduction of search space on the Birkbeck spelling error corpus. To the best of our knowledge, this is the first application of word2vec to spell correction.",
"title": ""
},
{
"docid": "840999138cfa5714894d3fcd63401c0f",
"text": "Due to the \"curse of dimensionality\" problem, it is very expensive to process the nearest neighbor (NN) query in high-dimensional spaces; and hence, approximate approaches, such as Locality-Sensitive Hashing (LSH), are widely used for their theoretical guarantees and empirical performance. Current LSH-based approaches target at the L1 and L2 spaces, while as shown in previous work, the fractional distance metrics (Lp metrics with 0 < p < 1) can provide more insightful results than the usual L1 and L2 metrics for data mining and multimedia applications. However, none of the existing work can support multiple fractional distance metrics using one index. In this paper, we propose LazyLSH that answers approximate nearest neighbor queries for multiple Lp metrics with theoretical guarantees. Different from previous LSH approaches which need to build one dedicated index for every query space, LazyLSH uses a single base index to support the computations in multiple Lp spaces, significantly reducing the maintenance overhead. Extensive experiments show that LazyLSH provides more accurate results for approximate kNN search under fractional distance metrics.",
"title": ""
},
{
"docid": "8d91b88e9f57181e9c5427b8578bc322",
"text": "AIM\n This paper reports on a study that looked at the characteristics of exemplary nurse leaders in times of change from the perspective of frontline nurses.\n\n\nBACKGROUND\n Large-scale changes in the health care system and their associated challenges have highlighted the need for strong leadership at the front line.\n\n\nMETHODS\n In-depth personal interviews with open-ended questions were the primary means of data collection. The study identified and explored six frontline nurses' perceptions of the qualities of nursing leaders through qualitative content analysis. This study was validated by results from the current literature.\n\n\nRESULTS\n The frontline nurses described several common characteristics of exemplary nurse leaders, including: a passion for nursing; a sense of optimism; the ability to form personal connections with their staff; excellent role modelling and mentorship; and the ability to manage crisis while guided by a set of moral principles. All of these characteristics pervade the current literature regarding frontline nurses' perspectives on nurse leaders.\n\n\nCONCLUSION\n This study identified characteristics of nurse leaders that allowed them to effectively assist and support frontline nurses in the clinical setting.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\n The findings are of significance to leaders in the health care system and in the nursing profession who are in a position to foster development of leaders to mentor and encourage frontline nurses.",
"title": ""
},
{
"docid": "21d65dfd7d864520584cfcdb605ebdb0",
"text": "Statistical debugging aims to automate the process of isolating bugs by profiling several runs of the program and using statistical analysis to pinpoint the likely causes of failure. In this paper, we investigate the impact of using richer program profiles such as path profiles on the effectiveness of bug isolation. We describe a statistical debugging tool called HOLMES that isolates bugs by finding paths that correlate with failure. We also present an adaptive version of HOLMES that uses iterative, bug-directed profiling to lower execution time and space overheads. We evaluate HOLMES using programs from the SIR benchmark suite and some large, real-world applications. Our results indicate that path profiles can help isolate bugs more precisely by providing more information about the context in which bugs occur. Moreover, bug-directed profiling can efficiently isolate bugs with low overheads, providing a scalable and accurate alternative to sparse random sampling.",
"title": ""
},
{
"docid": "a4933829bafd2d1e7c3ae3a9ab50c165",
"text": "Head drop is a symptom commonly seen in patients with amyotrophic lateral sclerosis. These patients usually experience neck pain and have difficulty in swallowing and breathing. Static neck braces are used in current treatment. These braces, however, immobilize the head in a single configuration, which causes muscle atrophy. This letter presents the design of a dynamic neck brace for the first time in the literature, which can both measure and potentially assist in the head motion of the human user. This letter introduces the brace design method and validates its capability to perform measurements. The brace is designed based on kinematics data collected from a healthy individual via a motion capture system. A pilot study was conducted to evaluate the wearability of the brace and the accuracy of measurements with the brace. This study recruited ten participants who performed a series of head motions. The results of this human study indicate that the brace is wearable by individuals who vary in size, the brace allows nearly $70\\%$ of the overall range of head rotations, and the sensors on the brace give accurate motion of the head with an error of under $5^{\\circ }$ when compared to a motion capture system. We believe that this neck brace can be a valid and accurate measurement tool for human head motion. This brace will be a big improvement in the available technologies to measure head motion as these are currently done in the clinic using hand-held protractors in two orthogonal planes.",
"title": ""
},
{
"docid": "70d901bae1e40dc5c585ae1f73c00776",
"text": "Sexual abuse includes any activity with a child, before the age of legal consent, that is for the sexual gratification of an adult or a significantly older child. Sexual mistreatment of children by family members (incest) and nonrelatives known to the child is the most common type of sexual abuse. Intrafamiliar sexual abuse is difficult to document and manage, because the child must be protected from additional abuse and coercion not to reveal or to deny the abuse, while attempts are made to preserve the family unit. The role of a comprehensive forensic medical examination is of major importance in the full investigation of such cases and the building of an effective prosecution in the court. The protection of the sexually abused child from any additional emotional trauma during the physical examination is of great importance. A brief assessment of the developmental, behavioral, mental and emotional status should also be obtained. The physical examination includes inspection of the whole body with special attention to the mouth, breasts, genitals, perineal region, buttocks and anus. The next concern for the doctor is the collection of biologic evidence, provided that the alleged sexual abuse has occurred within the last 72 hours. Cultures and serologic tests for sexually transmitted diseases are decided by the doctor according to the special circumstances of each case. Pregnancy test should also be performed in each case of a girl in reproductive age.",
"title": ""
},
{
"docid": "9403a8cb9c0d0d2a7f7634785b9fdab3",
"text": "Images have become one of the most popular types of media through which users convey their emotions within online social networks. Although vast amount of research is devoted to sentiment analysis of textual data, there has been very limited work that focuses on analyzing sentiment of image data. In this work, we propose a novel visual sentiment prediction framework that performs image understanding with Convolutional Neural Networks (CNN). Specifically, the proposed sentiment prediction framework performs transfer learning from a CNN with millions of parameters, which is pre-trained on large-scale data for object recognition. Experiments conducted on two real-world datasets from Twitter and Tumblr demonstrate the effectiveness of the proposed visual sentiment analysis framework.",
"title": ""
},
{
"docid": "6b6fd5bfbe1745a49ce497490cef949d",
"text": "This paper investigates optimal power allocation strategies over a bank of independent parallel Gaussian wiretap channels where a legitimate transmitter and a legitimate receiver communicate in the presence of an eavesdropper and an unfriendly jammer. In particular, we formulate a zero-sum power allocation game between the transmitter and the jammer where the payoff function is the secrecy rate. We characterize the optimal power allocation strategies as well as the Nash equilibrium in some asymptotic regimes. We also provide a set of results that cast further insight into the problem. Our scenario, which is applicable to current OFDM communications systems, demonstrates that transmitters that adapt to jammer experience much higher secrecy rates than non-adaptive transmitters.",
"title": ""
},
{
"docid": "f70dc802c631c4bda7de2de78217411a",
"text": "Researchers, technology reviewers, and governmental agencies have expressed concern that automation may necessitate the introduction of added displays to indicate vehicle intent in vehicle-to-pedestrian interactions. An automated online methodology for obtaining communication intent perceptions for 30 external vehicle-to-pedestrian display concepts was implemented and tested using Amazon Mechanic Turk. Data from 200 qualified participants was quickly obtained and processed. In addition to producing a useful early-stage evaluation of these specific design concepts, the test demonstrated that the methodology is scalable so that a large number of design elements or minor variations can be assessed through a series of runs even on much larger samples in a matter of hours. Using this approach, designers should be able to refine concepts both more quickly and in more depth than available development resources typically allow. Some concerns and questions about common assumptions related to the implementation of vehicle-to-pedestrian displays are posed.",
"title": ""
},
{
"docid": "3f0a9507d6538827faa5a42e87dc2115",
"text": "Traditional machine learning requires data to be described by attributes prior to applying a learning algorithm. In text classification tasks, many feature engineering methodologies have been proposed to extract meaningful features, however, no best practice approach has emerged. Traditional methods of feature engineering have inherent limitations due to loss of information and the limits of human design. An alternative is to use deep learning to automatically learn features from raw text data. One promising deep learning approach is to use convolutional neural networks. These networks can learn abstract text concepts from character representations and be trained to perform discriminate tasks, such as classification. In this paper, we propose a new approach to encoding text for use with convolutional neural networks that greatly reduces memory requirements and training time for learning from character-level text representations. Additionally, this approach scales well with alphabet size allowing us to preserve more information from the original text, potentially enhancing classification performance. By training tweet sentiment classifiers, we demonstrate that our approach uses less computational resources, allows faster training for networks and achieves similar, or better performance compared to the previous method of character encoding.",
"title": ""
},
{
"docid": "5ebf60a0f113ec60c4f9f3c2089e86cb",
"text": "A rapidly burgeoning literature documents copious sex influences on brain anatomy, chemistry and function. This article highlights some of the more intriguing recent discoveries and their implications. Consideration of the effects of sex can help to explain seemingly contradictory findings. Research into sex influences is mandatory to fully understand a host of brain disorders with sex differences in their incidence and/or nature. The striking quantity and diversity of sex-related influences on brain function indicate that the still widespread assumption that sex influences are negligible cannot be justified, and probably retards progress in our field.",
"title": ""
},
{
"docid": "be43ca444001f766e14dd042c411a34f",
"text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step towards this end by characterizing the operational performance of a tier-1 cellular network in the United States during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 seconds shorter RRC timeouts as compared to routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events; and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.",
"title": ""
},
{
"docid": "27cb4869713ddbd3100fd4ca89002cfb",
"text": "Simulations of Very-low-frequency (VLF) transmitter signals are conducted using three models: the long-wave propagation capability, a finite-difference (FD) time-domain model, and an FD frequency-domain model. The FD models are corrected using Richardson extrapolation to minimize the numerical dispersion inherent in these models. Using identical ionosphere and ground parameters, the three models are shown to agree very well in their simulated VLF signal amplitude and phase, to within 1 dB of amplitude and a few degrees of phase, for a number of different simulation paths and transmitter frequencies. Furthermore, the three models are shown to produce comparable phase changes for the same ionosphere perturbations, again to within a few degrees. Finally, we show that the models reproduce the phase data of existing VLF transmitter–receiver pairs reasonably well, although the nighttime variation in the measured phase data is not captured by the simplified characterization of the ionosphere.",
"title": ""
},
{
"docid": "27b5cf1967c6dc0a91d04565ae5dbf70",
"text": "Crowdsourcing provides a popular paradigm for data collection at scale. We study the problem of selecting subsets of workers from a given worker pool to maximize the accuracy under a budget constraint. One natural question is whether we should hire as many workers as the budget allows, or restrict on a small number of topquality workers. By theoretically analyzing the error rate of a typical setting in crowdsourcing, we frame the worker selection problem into a combinatorial optimization problem and propose an algorithm to solve it efficiently. Empirical results on both simulated and real-world datasets show that our algorithm is able to select a small number of high-quality workers, and performs as good as, sometimes even better than, the much larger crowds as the budget allows.",
"title": ""
},
{
"docid": "fb0fdbdff165a83671dd9373b36caac4",
"text": "In this paper, we propose a system, that automatically transfers human body motion captured from an ordinary video camera to an unknown 3D character mesh. In our system, no manual intervention is required for specifying the internal skeletal structure or defining how the mesh surfaces deform. A sparse graph is generated from the input polygons based on their connectivity and geometric distributions. To estimate articulated body parts in the video, a progressive particle filter is used for identifying correspondences. We anticipate our proposed system can bring animation to a new audience with a more intuitive user interface.",
"title": ""
},
{
"docid": "296dcc0d1959823d1b5dce85e1263ef2",
"text": "BACKGROUND\nViolence against women is a serious human rights abuse and public health issue. Despite growing evidence of the size of the problem, current evidence comes largely from industrialised settings, and methodological differences limit the extent to which comparisons can be made between studies. We aimed to estimate the extent of physical and sexual intimate partner violence against women in 15 sites in ten countries: Bangladesh, Brazil, Ethiopia, Japan, Namibia, Peru, Samoa, Serbia and Montenegro, Thailand, and the United Republic of Tanzania.\n\n\nMETHODS\nStandardised population-based household surveys were done between 2000 and 2003. Women aged 15-49 years were interviewed and those who had ever had a male partner were asked in private about their experiences of physically and sexually violent and emotionally abusive acts.\n\n\nFINDINGS\n24,097 women completed interviews, with around 1500 interviews per site. The reported lifetime prevalence of physical or sexual partner violence, or both, varied from 15% to 71%, with two sites having a prevalence of less than 25%, seven between 25% and 50%, and six between 50% and 75%. Between 4% and 54% of respondents reported physical or sexual partner violence, or both, in the past year. Men who were more controlling were more likely to be violent against their partners. In all but one setting women were at far greater risk of physical or sexual violence by a partner than from violence by other people.\n\n\nINTERPRETATION\nThe findings confirm that physical and sexual partner violence against women is widespread. The variation in prevalence within and between settings highlights that this violence in not inevitable, and must be addressed.",
"title": ""
},
{
"docid": "cb49d71778f873d2f21df73b9e781c8e",
"text": "Many people with mental health problems do not use mental health care, resulting in poorer clinical and social outcomes. Reasons for low service use rates are still incompletely understood. In this longitudinal, population-based study, we investigated the influence of mental health literacy, attitudes toward mental health services, and perceived need for treatment at baseline on actual service use during a 6-month follow-up period, controlling for sociodemographic variables, symptom level, and a history of lifetime mental health service use. Positive attitudes to mental health care, higher mental health literacy, and more perceived need at baseline significantly predicted use of psychotherapy during the follow-up period. Greater perceived need for treatment and better literacy at baseline were predictive of taking psychiatric medication during the following 6 months. Our findings suggest that mental health literacy, attitudes to treatment, and perceived need may be targets for interventions to increase mental health service use.",
"title": ""
},
{
"docid": "b5f7511566b902bc206228dc3214c211",
"text": "In the imitation learning paradigm algorithms learn from expert demonstrations in order to become able to accomplish a particular task. Daumé III et al. (2009) framed structured prediction in this paradigm and developed the search-based structured prediction algorithm (Searn) which has been applied successfully to various natural language processing tasks with state-of-the-art performance. Recently, Ross et al. (2011) proposed the dataset aggregation algorithm (DAgger) and compared it with Searn in sequential prediction tasks. In this paper, we compare these two algorithms in the context of a more complex structured prediction task, namely biomedical event extraction. We demonstrate that DAgger has more stable performance and faster learning than Searn, and that these advantages are more pronounced in the parameter-free versions of the algorithms.",
"title": ""
},
{
"docid": "a15ba068638d0df0bd1a501dde97a67e",
"text": "Part of understanding the meaning and power of algorithms means asking what new demands they might make of ethical frameworks, and how they might be held accountable to ethical standards. I develop a definition of networked information algorithms (NIAs) as assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action. Starting from Merrill’s prompt to see ethics as the study of ‘‘what we ought to do,’’ I examine ethical dimensions of contemporary NIAs. Specifically, in an effort to sketch an empirically grounded, pragmatic ethics of algorithms, I trace an algorithmic assemblage’s power to convene constituents, suggest actions based on perceived similarity and probability, and govern the timing and timeframes of ethical action. University of Southern California, Los Angeles, CA, USA Corresponding Author: Mike Ananny, University of Southern California, 3502 Watt Way, Los Angeles, CA 90089, USA. Email: ananny@usc.edu Science, Technology, & Human Values 1-25 a The Author(s) 2015 Reprints and permission: sagepub.com/journalsPermissions.nav DOI: 10.1177/0162243915606523 sthv.sagepub.com",
"title": ""
}
] |
scidocsrr
|
af9985d0bbdd7ed220ef19db4974a657
|
Direct Torque Control for Induction Motor Using Fuzzy Logic
|
[
{
"docid": "75e9b017838ccfdcac3b85030470a3bd",
"text": "The new \"Direct Self-Control\" (DSC) is a simple method of signal processing, which gives converter fed three-phase machines an excellent dynamic performance. To control the torque e.g. of an induction motor it is sufficient to process the measured signals of the stator currents and the total flux linkages only. Optimal performance of drive systems is accomplished in steady state as well as under transient conditions by combination of several two limits controls. The expenses are less than in the case of proposed predictive control systems or FAM, if the converters switching frequency has to be kept minimal.",
"title": ""
}
] |
[
{
"docid": "67f37768d01c6f445fe069a31e99b8e2",
"text": "WELCOME TO CLOUD TIDBITS! In each issue, I'll be looking at a different “tidbit” of technology that I consider unique or eye-catching and of particular interest to IEEE Cloud Computing readers. Today's tidbit is VoltDB, a new cloud database. This system caught my eye for several reasons. First, it's the latest database designed by Michael Stonebraker, the database pioneer best known for Ingres, PostgreSQL, Illustra, Streambase, and more recently, Vertica. But interestingly, in this goaround, Stonebraker declared that he has thrown “all previous database architecture out the window” and “started over with a complete rewrite.”1 What's resulted is something totally different from every other database-including all the columnand table-oriented NoSQL systems. Moreover, VoltDB claims a 50 to 100x speed improvement over other relational database management systems (RDBMSs) and NoSQL systems. It sounds too good to be true. What we have is nothing short of a whole class of SQL, as compared to the “NoSQL” compromises detailed above. This “total rearchitecture,” called NewSQL, supports 100 percent in memory operation, supports SQL and stored procedures, and has a loosely coupled scale-out capability perfectly matched to cloud computing platforms. Wait a minute! That doesn't sound possible. That's precisely why I thought it made for a perfect tidbit.",
"title": ""
},
{
"docid": "099bd9e751b8c1e3a07ee06f1ba4b55b",
"text": "This paper presents a robust stereo-vision-based drivable road detection and tracking system that was designed to navigate an intelligent vehicle through challenging traffic scenarios and increment road safety in such scenarios with advanced driver-assistance systems (ADAS). This system is based on a formulation of stereo with homography as a maximum a posteriori (MAP) problem in a Markov random held (MRF). Under this formulation, we develop an alternating optimization algorithm that alternates between computing the binary labeling for road/nonroad classification and learning the optimal parameters from the current input stereo pair itself. Furthermore, online extrinsic camera parameter reestimation and automatic MRF parameter tuning are performed to enhance the robustness and accuracy of the proposed system. In the experiments, the system was tested on our experimental intelligent vehicles under various real challenging scenarios. The results have substantiated the effectiveness and the robustness of the proposed system with respect to various challenging road scenarios such as heterogeneous road materials/textures, heavy shadows, changing illumination and weather conditions, and dynamic vehicle movements.",
"title": ""
},
{
"docid": "a20b684deeb401855cbdc12cab90610a",
"text": "A zero knowledge interactive proof system allows one person to convince another person of some fact without revealing the information about the proof. In particular, it does not enable the verifier to later convince anyone else that the prover has a proof of the theorem or even merely that the theorem is true (much less that he himself has a proof). This paper reviews the field of zero knowledge proof systems giving a brief overview of zero knowledge proof systems and the state of current research in this field.",
"title": ""
},
{
"docid": "fb43cec4064dfad44d54d1f2a4981262",
"text": "Knowledge representation is a major topic in AI, and many studies attempt to represent entities and relations of know ledge base in a continuous vector space. Among these attempts, translation-based methods build entity and relati on vectors by minimizing the translation loss from a head entity to a tail one. In spite of the success of these methods, translation-based methods also suffer from the oversimpli fied loss metric, and are not competitive enough to model various and complex entities/relations in knowledge bases. To address this issue, we propose TransA, an adaptive metric approach for embedding, utilizing the metric learning idea s to provide a more flexible embedding method. Experiments are conducted on the benchmark datasets and our proposed method makes significant and consistent improvements over the state-of-the-art baselines.",
"title": ""
},
{
"docid": "ce35a38f1ab8264554ca19fbe8017b82",
"text": "Since the BOSS competition, in 2010, most steganalysis approaches use a learning methodology involving two steps: feature extraction, such as the Rich Models (RM), for the image representation, and use of the Ensemble Classifier (EC) for the learning step. In 2015, Qian et al. have shown that the use of a deep learning approach that jointly learns and computes the features, was very promising for the steganalysis. In this paper, we follow-up the study of Qian et al., and show that in the scenario where the steganograph always uses the same embedding key for embedding with the simulator in the different images, due to intrinsic joint minimization and the preservation of spatial information, the results obtained from a Convolutional Neural Network (CNN) or a Fully Connected Neural Network (FNN), if well parameterized, surpass the conventional use of a RM with an EC. First, numerous experiments were conducted in order to find the best ”shape” of the CNN. Second, experiments were carried out in the clairvoyant scenario in order to compare the CNN and FNN to an RM with an EC. The results show more than 16% reduction in the classification error with our CNN or FNN. Third, experiments were also performed in a cover-source mismatch setting. The results show that the CNN and FNN are naturally robust to the mismatch problem. In Addition to the experiments, we provide discussions on the internal mechanisms of a CNN, and weave links with some previously stated ideas, in order to understand the results we obtained. We also have a discussion on the scenario ”same embedding key”.",
"title": ""
},
{
"docid": "4f3fe8ea0487690b4a8f61b488e96d53",
"text": "Multiple instance learning (MIL) is a variation of supervised learning where a single class label is assigned to a bag of instances. In this paper, we state the MIL problem as learning the Bernoulli distribution of the bag label where the bag label probability is fully parameterized by neural networks. Furthermore, we propose a neural network-based permutation-invariant aggregation operator that corresponds to the attention mechanism. Notably, an application of the proposed attention-based operator provides insight into the contribution of each instance to the bag label. We show empirically that our approach achieves comparable performance to the best MIL methods on benchmark MIL datasets and it outperforms other methods on a MNIST-based MIL dataset and two real-life histopathology datasets without sacrificing interpretability.",
"title": ""
},
{
"docid": "02ffa1b39ac9e76239eff040121938a3",
"text": "Machine learning can be utilized in many different ways in the field of automatic manufacturing and logistics. In this thesis supervised machine learning have been utilized to train a classifiers for detection and recognition of objects in images. The techniques AdaBoost and Random forest have been examined, both are based on decision trees. The thesis has considered two applications: barcode detection and optical character recognition (OCR). Supervised machine learning methods are highly appropriate in both applications since both barcodes and printed characters generally are rather distinguishable. The first part of this thesis examines the use of machine learning for barcode detection in images, both traditional 1D-barcodes and the more recent Maxi-codes, which is a type of two-dimensional barcode. In this part the focus has been to train classifiers with the technique AdaBoost. The Maxi-code detection is mainly done with Local binary pattern features. For detection of 1D-codes, features are calculated from the structure tensor. The classifiers have been evaluated with around 200 real test images, containing barcodes, and shows promising results. The second part of the thesis involves optical character recognition. The focus in this part has been to train a Random forest classifier by using the technique point pair features. The performance has also been compared with the more proven and widely used Haar-features. Although, the result shows that Haar-features are superior in terms of accuracy. Nevertheless the conclusion is that point pairs can be utilized as features for Random forest in OCR.",
"title": ""
},
{
"docid": "c6cfc50062e42f51c9ac0db3b4faed83",
"text": "We put forward two new measures of security for threshold schemes secure in the adaptive adversary model: security under concurrent composition; and security without the assumption of reliable erasure. Using novel constructions and analytical tools, in both these settings, we exhibit efficient secure threshold protocols for a variety of cryptographic applications. In particular, based on the recent scheme by Cramer-Shoup, we construct adaptively secure threshold cryptosystems secure against adaptive chosen ciphertext attack under the DDH intractability assumption. Our techniques are also applicable to other cryptosystems and signature schemes, like RSA, DSS, and ElGamal. Our techniques include the first efficient implementation, for a wide but special class of protocols, of secure channels in erasure-free adaptive model. Of independent interest, we present the notion of a committed proof.",
"title": ""
},
{
"docid": "d483da5197688c5deede276b63d81867",
"text": "We present a stochastic model of the daily operations of an airline. Its primary purpose is to evaluate plans, such as crew schedules, as well as recovery policies in a random environment. We describe the structure of the stochastic model, sources of disruptions, recovery policies, and performance measures. Then, we describe SimAir—our simulation implementation of the stochastic model, and we give computational results. Finally, we give future directions for the study of airline recovery policies and planning under uncertainty.",
"title": ""
},
{
"docid": "d212f981eb8cc6054b2651009179b722",
"text": "A sixth-order 10.7-MHz bandpass switched-capacitor filter based on a double terminated ladder filter is presented. The filter uses a multipath operational transconductance amplifier (OTA) that presents both better accuracy and higher slew rate than previously reported Class-A OTA topologies. Design techniques based on charge cancellation and slower clocks are used to reduce the overall capacitance from 782 down to 219 unity capacitors. The filter's center frequency and bandwidth are 10.7 MHz and 400 kHz, respectively, and a passband ripple of 1 dB in the entire passband. The quality factor of the resonators used as filter terminations is around 32. The measured (filter + buffer) third-intermodulation (IM3) distortion is less than -40 dB for a two-tone input signal of +3-dBm power level each. The signal-to-noise ratio is roughly 58 dB while the IM3 is -45 dB; the power consumption for the standalone filter is 42 mW. The chip was fabricated in a 0.35-mum CMOS process; filter's area is 0.84 mm2",
"title": ""
},
{
"docid": "7381d61eea849ecdf74c962042d0c5ff",
"text": "Unmanned aerial vehicle (UAV) synthetic aperture radar (SAR) is very important for battlefield awareness. For SAR systems mounted on a UAV, the motion errors can be considerably high due to atmospheric turbulence and aircraft properties, such as its small size, which makes motion compensation (MOCO) in UAV SAR more urgent than other SAR systems. In this paper, based on 3-D motion error analysis, a novel 3-D MOCO method is proposed. The main idea is to extract necessary motion parameters, i.e., forward velocity and displacement in line-of-sight direction, from radar raw data, based on an instantaneous Doppler rate estimate. Experimental results show that the proposed method is suitable for low- or medium-altitude UAV SAR systems equipped with a low-accuracy inertial navigation system.",
"title": ""
},
{
"docid": "f60426bdd66154a7d2cb6415abd8f233",
"text": "In the rapidly expanding field of parallel processing, job schedulers are the “operating systems” of modern big data architectures and supercomputing systems. Job schedulers allocate computing resources and control the execution of processes on those resources. Historically, job schedulers were the domain of supercomputers, and job schedulers were designed to run massive, long-running computations over days and weeks. More recently, big data workloads have created a need for a new class of computations consisting of many short computations taking seconds or minutes that process enormous quantities of data. For both supercomputers and big data systems, the efficiency of the job scheduler represents a fundamental limit on the efficiency of the system. Detailed measurement and modeling of the performance of schedulers are critical for maximizing the performance of a large-scale computing system. This paper presents a detailed feature analysis of 15 supercomputing and big data schedulers. For big data workloads, the scheduler latency is the most important performance characteristic of the scheduler. A theoretical model of the latency of these schedulers is developed and used to design experiments targeted at measuring scheduler latency. Detailed benchmarking of four of the most popular schedulers (Slurm, Son of Grid Engine, Mesos, and Hadoop YARN) are conducted. The theoretical model is compared with data and demonstrates that scheduler performance can be characterized by two key parameters: the marginal latency of the scheduler ts and a nonlinear exponent αs. For all four schedulers, the utilization of the computing system decreases to <10% for computations lasting only a few seconds. Multi-level schedulers (such as LLMapReduce) that transparently aggregate short computations can improve utilization for these short computations to >90% for all four of the schedulers that were tested.",
"title": ""
},
{
"docid": "1ade3a53c754ec35758282c9c51ced3d",
"text": "Radical hysterectomy represents the treatment of choice for FIGO stage IA2–IIA cervical cancer. It is associated with several serious complications such as urinary and anorectal dysfunction due to surgical trauma to the autonomous nervous system. In order to determine those surgical steps involving the risk of nerve injury during both classical and nerve-sparing radical hysterectomy, we investigated the relationships between pelvic fascial, vascular and nervous structures in a large series of embalmed and fresh female cadavers. We showed that the extent of potential denervation after classical radical hysterectomy is directly correlated with the radicality of the operation. The surgical steps that carry a high risk of nerve injury are the resection of the uterosacral and vesicouterine ligaments and of the paracervix. A nerve-sparing approach to radical hysterectomy for cervical cancer is feasible if specific resection limits, such as the deep uterine vein, are carefully identified and respected. However, a nerve-sparing surgical effort should be balanced with the oncological priorities of removal of disease and all its potential routes of local spread. L'hystérectomie radicale est le traitement de choix pour les cancers du col utérin de stade IA2–IIA de la Fédération Internationale de Gynécologie Obstétrique (FIGO). Cette intervention comporte plusieurs séquelles graves, telles que les dysfonctions urinaires ou ano-rectales, par traumatisme chirurgical des nerfs végétatifs pelviens. Pour mettre en évidence les temps chirurgicaux impliquant un risque de lésion nerveuse lors d'une hystérectomie radicale classique et avec préservation nerveuse, nous avons recherché les rapports entre le fascia pelvien, les structures vasculaires et nerveuses sur une large série de sujets anatomiques féminins embaumés et non embaumés. Nous avons montré que l'étendue de la dénervation potentielle après hystérectomie radicale classique était directement en rapport avec le caractère radical de l'intervention. Les temps chirurgicaux à haut risque pour des lésions nerveuses sont la résection des ligaments utéro-sacraux, des ligaments vésico-utérins et du paracervix. L'hystérectomie radicale avec préservation nerveuse est possible si des limites de résection spécifiques telle que la veine utérine profonde sont soigneusement identifiées et respectées. Cependant une chirurgie de préservation nerveuse doit être mise en balance avec les priorités carcinologiques d'exérèse du cancer et de toutes ses voies potentielles de dissémination locale.",
"title": ""
},
{
"docid": "73d0013bb021a9ad2100cc8e3f938ec8",
"text": "The rapid development of electron tomography, in particular the introduction of novel tomographic imaging modes, has led to the visualization and analysis of three-dimensional structural and chemical information from materials at the nanometre level. In addition, the phase information revealed in electron holograms allows electrostatic and magnetic potentials to be mapped quantitatively with high spatial resolution and, when combined with tomography, in three dimensions. Here we present an overview of the techniques of electron tomography and electron holography and demonstrate their capabilities with the aid of case studies that span materials science and the interface between the physical sciences and the life sciences.",
"title": ""
},
{
"docid": "19b915816b9e93731b900f84bc40ad5b",
"text": "It is a truth universally acknowledged that \"a picture is worth a thousand words\". The emerge of digital media has taken this saying to a complete new level. By using steganography, one can hide not only 1000, but thousands of words even in an average sized image. This article presents various types of techniques used by modern digital steganography, as well as the implementation of the least significant bit (LSB) method. The main objective is to develop an application that uses LSB insertion in order to encode data into a cover image. Both a serial and parallel version are proposed and an analysis of the performances is made using images ranging from 1:9 to 131 megapixels.",
"title": ""
},
{
"docid": "b4a2c3679fe2490a29617c6a158b9dbc",
"text": "We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.",
"title": ""
},
{
"docid": "97e5f2e774b58f7533242114e5e06159",
"text": "We address the problem of phase retrieval, which is frequently encountered in optical imaging. The measured quantity is the magnitude of the Fourier spectrum of a function (in optics, the function is also referred to as an object). The goal is to recover the object based on the magnitude measurements. In doing so, the standard assumptions are that the object is compactly supported and positive. In this paper, we consider objects that admit a sparse representation in some orthonormal basis. We develop a variant of the Fienup algorithm to incorporate the condition of sparsity and to successively estimate and refine the phase starting from the magnitude measurements. We show that the proposed iterative algorithm possesses Cauchy convergence properties. As far as the modality is concerned, we work with measurements obtained using a frequency-domain optical-coherence tomography experimental setup. The experimental results on real measured data show that the proposed technique exhibits good reconstruction performance even with fewer coefficients taken into account for reconstruction. It also suppresses the autocorrelation artifacts to a significant extent since it estimates the phase accurately.",
"title": ""
},
{
"docid": "5ae07e0d3157b62f6d5e0e67d2b7f2ea",
"text": "G. Francis and F. Hermens (2002) used computer simulations to claim that many current models of metacontrast masking can account for the findings of V. Di Lollo, J. T. Enns, and R. A. Rensink (2000). They also claimed that notions of reentrant processing are not necessary because all of V. Di Lollo et al. 's data can be explained by feed-forward models. The authors show that G. Francis and F. Hermens's claims are vitiated by inappropriate modeling of attention and by ignoring important aspects of V. Di Lollo et al. 's results.",
"title": ""
},
{
"docid": "3f629998235c1cfadf67cf711b07f8b9",
"text": "The capacity to gather and timely deliver to the service level any relevant information that can characterize the service-provisioning environment, such as computing resources/capabilities, physical device location, user preferences, and time constraints, usually defined as context-awareness, is widely recognized as a core function for the development of modern ubiquitous and mobile systems. Much work has been done to enable context-awareness and to ease the diffusion of context-aware services; at the same time, several middleware solutions have been designed to transparently implement context management and provisioning in the mobile system. However, to the best of our knowledge, an in-depth analysis of the context data distribution, namely, the function in charge of distributing context data to interested entities, is still missing. Starting from the core assumption that only effective and efficient context data distribution can pave the way to the deployment of truly context-aware services, this article aims at putting together current research efforts to derive an original and holistic view of the existing literature. We present a unified architectural model and a new taxonomy for context data distribution by considering and comparing a large number of solutions. Finally, based on our analysis, we draw some of the research challenges still unsolved and identify some possible directions for future work.",
"title": ""
},
{
"docid": "660f957b70e53819724e504ed3de0776",
"text": "We propose several econometric measures of connectedness based on principalcomponents analysis and Granger-causality networks, and apply them to the monthly returns of hedge funds, banks, broker/dealers, and insurance companies. We find that all four sectors have become highly interrelated over the past decade, likely increasing the level of systemic risk in the finance and insurance industries through a complex and time-varying network of relationships. These measures can also identify and quantify financial crisis periods, and seem to contain predictive power in out-of-sample tests. Our results show an asymmetry in the degree of connectedness among the four sectors, with banks playing a much more important role in transmitting shocks than other financial institutions. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
e608e0118e95e5c620a4c4a712704093
|
Code Obfuscation Literature Survey
|
[
{
"docid": "7d7e4ddaa9c582c28e9186036fc0a375",
"text": "It has become common to distribute software in forms that are isomorphic to the original source code. An important example is Java bytecode. Since such codes are easy to decompile, they increase the risk of malicious reverse engineering attacks.In this paper we describe the design of a Java code obfuscator, a tool which - through the application of code transformations - converts a Java program into an equivalent one that is more difficult to reverse engineer.We describe a number of transformations which obfuscate control-flow. Transformations are evaluated with respect to potency (To what degree is a human reader confused?), resilience (How well are automatic deobfuscation attacks resisted?), cost (How much time/space overhead is added?), and stealth (How well does obfuscated code blend in with the original code?).The resilience of many control-altering transformations rely on the resilience of opaque predicates. These are boolean valued expressions whose values are known to the obfuscator but difficult to determine for an automatic deobfuscator. We show how to construct resilient, cheap, and stealthy opaque predicates based on the intractability of certain static analysis problems such as alias analysis.",
"title": ""
}
] |
[
{
"docid": "405dce1cbea2315c9d602f0fdaaf32af",
"text": "A single chip NFC transceiver supporting not only NFC active and passive mode but also 13.56 MHz RFID reader and tag mode was designed and fabricated. The proposed NFC transceiver can operate as a RFID tag even without external power supply thanks to a dual antenna structure for initiator and target. The area increment due to additional target antenna is negligible because the target antenna is constructed by using a shielding layer of initiator antenna.",
"title": ""
},
{
"docid": "81d11f44d55e57d95a04f9a1ea35223c",
"text": "In many research fields such as Psychology, Linguistics, Cognitive Science and Artificial Intelligence, computing semantic similarity between words is an important issue. In this paper a new semantic similarity metric, that exploits some notions of the feature based theory of similarity and translates it into the information theoretic domain, which leverages the notion of Information Content (IC), is presented. In particular, the proposed metric exploits the notion of intrinsic IC which quantifies IC values by scrutinizing how concepts are arranged in an ontological structure. In order to evaluate this metric, an on line experiment asking the community of researchers to rank a list of 65 word pairs has been conducted. The experiment’s web setup allowed to collect 101 similarity ratings and to differentiate native and non-native English speakers. Such a large and diverse dataset enables to confidently evaluate similarity metrics by correlating them with human assessments. Experimental evaluations using WordNet indicate that the proposed metric, coupled with the notion of intrinsic IC, yields results above the state of the art. Moreover, the intrinsic IC formulation also improves the accuracy of other IC-based metrics. In order to investigate the generality of both the intrinsic IC formulation and proposed similarity metric a further evaluation using the MeSH biomedical ontology has been performed. Even in this case significant results were obtained. The proposed metric and several others have been implemented in the Java WordNet Similarity Library.",
"title": ""
},
{
"docid": "97c3860dfb00517f744fd9504c4e7f9f",
"text": "The plastic film surface treatment load is considered as a nonlinear capacitive load, which is rather difficult for designing of an inverter. The series resonant inverter (SRI) connected to the load via transformer has been found effective for it's driving. In this paper, a surface treatment based on a pulse density modulation (PDM) and pulse frequency modulation (PFM) hybrid control scheme is described. The PDM scheme is used to regulate the output power of the inverter and the PFM scheme is used to compensate for temperature and other environmental influences on the discharge. Experimental results show that the PDM and PFM hybrid control series-resonant inverter (SRI) makes the corona discharge treatment simple and compact, thus leading to higher efficiency.",
"title": ""
},
{
"docid": "d76d1c068f4f2f7d4af1b5bc268aaca9",
"text": "This paper proposes a secure image steganography technique to hide a secret image using the key. The secret image itself is not hidden, instead a key is generated and the key is hidden in the cover image. Using the key the secret image can be extracted. Integer Wavelet Transform (IWT) is used to hide the key. So it is very secure and robust because no one can realize the hidden information and it cannot be lost due to noise or any signal processing operations. Experimental results show very good Peak Signal to Noise Ratio (PSNR), which is a measure of security. In this technique the secret information is hidden in the middle bit-planes of the integer wavelet coefficients in high frequency sub-bands.",
"title": ""
},
{
"docid": "14863b1ca1d21c16319e40a34a0e3893",
"text": "Amyloid-beta peptide is central to the pathology of Alzheimer's disease, because it is neurotoxic--directly by inducing oxidant stress, and indirectly by activating microglia. A specific cell-surface acceptor site that could focus its effects on target cells has been postulated but not identified. Here we present evidence that the 'receptor for advanced glycation end products' (RAGE) is such a receptor, and that it mediates effects of the peptide on neurons and microglia. Increased expressing of RAGE in Alzheimer's disease brain indicates that it is relevant to the pathogenesis of neuronal dysfunction and death.",
"title": ""
},
{
"docid": "f1fcc04fdc1a8c45b0ef670328c3e98e",
"text": "T digital divide has loomed as a public policy issue for over a decade. Yet, a theoretical account for the effects of the digital divide is currently lacking. This study examines three levels of the digital divide. The digital access divide (the first-level digital divide) is the inequality of access to information technology (IT) in homes and schools. The digital capability divide (the second-level digital divide) is the inequality of the capability to exploit IT arising from the first-level digital divide and other contextual factors. The digital outcome divide (the third-level digital divide) is the inequality of outcomes (e.g., learning and productivity) of exploiting IT arising from the second-level digital divide and other contextual factors. Drawing on social cognitive theory and computer self-efficacy literature, we developed a model to show how the digital access divide affects the digital capability divide and the digital outcome divide among students. The digital access divide focuses on computer ownership and usage in homes and schools. The digital capability divide and the digital outcome divide focus on computer self-efficacy and learning outcomes, respectively. This model was tested using data collected from over 4,000 students in Singapore. The results generate insights into the relationships among the three levels of the digital divide and provide a theoretical account for the effects of the digital divide. While school computing environments help to increase computer self-efficacy for all students, these factors do not eliminate knowledge the gap between students with and without home computers. Implications for theory and practice are discussed.",
"title": ""
},
{
"docid": "2a58189812fe0f585794bed8734c632a",
"text": "China has become one of the largest entertainment markets in the world in recent years. Due to the success of Xiaomi, many Chinese pop music industry entrepreneurs believe \"Fans Economy\" works in the pop music industry. \"Fans Economy\" is based on the assumption that pop music consumer market could be segmented based on artists. Each music artist has its own exclusive loyal fans. In this paper, we provide an insightful study of the pop music artists and fans social network. Particularly, we segment the pop music consumer market and pop music artists respectively. Our results show that due to the Matthew Effect and limited diversity of consumer market, \"Fans Economy\" does not work for the Chinese pop music industry.",
"title": ""
},
{
"docid": "e2c239bed763d13117e943ef988827f1",
"text": "This paper presents a comprehensive review of 196 studies which employ operational research (O.R.) and artificial intelligence (A.I.) techniques in the assessment of bank performance. Several key issues in the literature are highlighted. The paper also points to a number of directions for future research. We first discuss numerous applications of data envelopment analysis which is the most widely applied O.R. technique in the field. Then we discuss applications of other techniques such as neural networks, support vector machines, and multicriteria decision aid that have also been used in recent years, in bank failure prediction studies and the assessment of bank creditworthiness and underperformance. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a07a7aec933bb6bde818cd97c639a218",
"text": "This paper presents a framework for evaluating and designing game design patterns commonly called as “achievements”. The results are based on empirical studies of a variety of popular achievement systems. The results, along with the framework for analyzing and designing achievements, present two definitions of game achievements. From the perspective of the achievement system, an achievement appears as a challenge consisting of a signifying element, rewards and completion logics whose fulfilment conditions are defined through events in other systems (usually games). From the perspective of a single game, an achievement appears as an optional challenge provided by a meta-game that is independent of a single game session and yields possible reward(s).",
"title": ""
},
{
"docid": "d79525e98ae6f0805c2fbe937c84e2e0",
"text": "A feature central to the success of e-commerce Web sites is the design of an effective interface to present product information. However, the suitability of the prevalent information formats in supporting various online shopping tasks is not known. Using the cognitive fit theory as the theoretical framework, we developed a research model to investigate the fit between information format and shopping task, and exam150 HONG, THONG, AND TAM ine its influence on consumers’ online shopping performance and perceptions of shopping experience. The competition for attention theory from the marketing literature and the scanpath theory from vision research were employed to support the analyses. An experiment was conducted to examine the effects of two types of information formats (list versus matrix) in the context of two types of shopping tasks (searching versus browsing). The results show that when there is a match between the information format and the shopping task, consumers can search the information space more efficiently and have better recall of product information. Specifically, the list format better supports browsing tasks, and the matrix format facilitates searching tasks. However, a match between the information format and the shopping task has no effect on cognitive effort or attitude toward using the Web site. Overall, this research supports the application of the cognitive fit theory to the study of Web interface design. It also demonstrates the value in integrating findings from cognitive science and vision research to understand the processes involved. As the information format has been shown to affect consumers’ online shopping behavior, even when the information content is held constant, the practical implications for Web site designers include providing both types of information format on their Web sites and matching the appropriate information format to the individual consumer’s task.",
"title": ""
},
{
"docid": "bb7a444ec16a2235cc7bd6a5cde8b12a",
"text": "Although baseline requirements for nicotinamide adenine dinucleotide (NAD+) synthesis can be met either with dietary tryptophan or with less than 20 mg of daily niacin, which consists of nicotinic acid and/or nicotinamide, there is growing evidence that substantially greater rates of NAD+ synthesis may be beneficial to protect against neurological degeneration, Candida glabrata infection, and possibly to enhance reverse cholesterol transport. The distinct and tissue-specific biosynthetic and/or ligand activities of tryptophan, nicotinic acid, nicotinamide, and the newly identified NAD+ precursor, nicotinamide riboside, reviewed herein, are responsible for vitamin-specific effects and side effects. Because current data suggest that nicotinamide riboside may be the only vitamin precursor that supports neuronal NAD+ synthesis, we present prospects for human nicotinamide riboside supplementation and propose areas for future research.",
"title": ""
},
{
"docid": "89a11e5525d086b6b480fba368fb7924",
"text": "OBJECTIVE\nMost BCIs have to undergo a calibration session in which data is recorded to train decoders with machine learning. Only recently zero-training methods have become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) language model and (d) dynamic stopping.\n\n\nAPPROACH\nA simulation study compares the proposed probabilistic zero framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influence of the involved components (a)-(d) are investigated.\n\n\nMAIN RESULTS\nWithout any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance--competitive to a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation.\n\n\nSIGNIFICANCE\nA high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP-applications of BCI.",
"title": ""
},
{
"docid": "ce87a635c0c3aaa17e7b83d5fb52adce",
"text": "We present a novel definition of the reinforcement learning state, actions and reward function that allows a deep Q-network (DQN) to learn to control an optimization hyperparameter. Using Q-learning with experience replay, we train two DQNs to accept a state representation of an objective function as input and output the expected discounted return of rewards, or q-values, connected to the actions of either adjusting the learning rate or leaving it unchanged. The two DQNs learn a policy similar to a line search, but differ in the number of allowed actions. The trained DQNs in combination with a gradient-based update routine form the basis of the Q-gradient descent algorithms. To demonstrate the viability of this framework, we show that the DQN’s q-values associated with optimal action converge and that the Q-gradient descent algorithms outperform gradient descent with an Armijo or nonmonotone line search. Unlike traditional optimization methods, Q-gradient descent can incorporate any objective statistic and by varying the actions we gain insight into the type of learning rate adjustment strategies that are successful for neural network optimization.",
"title": ""
},
{
"docid": "7a6e9ac3ae7df091d8d9dd63cb4425d2",
"text": "INTRODUCTION\nDental measurements are an integral part of the orthodontic records necessary for proper diagnosis and treatment planning. In this study, we investigated the reliability and accuracy of dental measurements made on cone-beam computed tomography (CBCT) reconstructions.\n\n\nMETHODS\nThirty human skulls were scanned with dental CBCT, and 3-dimensional reconstructions of the dentitions were generated. Ten measurements (overbite, overjet, maxillary and mandibular intermolar and intercanine widths, arch length available, and arch length required) were made directly on the dentitions of the skulls with a high-precision digital caliper and on the digital reconstructions with commercially available software. Reliability and accuracy were assessed by using intraclass correlation and paired Student t tests. A P value of < or = 0.05 was used to assign statistical significance.\n\n\nRESULTS\nBoth the CBCT and the caliper measurements were highly reliable (r >0.90). The CBCT measurements tended to slightly underestimate the anatomic truth. This was statistically significant only for compounded measurements.\n\n\nCONCLUSIONS\nDental measurements from CBCT volumes can be used for quantitative analysis. With the CBCT images, we found a small systematic error, which became statistically significant only when combining several measurements. An adjustment for this error allows for improved accuracy.",
"title": ""
},
{
"docid": "0116258c0580d61ff762f203e1a134b7",
"text": "Cyber-attacks have greatly increased over the years, and the attackers have progressively improved in devising attacks towards specific targets. To aid in identifying and defending against cyber-attacks we propose a cyber attack taxonomy called AVOIDIT (Attack Vector, Operational Impact, Defense, Information Impact, and Target). We use five major classifiers to characterize the nature of an attack: classification by attack vector, classification by operational impact, classification by defense, classification by informational impact, and classification by attack target. Classification by defense is oriented towards providing information to the network administrator regarding attack mitigation or remediation strategies. Contrary to the existing taxonomies, AVOIDIT efficiently classifies blended attacks. We further propose an efficient cause, action, defense, analysis, and target (CADAT) process used to facilitate attack classification. AVOIDIT and CADAT are used by an issue resolution system (IRS) to educate the defender on possible cyber-attacks and the development of potential security policies. We validate the proposed AVOIDIT taxonomy using cyber-attacks scenarios and highlight future work intended to simulate AVOIDIT’s use within the IRS. Keywords—Taxonomy; Cyber Attack Taxonomy; Vulnerability; Computer Security; Cyberspace; Issue Resolution System",
"title": ""
},
{
"docid": "9d30cfbc7d254882e92cad01f5bd17c7",
"text": "Data from culture studies have revealed that Enterococcus faecalis is occasionally isolated from primary endodontic infections but frequently recovered from treatment failures. This molecular study was undertaken to investigate the prevalence of E. faecalis in endodontic infections and to determine whether this species is associated with particular forms of periradicular diseases. Samples were taken from cases of untreated teeth with asymptomatic chronic periradicular lesions, acute apical periodontitis, or acute periradicular abscesses, and from root-filled teeth associated with asymptomatic chronic periradicular lesions. DNA was extracted from the samples, and a 16S rDNA-based nested polymerase chain reaction assay was used to identify E. faecalis. This species occurred in seven of 21 root canals associated with asymptomatic chronic periradicular lesions, in one of 10 root canals associated with acute apical periodontitis, and in one of 19 pus samples aspirated from acute periradicular abscesses. Statistical analysis showed that E. faecalis was significantly more associated with asymptomatic cases than with symptomatic ones. E. faecalis was detected in 20 of 30 cases of persistent endodontic infections associated with root-filled teeth. When comparing the frequencies of this species in 30 cases of persistent infections with 50 cases of primary infections, statistical analysis demonstrated that E. faecalis was strongly associated with persistent infections. The average odds of detecting E. faecalis in cases of persistent infections associated with treatment failure were 9.1. The results of this study indicated that E. faecalis is significantly more associated with asymptomatic cases of primary endodontic infections than with symptomatic ones. Furthermore, E. faecalis was much more likely to be found in cases of failed endodontic therapy than in primary infections.",
"title": ""
},
{
"docid": "6b19e6116b8f366b3fea76d45f867d9d",
"text": "This report describes a Ku-band amplifier GaN MMIC. The amplifier MMIC delivers a measured saturated power of 20 W and gain of 20 dB under CW operation. To enhance the linearity of the two stage amplifier composed of the MMIC and GaN Internally Matched FET, a diode linearizer has also been built into the MMIC. The linearizer offers 5dB better linear output power, defined the output power at IM3 of -25 dBc, compared with that of MMIC without the linearizer. To our knowledge, this is the first report to present GaN-based high power amplifier MMIC with a built-in linearizer which can enhance the linearity of a PA system.",
"title": ""
},
{
"docid": "419499ced8902a00909c32db352ea7f5",
"text": "Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of control plane may not readily translate into that of data plane. To bridge this gap, we present VeriDP, which can monitor \"whether actual forwarding behaviors are complying with network configurations\". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with headers to the verification server before the packets leave the network. The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane fault including black holes and access violations, with a minimal impact on the data plane.",
"title": ""
},
{
"docid": "73d4a47d4aba600b4a3bcad6f7f3588f",
"text": "Humans can easily perform tasks that use vision and language jointly, such as describing a scene and answering questions about objects in the scene and how they are related. Image captioning and visual question & answer are two popular research tasks that have emerged from advances in deep learning and the availability of datasets that specifically address these problems. However recent work has shown that deep learning based solutions to these tasks are just as brittle as solutions for only vision or only natural language tasks. Image captioning is vulnerable to adversarial perturbations; novel objects, which are not described in training data, and contextual biases in training data can degrade performance in surprising ways. For these reasons, it is important to find ways in which general-purpose knowledge can guide connectionist models. We investigate challenges to integrate existing ontologies and knowledge bases with deep learning solutions, and possible approaches for overcoming such challenges. We focus on geo-referenced data such as geo-tagged images and videos that capture outdoor scenery. Geo-knowledge bases are domain specific knowledge bases that contain concepts and relations that describe geographic objects. This work proposes to increase the robustness of automatic scene description and inference by leveraging geo-knowledge bases along with the strengths of deep learning for visual object detection and classification.",
"title": ""
},
{
"docid": "ca57740ceb8496de84299be1e5a57754",
"text": "AIMS\nRheumatic heart disease (RHD) accounts for over a million premature deaths annually; however, there is little contemporary information on presentation, complications, and treatment.\n\n\nMETHODS AND RESULTS\nThis prospective registry enrolled 3343 patients (median age 28 years, 66.2% female) presenting with RHD at 25 hospitals in 12 African countries, India, and Yemen between January 2010 and November 2012. The majority (63.9%) had moderate-to-severe multivalvular disease complicated by congestive heart failure (33.4%), pulmonary hypertension (28.8%), atrial fibrillation (AF) (21.8%), stroke (7.1%), infective endocarditis (4%), and major bleeding (2.7%). One-quarter of adults and 5.3% of children had decreased left ventricular (LV) systolic function; 23% of adults and 14.1% of children had dilated LVs. Fifty-five percent (n = 1761) of patients were on secondary antibiotic prophylaxis. Oral anti-coagulants were prescribed in 69.5% (n = 946) of patients with mechanical valves (n = 501), AF (n = 397), and high-risk mitral stenosis in sinus rhythm (n = 48). However, only 28.3% (n = 269) had a therapeutic international normalized ratio. Among 1825 women of childbearing age (12-51 years), only 3.6% (n = 65) were on contraception. The utilization of valvuloplasty and valve surgery was higher in upper-middle compared with lower-income countries.\n\n\nCONCLUSION\nRheumatic heart disease patients were young, predominantly female, and had high prevalence of major cardiovascular complications. There is suboptimal utilization of secondary antibiotic prophylaxis, oral anti-coagulation, and contraception, and variations in the use of percutaneous and surgical interventions by country income level.",
"title": ""
}
] |
scidocsrr
|
3a7df3460dcdcb5d608a6ede41201ac6
|
Evaluate the Malignancy of Pulmonary Nodules Using the 3D Deep Leaky Noisy-or Network
|
[
{
"docid": "f3e56a991e197428110afbd0fd8ac63e",
"text": "PURPOSE\nThe development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.\n\n\nMETHODS\nSeven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories (\"nodule > or =3 mm,\" \"nodule <3 mm,\" and \"non-nodule > or =3 mm\"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus.\n\n\nRESULTS\nThe Database contains 7371 lesions marked \"nodule\" by at least one radiologist. 2669 of these lesions were marked \"nodule > or =3 mm\" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings.\n\n\nCONCLUSIONS\nThe LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.",
"title": ""
}
] |
[
{
"docid": "cc6b9165f395e832a396d59c85f482cc",
"text": "Vision-based automatic counting of people has widespread applications in intelligent transportation systems, security, and logistics. However, there is currently no large-scale public dataset for benchmarking approaches on this problem. This work fills this gap by introducing the first real-world RGBD People Counting DataSet (PCDS) containing over 4, 500 videos recorded at the entrance doors of buses in normal and cluttered conditions. It also proposes an efficient method for counting people in real-world cluttered scenes related to public transportations using depth videos. The proposed method computes a point cloud from the depth video frame and re-projects it onto the ground plane to normalize the depth information. The resulting depth image is analyzed for identifying potential human heads. The human head proposals are meticulously refined using a 3D human model. The proposals in each frame of the continuous video stream are tracked to trace their trajectories. The trajectories are again refined to ascertain reliable counting. People are eventually counted by accumulating the head trajectories leaving the scene. To enable effective head and trajectory identification, we also propose two different compound features. A thorough evaluation on PCDS demonstrates that our technique is able to count people in cluttered scenes with high accuracy at 45 fps on a 1.7 GHz processor, and hence it can be deployed for effective real-time people counting for intelligent transportation systems.",
"title": ""
},
{
"docid": "2b4dfa33051baf223c1a111980ef9d56",
"text": "A working model of Ontology based chatbot is proposed that handles queries from users for an E-commerce website. It is mainly concerned with providing user the total control over the search result on the website. This chatbot helps the user by mapping relationships of the various entities required by the user, thus providing detailed and accurate information there by overcoming the drawbacks of traditional chatbots. The Ontology template is developed using Protégé which stores the knowledge acquired from the website APIs while the dialog manager is handled using Wit.ai. The integration of the dialog manager and the ontology template is managed through Python. The related response to the query will be formatted and returned to the user on the dialog manager. General Terms Artificial intelligence and Machine learning",
"title": ""
},
{
"docid": "28cfe864acc8c40eb8759261273cf3bb",
"text": "Mobile-edge computing (MEC) has recently emerged as a promising paradigm to liberate mobile devices from increasingly intensive computation workloads, as well as to improve the quality of computation experience. In this paper, we investigate the tradeoff between two critical but conflicting objectives in multi-user MEC systems, namely, the power consumption of mobile devices and the execution delay of computation tasks. A power consumption minimization problem with task buffer stability constraints is formulated to investigate the tradeoff, and an online algorithm that decides the local execution and computation offloading policy is developed based on Lyapunov optimization. Specifically, at each time slot, the optimal frequencies of the local CPUs are obtained in closed forms, while the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method. Performance analysis is conducted for the proposed algorithm, which indicates that the power consumption and execution delay obeys an $\\left[O\\left(1\\slash V\\right),O\\left(V\\right)\\right]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters to the system performance.",
"title": ""
},
{
"docid": "8775af6029924a390cfb51aa17f99a2a",
"text": "Machine learning is increasingly used to make sense of the physical world yet may suffer from adversarial manipulation. We examine the Viola-Jones 2D face detection algorithm to study whether images can be created that humans do not notice as faces yet the algorithm detects as faces. We show that it is possible to construct images that Viola-Jones recognizes as containing faces yet no human would consider a face. Moreover, we show that it is possible to construct images that fool facial detection even when they are printed and then photographed.",
"title": ""
},
{
"docid": "ff10bbde3ed18eea73375540135f99f4",
"text": "Recurrent neural networks (RNNs) represent the state of the art in translation, image captioning, and speech recognition. They are also capable of learning algorithmic tasks such as long addition, copying, and sorting from a set of training examples. We demonstrate that RNNs can learn decryption algorithms – the mappings from plaintext to ciphertext – for three polyalphabetic ciphers (Vigenere, Autokey, and Enigma). Most notably, we demonstrate that an RNN with a 3000-unit Long Short-Term Memory (LSTM) cell can learn the decryption function of the Enigma machine. We argue that our model learns efficient internal representations of these ciphers 1) by exploring activations of individual memory neurons and 2) by comparing memory usage across the three ciphers. To be clear, our work is not aimed at ’cracking’ the Enigma cipher. However, we do show that our model can perform elementary cryptanalysis by running known-plaintext attacks on the Vigenere and Autokey ciphers. Our results indicate that RNNs can learn algorithmic representations of black box polyalphabetic ciphers and that these representations are useful for cryptanalysis.",
"title": ""
},
{
"docid": "5ebdda11fbba5d0633a86f2f52c7a242",
"text": "What is index modulation (IM)? This is an interesting question that we have started to hear more and more frequently over the past few years. The aim of this paper is to answer this question in a comprehensive manner by covering not only the basic principles and emerging variants of IM, but also reviewing the most recent as well as promising advances in this field toward the application scenarios foreseen in next-generation wireless networks. More specifically, we investigate three forms of IM: spatial modulation, channel modulation and orthogonal frequency division multiplexing (OFDM) with IM, which consider the transmit antennas of a multiple-input multiple-output system, the radio frequency mirrors (parasitic elements) mounted at a transmit antenna and the subcarriers of an OFDM system for IM techniques, respectively. We present the up-to-date advances in these three promising frontiers and discuss possible future research directions for IM-based schemes toward low-complexity, spectrum- and energy-efficient next-generation wireless networks.",
"title": ""
},
{
"docid": "3c30209d29779153b4cb33d13d101cf8",
"text": "Acceptance-based interventions such as mindfulness-based stress reduction program and acceptance and commitment therapy are alternative therapies for cognitive behavioral therapy for treating chronic pain patients. To assess the effects of acceptance-based interventions on patients with chronic pain, we conducted a systematic review and meta-analysis of controlled and noncontrolled studies reporting effects on mental and physical health of pain patients. All studies were rated for quality. Primary outcome measures were pain intensity and depression. Secondary outcomes were anxiety, physical wellbeing, and quality of life. Twenty-two studies (9 randomized controlled studies, 5 clinical controlled studies [without randomization] and 8 noncontrolled studies) were included, totaling 1235 patients with chronic pain. An effect size on pain of 0.37 was found for the controlled studies. The effect on depression was 0.32. The quality of the studies was not found to moderate the effects of acceptance-based interventions. The results suggest that at present mindfulness-based stress reduction program and acceptance and commitment therapy are not superior to cognitive behavioral therapy but can be good alternatives. More high-quality studies are needed. It is recommended to focus on therapies that integrate mindfulness and behavioral therapy. Acceptance-based therapies have small to medium effects on physical and mental health in chronic pain patients. These effects are comparable to those of cognitive behavioral therapy.",
"title": ""
},
{
"docid": "4b878ffe2fd7b1f87e2f06321e5f03fa",
"text": "Physical unclonable function (PUF) leverages the immensely complex and irreproducible nature of physical structures to achieve device authentication and secret information storage. To enhance the security and robustness of conventional PUFs, reconfigurable physical unclonable functions (RPUFs) with dynamically refreshable challenge-response pairs (CRPs) have emerged recently. In this paper, we propose two novel physically reconfigurable PUF (P-RPUF) schemes that exploit the process parameter variability and programming sensitivity of phase change memory (PCM) for CRP reconfiguration and evaluation. The first proposed PCM-based P-RPUF scheme extracts its CRPs from the measurable differences of the PCM cell resistances programmed by randomly varying pulses. An imprecisely controlled regulator is used to protect the privacy of the CRP in case the configuration state of the RPUF is divulged. The second proposed PCM-based RPUF scheme produces the random response by counting the number of programming pulses required to make the cell resistance converge to a predetermined target value. The merging of CRP reconfiguration and evaluation overcomes the inherent vulnerability of P-RPUF devices to malicious prediction attacks by limiting the number of accessible CRPs between two consecutive reconfigurations to only one. Both schemes were experimentally evaluated on 180-nm PCM chips. The obtained results demonstrated their quality for refreshable key generation when appropriate fuzzy extractor algorithms are incorporated.",
"title": ""
},
{
"docid": "8acf936fab6889108c37621ca1731080",
"text": "The present study represented a preliminary effort to empirically examine the efficacy of subtitled movie on listening comprehension of intermediate English as a Foreign Language students. To achieve this purpose, out of a total of 200 intermediate students, 90 were picked based on a proficiency test. The material consisted of six episodes (approximately 5 minutes each) of a DVD entitled ‘Wild Weather’. The students viewed only one of the three treatment conditions: English subtitles, Persian subtitles, no subtitles. After each viewing session, six sets of multiple-choice tests were administered to examine listening comprehension rates. The results revealed that the English subtitles group performed at a considerably higher level than the Persian subtitles group, which in turn performed at a substantially higher level than the no subtitle group on the listening test. Introduction With the increasing access to TV, video equipment and more recently, the computers, teachers have found more opportunities to use audio-visual materials at all levels of foreign language teaching, and they have frequently used them effectively in language classes (Kikuchi, 1997, p. 2; see also Canning-Wilson, 2000; Kothari, Pandey & Chudgar, 2004; Lewis & Anping, 2002; Meskill, 1996; Ryan, 1998; Weyer, 1999). In the same line, Richards and Gordon (2004, p. 2) maintain that video, as a medium, enables learners to use visual information to enhance comprehension. It allows learners to observe the gestures, facial expressions and other aspects of body language that accompany speech. It also presents authentic language as well as cultural information about speakers of English. For many years, a widespread view on audio comprehension held that both targetlanguage captions and native-language subtitles were anathema to developing listenBritish Journal of Educational Technology Vol 42 No 1 2011 181–192 doi:10.1111/j.1467-8535.2009.01004.x © 2009 The Authors. British Journal of Educational Technology © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. ing comprehension; but this popular view has not been well tested (Robin, 2007, p. 111). So, more and more English as a Foreign Language (EFL) teachers have begun, in recent years, to use movies in their classes at different levels; however, what has unfairly remained unresolved is the use of subtitles in movies. Teachers of English are sometimes in a dilemma whether they should show a film with or without subtitles and in what language and, above all, which way will benefit their students most in relation to listening comprehension. A huge gap is observed between the use of subtitled films and listening comprehension in academic settings in Iran, too. In order to bridge this gap, the researchers took up this issue and conducted a study in order to determine the role of unsubtitled/subtitled films in language learning/teaching in the Iranian contexts. In fact, the study aimed to find out which of the following is likely to be more effective in developing listening comprehension: bimodal subtitling (English subtitles with English dialogues), standard subtitling (Persian subtitles with English dialogues) or English dialogues with no subtitle. Review of literature Markham (1989) investigated the effects of subtitled TV upon the listening comprehension of beginner, intermediate and advanced learners of English. He used two subtitled videos on topics not known to the learners. 
Each group viewed both movies with and without subtitle. He measured the participants’ comprehension through multiplechoice questions based on the language of the video. Coming to a point where all three groups using the subtitles performed significantly better, he speculated that ESL (English as a Second Language) students might be able to improve their listening and reading comprehension simultaneously (see De Bot, Jagt, Janssen, Kessels & Schils, 1986; Holobow, Lambert & Sayegh, 1984). In a study, Garza (1991) compared Russian and ESL learners’ comprehension of video segments with second language captions to that of video segments without captions. Five segments of authentic American and Russian video on a particular genre of video (drama, comedy, news, animation and music), each between 2 and 4 minutes in length, were selected. A 10-item (multiple-choice) comprehension test was used to measure students’ comprehension of the video segments. A total of 140 students, with varying levels of proficiency in Russian, viewed the captioned or captionless Russian video segments. Comparison of the comprehension test scores of the two groups of students revealed that students who viewed the video segments with captions gained the highest scores. Garza’s data clearly showed that a textually enhanced visual channel, which presents information redundant to that presented by the auditory channel, facilitates students’ comprehension. In their study, Neuman and Koskinen (1992) investigated whether comprehensible input, delivered by captioned television programmes, affected the acquisition of vocabulary and of conceptual knowledge. The participants were children in immersion programmes, and the video material was of science lessons. They picked out 90 of the most difficult words from these video lessons as target words, 10 for each week. Partici182 British Journal of Educational Technology Vol 42 No 1 2011 © 2009 The Authors. British Journal of Educational Technology © 2009 Becta. pants were assigned to one of four treatment groups: captioned TV, TV without captions, reading along and listening to the soundtrack, and reading only. Results for the vocabulary acquisition strand of their study, which used word recognition tests and tests of the words in context sentences, showed that the captioned TV group performed consistently better (see Danan, 1992). Danan (1992) investigated the effects of different subtitling conditions on vocabulary recall. She found, like Holobow et al (1984), that reversed subtitling not only produced the most favourable results, but that bimodal input also positively increased vocabulary recall. The results also showed benefits for beginners using such bimodal input, which was not the case in the study conducted by Holobow et al. She explains the success of reversed subtitling for vocabulary recall through the way in which translation facilitates foreign language encoding and that it may help with the segmentation problems. She continues that students often have difficulty recognising word boundaries in the spoken language, especially if they are not familiar with some of the words. Listening to and reading the text at the same time can at least help students distinguish known from unknown words (Danan, p. 521). Automatic reading of subtitles, however, does not prevent the processing of the soundtrack. To demonstrate this point, d’Ydewalle and Pavakanun (1997, as cited in Kothari et al 2004, p. 
29) carried out another group of cognitive experiments and relied on a double-task technique measuring reaction times to a flashing light during a television programme. Their findings confirmed the value of bimodal L2 input for intermediate-advanced levels of L2 learners. According to their study, the slower reactions in the presence of both sound and subtitles suggested that more complex, simultaneous processing of the soundtrack and the subtitles was occurring. According to them, with both subtitles and sound, attention seemed in fact to be divided between the two according to the viewers’ needs, with more time usually devoted to subtitles for the processing of complex information. One study by Koostra, Jonannes and Beentjes (1999) focused on 246 Dutch children in Grade 4 (before any formal instruction in English) and Grade 6 (following 1 year of English at school) after they watched a 15-minute American documentary shown twice with or without subtitles. The study demonstrated that children acquired more English vocabulary from watching subtitled television, although even children in the condition without subtitles learned some new words. Children in the subtitled condition also performed significantly better on a word recognition test, consisting of words heard in the soundtrack and words that could have been used in the context of the particular programme. To examine the effect of captioning on aural word recognition skills, Markham (1999) designed another experiment involving multiple-choice tests administered orally. A total of 118 advanced ESL students watched two short video programmes (12 and 13 minutes respectively) with or without captions. In the subsequent listening tests, participants heard sentences directly taken from the script and immediately followed by four single words (one key word which belonged to a sentence just heard and three distracters). The tests showed that the availability of subtitles during the screening significantly improved the students’ ability to identify the key words when they subsequently heard them again. To test how subtitling affects listening ability regardless of semantic information, so as to assess recognition memory in relation to sound alone, Bird and Williams (2002) focused on the implicit and explicit learning of spoken words and non-words. Implicit learning pertained to auditory word recognition while explicit learning referred to the intentional recollection and conscious retention of aural stimuli. A first experiment with 16 English native and 16 advanced non-native speakers demonstrated that participants in the captioned condition were better able to implicitly retain the phonological information they had just processed. They also showed superior explicit recognition memory when asked to aurally identify words that had been presented in a previous phase. A second experim",
"title": ""
},
{
"docid": "9e656ee8b73eef700080901ea74e3839",
"text": "The challenge of extending the autonomy in AUV deployments is one of the most important issues in oceanographic research today. The possibility of maintaining a team of AUV under deployment in a defined area of interest for a long period could provide an additional source of information [8]. All this data in combination with the measures provided by buoys and sea gliders used for slow motion and long range operations will be very valuable. A group of low cost AUV's in alternative automatic switching system navigation-charging operation, could allow a kind of continuous surveying operation. This work is the continuation of the ideas that some of the authors previously presented in the AUV 2010 conference at MBARI [8]. At this conference was proposed the great interest for researching oceanic processes on two areas near Cartagena, Spain: cape Tiñoso and the Mar Menor a shallow coastal lagoon. Both areas require a different research structure configuration because of their opposite characteristics. The Mar Menor is a shallow salty lagoon 20 miles long with 7 m of maximum depth and particular features. This lagoon seems to present a sort of oceanic behavior and can be compared with the major oceans but a minor scale. The second area considered is cape Tiñoso, a very deep area in the Mediterranean Sea where the presence of a self-break provides an interesting potential for the research of the effect of upwelling currents.",
"title": ""
},
{
"docid": "cf8e0226564c4e94b378e6117b89ad7d",
"text": "Developing CPU scheduling algorithms and understanding their impact in practice can be difficult and time consuming due to the need to modify and test operating system kernel code and measure the resulting performance on a consistent workload of real applications. As processor is the important resource, CPU scheduling becomes very important in accomplishing the operating system (OS) design goals. The intention should be allowed as many as possible running processes at all time in order to make best use of CPU. This paper presents a state diagram that depicts the comparative study of various scheduling algorithms for a single CPU and shows which algorithm is best for the particular situation. Using this representation, it becomes much easier to understand what is going on inside the system and why a different set of processes is a candidate for the allocation of the CPU at different time. The objective of the study is to analyze the high efficient CPU scheduler on design of the high quality scheduling algorithms which suits the scheduling goals.",
"title": ""
},
{
"docid": "5345678a15bd57fa3e073dee2ff82c0b",
"text": "A data evolution life cycle typically consists of four stages of data activities: collection, organization, presentation, and application. During the cycle, data evolve as per the needs and specifications of a theory. In this paper we propose a concept of theory-specific data quality (DQ) that stipulates DQ is defined and measured as the extent to which data meet the needs and specifications of a theory. Depending on when a theory is applied during data evolution, we define DQ respectively at collection, organization, presentation, and application levels. We derive measurement attributes based on a fishbone cause-effect diagram. We compare our models with existing ones and point our future research directions to further validate and apply the proposed measurement models.",
"title": ""
},
{
"docid": "1011ab88dfc0dba5dd773f9c50c4d8cc",
"text": "Cataract is the leading cause of blindness and posterior subcapsular cataract (PSC) leads to significant visual impairment. An automatic approach for detecting PSC opacity in retro-illumination images is investigated. The features employed include intensity, edge, size and spatial location. The system was tested using 441 images. The automatic detection was compared with the human expert. The sensitivity and specificity are 82.6% and 80% respectively. The preliminary research indicates it is feasible to apply automatic detection in the clinical screening of PSC in the future.",
"title": ""
},
{
"docid": "5227121a2feb59fc05775e2623239da9",
"text": "BACKGROUND\nCriminal offenders with a diagnosis of psychopathy or borderline personality disorder (BPD) share an impulsive nature but tend to differ in their style of emotional response. This study aims to use multiple psychophysiologic measures to compare emotional responses to unpleasant and pleasant stimuli.\n\n\nMETHODS\nTwenty-five psychopaths as defined by the Hare Psychopathy Checklist and 18 subjects with BPD from 2 high-security forensic treatment facilities were included in the study along with 24 control subjects. Electrodermal response was used as an indicator of emotional arousal, modulation of the startle reflex as a measure of valence, and electromyographic activity of the corrugator muscle as an index of emotional expression.\n\n\nRESULTS\nCompared with controls, psychopaths were characterized by decreased electrodermal responsiveness, less facial expression, and the absence of affective startle modulation. A higher percentage of psychopaths showed no startle reflex. Subjects with BPD showed a response pattern very similar to that of controls, ie, they showed comparable autonomic arousal, and their startle responses were strongest to unpleasant slides and weakest to pleasant slides. However, corrugator electromyographic activity in subjects with BPD demonstrated little facial modulation when they viewed either pleasant or unpleasant slides.\n\n\nCONCLUSIONS\nThe results support the theory that psychopaths are characterized by a pronounced lack of fear in response to aversive events. Furthermore, the results suggest a general deficit in processing affective information, regardless of whether stimuli are negative or positive. Emotional hyporesponsiveness was specific to psychopaths, since results for offenders with BPD indicate a widely adequate processing of emotional stimuli.",
"title": ""
},
{
"docid": "b18bb896338bdfddfd0a3e0a0518e8fe",
"text": "Recent studies have shown that deep neural networks (DNN) are vulnerable to adversarial samples: maliciously-perturbed samples crafted to yield incorrect model outputs. Such attacks can severely undermine DNN systems, particularly in security-sensitive settings. It was observed that an adversary could easily generate adversarial samples by making a small perturbation on irrelevant feature dimensions that are unnecessary for the current classification task. To overcome this problem, we introduce a defensive mechanism called DeepMask. By identifying and removing unnecessary features in a DNN model, DeepMask limits the capacity an attacker can use generating adversarial samples and therefore increase the robustness against such inputs. Comparing with other defensive approaches, DeepMask is easy to implement and computationally efficient. Experimental results show that DeepMask can increase the performance of state-of-the-art DNN models against adversarial samples.",
"title": ""
},
{
"docid": "d473619f76f81eced041df5bc012c246",
"text": "Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness, and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects, such as photometric calibration, motion bias, and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based, and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and developing new algorithms of VO and SLAM. Conclusions (some of which are counterintuitive) are drawn with both technical and empirical analyses to all of our experiments. Possible improvements on existing methods are directed or proposed, such as a subpixel accuracy refinement of oriented fast and rotated brief (ORB)-SLAM, which boosts its performance.",
"title": ""
},
{
"docid": "43fc5fee6e45f32b449312b0f7fa3101",
"text": "BACKGROUND\nMuch of the developing world, particularly sub-Saharan Africa, exhibits high levels of morbidity and mortality associated with diarrhea, acute respiratory infection, and malaria. With the increasing awareness that the aforementioned infectious diseases impose an enormous burden on developing countries, public health programs therein could benefit from parsimonious general-purpose forecasting methods to enhance infectious disease intervention. Unfortunately, these disease time-series often i) suffer from non-stationarity; ii) exhibit large inter-annual plus seasonal fluctuations; and, iii) require disease-specific tailoring of forecasting methods.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn this longitudinal retrospective (01/1996-06/2004) investigation, diarrhea, acute respiratory infection of the lower tract, and malaria consultation time-series are fitted with a general-purpose econometric method, namely the multiplicative Holt-Winters, to produce contemporaneous on-line forecasts for the district of Niono, Mali. This method accommodates seasonal, as well as inter-annual, fluctuations and produces reasonably accurate median 2- and 3-month horizon forecasts for these non-stationary time-series, i.e., 92% of the 24 time-series forecasts generated (2 forecast horizons, 3 diseases, and 4 age categories = 24 time-series forecasts) have mean absolute percentage errors circa 25%.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThe multiplicative Holt-Winters forecasting method: i) performs well across diseases with dramatically distinct transmission modes and hence it is a strong general-purpose forecasting method candidate for non-stationary epidemiological time-series; ii) obliquely captures prior non-linear interactions between climate and the aforementioned disease dynamics thus, obviating the need for more complex disease-specific climate-based parametric forecasting methods in the district of Niono; furthermore, iii) readily decomposes time-series into seasonal components thereby potentially assisting with programming of public health interventions, as well as monitoring of disease dynamics modification. Therefore, these forecasts could improve infectious diseases management in the district of Niono, Mali, and elsewhere in the Sahel.",
"title": ""
},
{
"docid": "c71b4a8d6d9ffc64c9e86aab40d9784f",
"text": "Voice impersonation is not the same as voice transformation, although the latter is an essential element of it. In voice impersonation, the resultant voice must convincingly convey the impression of having been naturally produced by the target speaker, mimicking not only the pitch and other perceivable signal qualities, but also the style of the target speaker. In this paper, we propose a novel neural-network based speech quality- and style-mimicry framework for the synthesis of impersonated voices. The framework is built upon a fast and accurate generative adversarial network model. Given spectrographic representations of source and target speakers' voices, the model learns to mimic the target speaker's voice quality and style, regardless of the linguistic content of either's voice, generating a synthetic spectrogram from which the time-domain signal is reconstructed using the Griffin-Lim method. In effect, this model reframes the well-known problem of style-transfer for images as the problem of style-transfer for speech signals, while intrinsically addressing the problem of durational variability of speech sounds. Experiments demonstrate that the model can generate extremely convincing samples of impersonated speech. It is even able to impersonate voices across different genders effectively. Results are qualitatively evaluated using standard procedures for evaluating synthesized voices.",
"title": ""
},
{
"docid": "b4409a8e8a47bc07d20cebbfaccb83fd",
"text": "We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals.",
"title": ""
}
] |
scidocsrr
|
75a800343177a572e7d7e368a2bb87af
|
Probability of stroke: a risk profile from the Framingham Study.
|
[
{
"docid": "cf506587f2699d88e4a2e0be36ccac41",
"text": "A complete list of the titles in this series appears at the end of this volume. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.",
"title": ""
}
] |
[
{
"docid": "afe24ba1c3f3423719a98e1a69a3dc70",
"text": "This brief presents a nonisolated multilevel linear amplifier with nonlinear component (LINC) power amplifier (PA) implemented in a standard 0.18-μm complementary metal-oxide- semiconductor process. Using a nonisolated power combiner, the overall power efficiency is increased by reducing the wasted power at the combined out-phased signal; however, the efficiency at low power still needs to be improved. To further improve the efficiency of the low-power (LP) mode, we propose a multiple-output power-level LINC PA, with load modulation implemented by switches. In addition, analysis of the proposed design on the system level as well as the circuit level was performed to optimize its performance. The measurement results demonstrate that the proposed technique maintains more than 45% power-added efficiency (PAE) for peak power at 21 dB for the high-power mode and 17 dBm for the LP mode at 600 MHz. The PAE for a 6-dB peak-to-average ratio orthogonal frequency-division multiplexing modulated signal is higher than 24% PAE in both power modes. To the authors' knowledge, the proposed output-phasing PA is the first implemented multilevel LINC PA that uses quarter-wave lines without multiple power supply sources.",
"title": ""
},
{
"docid": "5318baa10a6db98a0f31c6c30fdf6104",
"text": "In image analysis, the images are often represented by multiple visual features (also known as multiview features), that aim to better interpret them for achieving remarkable performance of the learning. Since the processes of feature extraction on each view are separated, the multiple visual features of images may include overlap, noise, and redundancy. Thus, learning with all the derived views of the data could decrease the effectiveness. To address this, this paper simultaneously conducts a hierarchical feature selection and a multiview multilabel (MVML) learning for multiview image classification, via embedding a proposed a new block-row regularizer into the MVML framework. The block-row regularizer concatenating a Frobenius norm (F-norm) regularizer and an l2,1-norm regularizer is designed to conduct a hierarchical feature selection, in which the F-norm regularizer is used to conduct a high-level feature selection for selecting the informative views (i.e., discarding the uninformative views) and the 12,1-norm regularizer is then used to conduct a low-level feature selection on the informative views. The rationale of the use of a block-row regularizer is to avoid the issue of the over-fitting (via the block-row regularizer), to remove redundant views and to preserve the natural group structures of data (via the F-norm regularizer), and to remove noisy features (the 12,1-norm regularizer), respectively. We further devise a computationally efficient algorithm to optimize the derived objective function and also theoretically prove the convergence of the proposed optimization method. Finally, the results on real image datasets show that the proposed method outperforms two baseline algorithms and three state-of-the-art algorithms in terms of classification performance.",
"title": ""
},
{
"docid": "a999bf3da879dde7fc2acb8794861daf",
"text": "Most OECD Member countries have sought to renew their systems and structures of public management in the last 10-15 years. Some started earlier than others and the emphasis will vary among Member countries according to their historic traditions and institutions. There is no single best model of public management, but what stands out most clearly is the extent to which countries have pursued and are pursuing broadly common approaches to public management reform. This is most probably because countries have been responding to essentially similar pressures to reform.",
"title": ""
},
{
"docid": "8c3ecd27a695fef2d009bbf627820a0d",
"text": "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude.",
"title": ""
},
{
"docid": "3dfa264bb5b7e4620a2a9efa70c99db4",
"text": "Recent advances in pattern analysis techniques together with the advent of miniature vibration sensors and high speed data acquisition technologies provide a unique opportunity to develop and implement in-situ, beneficent, and non-intrusive condition monitoring and quality assessment methods for a broad range of rotating machineries. This invited paper provides an overview of such a framework. It provides a review of classical methods used in vibration signal processing in both time and frequency domain. Subsequently, a collection of recent computational intelligence based methods in this problem domain with case studies using both single and multi-dimensional signals is presented. The datasets used in these case studies have been acquired from a variety of real-life problems 1 Vibration and Condition Monitoring Vibration signals provide useful information that leads to insights on the operating condition of the equipment under test [1, 2]. By inspecting the physical characteristics of the vibration signals, one is able to detect the presence of a fault in an operating machine, to localise the position of a crack in gear, to diagnose the health state of a ball bearing, etc. For decades, researchers are looking at means to diagnose automatically the health state of rotating machines, from the smaller bearings and gears to the larger combustion engines and turbines. With the advent of wireless technologies and miniature transducers, we are now able to monitor machine operating condition in real time and, with the aid of computational intelligence and pattern recognition technique, in an automated fashion. This paper draws from a collection of past and recent works in the area of automatic machine condition monitoring using vibration signals. Typically, vibration signals are acquired through vibration sensors. The three main classes of vibration sensors are displacement sensors, velocity sensors, and accelerometers. Displacement sensors can be non-contact sensors as in the case of optical sensors and they are more sensitive in the lower frequency range, typically less than 1 kHz. Velocity sensors, on the other hand, operate more effectively with flat amplitude response in the 10 Hz to 2 kHz range. Among these sensors, accelerometers have the best amplitude response in the high frequency range up to tens of kHz. Usually, accelerometers are built using capacitive sensing, or more commonly, a piezoelectric mechanism. Accelerometers are usually light weight ranging from 0.4 gram to 50 gram. 1.1 Advantages of vibration signal monitoring Vibration signal processing has some obvious advantages. First, vibration sensors are non-intrusive, and at times non-contact. As such, we can perform diagnostic in a non-destructive manner. Second, vibration signals can be obtained online and in-situ. This is a desired feature for production lines. The trending capability also provides means to predictive maintenance of the machineries. As such, unnecessary downtime for preventive maintenance can be minimized. Third, the vibration sensors are inexpensive and widely available. Modern mobile smart devices are equipped with one tri-axial accelerometer typically. Moreover, the technologies to acquire and convert the analogue outputs from the sensors are affordable nowadays. Last but not least, techniques for diagnosing a wide range",
"title": ""
},
{
"docid": "bc018ef7cbcf7fc032fe8556016d08b1",
"text": "This paper presents a simple, efficient, yet robust approach, named joint-scale local binary pattern (JLBP), for texture classification. In the proposed approach, the joint-scale strategy is developed firstly, and the neighborhoods of different scales are fused together by a simple arithmetic operation. And then, the descriptor is extracted from the mutual integration of the local patches based on the conventional local binary pattern (LBP). The proposed scheme can not only describe the micro-textures of a local structure, but also the macro-textures of a larger area because of the joint of multiple scales. Further, motivated by the completed local binary pattern (CLBP) scheme, the completed JLBP (CJLBP) is presented to enhance its power. The proposed descriptor is evaluated in relation to other recent LBP-based patterns and non-LBP methods on popular benchmark texture databases, Outex, CURet and UIUC. Generally, the experimental results show that the new method performs better than the state-of-the-art techniques.",
"title": ""
},
{
"docid": "a0f4b7f3f9f2a5d430a3b8acead2b746",
"text": "Imagining a scene described in natural language with realistic layout and appearance of entities is the ultimate test of spatial, visual, and semantic world knowledge. Towards this goal, we present the Composition, Retrieval and Fusion Network (Craft), a model capable of learning this knowledge from video-caption data and applying it while generating videos from novel captions. Craft explicitly predicts a temporal-layout of mentioned entities (characters and objects), retrieves spatio-temporal entity segments from a video database and fuses them to generate scene videos. Our contributions include sequential training of components of Craft while jointly modeling layout and appearances, and losses that encourage learning compositional representations for retrieval. We evaluate Craft on semantic fidelity to caption, composition consistency, and visual quality. Craft outperforms direct pixel generation approaches and generalizes well to unseen captions and to unseen video databases with no text annotations. We demonstrate Craft on Flintstones, a new richly annotated video-caption dataset with over 25000 videos. For a glimpse of videos generated by Craft, see https://youtu.be/688Vv86n0z8. Fred wearing a red hat is walking in the living room Retrieve Compose Retrieve Compose Retrieve Pebbles is sitting at a table in a room watching the television Retrieve Compose Retrieve Compose Compose Retrieve Retrieve Fuse",
"title": ""
},
{
"docid": "274373d46b748d92e6913496507353b1",
"text": "This paper introduces a blind watermarking based on a convolutional neural network (CNN). We propose an iterative learning framework to secure robustness of watermarking. One loop of learning process consists of the following three stages: Watermark embedding, attack simulation, and weight update. We have learned a network that can detect a 1-bit message from a image sub-block. Experimental results show that this learned network is an extension of the frequency domain that is widely used in existing watermarking scheme. The proposed scheme achieved robustness against geometric and signal processing attacks with a learning time of one day.",
"title": ""
},
{
"docid": "90d82110c2b10c98c5cb99d68ebb9df3",
"text": "Purpose – The purpose of this paper is to investigate the demographic characteristics of small and medium enterprises (SMEs) with regards to their patterns of internet-based information and communications technology (ICT) adoption, taking into account the dimensions of ICT benefits, barriers, and subsequently adoption intention. Design/methodology/approach – A questionnaire-based survey is used to collect data from 406 managers or owners of SMEs in Malaysia. Findings – The results reveal that the SMEs would adopt internet-based ICT regardless of years of business start-up and internet experience. Some significant differences are spotted between manufacturing and service SMEs in terms of their demographic characteristics and internet-based ICT benefits, barriers, and adoption intention. Both the industry types express intention to adopt internet-based ICT, with the service-based SMEs demonstrating greater intention. Research limitations/implications – The paper focuses only on the SMEs in the southern region of Malaysia. Practical implications – The findings offer valuable insights to the SMEs – in particular promoting internet-based ICT adoption for future business success. Originality/value – This paper is perhaps one of the first to comprehensively investigate the relationship between demographic characteristics of SMEs and the various variables affecting their internet-based ICT adoption intention.",
"title": ""
},
{
"docid": "4d4540a59e637f9582a28ed62083bfd6",
"text": "Targeted sentiment analysis classifies the sentiment polarity towards each target entity mention in given text documents. Seminal methods extract manual discrete features from automatic syntactic parse trees in order to capture semantic information of the enclosing sentence with respect to a target entity mention. Recently, it has been shown that competitive accuracies can be achieved without using syntactic parsers, which can be highly inaccurate on noisy text such as tweets. This is achieved by applying distributed word representations and rich neural pooling functions over a simple and intuitive segmentation of tweets according to target entity mentions. In this paper, we extend this idea by proposing a sentencelevel neural model to address the limitation of pooling functions, which do not explicitly model tweet-level semantics. First, a bi-directional gated neural network is used to connect the words in a tweet so that pooling functions can be applied over the hidden layer instead of words for better representing the target and its contexts. Second, a three-way gated neural network structure is used to model the interaction between the target mention and its surrounding contexts. Experiments show that our proposed model gives significantly higher accuracies compared to the current best method for targeted sentiment analysis.",
"title": ""
},
{
"docid": "641fa9e397e1ce6e320ec4cacfd3064f",
"text": "Machine translation is a natural candidate problem for reinforcement learning from human feedback: users provide quick, dirty ratings on candidate translations to guide a system to improve. Yet, current neural machine translation training focuses on expensive human-generated reference translations. We describe a reinforcement learning algorithm that improves neural machine translation systems from simulated human feedback. Our algorithm combines the advantage actor-critic algorithm (Mnih et al., 2016) with the attention-based neural encoderdecoder architecture (Luong et al., 2015). This algorithm (a) is well-designed for problems with a large action space and delayed rewards, (b) effectively optimizes traditional corpus-level machine translation metrics, and (c) is robust to skewed, high-variance, granular feedback modeled after actual human behaviors.",
"title": ""
},
{
"docid": "8d40a30ba43e055cf830af0514f01c9d",
"text": "The rapid growth of data size and accessibility in recent years has instigated a shift of philosophy in algorithm design for artificial intelligence. Instead of engineering algorithms by hand, the ability to learn composable systems automatically from massive amounts of data has led to groundbreaking performance in important domains such as computer vision, speech recognition, and natural language processing. The most popular class of techniques used in these domains is called deep learning , and is seeing significant attention from industry. However, these models require incredible amounts of data and compute power to train, and are limited by the need for better hardware acceleration to accommodate scaling beyond current data and model sizes. While the current solution has been to use clusters of graphics processing units (GPU) as general purpose processors (GPGPU), the use of field programmable gate arrays (FPGA) provide an interesting alternative. Current trends in design tools for FPGAs have made them more compatible with the high-level software practices typically practiced in the deep learning community, making FPGAs more accessible to those who build and deploy models. Since FPGA architectures are flexible, this could also allow researchers the ability to explore model-level optimizations beyond what is possible on fixed architectures such as GPUs. As well, FPGAs tend to provide high performance per watt of power consumption, which is of particular importance for application scientists interested in large scale server-based deployment or resource-limited embedded applications. This review takes a look at deep learning and FPGAs from a hardware acceleration perspective, identifying trends and innovations that make these technologies a natural fit, and motivates a discussion on how FPGAs may best serve the needs of the deep learning community moving forward.",
"title": ""
},
{
"docid": "7e4a4e76ba976a24151b243148a2feb4",
"text": "Amodel based clustering procedure for data of mixed type, clustMD, is developed using a latent variable model. It is proposed that a latent variable, following a mixture of Gaussian distributions, generates the observed data of mixed type. The observed data may be any combination of continuous, binary, ordinal or nominal variables. clustMD employs a parsimonious covariance structure for the latent variables, leading to a suite of six clustering models that vary in complexity and provide an elegant and unified approach to clustering mixed data. An expectation maximisation (EM) algorithm is used to estimate clustMD; in the presence of nominal data a Monte Carlo EM algorithm is required. The clustMD model is illustrated by clustering simulated mixed type data and prostate cancer patients, on whom mixed data have been recorded.",
"title": ""
},
{
"docid": "2d54a447df50a31c6731e513bfbac06b",
"text": "Lumbar intervertebral disc diseases are among the main causes of lower back pain (LBP). Desiccation is a common disease resulting from various reasons and ultimately most people are affected by desiccation at some age. We propose a probabilistic model that incorporates intervertebral disc appearance and contextual information for automating the diagnosis of lumbar disc desiccation. We utilize a Gibbs distribution for processing localized lumbar intervertebral discs' appearance and contextual information. We use 55 clinical T2-weighted MRI for lumbar area and achieve over 96% accuracy on a cross validation experiment.",
"title": ""
},
{
"docid": "40c9250b3fb527425138bc41acf8fd4e",
"text": "Noise pollution is a major problem in cities around the world. The current methods to assess it neglect to represent the real exposure experienced by the citizens themselves, and therefore could lead to wrong conclusions and a biased representations. In this paper we present a novel approach to monitor noise pollution involving the general public. Using their mobile phones as noise sensors, we provide a low cost solution for the citizens to measure their personal exposure to noise in their everyday environment and participate in the creation of collective noise maps by sharing their geo-localized and annotated measurements with the community. Our prototype, called NoiseTube, can be found online [1].",
"title": ""
},
{
"docid": "db41f44f0ecccdd1828ac2789c2cedc9",
"text": "Porter’s generic strategy matrix, which highlights cost leadership, differentiation and focus as the three basic choices for firms, has dominated corporate competitive strategy for the last thirty years. According to this model, a venture can choose how it wants to compete, based on the match between its type of competitive advantage and the market target pursued, as the key determinants of choice (Akan, Allen, Helms & Spralls, 2006:43).",
"title": ""
},
{
"docid": "4fb301cffa66f37c07bd6c44a108e142",
"text": "Unambiguous identities of resources are important aspect for semantic web. This paper addresses the personal identity issue in the context of bibliographies. Because of abbreviations or misspelling of names in publications or bibliographies, an author may have multiple names and multiple authors may share the same name. Such name ambiguity affects the performance of identity matching, document retrieval and database federation, and causes improper attribution of research credit. This paper describes a new K-means clustering algorithm based on an extensible Naïve Bayes probability model to disambiguate authors with the same first name initial and last name in the bibliographies and proposes a canonical name. The model captures three types of bibliographic information: coauthor names, the title of the paper and the title of the journal or proceeding. The algorithm achieves best accuracies of 70.1% and 73.6% on disambiguating 6 different J Anderson s and 9 different \"J Smith\" s based on the citations collected from researchers publication web pages.",
"title": ""
},
{
"docid": "ab0d19b1cb4a0f5d283f67df35c304f4",
"text": "OBJECTIVE\nWe compared temperament and character traits in children and adolescents with bipolar disorder (BP) and healthy control (HC) subjects.\n\n\nMETHOD\nSixty nine subjects (38 BP and 31 HC), 8-17 years old, were assessed with the Kiddie Schedule for Affective Disorders and Schizophrenia-Present and Lifetime. Temperament and character traits were measured with parent and child versions of the Junior Temperament and Character Inventory.\n\n\nRESULTS\nBP subjects scored higher on novelty seeking, harm avoidance, and fantasy subscales, and lower on reward dependence, persistence, self-directedness, and cooperativeness compared to HC (all p < 0.007), by child and parent reports. These findings were consistent in both children and adolescents. Higher parent-rated novelty seeking, lower self-directedness, and lower cooperativeness were associated with co-morbid attention-deficit/hyperactivity disorder (ADHD). Lower parent-rated reward dependence was associated with co-morbid conduct disorder, and higher child-rated persistence was associated with co-morbid anxiety.\n\n\nCONCLUSIONS\nThese findings support previous reports of differences in temperament in BP children and adolescents and may assist in a greater understating of BP children and adolescents beyond mood symptomatology.",
"title": ""
},
{
"docid": "96a280588f4f5e61a4470ffc1277efa9",
"text": "Hyperspectral data acquired from field-based platforms present new challenges for their analysis, particularly for complex vertical surfaces exposed to large changes in the geometry and intensity of illumination. The use of hyperspectral data to map rock types on a vertical mine face is demonstrated, with a view to providing real-time information for automated mining applications. The performance of two classification techniques, namely, spectral angle mapper (SAM) and support vector machines (SVMs), is compared rigorously using a spectral library acquired under various conditions of illumination. SAM and SVM are then applied to a mine face, and results are compared with geological boundaries mapped in the field. Effects of changing conditions of illumination, including shadow, were investigated by applying SAM and SVM to imagery acquired at different times of the day. As expected, classification of the spectral libraries showed that, on average, SVM gave superior results for SAM, although SAM performed better where spectra were acquired under conditions of shadow. In contrast, when applied to hypserspectral imagery of a mine face, SVM did not perform as well as SAM. Shadow, through its impact upon spectral curve shape and albedo, had a profound impact on classification using SAM and SVM.",
"title": ""
},
{
"docid": "f90e6d3084733994935fcbee64286aec",
"text": "To find the position of an acoustic source in a room, typically, a set of relative delays among different microphone pairs needs to be determined. The generalized cross-correlation (GCC) method is the most popular to do so and is well explained in a landmark paper by Knapp and Carter. In this paper, the idea of cross-correlation coefficient between two random signals is generalized to the multichannel case by using the notion of spatial prediction. The multichannel spatial correlation matrix is then deduced and its properties are discussed. We then propose a new method based on the multichannel spatial correlation matrix for time delay estimation. It is shown that this new approach can take advantage of the redundancy when more than two microphones are available and this redundancy can help the estimator to better cope with noise and reverberation.",
"title": ""
}
] |
scidocsrr
|
4562553f10e039c1f88b0b00caa38a37
|
Parallel matrix factorization for low-rank tensor completion
|
[
{
"docid": "36f2be7a14eeb10ad975aa00cfd30f36",
"text": "Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires Ω(rnK−1) observations. In contrast, a certain (intractable) nonconvex formulation needs only O(r +nrK) observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with O(rbK/2cndK/2e) observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. sparse, low rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach via minimizing the sum of individual sparsity inducing norms (e.g. l1, nuclear norm). Our new formulation for low-rank tensor recovery however opens the possibility in reducing the sample complexity by exploiting several structures jointly.",
"title": ""
},
{
"docid": "d97e9181f01f195c0b299ce8893ddbbd",
"text": "Linear algebra is a powerful and proven tool in Web search. Techniques, such as the PageRank algorithm of Brin and Page and the HITS algorithm of Kleinberg, score Web pages based on the principal eigenvector (or singular vector) of a particular non-negative matrix that captures the hyperlink structure of the Web graph. We propose and test a new methodology that uses multilinear algebra to elicit more information from a higher-order representation of the hyperlink graph. We start by labeling the edges in our graph with the anchor text of the hyperlinks so that the associated linear algebra representation is a sparse, three-way tensor. The first two dimensions of the tensor represent the Web pages while the third dimension adds the anchor text. We then use the rank-1 factors of a multilinear PARAFAC tensor decomposition, which are akin to singular vectors of the SVD, to automatically identify topics in the collection along with the associated authoritative Web pages.",
"title": ""
}
] |
[
{
"docid": "e66f7a7e3fcb833edde92bba24cb7145",
"text": "Essential oils are complex blends of a variety of volatile molecules such as terpenoids, phenol-derived aromatic components, and aliphatic components having a strong interest in pharmaceutical, sanitary, cosmetic, agricultural, and food industries. Since the middle ages, essential oils have been widely used for bactericidal, virucidal, fungicidal, antiparasitical, insecticidal, and other medicinal properties such as analgesic, sedative, anti-inflammatory, spasmolytic, and locally anaesthetic remedies. In this review their nanoencapsulation in drug delivery systems has been proposed for their capability of decreasing volatility, improving the stability, water solubility, and efficacy of essential oil-based formulations, by maintenance of therapeutic efficacy. Two categories of nanocarriers can be proposed: polymeric nanoparticulate formulations, extensively studied with significant improvement of the essential oil antimicrobial activity, and lipid carriers, including liposomes, solid lipid nanoparticles, nanostructured lipid particles, and nano- and microemulsions. Furthermore, molecular complexes such as cyclodextrin inclusion complexes also represent a valid strategy to increase water solubility and stability and bioavailability and decrease volatility of essential oils.",
"title": ""
},
{
"docid": "4ac3affdf995c4bb527229da0feb411d",
"text": "Algorand is a new cryptocurrency that confirms transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is temporarily partitioned. In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence.\n Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand's BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed.\n We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 125x Bitcoin's throughput, and incurs almost no penalty for scaling to more users.",
"title": ""
},
{
"docid": "2f2b45468b05bf5c7b4666006df1389b",
"text": "If an outbound flow is observed at the boundary of a protected network, destined to an IP address within a few addresses of a known malicious IP address, should it be considered a suspicious flow? Conventional blacklisting is not going to cut it in this situation, and the established fact that malicious IP addresses tend to be highly clustered in certain portions of IP address space, should indeed raise suspicions. We present a new approach for perimeter defense that addresses this concern. At the heart of our approach, we attempt to infer internal, hidden boundaries in IP address space, that lie within publicly known boundaries of registered IP netblocks. Our hypothesis is that given a known bad IP address, other IP address in the same internal contiguous block are likely to share similar security properties, and may therefore be vulnerable to being similarly hacked and used by attackers in the future. In this paper, we describe how we infer hidden internal boundaries in IPv4 netblocks, and what effect this has on being able to predict malicious IP addresses.",
"title": ""
},
{
"docid": "352ae5b752217faa02c20a93f110bcd6",
"text": "This paper serves to prove the thesis that a computational trick can open entirely new approaches to theory. We illustrate by describing such random matrix techniques as the stochastic operator approach, the method of ghosts and shadows, and the method of “Riccatti Diffusion/Sturm Sequences,” giving new insights into the deeper mathematics underneath random matrix theory.",
"title": ""
},
{
"docid": "9c35b7e3bf0ef3f3117c6ba8a9ad1566",
"text": "Stochastic gradient descent (SGD) is a widely used optimization algorithm in machine learning. In order to accelerate the convergence of SGD, a few advanced techniques have been developed in recent years, including variance reduction, stochastic coordinate sampling, and Nesterov’s acceleration method. Furthermore, in order to improve the training speed and/or leverage larger-scale training data, asynchronous parallelization of SGD has also been studied. Then, a natural question is whether these techniques can be seamlessly integrated with each other, and whether the integration has desirable theoretical guarantee on its convergence. In this paper, we provide our formal answer to this question. In particular, we consider the asynchronous parallelization of SGD, accelerated by leveraging variance reduction, coordinate sampling, and Nesterov’s method. We call the new algorithm asynchronous accelerated SGD (AASGD). Theoretically, we proved a convergence rate of AASGD, which indicates that (i) the three acceleration methods are complementary to each other and can make their own contributions to the improvement of convergence rate; (ii) asynchronous parallelization does not hurt the convergence rate, and can achieve considerable speedup under appropriate parameter setting. Empirically, we tested AASGD on a few benchmark datasets. The experimental results verified our theoretical findings and indicated that AASGD could be a highly effective and efficient algorithm for practical use.",
"title": ""
},
{
"docid": "818c075d79a51fcab4c38031f14a98ef",
"text": "This paper presents a statistical approach to collaborative ltering and investigates the use of latent class models for predicting individual choices and preferences based on observed preference behavior. Two models are discussed and compared: the aspect model, a probabilistic latent space model which models individual preferences as a convex combination of preference factors, and the two-sided clustering model, which simultaneously partitions persons and objects into clusters. We present EM algorithms for di erent variants of the aspect model and derive an approximate EM algorithmbased on a variational principle for the two-sided clustering model. The bene ts of the di erent models are experimentally investigated on a large movie data set.",
"title": ""
},
{
"docid": "353bbc5e68ec1d53b3cd0f7c352ee699",
"text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.",
"title": ""
},
{
"docid": "f1f424a703eefaabe8c704bd07e21a21",
"text": "It is more convincing for users to have their own 3-D body shapes in the virtual fitting room when they shop clothes online. However, existing methods are limited for ordinary users to efficiently and conveniently access their 3-D bodies. We propose an efficient data-driven approach and develop an android application for 3-D body customization. Users stand naturally and their photos are taken from front and side views with a handy phone camera. They can wear casual clothes like a short-sleeved/long-sleeved shirt and short/long pants. First, we develop a user-friendly interface to semi-automatically segment the human body from photos. Then, the segmented human contours are scaled and translated to the ones under our virtual camera configurations. Through this way, we only need one camera to take photos of human in two views and do not need to calibrate the camera, which satisfy the convenience requirement. Finally, we learn body parameters that determine the 3-D body from dressed-human silhouettes with cascaded regressors. The regressors are trained using a database containing 3-D naked and dressed body pairs. Body parameters regression only costs 1.26 s on an android phone, which ensures the efficiency of our method. We invited 12 volunteers for tests, and the mean absolute estimation error for chest/waist/hip size is 2.89/1.93/2.22 centimeters. We additionally use 637 synthetic data to evaluate the main procedures of our approach.",
"title": ""
},
{
"docid": "34b7073f947888694053cb421544cb37",
"text": "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.",
"title": ""
},
{
"docid": "a73f07080a2f93a09b05b58184acf306",
"text": "This survey paper categorises, compares, and summarises from almost all published technical and review articles in automated fraud detection within the last 10 years. It defines the professional fraudster, formalises the main types and subtypes of known fraud, and presents the nature of data evidence collected within affected industries. Within the business context of mining the data to achieve higher cost savings, this research presents methods and techniques together with their problems. Compared to all related reviews on fraud detection, this survey covers much more technical articles and is the only one, to the best of our knowledge, which proposes alternative data and solutions from related domains.",
"title": ""
},
{
"docid": "619f38266a35e76a77fb4141879e1e68",
"text": "In article various approaches to measurement of efficiency of innovations and the problems arising at their measurement are considered, the system of an indistinct conclusion for the solution of a problem of obtaining recommendations about measurement of efficiency of innovations is offered.",
"title": ""
},
{
"docid": "df11dd8d4a4945f37ad3771cc6655120",
"text": "In this paper, we consider the problem of open information extraction (OIE) for extracting entity and relation level intermediate structures from sentences in open-domain. We focus on four types of valuable intermediate structures (Relation, Attribute, Description, and Concept), and propose a unified knowledge expression form, SAOKE, to express them. We publicly release a data set which contains 48,248 sentences and the corresponding facts in the SAOKE format labeled by crowdsourcing. To our knowledge, this is the largest publicly available human labeled data set for open information extraction tasks. Using this labeled SAOKE data set, we train an end-to-end neural model using the sequence-to-sequence paradigm, called Logician, to transform sentences into facts. For each sentence, different to existing algorithms which generally focus on extracting each single fact without concerning other possible facts, Logician performs a global optimization over all possible involved facts, in which facts not only compete with each other to attract the attention of words, but also cooperate to share words. An experimental study on various types of open domain relation extraction tasks reveals the consistent superiority of Logician to other states-of-the-art algorithms. The experiments verify the reasonableness of SAOKE format, the valuableness of SAOKE data set, the effectiveness of the proposed Logician model, and the feasibility of the methodology to apply end-to-end learning paradigm on supervised data sets for the challenging tasks of open information extraction.",
"title": ""
},
{
"docid": "b89f2c70e3c9e2258c2cdf3f9b2bfb1b",
"text": "One-size-fits-all protocols are hard to achieve in Byzantine fault tolerance (BFT). As an alternative, BFT users, e.g., enterprises, need an easy and efficient method to choose the most convenient protocol that matches their preferences best. The various BFT protocols that have been proposed so far differ significantly in their characteristics and performance which makes choosing the ‘preferred’ protocol hard. In addition, if the state of the deployed system is too fluctuating, then perhaps using multiple protocols at once is needed; this requires a dynamic selection mechanism to move from one protocol to another. In this paper, we present the first BFT selection model and algorithm that can be used to choose the most convenient protocol according to user preferences. The selection algorithm applies some mathematical formulas to make the selection process easy and automatic. The algorithm operates in three modes: Static, Dynamic, and Heuristic. The Static mode addresses the cases where a single protocol is needed; the Dynamic mode assumes that the system conditions are quite fluctuating and thus requires runtime decisions, and the Heuristic mode is similar to the Dynamic mode but it uses additional heuristics to improve user choices. We give some examples to describe how selection occurs. We show that our approach is automated, easy, and yields reasonable results that match reality. To the best of our knowledge, this is the first work that addresses selection in BFT.",
"title": ""
},
{
"docid": "a1ebca14dcf943116b2808b9d954f6f4",
"text": "In this work, the human parsing task, namely decomposing a human image into semantic fashion/body regions, is formulated as an active template regression (ATR) problem, where the normalized mask of each fashion/body item is expressed as the linear combination of the learned mask templates, and then morphed to a more precise mask with the active shape parameters, including position, scale and visibility of each semantic region. The mask template coefficients and the active shape parameters together can generate the human parsing results, and are thus called the structure outputs for human parsing. The deep Convolutional Neural Network (CNN) is utilized to build the end-to-end relation between the input human image and the structure outputs for human parsing. More specifically, the structure outputs are predicted by two separate networks. The first CNN network is with max-pooling, and designed to predict the template coefficients for each label mask, while the second CNN network is without max-pooling to preserve sensitivity to label mask position and accurately predict the active shape parameters. For a new image, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the human parsing result. Comprehensive evaluations on a large dataset well demonstrate the significant superiority of the ATR framework over other state-of-the-arts for human parsing. In particular, the F1-score reaches 64.38 percent by our ATR framework, significantly higher than 44.76 percent based on the state-of-the-art algorithm [28].",
"title": ""
},
{
"docid": "140266d9b788417d62ceee20c38f5e92",
"text": "Statistical dependencies in the responses of sensory neurons govern both the amount of stimulus information conveyed and the means by which downstream neurons can extract it. Although a variety of measurements indicate the existence of such dependencies, their origin and importance for neural coding are poorly understood. Here we analyse the functional significance of correlated firing in a complete population of macaque parasol retinal ganglion cells using a model of multi-neuron spike responses. The model, with parameters fit directly to physiological data, simultaneously captures both the stimulus dependence and detailed spatio-temporal correlations in population responses, and provides two insights into the structure of the neural code. First, neural encoding at the population level is less noisy than one would expect from the variability of individual neurons: spike times are more precise, and can be predicted more accurately when the spiking of neighbouring neurons is taken into account. Second, correlations provide additional sensory information: optimal, model-based decoding that exploits the response correlation structure extracts 20% more information about the visual scene than decoding under the assumption of independence, and preserves 40% more visual information than optimal linear decoding. This model-based approach reveals the role of correlated activity in the retinal coding of visual stimuli, and provides a general framework for understanding the importance of correlated activity in populations of neurons.",
"title": ""
},
{
"docid": "da7beedfca8e099bb560120fc5047399",
"text": "OBJECTIVE\nThis study aims to assess the relationship of late-night cell phone use with sleep duration and quality in a sample of Iranian adolescents.\n\n\nMETHODS\nThe study population consisted of 2400 adolescents, aged 12-18 years, living in Isfahan, Iran. Age, body mass index, sleep duration, cell phone use after 9p.m., and physical activity were documented. For sleep assessment, the Pittsburgh Sleep Quality Index questionnaire was used.\n\n\nRESULTS\nThe participation rate was 90.4% (n=2257 adolescents). The mean (SD) age of participants was 15.44 (1.55) years; 1270 participants reported to use cell phone after 9p.m. Overall, 56.1% of girls and 38.9% of boys reported poor quality sleep, respectively. Wake-up time was 8:17 a.m. (2.33), among late-night cell phone users and 8:03a.m. (2.11) among non-users. Most (52%) late-night cell phone users had poor sleep quality. Sedentary participants had higher sleep latency than their peers. Adjusted binary and multinomial logistic regression models showed that late-night cell users were 1.39 times more likely to have a poor sleep quality than non-users (p-value<0.001).\n\n\nCONCLUSION\nLate-night cell phone use by adolescents was associated with poorer sleep quality. Participants who were physically active had better sleep quality and quantity. As part of healthy lifestyle recommendations, avoidance of late-night cell phone use should be encouraged in adolescents.",
"title": ""
},
{
"docid": "2729749e10b5c6f055b10eebb0c5f179",
"text": "An emerging solution for prolonging the lifetime of energy constrained relay nodes in wireless networks is to avail the ambient radio-frequency (RF) signal and to simultaneously harvest energy and process information. In this paper, an amplify-and-forward (AF) relaying network is considered, where an energy constrained relay node harvests energy from the received RF signal and uses that harvested energy to forward the source information to the destination. Based on the time switching and power splitting receiver architectures, two relaying protocols, namely, i) time switching-based relaying (TSR) protocol and ii) power splitting-based relaying (PSR) protocol are proposed to enable energy harvesting and information processing at the relay. In order to determine the throughput, analytical expressions for the outage probability and the ergodic capacity are derived for delay-limited and delay-tolerant transmission modes, respectively. The numerical analysis provides practical insights into the effect of various system parameters, such as energy harvesting time, power splitting ratio, source transmission rate, source to relay distance, noise power, and energy harvesting efficiency, on the performance of wireless energy harvesting and information processing using AF relay nodes. In particular, the TSR protocol outperforms the PSR protocol in terms of throughput at relatively low signal-to-noise-ratios and high transmission rates.",
"title": ""
},
{
"docid": "7bf137d513e7a310e121eecb5f59ae27",
"text": "BACKGROUND\nChildren with intellectual disability are at heightened risk for behaviour problems and diagnosed mental disorder.\n\n\nMETHODS\nThe present authors studied the early manifestation and continuity of problem behaviours in 205 pre-school children with and without developmental delays.\n\n\nRESULTS\nBehaviour problems were quite stable over the year from age 36-48 months. Children with developmental delays were rated higher on behaviour problems than their non-delayed peers, and were three times as likely to score in the clinical range. Mothers and fathers showed high agreement in their rating of child problems, especially in the delayed group. Parenting stress was also higher in the delayed group, but was related to the extent of behaviour problems rather than to the child's developmental delay.\n\n\nCONCLUSIONS\nOver time, a transactional model fit the relationship between parenting stress and behaviour problems: high parenting stress contributed to a worsening in child behaviour problems over time, and high child behaviour problems contributed to a worsening in parenting stress. Findings for mothers and fathers were quite similar.",
"title": ""
},
{
"docid": "2b97be612e11b8fefc1f8dcf8ff47603",
"text": "Images of an object under different illumination are known to provide strong cues about the object surface. A mathematical formalization of how to recover the normal map of such a surface leads to the so-called uncalibrated photometric stereo problem. In the simplest instance, this problem can be reduced to the task of identifying only three parameters: the so-called generalized bas-relief (GBR) ambiguity. The challenge is to find additional general assumptions about the object, that identify these parameters uniquely. Current approaches are not consistent, i.e., they provide different solutions when run multiple times on the same data. To address this limitation, we propose exploiting local diffuse reflectance (LDR) maxima, i.e., points in the scene where the normal vector is parallel to the illumination direction (see Fig. 1). We demonstrate several noteworthy properties of these maxima: a closed-form solution, computational efficiency and GBR consistency. An LDR maximum yields a simple closed-form solution corresponding to a semi-circle in the GBR parameters space (see Fig. 2); because as few as two diffuse maxima in different images identify a unique solution, the identification of the GBR parameters can be achieved very efficiently; finally, the algorithm is consistent as it always returns the same solution given the same data. Our algorithm is also remarkably robust: It can obtain an accurate estimate of the GBR parameters even with extremely high levels of outliers in the detected maxima (up to 80 % of the observations). The method is validated on real data and achieves state-of-the-art results.",
"title": ""
}
] |
scidocsrr
|
9edfd93c8767e9298d8c03a834e1a49a
|
WADaR: Joint Wrapper and Data Repair
|
[
{
"docid": "dc6aafe2325dfdea5e758a30c90d8940",
"text": "When a query is submitted to a search engine, the search engine returns a dynamically generated result page containing the result records, each of which usually consists of a link to and/or snippet of a retrieved Web page. In addition, such a result page often also contains information irrelevant to the query, such as information related to the hosting site of the search engine and advertisements. In this paper, we present a technique for automatically producing wrappers that can be used to extract search result records from dynamically generated result pages returned by search engines. Automatic search result record extraction is very important for many applications that need to interact with search engines such as automatic construction and maintenance of metasearch engines and deep Web crawling. The novel aspect of the proposed technique is that it utilizes both the visual content features on the result page as displayed on a browser and the HTML tag structures of the HTML source file of the result page. Experimental results indicate that this technique can achieve very high extraction accuracy.",
"title": ""
},
{
"docid": "4a53c792868e971cddfee8210f7eafb6",
"text": "We present an unsupervised approach for harvesting the data exposed by a set of structured and partially overlapping data-intensive web sources. Our proposal comes within a formal framework tackling two problems: the data extraction problem, to generate extraction rules based on the input websites, and the data integration problem, to integrate the extracted data in a unified schema. We introduce an original algorithm, WEIR, to solve the stated problems and formally prove its correctness. WEIR leverages the overlapping data among sources to make better decisions both in the data extraction (by pruning rules that do not lead to redundant information) and in the data integration (by reflecting local properties of a source over the mediated schema). Along the way, we characterize the amount of redundancy needed by our algorithm to produce a solution, and present experimental results to show the benefits of our approach with respect to existing solutions.",
"title": ""
}
] |
[
{
"docid": "b3c947eb12abdc0abf7f3bc0de9e74fc",
"text": "This paper describes the development of two nine-storey elevators control system for a residential building. The control system adopts PLC as controller, and uses a parallel connection dispatching rule based on \"minimum waiting time\" to run two elevators in parallel mode. The paper gives the basic structure, control principle and realization method of the PLC control system in detail. It also presents the ladder diagram of the key aspects of the system. The system has simple peripheral circuit and the operation result showed that it enhanced the reliability and pe.rformance of the elevators.",
"title": ""
},
{
"docid": "55694b963cde47e9aecbeb21fb0e79cf",
"text": "The rise of Uber as the global alternative taxi operator has attracted a lot of interest recently. Aside from the media headlines which discuss the new phenomenon, e.g. on how it has disrupted the traditional transportation industry, policy makers, economists, citizens and scientists have engaged in a discussion that is centred around the means to integrate the new generation of the sharing economy services in urban ecosystems. In this work, we aim to shed new light on the discussion, by taking advantage of a publicly available longitudinal dataset that describes the mobility of yellow taxis in New York City. In addition to movement, this data contains information on the fares paid by the taxi customers for each trip. As a result we are given the opportunity to provide a first head to head comparison between the iconic yellow taxi and its modern competitor, Uber, in one of the world’s largest metropolitan centres. We identify situations when Uber X, the cheapest version of the Uber taxi service, tends to be more expensive than yellow taxis for the same journey. We also demonstrate how Uber’s economic model effectively takes advantage of well known patterns in human movement. Finally, we take our analysis a step further by proposing a new mobile application that compares taxi prices in the city to facilitate traveller’s taxi choices, hoping to ultimately to lead to a reduction of commuter costs. Our study provides a case on how big datasets that become public can improve urban services for consumers by offering the opportunity for transparency in economic sectors that lack up to date regulations.",
"title": ""
},
{
"docid": "80b999a5c44d87cd3464facb6eea6bb8",
"text": "The aim of this study was to assess the efficacy of cognitive training, specifically computerized cognitive training (CCT) and virtual reality cognitive training (VRCT), programs for individuals living with mild cognitive impairment (MCI) or dementia and therefore at high risk of cognitive decline. After searching a range of academic databases (CINHAL, PSYCinfo, and Web of Science), the studies evaluated (N = 16) were categorized as CCT (N = 10), VRCT (N = 3), and multimodal interventions (N = 3). Effect sizes were calculated, but a meta-analysis was not possible because of the large variability of study design and outcome measures adopted. The cognitive domains of attention, executive function, and memory (visual and verbal) showed the most consistent improvements. The positive effects on psychological outcomes (N = 6) were significant reductions on depressive symptoms (N = 3) and anxiety (N = 2) and improved perceived use of memory strategy (N = 1). Assessments of activities of daily living demonstrated no significant improvements (N = 8). Follow-up studies (N = 5) demonstrated long-term improvements in cognitive and psychological outcomes (N = 3), and the intervention groups showed a plateau effect of cognitive functioning compared with the cognitive decline experienced by control groups (N = 2). CCT and VRCT were moderately effective in long-term improvement of cognition for those at high risk of cognitive decline. Total intervention time did not mediate efficacy. Future research needs to improve study design by including larger samples, longitudinal designs, and a greater range of outcome measures, including functional and quality of life measures, to assess the wider effect of cognitive training on individuals at high risk of cognitive decline.",
"title": ""
},
{
"docid": "8913c543d350ff147b9f023729f4aec3",
"text": "The reality gap, which often makes controllers evolved in simulation inefficient once transferred onto the physical robot, remains a critical issue in evolutionary robotics (ER). We hypothesize that this gap highlights a conflict between the efficiency of the solutions in simulation and their transferability from simulation to reality: the most efficient solutions in simulation often exploit badly modeled phenomena to achieve high fitness values with unrealistic behaviors. This hypothesis leads to the transferability approach, a multiobjective formulation of ER in which two main objectives are optimized via a Pareto-based multiobjective evolutionary algorithm: 1) the fitness; and 2) the transferability, estimated by a simulation-to-reality (STR) disparity measure. To evaluate this second objective, a surrogate model of the exact STR disparity is built during the optimization. This transferability approach has been compared to two reality-based optimization methods, a noise-based approach inspired from Jakobi's minimal simulation methodology and a local search approach. It has been validated on two robotic applications: 1) a navigation task with an e-puck robot; and 2) a walking task with a 8-DOF quadrupedal robot. For both experimental setups, our approach successfully finds efficient and well-transferable controllers only with about ten experiments on the physical robot.",
"title": ""
},
{
"docid": "b3dcbd8a41e42ae6e748b07c18dbe511",
"text": "There is inconclusive evidence whether practicing tasks with computer agents improves people’s performance on these tasks. This paper studies this question empirically using extensive experiments involving bilateral negotiation and threeplayer coordination tasks played by hundreds of human subjects. We used different training methods for subjects, including practice interactions with other human participants, interacting with agents from the literature, and asking participants to design an automated agent to serve as their proxy in the task. Following training, we compared the performance of subjects when playing state-of-the-art agents from the literature. The results revealed that in the negotiation settings, in most cases, training with computer agents increased people’s performance as compared to interacting with people. In the three player coordination game, training with computer agents increased people’s performance when matched with the state-of-the-art agent. These results demonstrate the efficacy of using computer agents as tools for improving people’s skills when interacting in strategic settings, saving considerable effort and providing better performance than when interacting with human counterparts.",
"title": ""
},
{
"docid": "a77c113c691a61101cba1136aaf4b90c",
"text": "This paper describes the acquisition, preparation and properties of a corpus extracted from the official documents of the United Nations (UN). This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language. We describe the methods we used for crawling, document formatting, and sentence alignment. This corpus also includes a common test set for machine translation. We present the results of a French-Chinese machine translation experiment performed on this corpus.",
"title": ""
},
{
"docid": "7fab7940321a606b10225d14df46ce65",
"text": "Domain adaptation aims to learn models on a supervised source domain that perform well on an unsupervised target. Prior work has examined domain adaptation in the context of stationary domain shifts, i.e. static data sets. However, with large-scale or dynamic data sources, data from a defined domain is not usually available all at once. For instance, in a streaming data scenario, dataset statistics effectively become a function of time. We introduce a framework for adaptation over non-stationary distribution shifts applicable to large-scale and streaming data scenarios. The model is adapted sequentially over incoming unsupervised streaming data batches. This enables improvements over several batches without the need for any additionally annotated data. To demonstrate the effectiveness of our proposed framework, we modify associative domain adaptation to work well on source and target data batches with unequal class distributions. We apply our method to several adaptation benchmark datasets for classification and show improved classifier accuracy not only for the currently adapted batch, but also when applied on future stream batches. Furthermore, we show the applicability of our associative learning modifications to semantic segmentation, where we achieve competitive results.",
"title": ""
},
{
"docid": "288f8a2dab0c32f85c313f5a145e47a5",
"text": "Neural networks have a smooth initial inductive bias, such that small changes in input do not lead to large changes in output. However, in reinforcement learning domains with sparse rewards, value functions have non-smooth structure with a characteristic asymmetric discontinuity whenever rewards arrive. We propose a mechanism that learns an interpolation between a direct value estimate and a projected value estimate computed from the encountered reward and the previous estimate. This reduces the need to learn about discontinuities, and thus improves the value function approximation. Furthermore, as the interpolation is learned and state-dependent, our method can deal with heterogeneous observability. We demonstrate that this one change leads to significant improvements on multiple Atari games, when applied to the state-of-the-art A3C algorithm. 1 Motivation The central problem of reinforcement learning is value function approximation: how to accurately estimate the total future reward from a given state. Recent successes have used deep neural networks to approximate the value function, resulting in state-of-the-art performance in a variety of challenging domains [9]. Neural networks are most effective when the desired target function is smooth. However, value functions are, by their very nature, discontinuous functions with sharp variations over time. In this paper we introduce a representation of value that matches the natural temporal structure of value functions. A value function represents the expected sum of future discounted rewards. If non-zero rewards occur infrequently but reliably, then an accurate prediction of the cumulative discounted reward rises as such rewarding moments approach and drops immediately after. This is depicted schematically with the dashed black line in Figure 1. The true value function is quite smooth, except immediately after receiving a reward when there is a sharp drop. This is a pervasive scenario because many domains associate positive or negative reinforcements to salient events (like picking up an object, hitting a wall, or reaching a goal position). The problem is that the agent’s observations tend to be smooth in time, so learning an accurate value estimate near those sharp drops puts strain on the function approximator – especially when employing differentiable function approximators such as neural networks that naturally make smooth maps from observations to outputs. To address this problem, we incorporate the temporal structure of cumulative discounted rewards into the value function itself. The main idea is that, by default, the value function can respect the reward sequence. If no reward is observed, then the next value smoothly matches the previous value, but 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Figure 1: After the same amount of training, our proposed method (red) produces much more accurate estimates of the true value function (dashed black), compared to the baseline (blue). The main plot shows discounted future returns as a function of the step in a sequence of states; the inset plot shows the RMSE when training on this data, as a function of network updates. See section 4 for details. becomes a little larger due to the discount. If a reward is observed, it should be subtracted out from the previous value: in other words a reward that was expected has now been consumed. 
The natural value approximator (NVA) combines the previous value with the observed rewards and discounts, which makes this sequence of values easy to represent by a smooth function approximator such as a neural network. Natural value approximators may also be helpful in partially observed environments. Consider a situation in which an agent stands on a hill top. The goal is to predict, at each step, how many steps it will take until the agent has crossed a valley to another hill top in the distance. There is fog in the valley, which means that if the agent’s state is a single observation from the valley it will not be able to accurately predict how many steps remain. In contrast, the value estimate from the initial hill top may be much better, because the observation is richer. This case is depicted schematically in Figure 2. Natural value approximators may be effective in these situations, since they represent the current value in terms of previous value estimates. 2 Problem definition We consider the typical scenario studied in reinforcement learning, in which an agent interacts with an environment at discrete time intervals: at each time step t the agent selects an action as a function of the current state, which results in a transition to the next state and a reward. The goal of the agent is to maximize the discounted sum of rewards collected in the long run from a set of initial states [12]. The interaction between the agent and the environment is modelled as a Markov Decision Process (MDP). An MDP is a tuple (S,A, R, γ, P ) where S is a state space, A is an action space, R : S×A×S → D(R) is a reward function that defines a distribution over the reals for each combination of state, action, and subsequent state, P : S × A → D(S) defines a distribution over subsequent states for each state and action, and γt ∈ [0, 1] is a scalar, possibly time-dependent, discount factor. One common goal is to make accurate predictions under a behaviour policy π : S → D(A) of the value vπ(s) ≡ E [R1 + γ1R2 + γ1γ2R3 + . . . | S0 = s] . (1) The expectation is over the random variables At ∼ π(St), St+1 ∼ P (St, At), and Rt+1 ∼ R(St, At, St+1), ∀t ∈ N. For instance, the agent can repeatedly use these predictions to improve its policy. The values satisfy the recursive Bellman equation [2] vπ(s) = E [Rt+1 + γt+1vπ(St+1) | St = s] . We consider the common setting where the MDP is not known, and so the predictions must be learned from samples. The predictions made by an approximate value function v(s;θ), where θ are parameters that are learned. The approximation of the true value function can be formed by temporal 2 difference (TD) learning [10], where the estimate at time t is updated towards Z t ≡ Rt+1 + γt+1v(St+1;θ) or Z t ≡ n ∑ i=1 (Πi−1 k=1γt+k)Rt+i + (Π n k=1γt+k)v(St+n;θ) ,(2) where Z t is the n-step bootstrap target, and the TD-error is δ n t ≡ Z t − v(St;θ). 3 Proposed solution: Natural value approximators The conventional approach to value function approximation produces a value estimate from features associated with the current state. In states where the value approximation is poor, it can be better to rely more on a combination of the observed sequence of rewards and older but more reliable value estimates that are projected forward in time. Combining these estimates can potentially be more accurate than using one alone. These ideas lead to an algorithm that produces three estimates of the value at time t. The first estimate, Vt ≡ v(St;θ), is a conventional value function estimate at time t. 
The second estimate, Gpt ≡ Gβt−1 −Rt γt if γt > 0 and t > 0 , (3) is a projected value estimate computed from the previous value estimate, the observed reward, and the observed discount for time t. The third estimate, Gβt ≡ βtG p t + (1− βt)Vt = (1− βt)Vt + βt Gβt−1 −Rt γt , (4) is a convex combination of the first two estimates1 formed by a time-dependent blending coefficient βt. This coefficient is a learned function of state β(·;θ) : S → [0, 1], over the same parameters θ, and we denote βt ≡ β(St;θ). We call Gβt the natural value estimate at time t and we call the overall approach natural value approximators (NVA). Ideally, the natural value estimate will become more accurate than either of its constituents from training. The value is learned by minimizing the sum of two losses. The first loss captures the difference between the conventional value estimate Vt and the target Zt, weighted by how much it is used in the natural value estimate, JV ≡ E [ [[1− βt]]([[Zt]]− Vt) ] , (5) where we introduce the stop-gradient identity function [[x]] = x that is defined to have a zero gradient everywhere, that is, gradients are not back-propagated through this function. The second loss captures the difference between the natural value estimate and the target, but it provides gradients only through the coefficient βt, Jβ ≡ E [ ([[Zt]]− (βt [[Gpt ]] + (1− βt)[[Vt]])) ] . (6) These two losses are summed into a joint loss, J = JV + cβJβ , (7) where cβ is a scalar trade-off parameter. When conventional stochastic gradient descent is applied to minimize this loss, the parameters of Vt are adapted with the first loss and parameters of βt are adapted with the second loss. When bootstrapping on future values, the most accurate value estimate is best, so using Gβt instead of Vt leads to refined prediction targets Z t ≡ Rt+1 + γt+1G β t+1 or Z β,n t ≡ n ∑ i=1 (Πi−1 k=1γt+k)Rt+i + (Π n k=1γt+k)G β t+n . (8) 4 Illustrative Examples We now provide some examples of situations where natural value approximations are useful. In both examples, the value function is difficult to estimate well uniformly in all states we might care about, and the accuracy can be improved by using the natural value estimate Gβt instead of the direct value estimate Vt. Note the mixed recursion in the definition, G depends on G , and vice-versa. 3 Sparse rewards Figure 1 shows an example of value function approximation. To separate concerns, this is a supervised learning setup (regression) with the true value targets provided (dashed black line). Each point 0 ≤ t ≤ 100 on the horizontal axis corresponds to one state St in a single sequence. The shape of the target values stems from a handful of reward events, and discounting with γ = 0.9. We mimic observations that smoothly vary across time by 4 equally spaced radial basis functions, so St ∈ R. The approximators v(s) and β(s) are two small neural networks with one hidden layer of 32 ReLU units each, and a single linear or sigmoid output unit, respectively. The input",
"title": ""
},
{
"docid": "95a102f45ff856d2064d8042b0b1a0ad",
"text": "Diagnosis and monitoring of health is a very important task in health care industry. Due to time constraint people are not visiting hospitals, which could lead to lot of health issues in one instant of time. Priorly most of the health care systems have been developed to predict and diagnose the health of the patients by which people who are busy in their schedule can also monitor their health at regular intervals. Many studies have shown that early prediction is the best way to cure health because early diagnosis will help and alert the patients to know the health status. In this paper, we review the various Internet of Things (IoT) enable devices and its actual implementation in the area of health care children’s, monitoring of the patients etc. Further, this paper addresses how different innovations as server, ambient intelligence and sensors can be leveraged in health care context; determines how they can facilitate economies and societies in terms of suitable development. KeywordsInternet of Things (IoT);ambient intelligence; monitoring; innovations; leveraged. __________________________________________________*****_________________________________________________",
"title": ""
},
{
"docid": "6c5cabfa5ee5b9d67ef25658a4b737af",
"text": "Sentence compression is the task of producing a summary of a single sentence. The compressed sentence should be shorter, contain the important content from the original, and itself be grammatical. The three papers discussed here take different approaches to identifying important content, determining which sentences are grammatical, and jointly optimizing these objectives. One family of approaches we will discuss is those that are tree-based, which create a compressed sentence by making edits to the syntactic tree of the original sentence. A second type of approach is sentence-based, which generates strings directly. Orthogonal to either of these two approaches is whether sentences are treated in isolation or if the surrounding discourse affects compressions. We compare a tree-based, a sentence-based, and a discourse-based approach and conclude with ideas for future work in this area. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-10-20. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/929 Methods for Sentence Compression",
"title": ""
},
{
"docid": "0bf227d17e76d1fb16868ff90d75e94c",
"text": "The high-efficiency current-mode (CM) and voltage-mode (VM) Class-E power amplifiers (PAs) for MHz wireless power transfer (WPT) systems are first proposed in this paper and the design methodology for them is presented. The CM/VM Class-E PA is able to deliver the increasing/decreasing power with the increasing load and the efficiency maintains high even when the load varies in a wide range. The high efficiency and certain operation mode are realized by introducing an impedance transformation network with fixed components. The efficiency, output power, circuit tolerance, and robustness are all taken into consideration in the design procedure, which makes the CM and the VM Class-E PAs especially practical and efficient to real WPT systems. 6.78-MHz WPT systems with the CM and the VM Class-E PAs are fabricated and compared to that with the classical Class-E PA. The measurement results show that the output power is proportional to the load for the CM Class-E PA and is inversely proportional to the load for the VM Class-E PA. The efficiency for them maintains high, over 83%, when the load of PA varies from 10 to 100 $\\Omega$, while the efficiency of the classical Class-E is about 60% in the worst case. The experiment results validate the feasibility of the proposed design methodology and show that the CM and the VM Class-E PAs present superior performance in WPT systems compared to the traditional Class-E PA.",
"title": ""
},
{
"docid": "3a58c1a2e4428c0b875e1202055e5b13",
"text": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "cc1ae8daa1c1c4ee2b3b4a65ef48b6f5",
"text": "The use of entropy as a distance measure has several benefits. Amongst other things it provides a consistent approach to handling of symbolic attributes, real valued attributes and missing values. The approach of taking all possible transformation paths is discussed. We describe K*, an instance-based learner which uses such a measure, and results are presented which compare favourably with several machine learning algorithms.",
"title": ""
},
{
"docid": "5d1e77b6b09ebac609f2e518b316bd49",
"text": "Principles of muscle coordination in gait have been based largely on analyses of body motion, ground reaction force and EMG measurements. However, data from dynamical simulations provide a cause-effect framework for analyzing these measurements; for example, Part I (Gait Posture, in press) of this two-part review described how force generation in a muscle affects the acceleration and energy flow among the segments. This Part II reviews the mechanical and coordination concepts arising from analyses of simulations of walking. Simple models have elucidated the basic multisegmented ballistic and passive mechanics of walking. Dynamical models driven by net joint moments have provided clues about coordination in healthy and pathological gait. Simulations driven by muscle excitations have highlighted the partial stability afforded by muscles with their viscoelastic-like properties and the predictability of walking performance when minimization of metabolic energy per unit distance is assumed. When combined with neural control models for exciting motoneuronal pools, simulations have shown how the integrative properties of the neuro-musculo-skeletal systems maintain a stable gait. Other analyses of walking simulations have revealed how individual muscles contribute to trunk support and progression. Finally, we discuss how biomechanical models and simulations may enhance our understanding of the mechanics and muscle function of walking in individuals with gait impairments.",
"title": ""
},
{
"docid": "ce282fba1feb109e03bdb230448a4f8a",
"text": "The goal of two-sample tests is to assess whether two samples, SP ∼ P and SQ ∼ Q, are drawn from the same distribution. Perhaps intriguingly, one relatively unexplored method to build two-sample tests is the use of binary classifiers. In particular, construct a dataset by pairing the n examples in SP with a positive label, and by pairing the m examples in SQ with a negative label. If the null hypothesis “P = Q” is true, then the classification accuracy of a binary classifier on a held-out subset of this dataset should remain near chance-level. As we will show, such Classifier Two-Sample Tests (C2ST) learn a suitable representation of the data on the fly, return test statistics in interpretable units, have a simple null distribution, and their predictive uncertainty allow to interpret where P and Q differ. The goal of this paper is to establish the properties, performance, and uses of C2ST. First, we analyze their main theoretical properties. Second, we compare their performance against a variety of state-of-the-art alternatives. Third, we propose their use to evaluate the sample quality of generative models with intractable likelihoods, such as Generative Adversarial Networks (GANs). Fourth, we showcase the novel application of GANs together with C2ST for causal discovery.",
"title": ""
},
{
"docid": "ea4f56a1cc4622a102720beb5c2c189d",
"text": "Food detection, classification, and analysis have been the topic of indepth studies for a variety of applications related to eating habits and dietary assessment. For the specific topic of calorie measurement of food portions with single and mixed food items, the research community needs a dataset of images for testing and training. In this paper we introduce FooDD: a Food Detection Dataset of 3000 images that offer variety of food photos taken from different cameras with different illuminations. We also provide examples of food detection using graph cut segmentation and deep learning algorithms.",
"title": ""
},
{
"docid": "a651ae33adce719033dad26b641e6086",
"text": "Knowledge base(KB) plays an important role in artificial intelligence. Much effort has been taken to both manually and automatically construct web-scale knowledge bases. Comparing with manually constructed KBs, automatically constructed KB is broader but with more noises. In this paper, we study the problem of improving the quality for automatically constructed web-scale knowledge bases, in particular, lexical taxonomies of isA relationships. We find that these taxonomies usually contain cycles, which are often introduced by incorrect isA relations. Inspired by this observation, we introduce two kinds of models to detect incorrect isA relations from cycles. The first one eliminates cycles by extracting directed acyclic graphs, and the other one eliminates cycles by grouping nodes into different levels. We implement our models on Probase, a state-of-the-art, automatically constructed, web-scale taxonomy. After processing tens of millions of relations, our models eliminate 74 thousand wrong relations with 91% accuracy.",
"title": ""
},
{
"docid": "5692d2ee410c804e32ebebbcc129c8d6",
"text": "Aimed at the industrial sorting technology problems, this paper researched correlative algorithm of image processing and analysis, and completed the construction of robot vision sense. the operational process was described as follows: the camera acquired image sequences of the metal work piece in the sorting region. Image sequence was analyzed to use algorithms of image pre-processing, Hough circle detection, corner detection and contour recognition. in the mean time, this paper also explained the characteristics of three main function model (image pre-processing, corner detection and contour recognition), and proposed algorithm of multi-objective center and a corner recognition. the simulated results show that the sorting system can effectively solve the sorting problem of regular geometric work piece, and accurately calculate center and edge of geometric work piece to achieve the sorting purpose.",
"title": ""
},
{
"docid": "3ea05bc5dd97a1f76e343b42f9553662",
"text": "End-to-End Large Scale Machine Learning with KeystoneML",
"title": ""
},
{
"docid": "b480111b47176fe52cd6f9ca296dc666",
"text": "We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning. Fig. 1: Our automatic colorization of grayscale input; more examples in Figs. 3 and 4.",
"title": ""
}
] |
scidocsrr
|
38a215132b2199d8b4df37fb15634494
|
Transporting information and energy simultaneously
|
[
{
"docid": "8836fddeb496972fa38005fd2f8a4ed4",
"text": "Energy harvesting has grown from long-established concepts into devices for powering ubiquitously deployed sensor networks and mobile electronics. Systems can scavenge power from human activity or derive limited energy from ambient heat, light, radio, or vibrations. Ongoing power management developments enable battery-powered electronics to live longer. Such advances include dynamic optimization of voltage and clock rate, hybrid analog-digital designs, and clever wake-up procedures that keep the electronics mostly inactive. Exploiting renewable energy resources in the device's environment, however, offers a power source limited by the device's physical survival rather than an adjunct energy store. Energy harvesting's true legacy dates to the water wheel and windmill, and credible approaches that scavenge energy from waste heat or vibration have been around for many decades. Nonetheless, the field has encountered renewed interest as low-power electronics, wireless standards, and miniaturization conspire to populate the world with sensor networks and mobile devices. This article presents a whirlwind survey through energy harvesting, spanning historic and current developments.",
"title": ""
}
] |
[
{
"docid": "12932a77e9fabb8273175a6ca8fc5f49",
"text": "There are nearly a million known species of flying insects and 13 000 species of flying warm-blooded vertebrates, including mammals, birds and bats. While in flight, their wings not only move forward relative to the air, they also flap up and down, plunge and sweep, so that both lift and thrust can be generated and balanced, accommodate uncertain surrounding environment, with superior flight stability and dynamics with highly varied speeds and missions. As the size of a flyer is reduced, the wing-to-body mass ratio tends to decrease as well. Furthermore, these flyers use integrated system consisting of wings to generate aerodynamic forces, muscles to move the wings, and sensing and control systems to guide and manoeuvre. In this article, recent advances in insect-scale flapping-wing aerodynamics, flexible wing structures, unsteady flight environment, sensing, stability and control are reviewed with perspective offered. In particular, the special features of the low Reynolds number flyers associated with small sizes, thin and light structures, slow flight with comparable wind gust speeds, bioinspired fabrication of wing structures, neuron-based sensing and adaptive control are highlighted.",
"title": ""
},
{
"docid": "412c61657893bb1ed2f579936d47dc02",
"text": "In this paper, we propose a simple yet effective method for multiple music source separation using convolutional neural networks. Stacked hourglass network, which was originally designed for human pose estimation in natural images, is applied to a music source separation task. The network learns features from a spectrogram image across multiple scales and generates masks for each music source. The estimated mask is refined as it passes over stacked hourglass modules. The proposed framework is able to separate multiple music sources using a single network. Experimental results on MIR-1K and DSD100 datasets validate that the proposed method achieves competitive results comparable to the state-of-the-art methods in multiple music source separation and singing voice separation tasks.",
"title": ""
},
{
"docid": "cbdace4636017f925b89ecf266fde019",
"text": "It is traditionally known that wideband apertures lose bandwidth when placed over a ground plane. To overcome this issue, this paper introduces a new non-symmetric tightly coupled dipole element for wideband phased arrays. The proposed array antenna incorporates additional degrees of freedom to control capacitance and cancel the ground plane inductance. Specifically, each arm on the dipole is different than the other (or non-symmetric). The arms are identical near the center feed section but dissimilar towards the ends, forming a ball-and-cup. It is demonstrated that the non-symmetric qualities achieve wideband performance. Concurrently, a design example for planar installation with balun and matching network is presented to cover X-band. The balun avoids extraneous radiation, maintains the array's low-profile height and is printed on top of the ground plane connecting to the array aperture with 180° out of phase vertical twin-wire transmission lines. To demonstrate the concept, a 64-element array with integrated feed and matching network is designed, fabricated and verified experimentally. The array aperture is placed λ/7 (at 8 GHz) above the ground plane and shown to maintain a active VSWR less than 2 from 8-12.5 GHz while scanning up to 70° and 60° in E- and H-plane, respectively. The array's simulated diagonal plane cross-polarization is approximately 10 dB below the co-polarized component during 60° diagonal scan and follows the theoretical limit for an infinite current sheet.",
"title": ""
},
{
"docid": "10f6ae0e9c254279b0cf0f5e98caa9cd",
"text": "The automatic assessment of photo quality from an aesthetic perspective is a very challenging problem. Most existing research has predominantly focused on the learning of a universal aesthetic model based on hand-crafted visual descriptors . However, this research paradigm can achieve only limited success because (1) such hand-crafted descriptors cannot well preserve abstract aesthetic properties , and (2) such a universal model cannot always capture the full diversity of visual content. To address these challenges, we propose in this paper a novel query-dependent aesthetic model with deep learning for photo quality assessment. In our method, deep aesthetic abstractions are discovered from massive images , whereas the aesthetic assessment model is learned in a query- dependent manner. Our work addresses the first problem by learning mid-level aesthetic feature abstractions via powerful deep convolutional neural networks to automatically capture the underlying aesthetic characteristics of the massive training images . Regarding the second problem, because photographers tend to employ different rules of photography for capturing different images , the aesthetic model should also be query- dependent . Specifically, given an image to be assessed, we first identify which aesthetic model should be applied for this particular image. Then, we build a unique aesthetic model of this type to assess its aesthetic quality. We conducted extensive experiments on two large-scale datasets and demonstrated that the proposed query-dependent model equipped with learned deep aesthetic abstractions significantly and consistently outperforms state-of-the-art hand-crafted feature -based and universal model-based methods.",
"title": ""
},
{
"docid": "982af44d0c5fc3d0bddd2804cee77a04",
"text": "Coprime array offers a larger array aperture than uniform linear array with the same number of physical sensors, and has a better spatial resolution with increased degrees of freedom. However, when it comes to the problem of adaptive beamforming, the existing adaptive beamforming algorithms designed for the general array cannot take full advantage of coprime feature offered by the coprime array. In this paper, we propose a novel coprime array adaptive beamforming algorithm, where both robustness and efficiency are well balanced. Specifically, we first decompose the coprime array into a pair of sparse uniform linear subarrays and process their received signals separately. According to the property of coprime integers, the direction-of-arrival (DOA) can be uniquely estimated for each source by matching the super-resolution spatial spectra of the pair of sparse uniform linear subarrays. Further, a joint covariance matrix optimization problem is formulated to estimate the power of each source. The estimated DOAs and their corresponding power are utilized to reconstruct the interference-plus-noise covariance matrix and estimate the signal steering vector. Theoretical analyses are presented in terms of robustness and efficiency, and simulation results demonstrate the effectiveness of the proposed coprime array adaptive beamforming algorithm.",
"title": ""
},
{
"docid": "844bd5a95f2f7436e666d7408ca89462",
"text": "Neural message passing on molecular graphs is one of the most promising methods for predicting formation energy and other properties of molecules and materials. In this work we extend the neural message passing model with an edge update network which allows the information exchanged between atoms to depend on the hidden state of the receiving atom. We benchmark the proposed model on three publicly available datasets (QM9, The Materials Project and OQMD) and show that the proposed model yields superior prediction of formation energies and other properties on all three datasets in comparison with the best published results. Furthermore we investigate different methods for constructing the graph used to represent crystalline structures and we find that using a graph based on K-nearest neighbors achieves better prediction accuracy than using maximum distance cutoff or the Voronoi tessellation graph.",
"title": ""
},
{
"docid": "8cd8fbbc3e20d29989deeb2fd2362c10",
"text": "Modern programming languages and software engineering principles are causing increasing problems for compiler systems. Traditional approaches, which use a simple compile-link-execute model, are unable to provide adequate application performance under the demands of the new conditions. Traditional approaches to interprocedural and profile-driven compilation can provide the application performance needed, but require infeasible amounts of compilation time to build the application. This thesis presents LLVM, a design and implementation of a compiler infrastructure which supports a unique multi-stage optimization system. This system is designed to support extensive interprocedural and profile-driven optimizations, while being efficient enough for use in commercial compiler systems. The LLVM virtual instruction set is the glue that holds the system together. It is a low-level representation, but with high-level type information. This provides the benefits of a low-level representation (compact representation, wide variety of available transformations, etc.) as well as providing high-level information to support aggressive interprocedural optimizations at link-and post-link time. In particular, this system is designed to support optimization in the field, both at run-time and during otherwise unused idle time on the machine. This thesis also describes an implementation of this compiler design, the LLVM compiler infrastructure , proving that the design is feasible. The LLVM compiler infrastructure is a maturing and efficient system, which we show is a good host for a variety of research. More information about LLVM can be found on its web site at: iii Acknowledgments This thesis would not be possible without the support of a large number of people who have helped me both in big ways and little. In particular, I would like to thank my advisor, Vikram Adve, for his support, patience, and especially his trust and respect. He has shown me how to communicate ideas more effectively and how to find important and meaningful topics for research. By being demanding, understanding, and allowing me the freedom to explore my interests, he has driven me to succeed. The inspiration for this work certainly stems from one person: Tanya. She has been a continuous source of support, ideas, encouragement, and understanding. Despite my many late nights, unimaginable amounts of stress, and a truly odd sense of humor, she has not just tolerated me, but loved me. Another person who made this possible, perhaps without truly understanding his contribution, has been Brian Ensink. Brian has been an invaluable sounding board for ideas, a welcoming ear to occasional frustrations, provider …",
"title": ""
},
{
"docid": "b4ecf497c8240a48a6e60aef400d0e1e",
"text": "Skin color diversity is the most variable and noticeable phenotypic trait in humans resulting from constitutive pigmentation variability. This paper will review the characterization of skin pigmentation diversity with a focus on the most recent data on the genetic basis of skin pigmentation, and the various methodologies for skin color assessment. Then, melanocyte activity and amount, type and distribution of melanins, which are the main drivers for skin pigmentation, are described. Paracrine regulators of melanocyte microenvironment are also discussed. Skin response to sun exposure is also highly dependent on color diversity. Thus, sensitivity to solar wavelengths is examined in terms of acute effects such as sunburn/erythema or induced-pigmentation but also long-term consequences such as skin cancers, photoageing and pigmentary disorders. More pronounced sun-sensitivity in lighter or darker skin types depending on the detrimental effects and involved wavelengths is reviewed.",
"title": ""
},
{
"docid": "db4ed42c9b11ee736ad287eac05f8b29",
"text": "Food is a central part of our lives. Fundamentally, we need food to survive. Socially, food is something that brings people together-individuals interact through and around it. Culturally, food practices reflect our ethnicities and nationalities. Given the importance of food in our daily lives, it is important to understand what role technology currently plays and the roles it can be imagined to play in the future. In this paper we describe the existing and potential design space for HCI in the area of human-food interaction. We present ideas for future work on designing technologies in the area of human-food interaction that celebrate the positive interactions that people have with food as they eat and prepare foods in their everyday lives.",
"title": ""
},
{
"docid": "b0343eeb23c6630759e61a2ad234a56d",
"text": "The paper presents a neurorobotics cognitive model to explain the understanding and generalisation of nouns and verbs combinations when a vocal command consisting of a verb-noun sentence is provided to a humanoid robot. This generalisation process is done via the grounding process: different objects are being interacted, and associated, with different motor behaviours, following a learning approach inspired by developmental language acquisition in infants. This cognitive model is based on Multiple Time-scale Recurrent Neural Networks (MTRNN). With the data obtained from object manipulation tasks with a humanoid robot platform, the robotic agent implemented with this model can ground the primitive embodied structure of verbs through training with verb-noun combination samples. Moreover, we show that a functional hierarchical architecture, based on MTRNN, is able to generalise and produce novel combinations of noun-verb sentences. Further analyses of the learned network dynamics and representations also demonstrate how the generalisation is possible via the exploitation of this functional hierarchical recurrent network. J. Zhong Department of Intermedia Art and Science, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo, Japan, 169-8555 Centre for Robotics and Neural Systems, University of Plymouth, PL4 8AA, United Kingdom Tel.: +44 (0)175284908, E-mail: zhong@junpei.eu M. Peniak Cortexica Vision Systems, London, United Kingdom J. Tani Korea Advanced Institute of Science and Technology, Daejeon, South Korea T. Ogata Department of Intermedia Art and Science, Waseda University, Tokyo, Japan A. Cangelosi Centre for Robotics and Neural Systems, University of Plymouth, United Kingdom",
"title": ""
},
{
"docid": "c861009ed309b208218182e60b126228",
"text": "We present a novel beam-search decoder for grammatical error correction. The decoder iteratively generates new hypothesis corrections from current hypotheses and scores them based on features of grammatical correctness and fluency. These features include scores from discriminative classifiers for specific error categories, such as articles and prepositions. Unlike all previous approaches, our method is able to perform correction of whole sentences with multiple and interacting errors while still taking advantage of powerful existing classifier approaches. Our decoder achieves an F1 correction score significantly higher than all previous published scores on the Helping Our Own (HOO) shared task data set.",
"title": ""
},
{
"docid": "6e1013e84468c3809742bbe826598f21",
"text": "Many-light rendering methods replace multi-bounce light transport with direct lighting from many virtual point light sources to allow for simple and efficient computation of global illumination. Lightcuts build a hierarchy over virtual lights, so that surface points can be shaded with a sublinear number of lights while minimizing error. However, the original algorithm needs to run on every shading point of the rendered image. It is well known that the performance of Lightcuts can be improved by exploiting the coherence between individual cuts. We propose a novel approach where we invest into the initial lightcut creation at representative cache records, and then directly interpolate the input lightcuts themselves as well as per-cluster visibility for neighboring shading points. This allows us to improve upon the performance of the original Lightcuts algorithm by a factor of 4−8 compared to an optimized GPU-implementation of Lightcuts, while introducing only a small additional approximation error. The GPU-implementation of our technique enables us to create previews of Lightcuts-based global illumination renderings.",
"title": ""
},
{
"docid": "aeef684279957bc49bc4f65c5a0e328d",
"text": "Modern warehouse-scale computers (WSCs) are being outfitted with accelerators to provide the significant compute required by emerging intelligent personal assistant (IPA) workloads such as voice recognition, image classification, and natural language processing. It is well known that the diurnal user access pattern of user-facing services provides a strong incentive to co-locate applications for better accelerator utilization and efficiency, and prior work has focused on enabling co-location on multicore processors. However, interference when co-locating applications on non-preemptive accelerators is fundamentally different than contention on multi-core CPUs and introduces a new set of challenges to reduce QoS violation. To address this open problem, we first identify the underlying causes for QoS violation in accelerator-outfitted servers. Our experiments show that queuing delay for the compute resources and PCI-e bandwidth contention for data transfer are the main two factors that contribute to the long tails of user-facing applications. We then present Baymax, a runtime system that orchestrates the execution of compute tasks from different applications and mitigates PCI-e bandwidth contention to deliver the required QoS for user-facing applications and increase the accelerator utilization. Using DjiNN, a deep neural network service, Sirius, an end-to-end IPA workload, and traditional applications on a Nvidia K40 GPU, our evaluation shows that Baymax improves the accelerator utilization by 91.3% while achieving the desired 99%-ile latency target for for user-facing applications. In fact, Baymax reduces the 99%-ile latency of user-facing applications by up to 195x over default execution.",
"title": ""
},
{
"docid": "f7c73ca2b6cd6da6fec42076910ed3ec",
"text": "The goal of rating-based recommender systems is to make personalized predictions and recommendations for individual users by leveraging the preferences of a community of users with respect to a collection of items like songs or movies. Recommender systems are often based on intricate statistical models that are estimated from data sets containing a very high proportion of missing ratings. This work describes evidence of a basic incompatibility between the properties of recommender system data sets and the assumptions required for valid estimation and evaluation of statistical models in the presence of missing data. We discuss the implications of this problem and describe extended modelling and evaluation frameworks that attempt to circumvent it. We present prediction and ranking results showing that models developed and tested under these extended frameworks can significantly outperform standard models.",
"title": ""
},
{
"docid": "9ee426885fe9b873992d4c59aa569db6",
"text": "We introduce two data augmentation and normalization techniques, which, used with a CNN-LSTM, significantly reduce Word Error Rate (WER) and Character Error Rate (CER) beyond best-reported results on handwriting recognition tasks. (1) We apply a novel profile normalization technique to both word and line images. (2) We augment existing text images using random perturbations on a regular grid. We apply our normalization and augmentation to both training and test images. Our approach achieves low WER and CER over hundreds of authors, multiple languages and a variety of collections written centuries apart. Image augmentation in this manner achieves state-of-the-art recognition accuracy on several popular handwritten word benchmarks.",
"title": ""
},
{
"docid": "a63ded5cf6ad5aef3b8732d0921fc066",
"text": "BACKGROUND AND PURPOSE\nWe sought to modify existing sex-specific health risk appraisal functions (profile functions) for the prediction of first stroke that better assess the effects of the use of antihypertensive medication.\n\n\nMETHODS\nHealth risk appraisal functions were previously developed from the Framingham Study cohort. These functions were Cox proportional hazards regression models relating age, systolic blood pressure, diabetes mellitus, cigarette smoking, prior cardiovascular disease, atrial fibrillation, left ventricular hypertrophy by electrocardiogram, and the use of antihypertensive medication to the occurrence of stroke. Closer examination of the data indicated that antihypertensive therapy effect is present only for systolic blood pressures between 110 and 200 mm Hg. Adjustments to the regressions to better fit the observed data were developed and tested for statistical significance and goodness-of-fit of the model residuals.\n\n\nRESULTS\nModified functions more consistent with the data were developed, and, from these, tables to evaluate 10-year risk of first stroke were computed.\n\n\nCONCLUSIONS\nThe stroke profile can be used for evaluation of the risk of stroke and suggestion of risk factor modification to reduce risk. The effect of antihypertensive therapy in the evaluation of stroke risk can now be better evaluated.",
"title": ""
},
{
"docid": "2ba529e0c53554d7aa856a4766d45426",
"text": "Trauma in childhood is a psychosocial, medical, and public policy problem with serious consequences for its victims and for society. Chronic interpersonal violence in children is common worldwide. Developmental traumatology, the systemic investigation of the psychiatric and psychobiological effects of chronic overwhelming stress on the developing child, provides a framework and principles when empirically examining the neurobiological effects of pediatric trauma. This article focuses on peer-reviewed literature on the neurobiological sequelae of childhood trauma in children and in adults with histories of childhood trauma.",
"title": ""
},
{
"docid": "21b9b7995cabde4656c73e9e278b2bf5",
"text": "Topic modeling techniques have been recently applied to analyze and model source code. Such techniques exploit the textual content of source code to provide automated support for several basic software engineering activities. Despite these advances, applications of topic modeling in software engineering are frequently suboptimal. This can be attributed to the fact that current state-of-the-art topic modeling techniques tend to be data intensive. However, the textual content of source code, embedded in its identifiers, comments, and string literals, tends to be sparse in nature. This prevents classical topic modeling techniques, typically used to model natural language texts, to generate proper models when applied to source code. Furthermore, the operational complexity and multi-parameter calibration often associated with conventional topic modeling techniques raise important concerns about their feasibility as data analysis models in software engineering. Motivated by these observations, in this paper we propose a novel approach for topic modeling designed for source code. The proposed approach exploits the basic assumptions of the cluster hypothesis and information theory to discover semantically coherent topics in software systems. Ten software systems from different application domains are used to empirically calibrate and configure the proposed approach. The usefulness of generated topics is empirically validated using human judgment. Furthermore, a case study that demonstrates thet operation of the proposed approach in analyzing code evolution is reported. The results show that our approach produces stable, more interpretable, and more expressive topics than classical topic modeling techniques without the necessity for extensive parameter calibration.",
"title": ""
},
{
"docid": "6dca32a1e4ba096300c435fd0dce7858",
"text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading inverse problem theory and methods for model parameter estimation is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.",
"title": ""
},
{
"docid": "44c66a2654fdc7ab72dabaa8e31f0e99",
"text": "The availability of new generation multispectral sensors of the Landsat 8 and Sentinel-2 satellite platforms offers unprecedented opportunities for long-term high-frequency monitoring applications. The present letter aims at highlighting some potentials and challenges deriving from the spectral and spatial characteristics of the two instruments. Some comparisons between corresponding bands and band combinations were performed on the basis of different datasets: the first consists of a set of simulated images derived from a hyperspectral Hyperion image, the other five consist instead of pairs of real images (Landsat 8 and Sentinel-2A) acquired on the same date, over five areas. Results point out that in most cases the two sensors can be well combined; however, some issues arise regarding near-infrared bands when Sentinel-2 data are combined with both Landsat 8 and older Landsat images.",
"title": ""
}
] |
scidocsrr
|
3b6b43583a159f924939a6d6c00c918e
|
Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features
|
[
{
"docid": "5664ca8d7f0f2f069d5483d4a334c670",
"text": "In Semantic Textual Similarity, systems rate the degree of semantic equivalence between two text snippets. This year, the participants were challenged with new data sets for English, as well as the introduction of Spanish, as a new language in which to assess semantic similarity. For the English subtask, we exposed the systems to a diversity of testing scenarios, by preparing additional OntoNotesWordNet sense mappings and news headlines, as well as introducing new genres, including image descriptions, DEFT discussion forums, DEFT newswire, and tweet-newswire headline mappings. For Spanish, since, to our knowledge, this is the first time that official evaluations are conducted, we used well-formed text, by featuring sentences extracted from encyclopedic content and newswire. The annotations for both tasks leveraged crowdsourcing. The Spanish subtask engaged 9 teams participating with 22 system runs, and the English subtask attracted 15 teams with 38 system runs.",
"title": ""
},
{
"docid": "ac6329671cf9bb43693870bc1f41b6e4",
"text": "We present the Siamese Continuous Bag of Words (Siamese CBOW) model, a neural network for efficient estimation of highquality sentence embeddings. Averaging the embeddings of words in a sentence has proven to be a surprisingly successful and efficient way of obtaining sentence embeddings. However, word embeddings trained with the methods currently available are not optimized for the task of sentence representation, and, thus, likely to be suboptimal. Siamese CBOW handles this problem by training word embeddings directly for the purpose of being averaged. The underlying neural network learns word embeddings by predicting, from a sentence representation, its surrounding sentences. We show the robustness of the Siamese CBOW model by evaluating it on 20 datasets stemming from a wide variety of sources.",
"title": ""
}
] |
[
{
"docid": "2c41d6a0d128a3443394b0dd204d6eac",
"text": "The paper proposes a fresh look at the concept of goal and adva nces that motivational attitudes like desire, goal and intention are just facets of the broader not io f (acceptable) outcome. We propose to encode the preferences of an agent as sequences of “altern ativ acceptable outcomes”. We then study how the agent’s beliefs and norms can be used to filter th e mental attitudes out of the sequences of alternative acceptable outcomes. Finally, we formalise such intuitions in a novel Modal Defeasible Logic and we prove that the resulting formalisation is compu tationally feasible.",
"title": ""
},
{
"docid": "0f37f7306f879ca0b5d35516a64818fb",
"text": "Much of empirical corporate finance focuses on sources of the demand for various forms of capital, not the supply. Recently, this has changed. Supply effects of equity and credit markets can arise from a combination of three ingredients: investor tastes, limited intermediation, and corporate opportunism. Investor tastes when combined with imperfectly competitive intermediaries lead prices and interest rates to deviate from fundamental values. Opportunistic firms respond by issuing securities with high prices and investing the proceeds. A link between capital market prices and corporate finance can in principle come from either supply or demand. This framework helps to organize empirical approaches that more precisely identify and quantify supply effects through variation in one of these three ingredients. Taken as a whole, the evidence shows that shifting equity and credit market conditions play an important role in dictating corporate finance and investment. 181 A nn u. R ev . F in . E co n. 2 00 9. 1: 18 120 5. D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g by H ar va rd U ni ve rs ity o n 02 /1 1/ 14 . F or p er so na l u se o nl y.",
"title": ""
},
{
"docid": "1904d8b3c45bc24acdc0294d84d66c79",
"text": "The propagation of unreliable information is on the rise in many places around the world. This expansion is facilitated by the rapid spread of information and anonymity granted by the Internet. The spread of unreliable information is a well-studied issue and it is associated with negative social impacts. In a previous work, we have identified significant differences in the structure of news articles from reliable and unreliable sources in the US media. Our goal in this work was to explore such differences in the Brazilian media. We found significant features in two data sets: one with Brazilian news in Portuguese and another one with US news in English. Our results show that features related to the writing style were prominent in both data sets and, despite the language difference, some features have a universal behavior, being significant to both US and Brazilian news articles. Finally, we combined both data sets and used the universal features to build a machine learning classifier to predict the source type of a news article as reliable or unreliable.",
"title": ""
},
{
"docid": "dc0138d731e3e76c7523fedbc5a83b7e",
"text": "The affinity propagation (AP) clustering algorithm has received much attention in the past few years. AP is appealing because it is efficient, insensitive to initialization, and it produces clusters at a lower error rate than other exemplar-based methods. However, its single-exemplar model becomes inadequate when applied to model multisubclasses in some situations such as scene analysis and character recognition. To remedy this deficiency, we have extended the single-exemplar model to a multi-exemplar one to create a new multi-exemplar affinity propagation (MEAP) algorithm. This new model automatically determines the number of exemplars in each cluster associated with a super exemplar to approximate the subclasses in the category. Solving the model is NP--hard and we tackle it with the max-sum belief propagation to produce neighborhood maximum clusters, with no need to specify beforehand the number of clusters, multi-exemplars, and superexemplars. Also, utilizing the sparsity in the data, we are able to reduce substantially the computational time and storage. Experimental studies have shown MEAP's significant improvements over other algorithms on unsupervised image categorization and the clustering of handwritten digits.",
"title": ""
},
{
"docid": "8684054a3aed718333d39ea27a813791",
"text": "Article history: Received Accepted Available online 02 Sept. 2013 30 Sept. 2013 07 Oct. 2013 Colour is an inseparable as well as an important aspect of an interior design. The maximum influence in interior comes with the design of colour. So it is very important to study the colour and its effect in interior environment, it may be physiological as well as psychological. For this, articles were reviewed and analyzed from the existing literature, related to use of colour in both residence as well as commercial interior. The three major areas reviewed were (1) Psychological and physiological effect of colour (2) Meaning of Warm, Cool and Neutral Colour (3) Effect of Colour in forms. The results show that colour is important in designing functional spaces. The results of this analysis may benefit to architects, interior designer, and homeowner to use colour effectively in interior environment. © 2013 International Journal of Advanced Research in Science and Technology (IJARST). All rights reserved.",
"title": ""
},
{
"docid": "c17522f4b9f3b229dae56b394adb69a1",
"text": "This paper investigates fault effects and error propagation in a FlexRay-based network with hybrid topology that includes a bus subnetwork and a star subnetwork. The investigation is based on about 43500 bit-flip fault injection inside different parts of the FlexRay communication controller. To do this, a FlexRay communication controller is modeled by Verilog HDL at the behavioral level. Then, this controller is exploited to setup a FlexRay-based network composed of eight nodes (four nodes in the bus subnetwork and four nodes in the star subnetwork). The faults are injected in a node of the bus subnetwork and a node of the star subnetwork of the hybrid network Then, the faults resulting in the three kinds of errors, namely, content errors, syntax errors and boundary violation errors are characterized. The results of fault injection show that boundary violation errors and content errors are negligibly propagated to the star subnetwork and syntax errors propagation is almost equal in the both bus and star subnetworks. Totally, the percentage of errors propagation in the bus subnetwork is more than the star subnetwork.",
"title": ""
},
{
"docid": "c1b097ae37730e1bae4544a2555a8083",
"text": "Attackers have evolved classic code-injection attacks, such as those caused by buffer overflows to sophisticated Turing-complete codereuse attacks. Control-Flow Integrity (CFI) is a defence mechanism to eliminate control-flow hijacking attacks caused by common memory errors. CFI relies on static analysis for the creation of a program’s controlflow graph (CFG), then at runtime CFI ensures that the program follows the legitimate path. Thereby, when an attacker tries to execute malicious shellcode, CFI detects an unintended path and aborts execution. CFI heavily relies on static analysis for the accurate generation of the control-flow graph, and its security depends on how strictly the CFG is generated and enforced. This paper reviews the CFI schemes proposed over the last ten years and assesses their security guarantees against advanced exploitation techniques.",
"title": ""
},
{
"docid": "b27a7921ce2005727f1bf768802d660c",
"text": "Four methods for reviewing a body of research literature – narrative review, descriptive review, vote-counting, and meta-analysis – are compared. Meta-analysis as a formalized, systematic review method is discussed in detail in terms of its history, current status, advantages, common analytic methods, and recent developments. Meta-analysis is found to be underutilized in IS. Suggestions on encouraging the use of metaanalysis in IS research and procedures recommended for meta-analysis are also provided.",
"title": ""
},
{
"docid": "8dfd91ceadfcceea352975f9b5958aaf",
"text": "The bag-of-words representation commonly used in text analysis can be analyzed very efficiently and retains a great deal of useful information, but it is also troublesome because the same thought can be expressed using many different terms or one term can have very different meanings. Dimension reduction can collapse together terms that have the same semantics, to identify and disambiguate terms with multiple meanings and to provide a lower-dimensional representation of documents that reflects concepts instead of raw terms. In this chapter, we survey two influential forms of dimension reduction. Latent semantic indexing uses spectral decomposition to identify a lower-dimensional representation that maintains semantic properties of the documents. Topic modeling, including probabilistic latent semantic indexing and latent Dirichlet allocation, is a form of dimension reduction that uses a probabilistic model to find the co-occurrence patterns of terms that correspond to semantic topics in a collection of documents. We describe the basic technologies in detail and expose the underlying mechanism. We also discuss recent advances that have made it possible to apply these techniques to very large and evolving text collections and to incorporate network structure or other contextual information.",
"title": ""
},
{
"docid": "d75d453181293c92ec9bab800029e366",
"text": "For a majority of applications implemented today, the Intermediate Bus Architecture (IBA) has been the preferred power architecture. This power architecture has led to the development of the isolated, semi-regulated DC/DC converter known as the Intermediate Bus Converter (IBC). Fixed ratio Bus Converters that employ a new power topology known as the Sine Amplitude Converter (SAC) offer dramatic improvements in power density, noise reduction, and efficiency over the existing IBC products. As electronic systems continue to trend toward lower voltages with higher currents and as the speed of contemporary loads - such as state-of-the-art processors and memory - continues to increase, the power systems designer is challenged to provide small, cost effective and efficient solutions that offer the requisite performance. Traditional power architectures cannot, in the long run, provide the required performance. Vicor's Factorized Power Architecture (FPA), and the implementation of V·I Chips, provides a revolutionary new and optimal power conversion solution that addresses the challenge in every respect. The technology behind these power conversion engines used in the IBC and V·I Chips is analyzed and contextualized in a system perspective.",
"title": ""
},
{
"docid": "38524d91bcff648f96f5d693425dff7f",
"text": "This paper presents a predictive current control method and its application to a voltage source inverter. The method uses a discrete-time model of the system to predict the future value of the load current for all possible voltage vectors generated by the inverter. The voltage vector which minimizes a quality function is selected. The quality function used in this work evaluates the current error at the next sampling time. The performance of the proposed predictive control method is compared with hysteresis and pulsewidth modulation control. The results show that the predictive method controls very effectively the load current and performs very well compared with the classical solutions",
"title": ""
},
{
"docid": "2ab8c692ef55d2501ff61f487f91da9c",
"text": "A common discussion subject for the male part of the population in particular, is the prediction of next weekend’s soccer matches, especially for the local team. Knowledge of offensive and defensive skills is valuable in the decision process before making a bet at a bookmaker. In this article we take an applied statistician’s approach to the problem, suggesting a Bayesian dynamic generalised linear model to estimate the time dependent skills of all teams in a league, and to predict next weekend’s soccer matches. The problem is more intricate than it may appear at first glance, as we need to estimate the skills of all teams simultaneously as they are dependent. It is now possible to deal with such inference problems using the iterative simulation technique known as Markov Chain Monte Carlo. We will show various applications of the proposed model based on the English Premier League and Division 1 1997-98; Prediction with application to betting, retrospective analysis of the final ranking, detection of surprising matches and how each team’s properties vary during the season.",
"title": ""
},
{
"docid": "572be2eb18bd929c2b4e482f7d3e0754",
"text": "• Supervised learning --where the algorithm generates a function that maps inputs to desired outputs. One standard formulation of the supervised learning task is the classification problem: the learner is required to learn (to approximate the behavior of) a function which maps a vector into one of several classes by looking at several input-output examples of the function. • Unsupervised learning --which models a set of inputs: labeled examples are not available. • Semi-supervised learning --which combines both labeled and unlabeled examples to generate an appropriate function or classifier. • Reinforcement learning --where the algorithm learns a policy of how to act given an observation of the world. Every action has some impact in the environment, and the environment provides feedback that guides the learning algorithm. • Transduction --similar to supervised learning, but does not explicitly construct a function: instead, tries to predict new outputs based on training inputs, training outputs, and new inputs. • Learning to learn --where the algorithm learns its own inductive bias based on previous experience.",
"title": ""
},
{
"docid": "289fa33e14d440e0985e49166dabbbe7",
"text": "Authentication is vital to all forms of remote communication. A lack of authentication opens the door to man-in-the-middle attacks, which, if performed at a key moment, may subvert the entire interaction. Current approaches to authentication on the internet include certificate authorities and webs of trust. Both of those approaches have significant drawbacks: the former relies upon trusted third parties, introducing a central point of failure, and the latter has a high barrier to entry. We propose Certcoin, an alternative, public and decentralized authentication scheme. The core idea of Certcoin is maintaining a public ledger of domains and their associated public keys. We describe the Certcoin scheme, as well as several optimizations to make Certcoin more accessible to devices with limited storage capacity, such as smartphones. Our optimizations use tools such as cryptographic accumulators and distributed hash tables.",
"title": ""
},
{
"docid": "fa92d4ab5cfd83ff87391b60c1454f39",
"text": "In this paper, we study the stochastic gradient descent (SGD) method for the nonconvex nonsmooth optimization, and propose an accelerated SGD method by combining the variance reduction technique with Nesterov’s extrapolation technique. Moreover, based on the local error bound condition, we establish the linear convergence of our method to obtain a stationary point of the nonconvex optimization. In particular, we prove that not only the sequence generated linearly converges to a stationary point of the problem, but also the corresponding sequence of objective values is linearly convergent. Finally, some numerical experiments demonstrate the effectiveness of our method. To the best of our knowledge, it is first proved that the accelerated SGD method converges linearly to the local minimum of the nonconvex optimization.",
"title": ""
},
{
"docid": "7063d3eb38008bcd344f0ae1508cca61",
"text": "The fitness of an evolutionary individual can be understood in terms of its two basic components: survival and reproduction. As embodied in current theory, trade-offs between these fitness components drive the evolution of life-history traits in extant multicellular organisms. Here, we argue that the evolution of germ-soma specialization and the emergence of individuality at a new higher level during the transition from unicellular to multicellular organisms are also consequences of trade-offs between the two components of fitness-survival and reproduction. The models presented here explore fitness trade-offs at both the cell and group levels during the unicellular-multicellular transition. When the two components of fitness negatively covary at the lower level there is an enhanced fitness at the group level equal to the covariance of components at the lower level. We show that the group fitness trade-offs are initially determined by the cell level trade-offs. However, as the transition proceeds to multicellularity, the group level trade-offs depart from the cell level ones, because certain fitness advantages of cell specialization may be realized only by the group. The curvature of the trade-off between fitness components is a basic issue in life-history theory and we predict that this curvature is concave in single-celled organisms but becomes increasingly convex as group size increases in multicellular organisms. We argue that the increasingly convex curvature of the trade-off function is driven by the initial cost of reproduction to survival which increases as group size increases. To illustrate the principles and conclusions of the model, we consider aspects of the biology of the volvocine green algae, which contain both unicellular and multicellular members.",
"title": ""
},
{
"docid": "b8c4e7dd31801bcba9e4738ba47f74df",
"text": "This paper considers the vehicle navigation problem for an autonomous underwater vehicle (AUV) with six degrees of freedom. We approach this problem using an error state formulation of the Kalman filter. Integration of the vehicle's high-rate inertial measurement unit's (IMU's) accelerometers and gyros allow time propagation while other sensors provide measurement corrections. The low-rate aiding sensors include a Doppler velocity log (DVL), an acoustic long baseline (LBL) system that provides round-trip travel times from known locations, a pressure sensor for aiding depth, and an attitude sensor. Measurements correct the filter independently as they arrive, and as such, the filter is not dependent on the arrival of any particular measurement. We propose novel tightly coupled techniques for the incorporation of the LBL and DVL measurements. In particular, the LBL correction properly accounts for the error state throughout the measurement cycle via the state transition matrix. Alternate tightly coupled approaches ignore the error state, utilizing only the navigation state to account for the physical latencies in the measurement cycle. These approaches account for neither the uncertainty of vehicle trajectory between interrogation and reply, nor the error state at interrogation. The navigation system also estimates critical sensor calibration parameters to improve performance. The result is a robust navigation system. Simulation and experimental results are provided.",
"title": ""
},
{
"docid": "50f09f5b2e579e878f041f136bafe07e",
"text": "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.",
"title": ""
},
{
"docid": "ce8cabea6fff858da1fb9894860f7c2d",
"text": "This thesis investigates artificial agents learning to make strategic decisions in imperfect-information games. In particular, we introduce a novel approach to reinforcement learning from self-play. We introduce Smooth UCT, which combines the game-theoretic notion of fictitious play with Monte Carlo Tree Search (MCTS). Smooth UCT outperformed a classic MCTS method in several imperfect-information poker games and won three silver medals in the 2014 Annual Computer Poker Competition. We develop Extensive-Form Fictitious Play (XFP) that is entirely implemented in sequential strategies, thus extending this prominent game-theoretic model of learning to sequential games. XFP provides a principled foundation for self-play reinforcement learning in imperfect-information games. We introduce Fictitious Self-Play (FSP), a class of sample-based reinforcement learning algorithms that approximate XFP. We instantiate FSP with neuralnetwork function approximation and deep learning techniques, producing Neural FSP (NFSP). We demonstrate that (approximate) Nash equilibria and their representations (abstractions) can be learned using NFSP end to end, i.e. interfacing with the raw inputs and outputs of the domain. NFSP approached the performance of state-of-the-art, superhuman algorithms in Limit Texas Hold’em an imperfect-information game at the absolute limit of tractability using massive computational resources. This is the first time that any reinforcement learning algorithm, learning solely from game outcomes without prior domain knowledge, achieved such a feat.",
"title": ""
},
{
"docid": "eba25ae59603328f3ef84c0994d46472",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
}
] |
scidocsrr
|
310413d5bc50c48467822a5aafdc1e9a
|
Ultrastructural characterization of male and female Physaloptera rara (Spirurida: Physalopteridae): feline stomach worms
|
[
{
"docid": "31da7acfb9d98421bbf7e70a508ba5df",
"text": "Habronema muscae (Spirurida: Habronematidae) occurs in the stomach of equids, is transmitted by adult muscid dipterans and causes gastric habronemiasis. Scanning electron microscopy (SEM) was used to study the morphological aspects of adult worms of this nematode in detail. The worms possess two trilobed lateral lips. The buccal cavity was cylindrical, with thick walls and without teeth. Around the mouth, four submedian cephalic papillae and two amphids were seen. A pair of lateral cervical papillae was present. There was a single lateral ala and in the female the vulva was situated in the middle of the body. In the male, there were wide caudal alae, and the spicules were unequal and dissimilar. At the posterior end of the male, four pairs of stalked precloacal papillae, unpaired post-cloacal papillae and a cluster of small papillae were present. In one case, the anterior end showed abnormal features.",
"title": ""
}
] |
[
{
"docid": "c120406dd4e60a9bb33dd4a87cbd3616",
"text": "Intersubjectivity is an important concept in psychology and sociology. It refers to sharing conceptualizations through social interactions in a community and using such shared conceptualization as a resource to interpret things that happen in everyday life. In this work, we make use of intersubjectivity as the basis to model shared stance and subjectivity for sentiment analysis. We construct an intersubjectivity network which links review writers, terms they used, as well as the polarities of the terms. Based on this network model, we propose a method to learn writer embeddings which are subsequently incorporated into a convolutional neural network for sentiment analysis. Evaluations on the IMDB, Yelp 2013 and Yelp 2014 datasets show that the proposed approach has achieved the state-of-the-art performance.",
"title": ""
},
{
"docid": "4ba77f32c62960b511c8d63c043d50d1",
"text": "Although the performance of the state-of-the-art OCR systems is very high, they can still introduce errors due to various reasons, and when it comes to historical documents with old manuscripts the performance of such systems gets even worse. That is why Post-OCR error correction has been an open problem for many years. Many state-of-the-art approaches have been introduced through the recent years. This paper contributes to the field of Post-OCR Error Correction by introducing two novel deep learning approaches to improve the accuracy of OCR systems, and a post processing technique that can further enhance the quality of the output results. These approaches are based on Neural Machine Translation (NMT) and were motivated by the great success that deep learning introduced to the field of Natural Language Processing. Finally, we will compare the state-of-the-art approaches in Post-OCR Error Correction with the newly introduced systems and discuss the results.",
"title": ""
},
{
"docid": "5fae8f62b7b50db1bd5235eeb8baf0eb",
"text": "This paper presents and compares algorithms for combined acoustic echo cancellation and noise reduction for hands-free telephones. A structure is proposed, consisting of a conventional acoustic echo canceler and a frequency domain postfilter in the sending path of the hands-free system. The postfilter applies the spectral weighting technique and attenuates both the background noise and the residual echo which remains after imperfect echo cancellation. Two weighting rules for the postfilter are discussed. The first is a conventional one, known from noise reduction, which is extended to attenuate residual echo as well as noise. The second is a psychoacoustically motivated weighting rule. Both rules are evaluated and compared by instrumental and auditive tests. They succeed about equally well in attenuating the noise and the residual echo. In listening tests, however, the psychoacoustically motivated weighting rule is mostly preferred since it leads to more natural near end speech and to less annoying residual noise.",
"title": ""
},
{
"docid": "453f381177097be0ec43b44688454472",
"text": "Dendritic spines of pyramidal neurons in the cerebral cortex undergo activity-dependent structural remodelling that has been proposed to be a cellular basis of learning and memory. How structural remodelling supports synaptic plasticity, such as long-term potentiation, and whether such plasticity is input-specific at the level of the individual spine has remained unknown. We investigated the structural basis of long-term potentiation using two-photon photolysis of caged glutamate at single spines of hippocampal CA1 pyramidal neurons. Here we show that repetitive quantum-like photorelease (uncaging) of glutamate induces a rapid and selective enlargement of stimulated spines that is transient in large mushroom spines but persistent in small spines. Spine enlargement is associated with an increase in AMPA-receptor-mediated currents at the stimulated synapse and is dependent on NMDA receptors, calmodulin and actin polymerization. Long-lasting spine enlargement also requires Ca2+/calmodulin-dependent protein kinase II. Our results thus indicate that spines individually follow Hebb's postulate for learning. They further suggest that small spines are preferential sites for long-term potentiation induction, whereas large spines might represent physical traces of long-term memory.",
"title": ""
},
{
"docid": "8933d7d0f57a532ef27b9dbbb3727a88",
"text": "All people can not do as they plan, it happens because of their habits. Therefore, habits and moods may affect their productivity. Hence, the habits and moods are the important parts of person's life. Such habits may be analyzed with various machine learning techniques as available nowadays. Now the question of analyzing the Habits and moods of a person with a goal of increasing one's productivity comes to mind. This paper discusses one such technique called HDML (Habit Detection with Machine Learning). HDML model analyses the mood which helps us to deal with a bad mood or a state of unproductivity, through suggestions about such activities that alleviate our mood. The overall accuracy of the model is about 87.5 %.",
"title": ""
},
{
"docid": "9c106d71e5c40c3338cf4acd1e142621",
"text": "Pomegranate peels were studied for the effect of gamma irradiation on microbial decontamination along with its effect on total phenolic content and in vitro antioxidant activity. Gamma irradiation was applied at various dose levels (5.0, 10.0, 15.0 and 25.0 kGy) on pomegranate peel powder. Both the values of total phenolic content and in vitro antioxidant activity were positively correlated and showed a significant increase (p < 0.05) for 10.0 kGy irradiated dose level immediately after irradiation and 60 days of post irradiation storage. At 5.0 kGy and above dose level, gamma irradiation has reduced microbial count of pomegranate peel powder to nil. Post irradiation storage studies also showed that, the irradiated peel powder was microbiologically safe even after 90 days of storage period.",
"title": ""
},
{
"docid": "c891330d08fb8e41d179e803524a1737",
"text": "This article deals with active frequency filter design using signalflow graphs. The procedure of multifunctional circuit design that can realize more types of frequency filters is shown. To design a new circuit the Mason – Coates graphs with undirected self-loops have been used. The voltage conveyors whose properties are dual to the properties of the well-known current conveyors have been used as the active element.",
"title": ""
},
{
"docid": "1c126457ee6b61be69448ee00a64d557",
"text": "Class imbalance is a common problem in the case of real-world object detection and classification tasks. Data of some classes are abundant, making them an overrepresented majority, and data of other classes are scarce, making them an underrepresented minority. This imbalance makes it challenging for a classifier to appropriately learn the discriminating boundaries of the majority and minority classes. In this paper, we propose a cost-sensitive (CoSen) deep neural network, which can automatically learn robust feature representations for both the majority and minority classes. During training, our learning procedure jointly optimizes the class-dependent costs and the neural network parameters. The proposed approach is applicable to both binary and multiclass problems without any modification. Moreover, as opposed to data-level approaches, we do not alter the original data distribution, which results in a lower computational cost during the training process. We report the results of our experiments on six major image classification data sets and show that the proposed approach significantly outperforms the baseline algorithms. Comparisons with popular data sampling techniques and CoSen classifiers demonstrate the superior performance of our proposed method.",
"title": ""
},
{
"docid": "54ec681832cd276b6641f7e7e08205a7",
"text": "In this paper, we proposed PRPRS (Personalized Research Paper Recommendation System) that designed expansively and implemented a UserProfile-based algorithm for extracting keyword by keyword extraction and keyword inference. If the papers don't have keyword section, we consider the title and text as an argument of keyword and execute the algorithm. Then, we create the possible combination from each word of title. We extract the combinations presented in the main text among the longest word combinations which include the same words. If the number of extracted combinations is more than the standard number, we used that combination as keyword. Otherwise, we refer the main text and extract combination as much as standard in order of high Term-Frequency. Whenever collected research papers by topic are selected, a renewal of UserProfile increases the frequency of each Domain, Topic and keyword. Each ratio of occurrence is recalculated and reflected on UserProfile. PRPRS calculates the similarity between given topic and collected papers by using Cosine Similarity which is used to recommend initial paper for each topic in Information retrieval. We measured satisfaction and accuracy for each system-recommended paper to test and evaluated performances of the suggested system. Finally PRPRS represents high level of satisfaction and accuracy.",
"title": ""
},
{
"docid": "a04e5f210af76b470b56affbe55d1af3",
"text": "We present an evaluation of Colorgorical, a web-based tool for creating discriminable and aesthetically preferable categorical color palettes. Colorgorical uses iterative semi-random sampling to pick colors from CIELAB space based on user-defined discriminability and preference importances. Colors are selected by assigning each a weighted sum score that applies the user-defined importances to Perceptual Distance, Name Difference, Name Uniqueness, and Pair Preference scoring functions, which compare a potential sample to already-picked palette colors. After, a color is added to the palette by randomly sampling from the highest scoring palettes. Users can also specify hue ranges or build off their own starting palettes. This procedure differs from previous approaches that do not allow customization (e.g., pre-made ColorBrewer palettes) or do not consider visualization design constraints (e.g., Adobe Color and ACE). In a Palette Score Evaluation, we verified that each scoring function measured different color information. Experiment 1 demonstrated that slider manipulation generates palettes that are consistent with the expected balance of discriminability and aesthetic preference for 3-, 5-, and 8-color palettes, and also shows that the number of colors may change the effectiveness of pair-based discriminability and preference scores. For instance, if the Pair Preference slider were upweighted, users would judge the palettes as more preferable on average. Experiment 2 compared Colorgorical palettes to benchmark palettes (ColorBrewer, Microsoft, Tableau, Random). Colorgorical palettes are as discriminable and are at least as preferable or more preferable than the alternative palette sets. In sum, Colorgorical allows users to make customized color palettes that are, on average, as effective as current industry standards by balancing the importance of discriminability and aesthetic preference.",
"title": ""
},
{
"docid": "bc06c989afd9f2e5cfe788e5d3455748",
"text": "The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly less parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.",
"title": ""
},
{
"docid": "61b02ae1994637115e3baec128f05bd8",
"text": "Ensuring reliability as the electrical grid morphs into the “smart grid” will require innovations in how we assess the state of the grid, for the purpose of proactive maintenance, rather than reactive maintenance – in the future, we will not only react to failures, but also try to anticipate and avoid them using predictive modeling (machine learning) techniques. To help in meeting this challenge, we present the Neutral Online Visualization-aided Autonomic evaluation framework (NOVA) for evaluating machine learning algorithms for preventive maintenance on the electrical grid. NOVA has three stages provided through a unified user interface: evaluation of input data quality, evaluation of machine learning results, and evaluation of the reliability improvement of the power grid. A prototype version of NOVA has been deployed for the power grid in New York City, and it is able to evaluate machine learning systems effectively and efficiently. Appearing in the ICML 2011 Workshop on Machine Learning for Global Challenges, Bellevue, WA, USA, 2011. Copyright 2011 by the author(s)/owner(s).",
"title": ""
},
{
"docid": "f9f1f442068d8eb0d87d05876a299179",
"text": "One common application of text mining is event extraction, which encompasses deducing specific knowledge concerning incidents referred to in texts. Event extraction can be applied to various types of written text, e.g., (online) news messages, blogs, and manuscripts. This literature survey reviews text mining techniques that are employed for various event extraction purposes. It provides general guidelines on how to choose a particular event extraction technique depending on the user, the available content, and the scenario of use.",
"title": ""
},
{
"docid": "85012f6ad9aa8f3e80a9c971076b4eb9",
"text": "The article aims to introduce an integrated web-based interactive data platform for molecular dynamic simulations using the datasets generated by different life science communities from Armenia. The suggested platform, consisting of data repository and workflow management services, is vital for current and future scientific discoveries in the life science domain. We focus on interactive data visualization workflow service as a key to perform more in-depth analyzes of research data outputs, helping to understand the problems efficiently and to consolidate the data into one collective illustration platform. The functionalities of the integrated data platform is presented as an advanced integrated environment to capture, analyze, process and visualize the scientific data.",
"title": ""
},
{
"docid": "a0b9b40328c03cbbe801e027fb793117",
"text": "BACKGROUND\nA better knowledge of the job aspects that may predict home health care nurses' burnout and work engagement is important in view of stress prevention and health promotion. The Job Demands-Resources model predicts that job demands and resources relate to burnout and work engagement but has not previously been tested in the specific context of home health care nursing.\n\n\nPURPOSE\nThe present study offers a comprehensive test of the Job-Demands Resources model in home health care nursing. We investigate the main and interaction effects of distinctive job demands (workload, emotional demands and aggression) and resources (autonomy, social support and learning opportunities) on burnout and work engagement.\n\n\nMETHODS\nAnalyses were conducted using cross-sectional data from 675 Belgian home health care nurses, who participated in a voluntary and anonymous survey.\n\n\nRESULTS\nThe results show that workload and emotional demands were positively associated with burnout, whereas aggression was unrelated to burnout. All job resources were associated with higher levels of work engagement and lower levels of burnout. In addition, social support buffered the positive relationship between workload and burnout.\n\n\nCONCLUSIONS\nHome health care organizations should invest in dealing with workload and emotional demands and stimulating the job resources under study to reduce the risk of burnout and increase their nurses' work engagement.",
"title": ""
},
{
"docid": "fac9465df30dd5d9ba5bc415b2be8172",
"text": "In the Railway System, Railway Signalling System is the vital control equipment responsible for the safe operation of trains. In Railways, the system of communication from railway stations and running trains is by the means of signals through wired medium. Once the train leaves station, there is no communication between the running train and the station or controller. Hence, in case of failures or in emergencies in between stations, immediate information cannot be given and a particular problem will escalate with valuable time lost. Because of this problem only a single train can run in between two nearest stations. Now a days, Railway all over the world is using Optical Fiber cable for communication between stations and to send signals to trains. The usage of optical fibre cables does not lend itself for providing trackside communication as in the case of copper cable. Hence, another transmission medium is necessary for communication outside the station limits with drivers, guards, maintenance gangs, gateman etc. Obviously the medium of choice for such communication is wireless. With increasing speed and train density, adoption of train control methods such as Automatic warning system, (AWS) or, Automatic train stop (ATS), or Positive train separation (PTS) is a must. Even though, these methods traditionally pick up their signals from track based beacons, Wireless Sensor Network based systems will suit the Railways much more. In this paper, we described a new and innovative medium for railways that is Wireless Sensor Network (WSN) based Railway Signalling System and conclude that Introduction of WSN in Railways will not only achieve economy but will also improve the level of safety and efficiency of train operations.",
"title": ""
},
{
"docid": "08e5d41228c9c6700873e93b5cb7fa28",
"text": "We propose a novel approach for automatic segmentation of anatomical structures on 3D CT images by voting from a fully convolutional network (FCN), which accomplishes an end-to-end, voxel-wise multiple-class classification to map each voxel in a CT image directly to an anatomical label. The proposed method simplifies the segmentation of the anatomical structures (including multiple organs) in a CT image (generally in 3D) to majority voting for the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. An FCN consisting of “convolution” and “de-convolution” parts is trained and re-used for the 2D semantic image segmentation of different slices of CT scans. All of the procedures are integrated into a simple and compact all-in-one network, which can segment complicated structures on differently sized CT images that cover arbitrary CT scan regions without any adjustment. We applied the proposed method to segment a wide range of anatomical structures that consisted of 19 types of targets in the human torso, including all the major organs. A database consisting of 240 3D CT scans and a humanly annotated ground truth was used for training and testing. The results showed that the target regions for the entire set of CT test scans were segmented with acceptable accuracies (89 % of total voxels were labeled correctly) against the human annotations. The experimental results showed better efficiency, generality, and flexibility of this end-to-end learning approach on CT image segmentations comparing to conventional methods guided by human expertise.",
"title": ""
},
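The passage above reduces 3D segmentation to majority voting over per-viewpoint 2D FCN predictions. A minimal NumPy sketch of that fusion step follows; the volume shape, number of views, and class count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse_by_majority_vote(label_volumes, num_classes):
    """Fuse per-view 3D label volumes (same shape, integer labels) by voxel-wise majority vote."""
    votes = np.zeros(label_volumes[0].shape + (num_classes,), dtype=np.int32)
    for vol in label_volumes:
        # Accumulate one vote per voxel for the class predicted by this view.
        for c in range(num_classes):
            votes[..., c] += (vol == c)
    return np.argmax(votes, axis=-1)  # winning class per voxel

# Toy usage with three hypothetical per-view predictions of a 4x4x4 volume.
views = [np.random.randint(0, 3, size=(4, 4, 4)) for _ in range(3)]
fused = fuse_by_majority_vote(views, num_classes=3)
```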
{
"docid": "509c4b0d3cfd457b1ef22ee5de1830b8",
"text": "Convolutional neural nets (convnets) trained from massive labeled datasets [1] have substantially improved the state-of-the-art in image classification [2] and object detection [3]. However, visual understanding requires establishing correspondence on a finer level than object category. Given their large pooling regions and training from whole-image labels, it is not clear that convnets derive their success from an accurate correspondence model which could be used for precise localization. In this paper, we study the effectiveness of convnet activation features for tasks requiring correspondence. We present evidence that convnet features localize at a much finer scale than their receptive field sizes, that they can be used to perform intraclass aligment as well as conventional hand-engineered features, and that they outperform conventional features in keypoint prediction on objects from PASCAL VOC 2011 [4].",
"title": ""
},
{
"docid": "46db52db18591f50d16c4bcf93480e5c",
"text": "This paper presents a new isolated single-stage pulse-width modulation rectifier system based on the recently presented Swiss rectifier topology providing the isolation by replacing a buck with a forward converter. The principle of operation and a new modulation technique which compensates the reactive power generated by the input filter at light load and maximizes the power factor are discussed. Furthermore, the analytical equations for the stress in the semiconductor devices useful for the system optimization are derived. The proposed topology and modulation technique are experimentally validated on a 3.3-kW 115 Vac to 270 Vdc prototype demonstrator.",
"title": ""
},
{
"docid": "a9ff593d6eea9f28aa1d2b41efddea9b",
"text": "A central task in the study of evolution is the reconstruction of a phylogenetic tree from sequences of current-day taxa. A well supported approach to tree reconstruction performs maximum likelihood (ML) analysis. Unfortunately, searching for the maximum likelihood phylogenetic tree is computationally expensive. In this paper, we describe a new algorithm that uses Structural-EM for learning maximum likelihood trees. This algorithm is similar to the standard EM method for estimating branch lengths, except that during iterations of this algorithms the topology is improved as well as the branch length. The algorithm performs iterations of two steps. In the E-Step, we use the current tree topology and branch lengths to compute expected sufficient statistics, which summarize the data. In the M-Step, we search for a topology that maximizes the likelihood with respect to these expected sufficient statistics. As we show, searching for better topologies inside the M-step can be done efficiently, as opposed to standard search over topologies. We prove that each iteration of this procedure increases the likelihood of the topology, and thus the procedure must converge. We evaluate our new algorithm on both synthetic and real sequence data, and show that it is both dramatically faster and finds more plausible trees than standard search for maximum likelihood phylogenies.",
"title": ""
}
] |
scidocsrr
|
47803985a7e20308f0ea4ac4bc1901b7
|
Dex: a semantic-graph differencing tool for studying changes in large code bases
|
[
{
"docid": "b16dfd7a36069ed7df12f088c44922c5",
"text": "This paper considers the problem of computing the editing distance between unordered, labeled trees. We give efficient polynomial-time algorithms for the case when one tree is a string or has a bounded number of leaves. By contrast, we show that the problem is NP -complete even for binary trees having a label alphabet of size two. keywords: Computational Complexity, Unordered trees, NP -completeness.",
"title": ""
}
] |
[
{
"docid": "9e5eb4f68046524f7a178828c5ce705f",
"text": "Modularity refers to the use of common units to create product variants. As companies strive to rationalize engineering design, manufacturing, and support processes and to produce a large variety of products at a lower cost, modularity is becoming a focus. However, modularity has been treated in the literature in an abstract form and it has not been satisfactorily explored in industry. This paper aims at the development of models and solution approaches to the modularity problem for mechanical, electrical, and mixed process products (e.g., electromechanical products). To interpret various types of modularity, e.g., component-swapping, component-sharing, and bus modularity, a matrix representation of the modularity problem is presented. The decomposition approach is used to determine modules for different products. The representation and solution approaches presented are illustrated with numerous examples. The paper presents a formal approach to modularity allowing for optimal forming of modules even in the situation of insufficient availability of information. The modules determined may be shared across different products.",
"title": ""
},
{
"docid": "ba206d552bb33f853972e3f2e70484bc",
"text": "Presumptive stressful life event scale Dear Sir, in different demographic and clinical categories, which has not been attempted. I have read with considerable interest the article entitled, Presumptive stressful life events scale (PSLES)-a new stressful life events scale for use in India by Gurmeet Singh et al (April 1984 issue). I think it is a commendable effort to develop such a scale which would potentially be of use in our setting. However, the research raises several questions, which have not been dealt with in the' paper. The following are the questions or comments which ask for response from the authors: a) The mode of selection of 51 items is not mentioned. If taken arbitrarily they could suggest a bias. If selected from clinical experience, there could be a likelihood of certain events being missed. An ideal way would be to record various events from a number of persons (and patients) and then prepare a list of commonly occuring events. b) It is noteworthy that certain culture specific items as dowry, birth of daughter, etc. are included. Other relevant events as conflict with in-laws (not regarding dowry), refusal by match seeking team (difficulty in finding match for marriage) and lack of son, could be considered stressful in our setting. c) Total number of life events are a function of age, as has been mentioned in the review of literature also, hence age categorisation as under 35 and over 35 might neither be proper nor sufficient. The relationship of number of life events in different age groups would be interesting to note. d) Also, more interesting would be to examine the rank order of life events e) A briefened version would be more welcome. The authors should try to evolve a version of around about 25-30 items, which could be easily applied clinically or for research purposes. As can be seen, from items after serial number 30 (Table 4) many could be excluded. f) The cause and effect relationship is difficult to comment from the results given by the scale. As is known, 'stressfulness' of the event depends on an individuals perception of the event. That persons with higher neu-roticism scores report more events could partly be due to this. g) A minor point, Table 4 mentions Standard Deviations however S. D. has not been given for any item. Reply: I am grateful for the interest shown by Dr. Chaturvedi and his …",
"title": ""
},
{
"docid": "9fcdce293fec576f8d287b5692c6f45b",
"text": "Enabling search directly over encrypted data is a desirable technique to allow users to effectively utilize encrypted data outsourced to a remote server like cloud service provider. So far, most existing solutions focus on an honest-but-curious server, while security designs against a malicious server have not drawn enough attention. It is not until recently that a few works address the issue of verifiable designs that enable the data owner to verify the integrity of search results. Unfortunately, these verification mechanisms are highly dependent on the specific encrypted search index structures, and fail to support complex queries. There is a lack of a general verification mechanism that can be applied to all search schemes. Moreover, no effective countermeasures (e.g., punishing the cheater) are available when an unfaithful server is detected. In this work, we explore the potential of smart contract in Ethereum, an emerging blockchain-based decentralized technology that provides a new paradigm for trusted and transparent computing. By replacing the central server with a carefully-designed smart contract, we construct a decentralized privacy-preserving search scheme where the data owner can receive correct search results with assurance and without worrying about potential wrongdoings of a malicious server. To better support practical applications, we introduce fairness to our scheme by designing a new smart contract for a financially-fair search construction, in which every participant (especially in the multiuser setting) is treated equally and incentivized to conform to correct computations. In this way, an honest party can always gain what he deserves while a malicious one gets nothing. Finally, we implement a prototype of our construction and deploy it to a locally simulated network and an official Ethereum test network, respectively. The extensive experiments and evaluations demonstrate the practicability of our decentralized search scheme over encrypted data.",
"title": ""
},
{
"docid": "fb6068d738c7865d07999052750ff6a8",
"text": "Malware detection and prevention methods are increasingly becoming necessary for computer systems connected to the Internet. The traditional signature based detection of malware fails for metamorphic malware which changes its code structurally while maintaining functionality at time of propagation. This category of malware is called metamorphic malware. In this paper we dynamically analyze the executables produced from various metamorphic generators through an emulator by tracing API calls. A signature is generated for an entire malware class (each class representing a family of viruses generated from one metamorphic generator) instead of for individual malware sample. We show that most of the metamorphic viruses of same family are detected by the same base signature. Once a base signature for a particular metamorphic generator is generated, all the metamorphic viruses created from that tool are easily detected by the proposed method. A Proximity Index between the various Metamorphic generators has been proposed to determine how similar two or more generators are.",
"title": ""
},
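The passage above derives one base signature per metamorphic family from emulator-traced API calls and compares generators with a Proximity Index. The abstract does not specify either construction, so the sketch below uses a common-API-set signature and Jaccard similarity as hedged stand-ins, not the authors' definitions.

```python
from typing import Iterable, List, Set

def family_signature(traces: Iterable[List[str]]) -> Set[str]:
    """Base signature: API calls observed in every sample of the family (assumed construction)."""
    trace_sets = [set(t) for t in traces]
    return set.intersection(*trace_sets) if trace_sets else set()

def proximity_index(sig_a: Set[str], sig_b: Set[str]) -> float:
    """Jaccard similarity between two family signatures (stand-in for the paper's Proximity Index)."""
    union = sig_a | sig_b
    return len(sig_a & sig_b) / len(union) if union else 0.0

def matches_family(trace: List[str], sig: Set[str], threshold: float = 0.9) -> bool:
    """Flag a sample whose API-call trace covers most of the family's base signature."""
    return bool(sig) and len(sig & set(trace)) / len(sig) >= threshold
```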
{
"docid": "9b1a7f811d396e634e9cc5e34a18404e",
"text": "We introduce a novel colorization framework for old black-and-white cartoons which has been originally produced by a cel or paper based technology. In this case the dynamic part of the scene is represented by a set of outlined homogeneous regions that superimpose static background. To reduce a large amount of manual intervention we combine unsupervised image segmentation, background reconstruction and structural prediction. Our system in addition allows the user to specify the brightness of applied colors unlike the most of previous approaches which operate only with hue and saturation. We also present a simple but effective color modulation, composition and dust spot removal techniques able produce color images in broadcast quality without additional user intervention.",
"title": ""
},
{
"docid": "b321f3b5e814f809221bc618b99b95bb",
"text": "Abstract: Polymer processes often contain state variables whose distributions are multimodal; in addition, the models for these processes are often complex and nonlinear with uncertain parameters. This presents a challenge for Kalman-based state estimators such as the ensemble Kalman filter. We develop an estimator based on a Gaussian mixture model (GMM) coupled with the ensemble Kalman filter (EnKF) specifically for estimation with multimodal state distributions. The expectation maximization algorithm is used for clustering in the Gaussian mixture model. The performance of the GMM-based EnKF is compared to that of the EnKF and the particle filter (PF) through simulations of a polymethyl methacrylate process, and it is seen that it clearly outperforms the other estimators both in state and parameter estimation. While the PF is also able to handle nonlinearity and multimodality, its lack of robustness to model-plant mismatch affects its performance significantly.",
"title": ""
},
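The passage above couples a Gaussian mixture model with the ensemble Kalman filter to handle multimodal state distributions. The sketch below is one plausible reading, not the authors' implementation: cluster the ensemble with scikit-learn's GaussianMixture and apply a standard stochastic EnKF update within each cluster; the observation model and all numbers are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def enkf_update(X, H, R, y, rng):
    """Stochastic EnKF update of an ensemble X (members x state dims)."""
    m = X.shape[0]
    A = X - X.mean(axis=0)
    P = A.T @ A / (m - 1)                       # sample covariance
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=m)  # perturbed observations
    return X + (Y - X @ H.T) @ K.T

def gmm_enkf_update(X, H, R, y, n_components=2, rng=None):
    """Cluster the ensemble with a GMM and apply the EnKF update per cluster (illustrative)."""
    rng = rng or np.random.default_rng(0)
    labels = GaussianMixture(n_components=n_components, random_state=0).fit_predict(X)
    X_new = X.copy()
    for k in range(n_components):
        idx = labels == k
        if idx.sum() > 1:                       # need at least two members for a covariance
            X_new[idx] = enkf_update(X[idx], H, R, y, rng)
    return X_new

# Toy usage: 40-member ensemble of a 2-D state, observing the first component.
X = np.random.default_rng(1).normal(size=(40, 2))
H = np.array([[1.0, 0.0]])
X_post = gmm_enkf_update(X, H, R=np.array([[0.05]]), y=np.array([0.3]))
```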
{
"docid": "bd73a86a9b67ba26eeeecb2f582fd10a",
"text": "Many of UCLES' academic examinations make extensive use of questions that require candidates to write one or two sentences. For example, questions often ask candidates to state, to suggest, to describe, or to explain. These questions are a highly regarded and integral part of the examinations, and are also used extensively by teachers. A system that could partially or wholly automate valid marking of short, free text answers would therefore be valuable, but until The UCLES Group provides assessment services worldwide through three main business units. • Cambridge-ESOL (English for speakers of other languages) provides examinations in English as a foreign language and qualifications for language teachers throughout the world. • CIE (Cambridge International Examinations) provides international school examinations and international vocational awards. • OCR (Oxford, Cambridge and RSA Examinations) provides general and vocational qualifications to schools, colleges, employers, and training providers in the UK. For more information please visit http://www.ucles.org.uk",
"title": ""
},
{
"docid": "8da6cc5c6a8a5d45fadbab8b7ca8b71f",
"text": "Feature detection and description is a pivotal step in many computer vision pipelines. Traditionally, human engineered features have been the main workhorse in this domain. In this paper, we present a novel approach for learning to detect and describe keypoints from images leveraging deep architectures. To allow for a learning based approach, we collect a large-scale dataset of patches with matching multiscale keypoints. The proposed model learns from this vast dataset to identify and describe meaningful keypoints. We evaluate our model for the effectiveness of its learned representations for detecting multiscale keypoints and describing their respective support regions.",
"title": ""
},
{
"docid": "0af9b629032ae50a2e94310abcc55aa5",
"text": "We introduce novel relaxations for cardinality-constrained learning problems, including least-squares regression as a special but important case. Our approach is based on reformulating a cardinality-constrained problem exactly as a Boolean program, to which standard convex relaxations such as the Lasserre and Sherali-Adams hierarchies can be applied. We analyze the first-order relaxation in detail, deriving necessary and sufficient conditions for exactness in a unified manner. In the special case of least-squares regression, we show that these conditions are satisfied with high probability for random ensembles satisfying suitable incoherence conditions, similar to results on 1-relaxations. In contrast to known methods, our relaxations yield lower bounds on the objective, and it can be verified whether or not the relaxation is exact. If it is not, we show that randomization based on the relaxed solution offers a principled way to generate provably good feasible solutions. This property enables us to obtain high quality estimates even if incoherence conditions are not met, as might be expected in real datasets. We numerically illustrate the performance of the relaxationrandomization strategy in both synthetic and real high-dimensional datasets, revealing substantial improvements relative to 1-based methods and greedy selection heuristics. B Laurent El Ghaoui elghaoui@berkeley.edu Mert Pilanci mert@berkeley.edu Martin J. Wainwright wainwrig@berkeley.edu 1 Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA 2 Department of Electrical Engineering and Computer Sciences and Department of Statistics, University of California, Berkeley, CA, USA",
"title": ""
},
{
"docid": "e8fb4848c8463bfcbe4a09dfeda52584",
"text": "A highly efficient rectifier for wireless power transfer in biomedical implant applications is implemented using 0.18-m CMOS technology. The proposed rectifier with active nMOS and pMOS diodes employs a four-input common-gate-type capacitively cross-coupled latched comparator to control the reverse leakage current in order to maximize the power conversion efficiency (PCE) of the rectifier. The designed rectifier achieves a maximum measured PCE of 81.9% at 13.56 MHz under conditions of a low 1.5-Vpp RF input signal with a 1- k output load resistance and occupies 0.009 mm2 of core die area.",
"title": ""
},
{
"docid": "14a8069c29f38129bc8d84b2b3d1ed16",
"text": "Document similarity measures are crucial components of many text-analysis tasks, including information retrieval, document classification, and document clustering. Conventional measures are brittle: They estimate the surface overlap between documents based on the words they mention and ignore deeper semantic connections. We propose a new measure that assesses similarity at both the lexical and semantic levels, and learns from human judgments how to combine them by using machine-learning techniques. Experiments show that the new measure produces values for documents that are more consistent with people’s judgments than people are with each other. We also use it to classify and cluster large document sets covering different genres and topics, and find that it improves both classification and clustering performance.",
"title": ""
},
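The passage above combines lexical and semantic similarity and learns the combination from human judgments. A hedged scikit-learn sketch follows, using TF-IDF cosine as the lexical feature, LSA cosine as a semantic proxy, and ridge regression as the combiner; the actual features and learner in the paper may differ, and the example documents and scores below are made up.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

def pair_features(docs_a, docs_b):
    """Lexical (TF-IDF cosine) and semantic (LSA cosine) similarity features for document pairs."""
    corpus = list(docs_a) + list(docs_b)
    tfidf = TfidfVectorizer().fit(corpus)
    Xa, Xb = tfidf.transform(docs_a), tfidf.transform(docs_b)
    lex = np.array([cosine_similarity(Xa[i], Xb[i])[0, 0] for i in range(len(docs_a))])
    lsa = TruncatedSVD(n_components=2, random_state=0).fit(tfidf.transform(corpus))
    Za, Zb = lsa.transform(Xa), lsa.transform(Xb)
    sem = np.array([cosine_similarity(Za[i:i + 1], Zb[i:i + 1])[0, 0] for i in range(len(docs_a))])
    return np.column_stack([lex, sem])

# Learn how to weight the two levels from (hypothetical) human similarity judgments.
docs_a = ["the cat sat on the mat", "stock prices fell sharply", "a dog chased the ball"]
docs_b = ["a kitten rests on a rug", "the market dropped today", "the puppy ran after a toy"]
human_scores = [0.8, 0.7, 0.9]
model = Ridge().fit(pair_features(docs_a, docs_b), human_scores)
```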
{
"docid": "da237e14a3a9f6552fc520812073ee6c",
"text": "Shock filters are based in the idea to apply locally either a dilation or an erosion process, depending on whether the pixel belongs to the influence zone of a maximum or a minimum. They create a sharp shock between two influence zones and produce piecewise constant segmentations. In this paper we design specific shock filters for the enhancement of coherent flow-like structures. They are based on the idea to combine shock filtering with the robust orientation estimation by means of the structure tensor. Experiments with greyscale and colour images show that these novel filters may outperform previous shock filters as well as coherence-enhancing diffusion filters.",
"title": ""
},
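The passage above builds on classic shock filtering (dilate in the influence zone of a maximum, erode near a minimum). The sketch below implements only that classic step, switching on a smoothed Laplacian; the paper's coherence-enhancing variant additionally steers the process along structure-tensor orientations, which is deliberately omitted here.

```python
import numpy as np
from scipy import ndimage

def shock_filter(image, iterations=10, sigma=1.0):
    """Classic shock filtering on a 2D grayscale image: dilate where the smoothed Laplacian is
    negative (near maxima), erode where it is positive (near minima)."""
    out = image.astype(float)
    for _ in range(iterations):
        lap = ndimage.gaussian_laplace(out, sigma=sigma)
        dilated = ndimage.grey_dilation(out, size=(3, 3))
        eroded = ndimage.grey_erosion(out, size=(3, 3))
        out = np.where(lap < 0, dilated, eroded)
    return out

# Toy usage: sharpen a blurred square back toward a piecewise-constant image.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
sharpened = shock_filter(ndimage.gaussian_filter(img, 3.0), iterations=20)
```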
{
"docid": "e93f468ac0da8e64037ca47aff55deb2",
"text": "Urban areas are the primary habitat for a majority of the global population. The development of cities not only entails a fundamental change in human settlement patterns but also a dramatic transformation of the physical environment. Thus, urban areas and their development are at the centre of all discussions on sustainability and/or sustainable development. This review essay introduces the notion of Urban Metabolism (UM), a term that provides a conceptual framework to study how a city functions, and hence, a way to address the sustainability issue of a city. Due to the significance and scope of the subject, the notion of UM is interpreted and thus approached differently across diverse disciplines from both the natural and social science fields. In order to comprehend the commonalities and controversies between them, the present review also briefly introduces the historical roots of the term. This review reveals the increasing significance of a rich and rapidly evolving field of research on the metabolism of urban areas.",
"title": ""
},
{
"docid": "bc2e5599a911dd84e303ac5ebd029f1a",
"text": "A simultaneous X/Ka feed system has been designed to cater for reflector antennas with a F/D ratio of 0.8. This work is an extension of the successful design of the initial X/Ka feed system that was designed for reflectors with a F/D ratio of 0.65. Although simple in concept, this move from F/D=0.65 to F/D=0.8 is not an easy task from a design point of view.",
"title": ""
},
{
"docid": "bd18e4473cba642c5bea1bddc418f6c2",
"text": "This paper presents Smart Home concepts for Internet of Things (IoT) technologies that will make life at home more convenient. In this paper, we first describe the overall design of a low-cost Smart Refrigerator built with Raspberry Pi. Next, we explain two sensors controlling each camera, which are hooked up to our Rasberry Pi board. We further show how the user can use the Graphical User Interface (GUI) to interact with our system. With this Smart Home and Internet of Things technology, a user-friendly graphical user interface, prompt data synchronization among multiple devices, and real-time actual images captured from the refrigerator, our system can easily assist a family to reduce food waste.",
"title": ""
},
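The passage above describes sensor-triggered cameras on a Raspberry Pi. A minimal capture-loop sketch follows; the GPIO pin, door-switch wiring, polling interval, and file path are assumptions, and the paper's GUI and multi-device synchronization layers are not shown.

```python
import time
import RPi.GPIO as GPIO
from picamera import PiCamera

DOOR_SENSOR_PIN = 17          # assumed BCM pin for a magnetic door switch
camera = PiCamera()

GPIO.setmode(GPIO.BCM)
GPIO.setup(DOOR_SENSOR_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    door_was_open = False
    while True:
        door_open = GPIO.input(DOOR_SENSOR_PIN) == GPIO.HIGH
        if door_was_open and not door_open:
            # Door just closed: capture the shelf contents for the companion app.
            camera.capture("/home/pi/fridge/%d.jpg" % int(time.time()))
        door_was_open = door_open
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```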
{
"docid": "f50f7daeac03fbd41f91ff48c054955b",
"text": "Neuronal signalling and communication underpin virtually all aspects of brain activity and function. Network science approaches to modelling and analysing the dynamics of communication on networks have proved useful for simulating functional brain connectivity and predicting emergent network states. This Review surveys important aspects of communication dynamics in brain networks. We begin by sketching a conceptual framework that views communication dynamics as a necessary link between the empirical domains of structural and functional connectivity. We then consider how different local and global topological attributes of structural networks support potential patterns of network communication, and how the interactions between network topology and dynamic models can provide additional insights and constraints. We end by proposing that communication dynamics may act as potential generative models of effective connectivity and can offer insight into the mechanisms by which brain networks transform and process information.",
"title": ""
},
{
"docid": "18c517f26bceeb7930a4418f7a6b2f30",
"text": "BACKGROUND\nWe aimed to study whether pulmonary hypertension (PH) and elevated pulmonary vascular resistance (PVR) could be predicted by conventional echo Doppler and novel tissue Doppler imaging (TDI) in a population of chronic obstructive pulmonary disease (COPD) free of LV disease and co-morbidities.\n\n\nMETHODS\nEchocardiography and right heart catheterization was performed in 100 outpatients with COPD. By echocardiography the time-integral of the TDI index, right ventricular systolic velocity (RVSmVTI) and pulmonary acceleration-time (PAAcT) were measured and adjusted for heart rate. The COPD patients were randomly divided in a derivation (n = 50) and a validation cohort (n = 50).\n\n\nRESULTS\nPH (mean pulmonary artery pressure (mPAP) ≥ 25mmHg) and elevated PVR ≥ 2Wood unit (WU) were predicted by satisfactory area under the curve for RVSmVTI of 0.93 and 0.93 and for PAAcT of 0.96 and 0.96, respectively. Both echo indices were 100% feasible, contrasting 84% feasibility for parameters relying on contrast enhanced tricuspid-regurgitation. RVSmVTI and PAAcT showed best correlations to invasive measured mPAP, but less so to PVR. PAAcT was accurate in 90- and 78% and RVSmVTI in 90- and 84% in the calculation of mPAP and PVR, respectively.\n\n\nCONCLUSIONS\nHeart rate adjusted-PAAcT and RVSmVTI are simple and reproducible methods that correlate well with pulmonary artery pressure and PVR and showed high accuracy in detecting PH and increased PVR in patients with COPD. Taken into account the high feasibility of these two echo indices, they should be considered in the echocardiographic assessment of COPD patients.",
"title": ""
},
{
"docid": "fde9d6a4fc1594a1767e84c62c7d3b89",
"text": "This paper explores the effects of emotions embedded in a seller review on its perceived helpfulness to readers. Drawing on frameworks in literature on emotion and cognitive processing, we propose that over and above a well-known negativity bias, the impact of discrete emotions in a review will vary, and that one source of this variance is reader perceptions of reviewers’ cognitive effort. We focus on the roles of two distinct, negative emotions common to seller reviews: anxiety and anger. In the first two studies, experimental methods were utilized to identify and explain the differential impact of anxiety and anger in terms of perceived reviewer effort. In the third study, seller reviews from Yahoo! Shopping web sites were collected to examine the relationship between emotional review content and helpfulness ratings. Our findings demonstrate the importance of examining discrete emotions in online word-of-mouth, and they carry important practical implications for consumers and online retailers.",
"title": ""
},
{
"docid": "fd11fbed7a129e3853e73040cbabb56c",
"text": "A digitally modulated power amplifier (DPA) in 1.2 V 0.13 mum SOI CMOS is presented, to be used as a building block in multi-standard, multi-band polar transmitters. It performs direct amplitude modulation of an input RF carrier by digitally controlling an array of 127 unary-weighted and three binary-weighted elementary gain cells. The DPA is based on a novel two-stage topology, which allows seamless operation from 800 MHz through 2 GHz, with a full-power efficiency larger than 40% and a 25.2 dBm maximum envelope power. Adaptive digital predistortion is exploited for DPA linearization. The circuit is thus able to reconstruct 21.7 dBm WCDMA/EDGE signals at 1.9 GHz with 38% efficiency and a higher than 10 dB margin on all spectral specifications. As a result of the digital modulation technique, a higher than 20.1 % efficiency is guaranteed for WCDMA signals with a peak-to-average power ratio as high as 10.8 dB. Furthermore, a 15.3 dBm, 5 MHz WiMAX OFDM signal is successfully reconstructed with a 22% efficiency and 1.53% rms EVM. A high 10-bit nominal resolution enables a wide-range TX power control strategy to be implemented, which greatly minimizes the quiescent consumption down to 10 mW. A 16.4% CDMA average efficiency is thus obtained across a > 70 dB power control range, while complying with all the spectral specifications.",
"title": ""
},
{
"docid": "c44f971f063f8594985a98beb897464a",
"text": "In recent years, multi-agent epistemic planning has received attention from both dynamic logic and planning communities. Existing implementations of multi-agent epistemic planning are based on compilation into classical planning and suffer from various limitations, such as generating only linear plans, restriction to public actions, and incapability to handle disjunctive beliefs. In this paper, we propose a general representation language for multi-agent epistemic planning where the initial KB and the goal, the preconditions and effects of actions can be arbitrary multi-agent epistemic formulas, and the solution is an action tree branching on sensing results. To support efficient reasoning in the multi-agent KD45 logic, we make use of a normal form called alternating cover disjunctive formulas (ACDFs). We propose basic revision and update algorithms for ACDFs. We also handle static propositional common knowledge, which we call constraints. Based on our reasoning, revision and update algorithms, adapting the PrAO algorithm for contingent planning from the literature, we implemented a multi-agent epistemic planner called MEPK. Our experimental results show the viability of our approach.",
"title": ""
}
] |
scidocsrr
|
cb5ecbe2df35a4b18cd4f423304f26c9
|
Modular Deep Q Networks for Sim-to-real Transfer of Visuo-motor Policies
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "57a48d8c45b7ed6bbcde11586140f8b6",
"text": "We want to build robots that are useful in unstructured real world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped and kinematic noise or other errors in the robot. This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. We use a wrist-mounted sensor to acquire depth images in front of the gripper and train a convolutional neural network to learn a distance function to true grasps for grasp configurations over an image. The training sensor data is generated in simulation, a major advantage over previous work that uses real robot experience, which is costly to obtain. Despite being trained in simulation, our approach works well on real noisy sensor images. We compare our controller in simulated and real robot experiments to a strong baseline for grasp pose detection, and find that our approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping.",
"title": ""
},
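The passage above trains a CNN on simulated wrist-camera depth images to predict a distance to true grasps. The PyTorch sketch below shows a small network and one supervised step in that spirit; the architecture, input size, and random training tensors are illustrative assumptions rather than the authors' model.

```python
import torch
import torch.nn as nn

class GraspDistanceNet(nn.Module):
    """Maps a 1x64x64 depth crop for a candidate grasp to a predicted distance-to-true-grasp."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, depth):
        return self.head(self.features(depth))

# One supervised step on simulated (depth crop, distance-to-nearest-true-grasp) pairs.
net, loss_fn = GraspDistanceNet(), nn.MSELoss()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
depth, target = torch.randn(8, 1, 64, 64), torch.rand(8, 1)
opt.zero_grad()
loss = loss_fn(net(depth), target)
loss.backward()
opt.step()
```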
{
"docid": "acc526dd0d86c5bf83034b3cd4c1ea38",
"text": "We describe a learning-based approach to handeye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.",
"title": ""
}
] |
[
{
"docid": "244745da710e8c401173fe39359c7c49",
"text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.",
"title": ""
},
{
"docid": "fb46f67ba94cb4d7dd7620e2bdf5f00e",
"text": "We design and implement TwinsCoin, the first cryptocurrency based on a provably secure and scalable public blockchain design using both proof-of-work and proof-of-stake mechanisms. Different from the proof-of-work based Bitcoin, our construction uses two types of resources, computing power and coins (i.e., stake). The blockchain in our system is more robust than that in a pure proof-of-work based system; even if the adversary controls the majority of mining power, we can still have the chance to secure the system by relying on honest stake. In contrast, Bitcoin blockchain will be insecure if the adversary controls more than 50% of mining power.\n Our design follows a recent provably secure proof-of-work/proof-of-stake hybrid blockchain[11]. In order to make our construction practical, we considerably enhance its design. In particular, we introduce a new strategy for difficulty adjustment in the hybrid blockchain and provide a theoretical analysis of it. We also show how to construct a light client for proof-of-stake cryptocurrencies and evaluate the proposal practically.\n We implement our new design. Our implementation uses a recent modular development framework for blockchains, called Scorex. It allows us to change only certain parts of an application leaving other codebase intact. In addition to the blockchain implementation, a testnet is deployed. Source code is publicly available.",
"title": ""
},
{
"docid": "1fcaa9ebde2922c13ce42f8f90c9c6ba",
"text": "Despite advances in HIV treatment, there continues to be great variability in the progression of this disease. This paper reviews the evidence that depression, stressful life events, and trauma account for some of the variation in HIV disease course. Longitudinal studies both before and after the advent of highly active antiretroviral therapies (HAART) are reviewed. To ensure a complete review, PubMed was searched for all English language articles from January 1990 to July 2007. We found substantial and consistent evidence that chronic depression, stressful events, and trauma may negatively affect HIV disease progression in terms of decreases in CD4 T lymphocytes, increases in viral load, and greater risk for clinical decline and mortality. More research is warranted to investigate biological and behavioral mediators of these psychoimmune relationships, and the types of interventions that might mitigate the negative health impact of chronic depression and trauma. Given the high rates of depression and past trauma in persons living with HIV/AIDS, it is important for healthcare providers to address these problems as part of standard HIV care.",
"title": ""
},
{
"docid": "14f3ecd814f5affe186146288d83697c",
"text": "Accidental intra-arterial filler injection may cause significant tissue injury and necrosis. Hyaluronic acid (HA) fillers, currently the most popular, are the focus of this article, which highlights complications and their symptoms, risk factors, and possible treatment strategies. Although ischemic events do happen and are therefore important to discuss, they seem to be exceptionally rare and represent a small percentage of complications in individual clinical practices. However, the true incidence of this complication is unknown because of underreporting by clinicians. Typical clinical findings include skin blanching, livedo reticularis, slow capillary refill, and dusky blue-red discoloration, followed a few days later by blister formation and finally tissue slough. Mainstays of treatment (apart from avoidance by meticulous technique) are prompt recognition, immediate treatment with hyaluronidase, topical nitropaste under occlusion, oral acetylsalicylic acid (aspirin), warm compresses, and vigorous massage. Secondary lines of treatment may involve intra-arterial hyaluronidase, hyperbaric oxygen therapy, and ancillary vasodilating agents such as prostaglandin E1. Emergency preparedness (a \"filler crash cart\") is emphasized, since early intervention is likely to significantly reduce morbidity. A clinical summary chart is provided, organized by complication presentation.",
"title": ""
},
{
"docid": "df833f98f7309a5ab5f79fae2f669460",
"text": "Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties that emanate from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of the system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (which are called system components.) The goal is to facilitate saving in the system component power consumption, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal difference learning on semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run in the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby, enabling the LPM to do even more component power optimizations. In this hierarchical DPM framework, power and latency tradeoffs of each type of application can be precisely controlled based on a user-defined parameter. Experiments show that the amount of average power saving is up to 31.1% compared to existing approaches.",
"title": ""
},
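The passage above uses temporal-difference reinforcement learning for component-level power management. The toy below is a plain discrete-time Q-learning power manager, not the paper's SMDP formulation: states are idle-time bins, actions keep the component active or sleeping, and all costs and probabilities are made-up illustrative values.

```python
import random
from collections import defaultdict

ACTIONS = ("active", "sleep")
P_ACTIVE, P_SLEEP, WAKE_PENALTY = 1.0, 0.1, 2.0   # assumed power costs and wake-up latency penalty
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = defaultdict(float)

def reward(action, request_arrived):
    power_cost = P_ACTIVE if action == "active" else P_SLEEP
    latency_cost = WAKE_PENALTY if (action == "sleep" and request_arrived) else 0.0
    return -(power_cost + latency_cost)            # trade off power against latency

def choose_action(state):
    if random.random() < epsilon:
        return random.choice(ACTIONS)              # explore
    return max(ACTIONS, key=lambda a: Q[(state, a)])

state = 0
for _ in range(10000):
    action = choose_action(state)
    request_arrived = random.random() < 0.3        # stand-in for the workload model
    r = reward(action, request_arrived)
    next_state = 0 if request_arrived else min(state + 1, 5)   # idle-time bin
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    state = next_state
```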
{
"docid": "6fdd0fdbf609832138bfd1a5b6ebb3e7",
"text": "6 798;:+<>= ?@=BAC<;8>:D:EAC?GFH= = I JK79FKLNMO?@:K=BP+Q I R$7K?@MN8 A LTS;A ?US>= VW:9QBXTY = Z[=\\<BVTLOM]S@S>LN=1:9A ?GFH= =\\I ?US>7EP^MO=BP+F`_ S>:K=&F97KMNLNP^=\\<>?GQ a3R 79?@MN8b<>= 8cY Q RGRG= IEP^=\\<>? dTefRGJKLOMg8cMOShACIEP&=\\i^J9LOMN8\\M]S 8\\Q[LOLNA FHQC<;ACS>MOZj= k9L]S>=\\<>MOI9lm:EAC? FH= =\\I179?@=BP anQ <hRmA opMOI9lq<>=B8\\Q RGRG= IEP^=\\<>? VKMNIrA P9P^MOS>MOQ I&S>QqS>:9=sA 7^Y S>Q RmACS>MN8t8\\LNA ?@?@M]kE8 ACS>MOQ[I Q a R 79?@MN8hMOI`S>Qs?USf_^LO=t8 AuS>= l[QC<>MN=\\?hF9A ?@=BPmQ[I =\\ipS@<;A 8cS>=BP%AC7EPKMOQ&an=BACS>7^<>= ? d1vw:KMN?sJEACJH=\\< ?@7K<>Zj=\\_^?t<>= ?@= AC<;8>: MNI`S>Q R 79?@MN8 A LjS;AC?US>=[V[<>=\\Z^MO=\\XW?xR 79?@MN8x<>=B8\\Q[RGRG=\\IEPK=c<y<>= ?@= AC<;8>:yV`ACIEPzQ[7^S@Y LOMOI9= ?{JK<>Q RGMN?@MOIKlzPKM]<>=B8cS>MOQ[IK? d|efIqJ9AC<@S>MN8\\79LNAu<BV`Xw=WLO=BAC<>IK=BPqS>:EACS}PK=cY RGQ l <;A JK:9MN8&ACIEP-JH=c<>?@Q[IEACLOMOS~_ aA[8;S>Q <>?m:EABZj=&FH=\\= I-?@:KQBXWI-S>Q FH= aA[8cS>QC<>?MOIK979= I98\\MOI9l2R$79?@MN8bJK<>=\\an=\\<>= IE8c=[d9QC<RGQ`QpP
V}S>:K=&RmA MOI aA[8cS>QC<>?&AC<>=&S>=\\RGJHQKVwS>Q IEA LOM]Sf_`VP^MO?US>MNI98cS>MOZj= IK= ?@?mQ a3<>:`_pS>:9RACIEP JKMOS;8>: :K= MOl[:`SBd 1. INTRODUCTION e~I S>:K=qJEAC?UStSfX}= LOZj=$_`=BAu<>? VyS>:K=\\<>=q:9A ?3FH= = I MOI`S>=\\<>= ?USzMOI2S>:K=qPK=cY Zj=\\LNQ J9RG= I`S$Q atS>=B8>:9IKMNp7K= ?$S>:9ACSGJ^<>QCZpMNPK= JH=\\<>?@Q[I9A LOMO?@=BP 8\\Q IpS>=\\IpS S>Q%7K?@=\\<>? d(vw:9=&Sf_^JH=Q asA JKJ9LOMN8 ACS>MOQ I9?b:EABZj=&MOIE8cLN79PK=BP+kEL]S>=\\<>MOI9l QCa{IK=\\XW?RG= ?@?>A l = ? V
JK<>= ?@=\\IpS>MOIKlqLOMN?US>?hQ a|?US>Q <>MO= ?hQ < AC<@S~XwQ <>obS>:EAuS A&7K?@=\\<sRmAB_2FH=GMOI`S>=\\<>= ?US>=BP MOIVA I9P ?@Q1Q[Idr6 Q[?US Q aTS>:K= ?@=bA J^Y JKLNMN8 AuS>MOQ[I9?h:EABZj=3ACJ9JKLNMO=BP A S>=B8>:9I9MN`79= o^IKQBXWI&A ?bU8cQ[LOLNA FHQ <;AuS>MOZj= k9LOS>=c<>MNIKl[^d vw:9MO?qMOIpZjQ[LOZj=\\?q8\\Q LNLO=B8;S>MNIKl QCS>:9=\\<$79?@=\\<>? Q J9MOI9MOQ[IK?qQ a :KQuX l[Q`QpPmQ <}79?@=\\an79L
A IGM]S>= RMO? VKA IEP$S>:K= Iq<;A IKopMNIKlzMOS>=\\RG?{F9A ?@=BP Q I S>:9MO?WMOIKaQC<>RmACS>MOQ[IaQ <J^<>= ?@= I`S;ACS>MOQ[IS>Q$S>:9=t7K?@=\\<Bd D:KMOLN=M]SqRmA _%FH=AC<>l 79=BP%S>:EAuS$S>:K=\\<>=:9A ?qFH=\\= I ?@Q[RG=b?@7E8 8c= ?@? XWM]S>: S>:9MO?$S>=B8>:9IKMg`7K=[VS>:9=\\<>=bMO?qR$798;:%<>Q`Q RaQC<GMORGJK<>QuZj= RG=\\IpSBd xAu<;A LOLO= L`S>QwS>:9={PK= Zj=\\LNQ J9RG= I`S
QCa98cQ[LOLNA FHQ <;AuS>MOZj=}kEL]S>=\\<>MOIKl:9A ?FH= = I 8\\Q I`S>= I`S@Y~F9A ?@=BP+kEL]S>=\\<>MOI9l^dvw:9MO?bMN?A IACJ9J^<>QjA[8>: S>:9ACSbS@<>MN=\\?bS>Q =\\ipS@<;A 8cSq7K?@=\\a7KLwMOIKaQC<>RmACS>MOQ[I ag<>Q[RS>:9=mM]S>= RG? QCaWS>:9=b8\\Q[LOLO=B8cS>MOQ I S>:9ACSbAC<>= l[Q`QpP MOIEP^MN8 ACS>QC<>?bQCatS>:9=\\MO<m7K?@=\\a7KLOI9= ?@?qanQ <A27K?@=\\<BdDe~S MO?t8cLNQ ?@= L]_1<>= LNACS>=BP&S>QqS>:K=zk9= LNP&Q a{MOIKaQC<>RmACS>MOQ[I&<>=cS@<>MN=\\Z[ACLVHXW:KMN8;: ACMNRG?WS>QmPK=\\Zj= LOQ[J1FH=\\S@S>=c<S>= 8;:KI9MN`79= ?WS>QGLOQp8 ACS>= PKQp8\\7KRG= I`S>?wS>:EAuS ?>AuS>MN?Uag_1A$7K?@=\\<B ?WMNI^aQC<>RmACS>MOQ[I1IK= =BPd 7^<@<>= I`S>LO_`V{RGQ[?USqR$79?@MN8G<>=B8\\Q RGRG= IEP^=\\<$?@=c<>Z^MN8\\=\\?mAC<>=&F9A ?@=BP Q[I =BP^M]S>Q <>MNA LTPKACS;ApVx<>=B8\\Q[RGRG=\\IEP9AuS>MOQ[I9?zl LN= A I9= P an<>Q[RS>:9=Ge~I`S>=\\<>I9=\\S 7K?@=\\<x8\\Q RGR$7KI9M]Sf_`V\\A I9P FK<>QBXW?@MOI9lJEAuS@S>=\\<>I9? dy QBXw=\\Zj=\\<BVuM]SMO?<>=B8\\Q[lCY IKMN?@= P-S>:EAuS 8c7K<@<>= I`SA JKJK<>QjA 8;:K= ?m:EABZj=1MORGJHQ <@S;ACI`SmLNMORGM]S;ACS>MOQ I9? V MOIE8cLN79PKMOI9lzMOIEA PK=B`79ACS>=W<;ABXDP9ACS;AGMOIGS>:9=h8 A ?@=Q ay=BP^MOS>QC<>MNA L
MOIKaQC<@Y RmAuS>MNQ I
;VLgA 8;o Q aTp79A LOM]Sf_28\\Q IpS@<>Q LhMOI S>:9=m8 AC?@=GQ aT7K?@=\\<sJK<>=\\an=\\<@Y = I98\\= ?c;V
ACIEP LNA 8;oQ a79?@=c< J^<>=\\a=\\<>=\\IE8\\= ?aQC< IK=\\X<>= 8\\Q <;P^MNIKl[? dWt=cY <>MOZpMOI9lGa=BAuS>7K<>= ?han<>Q[RS>:9=zR 79?@MN83MOS>?@=\\LOa~V
<;ACS>:9=c<hS>:9A I&<>= L]_^MOI9lmQ[I 8\\7K?US>Q[RG=\\< FH= :9AuZpMOQ[7^<qMO?$J9AC<@S>MN8\\7KLgAu<>L]_ MORGJHQ <@S;A I`SsaQC<qMOI`S@<>QpPK7E8;Y MOI9l I9=\\XR 79?@MN8 dD<>=B8\\Q RGRG= IEP^=\\<w?U_^?US>= RXwQ[7KLNPbIK= Zj=\\<T?@7Kl[l = ?US IK=\\X AC<@S>MO?US>?xF9A ?@=BP3Q[IKL]_zQ[I 8\\79?US>Q RG=\\< FH= :9ABZ^MOQ 7K<BVCM]a9IKQ 8\\7K?US>Q[RG=\\< = Z[=\\<zMOI9M]S>MNA LOL]_ ?@= LO=B8;S>=BP2S>:9= I9=\\XAu<@S>MN?USBdGhI9QCS>:9=\\<tLNMORGM]S;ACS>MOQ I%MO? S>:K= XTA _1S>:K= <>=B8\\Q RGRG= IEPKACS>MOQ[IK?tAu<>=$JK<>=\\?@= I`S>=BPhRGQ[?US3?U_^?US>= RG? 7K?@=sIKQqRGQ <>=tS>:9A I1Aq?@MORGJKLN=3LOMO?UShQ ax<>=B8\\QC<;PKMOI9l ? dW97^<@S>:9=\\<BV^S>:9=\\<>= Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. 8 2002 IRCAM Centre Pompidou :9A ?3FH= =\\I2LOM]S@S>LO=G=\\¡
Q <@StS>Q79?@=qopIKQuXWLO=BP^l[=$ag<>Q[R¢R 79?@MN8 JK?U_K8;:KQ[L]Y Q l _G<>= ?@= AC<;8>:S>QzMOIKaQC<>R£S>:9=h8>:9Q[MN8\\=hQ aya=BAuS>7K<>= ?}S>Qs=\\ipS@<;A[8;S}an<>Q R AC7EP^MNQ^VEQC<S>Q kEL]S>=\\<WR$7K?@MN8 ?@= LO=B8cS>MOQ[IK? d e~IS>:KMN?TJ9A JH=\\<BV^Xw=h<>= J9:^<;A ?@=tS>:9=h<>= ?@=BAC<;8>:1`79=\\?US>MNQ IXWM]S>:&As?@JH=\\Y 8cMOkE8 aQp8\\7K?WQ[I R 79?@MN8<>= 8\\Q[RGRG= I9PK=\\<}?U_^?US>= RG? VKA IEP8\\Q[IK?@MNPK=\\<WS>:9= RmACMOI2aA 8cS>Q <>?sS>:EAuS3XwQ[7KLNP%AC¡
=B8;SsS>:K=q?@7E8\\8\\= ?@?sQ aTR 79?@MN8 <>=B8\\Q RqY RG=\\IEPK=c<S>= 8;:KI9Q[LOQ l _`d =x<>= ZpMN=cX2X}Q <>o ag<>Q[RAwZ[Au<>MN=cSf_tQCa^?@Q[7K<;8c= ? V MOI98\\LO7EPKMOIKls<>= ?@= AC<;8>:mag<>Q[RS>:9=WkE= LNPK?{Q ayJ9?U_K8>:9Q[LOQ l _mACIEPGRmAC<>oj=cS@Y MOIKlS>:9ACS <>= LNAuS>= ? S>QhR 79?@MN8 ACL[S;A ?US>=[dxanS>=c<A[PKP^<>= ?@?@MOI9lWS>:9=\\?@=}RmA MOI aA[8cS>QC<>?A IEPPKMO?>8\\7K?@?@MNIKl S>:K=t8\\Q IE8\\LO79?@MOQ I9?T<>=BA[8>:9= Pban<>Q R£S>:9=hZ[Au<@Y MOQ 79?tF^<;A IE8>:9=\\?tQ a|<>= ?@=BAu<;8;:r<>= LNACS>=BP1S>QGS>:9MO?tJ^<>Q[F9LO= R&V9Xw=qP^MO?US>MNLOL A2?@=\\SbQ a3l 79MNPK= LOMOI9=\\?AC?GXw= LOLhA ?b`79= ?US>MOQ[IK?GS>:EACSG<>=\\RmA MOI-S>Q2FH= <>=\\?@Q[LOZj=BP
d| =t<>= LNACS>=tS>:KMN?TS>Q$S>:K=3A MORG?WQ a Q[7K<W<>= ?@= AC<;8>:&JK<>Qu¤U=B8;SBd 2. THE PROBLEM ^<>Q[R¥S>:9=W79?@=c<B ?|JHQ[MOI`S{Q aZpMN=cX3VjS>:9=TJK7K<>JHQ[?@=WQCaAtR 79?@MN8}<>=B8\\Q RqY RG=\\IEPK=c<T?U_^?US>= R¦MO?WS>Q <>=B8\\Q[RGRG=\\IEPGR$7K?@Mg8S>:EACSTS>:K=t79?@=c<XWMOLOLFH= MOI`S>=\\<>= ?US>= PqMNIdxefIGQC<;PK=\\<|aQC<{S>:K=W79?@=\\<|S>QtXTACIpS|S>Q379?@=TS>:K=T?U_K?US>=\\R M]SR$7K?USTFH=3?@MORGJ9LO= S>Qq7K?@=[V^XWMOS>: A$RGMOI9MOR$7KR£QCaxMOI9J97^ST<>=B`79M]<>=BP ag<>Q[RS>:9=W79?@=c<BdhL]S>=\\<>I9ACS>MOZj= L]_`VjS>:K=\\<>=TR$7K?US{FH=A38\\LO=BAu<TACIEPqQ[FpZpM]Y Q 79?TMOIE8c= I`S>MNZ[=hS>Q S>:K= 7K?@=\\<TS>:EAuSTRGQ <>=h=\\¡
Q <@STMOIJK<>QuZpMgP^MOI9lsMNIKJ97^S XWMOLOLLO=BA PbS>QqFH=cS@S>=\\<w<>=B8\\Q RGRG= IEPKACS>MOQ[IK? d|vw:9=t7K?@=\\<WRmA _GXTACIpSTS>Q <>=cS@<>MN=\\Zj=3R$7K?@Mg8hFEA ?@= P Q[IJK<>=\\an=\\<>= IE8c= ? VE?US~_KLO=3QC<hRGQ`QpP
d 3. PSYCHOLOGICAL FACTORS AND MUSICAL TASTE :9= I PK= ?@MOl[IKMOI9lGA$?U_^?US>= RS>:EACST<>= 8\\Q[RGRG= I9PK?wR 79?@MN8 V^M]SMN?W7K?@=\\Y an79LS>Q%LO=BAC<>I ag<>Q[R§=\\i^MO?US>MOI9l <>= ?@= AC<;8>:DMNI`S>Q aA[8cS>QC<>?bS>:9ACSAC¡
= 8cS R 79?@MN8 ACL^S;AC?US>=[defIsS>:9MO??@=B8cS>MOQ[IqX}=T?@7K<>Zj=\\_z<>= ?@= AC<;8>:qS>:EACS|:9A ?Qp8cY 8c7K<@<>=BP MOIbS>:9=tLNA ?USh= MOl[:`S~_b_j=BAC<>?WQ[IR$7K?@Mg8\\A LJK<>=\\an=\\<>= IE8c= ? d}6 Q[?US QCahS>:K=q<>= ?@79L]S>?$8\\Q[RG=Gan<>Q R ̈X}Q <>o%F`_2?@Qp8\\MNA LTJK?U_98>:9Q LOQ[l[MO?US>? V{F97^S ?@Q RG=t8\\Q RG= ?}ag<>Q[R£S>:K= RGQ <>=hA JKJ9LOMO=BPbk9= LNPGQ axP^= RGQ[lC<;A JK:9MN8\\?}anQ < RmAu<>oj=\\S>MOI9l^d © Q[RG=wQCaES>:9=w<>=\\?@=BAC<;8>:G8\\M]S>=BPsJ979FKLOMN?@:K=BP FH=caQ <>=3a «j¬ t?@:KQBXw=BPq8c79L]Y S>7^<;A LtFKMNA ?@= ? defIA P9P^MOS>MOQ I S>:9=\\<>=1XTAC?1A%?US@<>Q[IKl F9MNA ? A ljACMOI9?US JHQ J97KLgAu<R$79?@MN8 V`anQ <=\\iKA RGJKLO=[V^Q[I9=tA 7KS>:KQ <P^=\\kEIK=BPbM]SAC?b~R$7K?@Mg8 S>:9ACSzMO?t<;A IKoj=BPrF`_r8c<>M]S>MN8\\?sA ?tS;A XP^<@_`VFEACIEACLV A I9P MOI9?@MOJKMgP^ ® «C ̄d °= ?@=BAu<;8;:K=\\<>?WFH= LOMO= Zj=BPmS>:EACSWQ I9=3MORGJHQ <@S;ACI`STJ97^<>JHQ[?@=tQCaxR$7K?@Mg8\\A L = PK7E8\\ACS>MOQ[I%XwA ? S>Q%±n2 3E ́Uμ ¶ ·A1?US>7EP^= I`SB ?sR 79?@MN8 ACL{S;A ?US>= d1 QBXTY =\\Zj=\\<BVS>:9=$=ciKJH=c<>MNRG=\\IpS>?tA IEP1S>:9=\\MO< <>= ?@79L]S>?zA JKJH=BAC<3S>QFH=q?@Q[7KIEP ACIEP+8 A I FH=79?@=BP A ?qA ?US;Au<@S>MNIKl JHQ MNI`S aQ <q=\\i^JH=\\<>MORG= I`S>?$RGQ <>= S;Au<>l[=\\S>= P S>Q$R$7K?@MN8h<>=B8\\Q RGRG= IEP^=\\<T?U_^?US>= RG? d 3.1 Personality, Demographics and Music Preference e ̧S:EAC?xFH= = I$?@:KQBXWI$S>:EAuS8\\=\\<@S;ACMNIqA ?@JH=B8cS>?QCaHJH=\\<>?@Q[I9A LOM]Sf_$AC<>=T8cQ <@Y <>=\\LgAuS>=BP XWM]S>:qR 79?@MN8}JK<>=ca=\\<>= I98\\=[dxe~I <>= ?@= AC<;8>:qJ97KF9LOMO?@:9=BPsMNIbaB« 1[«^V o{7K<@SG7K?@=BP%S>:9= MOI`S@<>QuZj=\\<@S$Zj=\\<>?@79? =\\ipS@<;ABZj=\\<@SGACIEP ?US;A FKLO= Zj=\\<>?@7K? 7KI9?US;ACF9LO=TJH=\\<>?@Q[I9A LOM]Sf_ S>= ?US>?|PK= ZpMO?@=BP$F`_ »_^?@= I98;osACIEP$8cQ[IE8cLN79PK=BP S>:9ACSz?US;ACF9LO=q=\\ipS@<;ABZj=\\<@S>?tJ^<>=\\a=c< ?@Q LOMgP JK<>=BP^MN8cS;A FKLN=$R$79?@MN8 Vy?US;A FKLO= MOI`S@<>QuZj=\\<@S>?|S>:9=RGQ <>=h8\\Q[l I9M]S>MOZj= QCa 8\\LNAC?@?@Mg8\\A LyA IEPqF9AC<>Qp`79=h?USf_^LO= ? V 7KI9?US;ACF9LO=}=\\ipS@<;ABZj=\\<@S>?S>:9=<>Q[RmACIpS>MN8?USf_^LO= ? =\\i^J^<>= ?@?@MOI9lQCZ[=\\<@S = RGQCY S>MOQ I9? V^A I9P$79IK?US;A FKLN=WMOI`S@<>QuZj=\\<@S>?S>:9=TRGQ <>=TRs_^?US>Mg8\\A LEACIEP$MORGJ^<>= ?UY ?@MOQ I9MO?US>MN8G<>Q RmA I`S>MN8 XwQC<>op?bP^MO?>8\\79?@?@=BP2MNI-®]aB«C ̄wA I9P-® 1⁄4 u ̄;d6 QC<>= A Review of Factors Affecting Music Recommender Success <>=B8c= I`S>LO_`VyM]Ss:EA ?3FH=\\= I ?@:KQuXWI2S>:EACS3S>:K=$LO= Zj=\\L{QCaWA l l <>= ?@?@MOZj= IK= ?@? 8\\QC<@<>= LNACS>= ?mXWM]S>: R 79?@MN8 A LW?US~_KLO=[V}XWMOS>:-RGQ <>=1ACl[lC<>= ?@?@MOZj=rJH= Q[JKLN= FH= MOIKl RGQ <>=&LOMOoj= L]_+S>Q = I ¤fQu_-:9= AuZ`_+RG=\\S;ACLhQC<:9AC<;P-<>Qp8>o+R$7KY ?@MN8G®]aB«C ̄d © S>7EPKMO= ?Q a}PKM]¡
=\\<>=\\IpSh8\\7KL]S>7K<;A Ll <>Q[7KJ9? V
?@:KQuX PKM]¡
=\\<>= I`S PKMO?US@<>MOF97^Y S>MOQ[IK?QCasR$79?@MN8 ACLhJ^<>=\\a=c<>= IE8\\=\\? d9Q <=\\iKA RGJKLO=[V jACJEA IK= ?@=2A[PKQCY LO= ?>8\\=\\IpS>?W:9AuZ[=tAz:9MOl[:K=\\<WLOMNo[= LOMN:KQ`QpP Q a= I ¤fQB_KMOIKlq8\\LNA ?@?@MN8 ACLQC<x¤@A R 79?@MN8{S>:EACI S>:K= M]<|hRG=\\<>MN8 ACIq8\\Q 79I`S>=\\<>JEAu<@S>?W® 1 1C ̄d|vw:9=}?US>7EP^_ ACLN?@Q 8\\Q IE8\\LO79PK=BP&S>:EAuShS>:K=\\<>",
"title": ""
},
{
"docid": "1832e7fe9b0d2f034c22777a6783cfde",
"text": "Recently, Monte-Carlo Tree Search (MCTS) has become a popular approach for intelligent play in games. Amongst others, it is successfully used in most state-of-the-art Go programs. To improve the playing strength of these Go programs any further, many parameters dealing with MCTS should be fine-tuned. In this paper, we propose to apply the Cross-Entropy Method (CEM) for this task. The method is comparable to Estimation-of-Distribution Algorithms (EDAs), a new area of evolutionary computation. We tested CEM by tuning various types of parameters in our Go program MANGO. The experiments were performed in matches against the open-source program GNU GO. They revealed that a program with the CEM-tuned parameters played better than without. Moreover, MANGO plus CEM outperformed the regular MANGO for various time settings and board sizes. From the results we may conclude that parameter tuning by CEM genuinely improved the playing strength of MANGO, for various time settings. This result may be generalized to other game engines using MCTS.",
"title": ""
},
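The passage above tunes MCTS parameters with the Cross-Entropy Method. A generic CEM loop in NumPy follows; the quadratic stand-in objective replaces the paper's real evaluation, which would be the win rate of MANGO against GNU GO under the candidate parameters.

```python
import numpy as np

def cross_entropy_method(evaluate, dim, iterations=30, population=50, elite_frac=0.2, seed=0):
    """Generic CEM: sample parameter vectors from a Gaussian, keep the elite fraction,
    and refit the Gaussian to the elites. `evaluate` returns a (possibly noisy) score."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(elite_frac * population))
    for _ in range(iterations):
        samples = rng.normal(mean, std, size=(population, dim))
        scores = np.array([evaluate(s) for s in samples])
        elites = samples[np.argsort(scores)[-n_elite:]]      # keep the highest-scoring samples
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# Stand-in objective; in the paper this would be match outcomes against GNU GO.
best = cross_entropy_method(lambda p: -np.sum((p - 1.5) ** 2) + np.random.normal(0, 0.1), dim=4)
```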
{
"docid": "b50918f904d08f678cb153b16b052344",
"text": "According to Earnshaw's theorem, the ratio between axial and radial stiffness is always -2 for pure permanent magnetic configurations with rotational symmetry. Using highly permeable material increases the force and stiffness of permanent magnetic bearings. However, the stiffness in the unstable direction increases more than the stiffness in the stable direction. This paper presents an analytical approach to calculating the axial force and the axial and radial stiffnesses of attractive passive magnetic bearings (PMBs) with back iron. The investigations are based on the method of image charges and show in which magnet geometries lead to reasonable axial to radial stiffness ratios. Furthermore, the magnet dimensions achieving maximum force and stiffness per magnet volume are outlined. Finally, the calculation method was applied to the PMB of a magnetically levitated fan, and the analytical results were compared with a finite element analysis.",
"title": ""
},
{
"docid": "d03abae94005c27aa46c66e1cdc77b23",
"text": "The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they only used a single T1 or T2 images, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep models in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement.",
"title": ""
},
{
"docid": "5e9669e422bbbb2c964e13ebf65703af",
"text": "Behavioral problems are a major source of poor welfare and premature mortality in companion dogs. Previous studies have demonstrated associations between owners' personality and psychological status and the prevalence and/or severity of their dogs' behavior problems. However, the mechanisms responsible for these associations are currently unknown. Other studies have detected links between the tendency of dogs to display behavior problems and their owners' use of aversive or confrontational training methods. This raises the possibility that the effects of owner personality and psychological status on dog behavior are mediated via their influence on the owner's choice of training methods. We investigated this hypothesis in a self-selected, convenience sample of 1564 current dog owners using an online battery of questionnaires designed to measure, respectively, owner personality, depression, emotion regulation, use of aversive/confrontational training methods, and owner-reported dog behavior. Multivariate linear and logistic regression analyses identified modest, positive associations between owners' use of aversive/confrontational training methods and the prevalence/severity of the following dog behavior problems: owner-directed aggression, stranger-directed aggression, separation problems, chasing, persistent barking, and house-soiling (urination and defecation when left alone). The regression models also detected modest associations between owners' low scores on four of the 'Big Five' personality dimensions (Agreeableness, Emotional Stability, Extraversion & Conscientiousness) and their dogs' tendency to display higher rates of owner-directed aggression, stranger-directed fear, and/or urination when left alone. The study found only weak evidence to support the hypothesis that these relationships between owner personality and dog behavior were mediated via the owners' use of punitive training methods, but it did detect a more than five-fold increase in the use of aversive/confrontational training techniques among men with moderate depression. Further research is needed to clarify the causal relationship between owner personality and psychological status and the behavioral problems of companion dogs.",
"title": ""
},
{
"docid": "6d5e80293931396556cf5fbe64e9c2d2",
"text": "Rotors of electrical high speed machines are subject to high stress, limiting the rated power of the machines. This paper describes the design process of a high-speed rotor of a Permanent Magnet Synchronous Machine (PMSM) for a rated power of 10kW at 100,000 rpm. Therefore, at the initial design the impact of the rotor radius to critical parameters is analyzed analytically. In particular, critical parameters are mechanical stress due to high centrifugal forces and natural bending frequencies. Furthermore, air friction losses, heating the rotor and the stator additionally, are no longer negligible compared to conventional machines and must be considered in the design process. These mechanical attributes are controversial to the electromagnetic design, increasing the effective magnetic air gap, for example. Thus, investigations are performed to achieve sufficient mechanical strength without a significant reduction of air gap flux density or causing thermal problems. After initial design by means of analytical estimations, an optimization of rotor geometry and materials is performed by means of the finite element method (FEM).",
"title": ""
},
{
"docid": "18dbbf0338d138f71a57b562883f0677",
"text": "We present the analytical capability of TecDEM, a MATLAB toolbox used in conjunction with Global DEMs for the extraction of tectonic geomorphologic information. TecDEM includes a suite of algorithms to analyze topography, extracted drainage networks and sub-basins. The aim of part 2 of this paper series is the generation of morphometric maps for surface dynamics and basin analysis. TecDEM therefore allows the extraction of parameters such as isobase, incision, drainage density and surface roughness maps. We also provide tools for basin asymmetry and hypsometric analysis. These are efficient graphical user interfaces (GUIs) for mapping drainage deviation from basin mid-line and basin hypsometry. A morphotectonic interpretation of the Kaghan Valley (Northern Pakistan) is performed with TecDEM and the findings indicate a high correlation between surface dynamics and basin analysis parameters with neotectonic features in the study area. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "807cd6adc45a2adb7943c5a0fb5baa94",
"text": "Reliable performance evaluations require the use of representative workloads. This is no easy task because modern computer systems and their workloads are complex, with many interrelated attributes and complicated structures. Experts often use sophisticated mathematics to analyze and describe workload models, making these models difficult for practitioners to grasp. This book aims to close this gap by emphasizing the intuition and the reasoning behind the definitions and derivations related to the workload models. It provides numerous examples from real production systems, with hundreds of graphs. Using this book, readers will be able to analyze collected workload data and clean it if necessary, derive statistical models that include skewed marginal distributions and correlations, and consider the need for generative models and feedback from the system. The descriptive statistics techniques covered are also useful for other domains.",
"title": ""
},
{
"docid": "6bbfac62a2f99c028c7df3b586b41f68",
"text": "Depression is a common mental health condition for which many mobile apps aim to provide support. This review aims to identify self-help apps available exclusively for people with depression and evaluate those that offer cognitive behavioural therapy (CBT) or behavioural activation (BA). One hundred and seventeen apps have been identified after searching both the scientific literature and the commercial market. 10.26% (n = 12) of these apps identified through our search offer support that seems to be consistent with evidence-based principles of CBT or BA. Taking into account the non existence of effectiveness/efficacy studies, and the low level of adherence to the core ingredients of the CBT/BA models, the utility of these CBT/BA apps are questionable. The usability of reviewed apps is highly variable and they rarely are accompanied by explicit privacy or safety policies. Despite the growing public demand, there is a concerning lack of appropiate CBT or BA apps, especially from a clinical and legal point of view. The application of superior scientific, technological, and legal knowledge is needed to improve the development, testing, and accessibility of apps for people with depression.",
"title": ""
},
{
"docid": "2802c89f5b943ea0bee357b36d072ada",
"text": "Motivation: Alzheimer’s disease (AD) is an incurable neurological condition which causes progressive mental deterioration, especially in the elderly. The focus of our work is to improve our understanding about the progression of AD. By finding brain regions which degenerate together in AD we can understand how the disease progresses during the lifespan of an Alzheimer’s patient. Our aim is to work towards not only achieving diagnostic performance but also generate useful clinical information. Objective: The main objective of this study is to find important sub regions of the brain which undergo neuronal degeneration together during AD using deep learning algorithms and other machine learning techniques. Methodology: We extract 3D brain region patches from 100 subject MRI images using a predefined anatomical atlas. We have devised an ensemble of pair predictors which use 3D convolutional neural networks to extract salient features for AD from a pair of regions in the brain. We then train them in a supervised manner and use a boosting algorithm to find the weightage of each pair predictor towards the final classification. We use this weightage as the strength of correlation and saliency between the two input sub regions of the pair predictor. Result: We were able to retrieve sub regional association measures for 100 sub region pairs using the proposed method. Our approach was able to automatically learn sub regional association structure in AD directly from images. Our approach also provides an insight into computational methods for demarcating effects of AD from effects of ageing (and other neurological diseases) on our neuroanatomy. Our meta classifier gave a final accuracy of 81.79% for AD classification relative to healthy subjects using a single imaging modality dataset.",
"title": ""
},
{
"docid": "5140cad8babfc17c660bf9ca5dfa5fb6",
"text": "In this paper, the fundamental problem of distribution and proactive caching of computing tasks in fog networks is studied under latency and reliability constraints. In the proposed scenario, computing can be executed either locally at the user device or offloaded to an edge cloudlet. Moreover, cloudlets exploit both their computing and storage capabilities by proactively caching popular task computation results to minimize computing latency. To this end, a clustering method to group spatially proximate user devices with mutual task popularity interests and their serving cloudlets is proposed. Then, cloudlets can proactively cache the popular tasks' computations of their cluster members to minimize computing latency. Additionally, the problem of distributing tasks to cloudlets is formulated as a matching game in which a cost function of computing delay is minimized under latency and reliability constraints. Simulation results show that the proposed scheme guarantees reliable computations with bounded latency and achieves up to 91% decrease in computing latency as compared to baseline schemes.",
"title": ""
},
{
"docid": "b77363417b2e5db93d9f1e0447bd1932",
"text": "UK Government regularly applies challenging strategic targets to the construction industry, chief amongst these are requirements for more rapid project delivery processes and consistent improvements to the time predictability aspects of on-site construction delivery periods. Latest industry KPI data has revealed a recent increase across measures of time predictability, however more than half of UK construction projects continue to exceed agreed time schedules. The aim of this research was to investigate the diffusion of 4D BIM innovation as adoption of this innovation is seen as a potential solution in response to these targets of construction time predictability. Through purposive sampling, a quantitative survey was undertaken using an online questionnaire that measured 4D BIM innovation adoption using accepted diffusion research methods. These included an exploration of several perceived attributes including compatibility, complexity, observability and the relative advantages of 4D BIM innovation in comparison against conventional functions of construction planning and against stages of the construction planning processes. Descriptive and inferential analysis of the data addresses how the benefits are being realised and explore reasons for adoption or rejection decisions of this innovation. Results indicate an increasing rate of 4D BIM innovation adoption and reveal the typical time lag between awareness and first use.",
"title": ""
},
{
"docid": "d9176322068e6ca207ae913b1164b3da",
"text": "Topic Detection and Tracking (TDT) is a variant of classiication in which the classes are not known or xed in advance. Consider for example an incoming stream of news articles or email messages that are to be classiied by topic; new classes must be created as new topics arise. The problem is a challenging one for machine learning. Instances of new topics must be recognized as not belonging to any of the existing classes (detection), and instances of old topics must be correctly classiied (tracking)|often with extremely little training data per class. This paper proposes a new approach to TDT based on probabilis-tic, generative models. Strong statistical techniques are used to address the many challenges: hierarchical shrinkage for sparse data, statistical \\garbage collection\" for new event detection, clustering in time to separate the diierent events of a common topic, and deterministic anneal-ing for creating the hierarchy. Preliminary experimental results show promise.",
"title": ""
},
{
"docid": "cd1cfbdae08907e27a4e1c51e0508839",
"text": "High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the increasing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools are still lacking some maturity, they are constantly improving and the industry is now starting to adopt them into their design flows.",
"title": ""
},
{
"docid": "c633668d5933118db60ea1c9b79333ea",
"text": "A robot exoskeleton which is inspired by the human musculoskeletal system has been developed for lower limb rehabilitation. The device was manufactured using a novel technique employing 3D printing and fiber reinforcement to make one-of-a-kind form fitting human-robot connections. Actuation of the exoskeleton is achieved using PMAs (pneumatic air muscles) and cable actuation to give the system inherent compliance while maintaining a very low mass. The entire system was modeled including a new hybrid model for PMAs. Simulation and experimental results for a force and impedance based trajectory tracking controller demonstrate the feasibility for using the HuREx system for gait and rehabilitation training.",
"title": ""
}
] |
scidocsrr
|
53e3f5b1d2e4975cb3e4d27943c46d11
|
Practical Secure Aggregation for Privacy-Preserving Machine Learning
|
[
{
"docid": "21384ea8d80efbf2440fb09a61b03be2",
"text": "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.",
"title": ""
}
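The attack described in the record above scores candidate records by how well they match the adversary's partial background knowledge, giving rare attributes more weight. A toy version of that scoring idea is sketched below; the dictionary representation of ratings, the inverse-log popularity weight, and the rating tolerance are illustrative choices, not the paper's exact algorithm.

```python
# Hypothetical sketch: weighted record-linkage scoring for de-anonymization.
# Each record maps item_id -> rating; aux is the adversary's partial knowledge.
import math

def match_score(record, aux, item_popularity, rating_tolerance=1):
    score = 0.0
    for item, aux_rating in aux.items():
        if item in record and abs(record[item] - aux_rating) <= rating_tolerance:
            # Rare items carry more identifying information than popular ones.
            score += 1.0 / math.log(1 + item_popularity[item])
    return score

def best_candidate(dataset, aux, item_popularity):
    # dataset: {record_id: {item_id: rating}}; returns the highest-scoring record id.
    return max(dataset, key=lambda rid: match_score(dataset[rid], aux, item_popularity))
```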
] |
[
{
"docid": "e9047e59f58e71404107b065e584c547",
"text": "Dermoscopic skin images are often obtained with different imaging devices, under varying acquisition conditions. In this work, instead of attempting to perform intensity and color normalization, we propose to leverage computational color constancy techniques to build an artificial data augmentation technique suitable for this kind of images. Specifically, we apply the shades of gray color constancy technique to color-normalize the entire training set of images, while retaining the estimated illuminants. We then draw one sample from the distribution of training set illuminants and apply it on the normalized image. We employ this technique for training two deep convolutional neural networks for the tasks of skin lesion segmentation and skin lesion classification, in the context of the ISIC 2017 challenge and without using any external dermatologic image set. Our results on the validation set are promising, and will be supplemented with extended results on the hidden test set when available.",
"title": ""
},
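The shades-of-gray normalization used in the record above estimates a per-channel illuminant with a Minkowski p-norm and divides it out. A minimal numpy sketch of that step is shown below, assuming p = 6 and an RGB image scaled to [0, 1]; both are common but here purely illustrative choices.

```python
# Hypothetical sketch: shades-of-gray color constancy for an RGB image.
import numpy as np

def shades_of_gray(image, p=6, eps=1e-8):
    # image: float array of shape (H, W, 3) with values in [0, 1].
    illuminant = np.power(np.mean(np.power(image, p), axis=(0, 1)), 1.0 / p)
    illuminant /= (np.linalg.norm(illuminant) + eps)      # unit-norm illuminant estimate
    corrected = image / (illuminant * np.sqrt(3) + eps)   # gray-world style correction
    return np.clip(corrected, 0.0, 1.0), illuminant

# Augmentation idea from the record: normalize the training set, keep the estimated
# illuminants, then re-apply a randomly drawn illuminant to each normalized image.
```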
{
"docid": "7a8faa4e8ecef8e28aa2203f0aa9d888",
"text": "In today’s global marketplace, individual firms do not compete as independent entities rather as an integral part of a supply chain. This paper proposes a fuzzy mathematical programming model for supply chain planning which considers supply, demand and process uncertainties. The model has been formulated as a fuzzy mixed-integer linear programming model where data are ill-known andmodelled by triangular fuzzy numbers. The fuzzy model provides the decision maker with alternative decision plans for different degrees of satisfaction. This proposal is tested by using data from a real automobile supply chain. © 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "cac6da8b7ee88f95196651920a64486c",
"text": "The classification of food images is an interesting and challenging problem since the high variability of the image content which makes the task difficult for current state-of-the-art classification methods. The image representation to be employed in the classification engine plays an important role. We believe that texture features have been not properly considered in this application domain. This paper points out, through a set of experiments, that textures are fundamental to properly recognize different food items. For this purpose the bag of visual words model (BoW) is employed. Images are processed with a bank of rotation and scale invariant filters and then a small codebook of Textons is built for each food class. The learned class-based Textons are hence collected in a single visual dictionary. The food images are represented as visual words distributions (Bag of Textons) and a Support Vector Machine is used for the classification stage. The experiments demonstrate that the image representation based on Bag of Textons is more accurate than existing (and more complex) approaches in classifying the 61 classes of the Pittsburgh Fast-Food Image Dataset.",
"title": ""
},
{
"docid": "53f5133ef922585090fd80f32c6688da",
"text": "Standard approaches to functional safety as described in the automotive functional safety standard ISO 26262 are focused on reducing the risk of hazards due to random hardware faults or systematic failures during design (e.g. software bugs). However, as vehicle systems become increasingly complex and ever more connected to the internet of things, a third source of hazard must be considered, that of intentional manipulation of the electrical/electronic control systems either via direct physical contact or via the systems' open interfaces. This article describes how the process prescribed by the ISO 26262 can be extended with methods from the domain of embedded security to protect the systems against this third source of hazard.",
"title": ""
},
{
"docid": "10365680ff0a5da9b97727bf40432aae",
"text": "In this paper, we investigate the contextualization of news documents with geographic and visual information. We propose a matrix factorization approach to analyze the location relevance for each news document. We also propose a method to enrich the document with a set of web images. For location relevance analysis, we first perform toponym extraction and expansion to obtain a toponym list from news documents. We then propose a matrix factorization method to estimate the location-document relevance scores while simultaneously capturing the correlation of locations and documents. For image enrichment, we propose a method to generate multiple queries from each news document for image search and then employ an intelligent fusion approach to collect a set of images from the search results. Based on the location relevance analysis and image enrichment, we introduce a news browsing system named NewsMap which can support users in reading news via browsing a map and retrieving news with location queries. The news documents with the corresponding enriched images are presented to help users quickly get information. Extensive experiments demonstrate the effectiveness of our approaches.",
"title": ""
},
{
"docid": "867ddbd84e8544a5c2d6f747756ca3d9",
"text": "We report a 166 W burst mode pulse fiber amplifier seeded by a Q-switched mode-locked all-fiber laser at 1064 nm based on a fiber-coupled semiconductor saturable absorber mirror. With a pump power of 230 W at 976 nm, the output corresponds to a power conversion efficiency of 74%. The repetition rate of the burst pulse is 20 kHz, the burst energy is 8.3 mJ, and the burst duration is ∼ 20 μs, which including about 800 mode-locked pulses at a repetition rate of 40 MHz and the width of the individual mode-locked pulse is measured to be 112 ps at the maximum output power. To avoid optical damage to the fiber, the initial mode-locked pulses were stretched to 72 ps by a bandwidth-limited fiber bragg grating. After a two-stage preamplifier, the pulse width was further stretched to 112 ps, which is a result of self-phase modulation of the pulse burst during the amplification.",
"title": ""
},
{
"docid": "cd9d162462c6aafde953cedffbd29b5f",
"text": "ion is a perplexing problem. Perhaps we cannot design such a machine. However, if we cannot, it will be difficult for existing machines to cope with people who are increasingly more complex. This is a catch-22 situation, with machines expected to be real experts while, at the same time, people or the problems become more and more complex than they were before such devices came into being in the first place. Real Time: Can Machines Think? Much of human behavior has nothing to do with time, and much of it deals solely with time. Time separates \"now\" from \"then,\" and \"now\" from \"when.\" It is generally accepted that there are day people, who work well during the daylight hours; and night people, who experience their best at night. Time is the structure that separates events into those occurring simultaneously and those taking place over an infinite spectrum of time. Much of brain activity takes place constantly, although activities also exist that have specific time requirements. This is the paradox of time. For example, sleep generally takes place at night when one is tired; therefore, night might be a factor in triggering sleep. Yet, in another sense, the brain is active all the time in order to keep us alive during sleep. Thus, the brain is always operating in real time and is constantly at work. Machines, however, are generally at rest, except when called upon by humans to work. There are many theories about this active sense of the brain at work, but little is known about how much work is actually being performed by the brain, with the possible exception of research into the understanding of dreams. It is this concept of dreams and their associated representational approach to the brain that is intriguing as a way to understand a larger view of behavior and thinking. Various types of activities, from physical motor activities to speech and language, are all available spontaneously to humans. Humans can stand, sit, yell, or perform an infinite variety of functions without thinking about them. Essentially, the software or program behind these activities has been well written and debugged. What is not provided genetically, we program in ourselves. In many areas, the program code is waiting to be written. Learning how to ski, write a sonnet, or fly an airplane is something that we program ourselves to do. These types of activities represent a step beyond the level of merely replacing an activity that is known with another that is unknown and communicated to us. Figure 2.2 shows some of the steps in a knowledge engineering system. Under consideration is the issue of real-time thinking. One might ask, What is the comparison? There is a new genre of research that suggests that nonreal-time activity opens a new dimension in offering humans contemplative time. In fact, real-time communication is quite interruptive and even disruptive. This represents 64 © TECHtionary.com J^gwledg^&re Information Engineering Workbench\" Future Direction This map illustrates Ihe intended funclional overview of KnowledoeWare's Information Engineering Workbench® The use of color indicates existing product modules. Uncolored areas represent lulu re functionality (hat may be implemented either as discrete product modules or as capabilities within modules. This map is intended as an aid to discussion of IEW operational concepts, not as a depiction of actual product architecture.",
"title": ""
},
{
"docid": "ca9da9f8113bc50aaa79d654a9eaf95a",
"text": "Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights. Following this principle, we reformulate the random forest method of Breiman (2001) into a neural network setting, and in turn propose two new hybrid procedures that we call neural random forests. Both predictors exploit prior knowledge of regression trees for their architecture, have less parameters to tune than standard networks, and less restrictions on the geometry of the decision boundaries. Consistency results are proved, and substantial numerical evidence is provided on both synthetic and real data sets to assess the excellent performance of our methods in a large variety of prediction problems. Index Terms — Random forests, neural networks, ensemble methods, randomization, sparse networks. 2010 Mathematics Subject Classification: 62G08, 62G20, 68T05.",
"title": ""
},
{
"docid": "13b204da3f49b800b800ad453231e12c",
"text": "Interior point methods for optimization have been around for more than 25 years now. Their presence has shaken up the field of optimization. Interior point methods for linear and (convex) quadratic programming display several features which make them particularly attractive for very large scale optimization. Among the most impressive of them are their lowdegree polynomial worst-case complexity and an unrivalled ability to deliver optimal solutions in an almost constant number of iterations which depends very little, if at all, on the problem dimension. Interior point methods are competitive when dealing with small problems of dimensions below one million constraints and variables and are beyond competition when applied to large problems of dimensions going into millions of constraints and variables. In this survey we will discuss several issues related to interior point methods including the proof of the worst-case complexity result, the reasons for their amazingly fast practical convergence and the features responsible for their ability to solve very large problems. The ever-growing sizes of optimization problems impose new requirements on optimization methods and software. In the final part of this paper we will therefore address a redesign of interior point methods to allow them to work in a matrix-free regime and to make them well-suited to solving even larger problems.",
"title": ""
},
{
"docid": "c99389ad72e35abb651f9002f6053ab3",
"text": "Person re-identification aims to match the images of pedestrians across different camera views from different locations. This is a challenging intelligent video surveillance problem that remains an active area of research due to the need for performance improvement. Person re-identification involves two main steps: feature representation and metric learning. Although the keep it simple and straightforward (KISS) metric learning method for discriminative distance metric learning has been shown to be effective for the person re-identification, the estimation of the inverse of a covariance matrix is unstable and indeed may not exist when the training set is small, resulting in poor performance. Here, we present dual-regularized KISS (DR-KISS) metric learning. By regularizing the two covariance matrices, DR-KISS improves on KISS by reducing overestimation of large eigenvalues of the two estimated covariance matrices and, in doing so, guarantees that the covariance matrix is irreversible. Furthermore, we provide theoretical analyses for supporting the motivations. Specifically, we first prove why the regularization is necessary. Then, we prove that the proposed method is robust for generalization. We conduct extensive experiments on three challenging person re-identification datasets, VIPeR, GRID, and CUHK 01, and show that DR-KISS achieves new state-of-the-art performance.",
"title": ""
},
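The KISS-style metric in the record above is built from the inverses of two covariance matrices (differences of similar and dissimilar pairs); the dual regularization adds a ridge term to each before inversion so that the inverses exist even for small training sets. A minimal numpy sketch of that construction follows; the ridge weight is an illustrative parameter, and the exact DR-KISS regularizers in the paper may differ.

```python
# Hypothetical sketch: a regularized KISS-style Mahalanobis metric.
import numpy as np

def regularized_kiss_metric(diff_similar, diff_dissimilar, lam=0.1):
    # diff_similar / diff_dissimilar: arrays of shape (n_pairs, dim) holding
    # feature differences x_i - x_j for similar and dissimilar pairs.
    d = diff_similar.shape[1]
    cov_s = diff_similar.T @ diff_similar / len(diff_similar)
    cov_d = diff_dissimilar.T @ diff_dissimilar / len(diff_dissimilar)
    # Ridge regularization keeps both covariance matrices invertible.
    return np.linalg.inv(cov_s + lam * np.eye(d)) - np.linalg.inv(cov_d + lam * np.eye(d))

def pair_distance(m, x, y):
    diff = x - y
    return float(diff @ m @ diff)   # smaller values suggest "same person"
```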
{
"docid": "d1df6cd3949b924ec51a5f0fd3193806",
"text": "This paper presents a new approach to C-space exploration and path planning for robotic manipulators using the structure named bur of free C-space. This structure builds upon the so-called bubble, which is a local volume of free C-space, easily computed using the distance information in the workspace. We show how the same distance information can be used to compute the bur that can reach substantially beyond the boundary of the bubble. It is shown how burs can be used to form a rapidly exploring bur tree (RBT): a space-filling tree that resembles RRT. Such a structure can easily be used within a suitably tailored path planning algorithm. Simulation study shows how the RBT-based algorithm outperforms the classical RRT-based method.",
"title": ""
},
{
"docid": "4ac12c76112ff2085c4701130448f5d5",
"text": "A key point in the deployment of new wireless services is the cost-effective extension and enhancement of the network's radio coverage in indoor environments. Distributed Antenna Systems using Fiber-optics distribution (F-DAS) represent a suitable method of extending multiple-operator radio coverage into indoor premises, tunnels, etc. Another key point is the adoption of MIMO (Multiple Input — Multiple Output) transmission techniques which can exploit the multipath nature of the radio link to ensure reliable, high-speed wireless communication in hostile environments. In this paper novel indoor deployment solutions based on Radio over Fiber (RoF) and distributed-antenna MIMO techniques are presented and discussed, highlighting their potential in different cases.",
"title": ""
},
{
"docid": "3a089466bbb924bc5d0b0d4e20f794f8",
"text": "The proportional-integral-derivative (PID) controllers are the most popular controllers used in industry because of their remarkable effectiveness, simplicity of implementation and broad applicability. However, manual tuning of these controllers is time consuming, tedious and generally lead to poor performance. This tuning which is application specific also deteriorates with time as a result of plant parameter changes. This paper presents an artificial intelligence (AI) method of particle swarm optimization (PSO) algorithm for tuning the optimal proportional-integral derivative (PID) controller parameters for industrial processes. This approach has superior features, including easy implementation, stable convergence characteristic and good computational efficiency over the conventional methods. ZieglerNichols, tuning method was applied in the PID tuning and results were compared with the PSO-Based PID for optimum control. Simulation results are presented to show that the PSO-Based optimized PID controller is capable of providing an improved closed-loop performance over the ZieglerNichols tuned PID controller Parameters. Compared to the heuristic PID tuning method of Ziegler-Nichols, the proposed method was more efficient in improving the step response characteristics such as, reducing the steady-states error; rise time, settling time and maximum overshoot in speed control of DC motor.",
"title": ""
},
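The record above tunes PID gains with particle swarm optimization by minimizing a step-response cost on a plant model. The sketch below shows that loop for an assumed first-order plant and an integral-absolute-error cost; the plant, the cost function, and the PSO coefficients are illustrative assumptions rather than the paper's setup.

```python
# Hypothetical sketch: PSO tuning of PID gains on a simulated first-order plant.
import numpy as np

def step_response_cost(gains, dt=0.01, t_end=5.0, tau=0.5):
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                          # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (u - y) / tau                # first-order plant dy/dt = (u - y)/tau
        prev_err = err
        cost += abs(err) * dt                  # integral of absolute error
    return cost

rng = np.random.default_rng(0)
n_particles, n_iters = 20, 50
pos = rng.uniform(0.0, 10.0, size=(n_particles, 3))    # (Kp, Ki, Kd) per particle
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([step_response_cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)
    costs = np.array([step_response_cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("tuned gains (Kp, Ki, Kd):", gbest)
```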
{
"docid": "3f2b14cebc92b74eb611eb29ab4ac078",
"text": "OBJECTIVES\nAlthough there have been studies linking personality to selected aspects of functioning at work, Polish literature reports a shortage of detailed analyses considering, e.g., specific professional groups or certain variables. The aim of our study was to explore the links between personality traits and emotional labor, work engagement and job satisfaction among service workers.\n\n\nMATERIAL AND METHODS\nThe study was based on a cross-sectional, self-report survey of 137 workers representing different service industries in Poland. Each participant received a demographic data sheet and a set of questionnaires: NEO Five-Factor Inventory, the Deep Acting and Surface Acting Scale, the Job Satisfaction Scale and the Utrecht Work Engagement Scale - all in their Polish versions.\n\n\nRESULTS\nA correlation analysis revealed numerous relationships between the examined variables. However, results of the regression analysis showed that only some personality traits were related with individual aspects of functioning at work. Neuroticism accounted for the phenomenon of faking emotions. Conscientiousness was significantly related to general work engagement, vigor and dedication. Agreeableness and neuroticism significantly predicted job satisfaction.\n\n\nCONCLUSIONS\nIndividual personality traits account for various aspects of work functioning. Int J Occup Med Environ Health 2016;29(5):767-782.",
"title": ""
},
{
"docid": "1195189034e0c63061bd0feff190e4d4",
"text": "This tutorial review summarises the current state of green analytical chemistry with special emphasis on environmentally friendly sample preparation techniques. Green analytical chemistry is a part of the sustainable development concept; its history and origins are described. Miniaturisation of analytical devices and shortening the time elapsing between performing analysis and obtaining reliable analytical results are important aspects of green analytical chemistry. Solventless extraction techniques, the application of alternative solvents and assisted extractions are considered to be the main approaches complying with green analytical chemistry principles.",
"title": ""
},
{
"docid": "ae89feeef4a9c12f813abf9dbe1f0263",
"text": "A probabilistic network is a graphical model that encodes probabilistic relationships between variables of interest. Such a model records qualitative influences between variables in addition to the numerical parameters of the probability distribution. As such it provides an ideal form for combining prior knowledge, which might be limited solely to experience of the influences between some of the variables of interest, and data. In this paper, we first show how data can be used to revise initial estimates of the parameters of a model. We then progress to showing how the structure of the model can be revised as data is obtained. Techniques for learning with incomplete data are also covered. In order to make the paper as self contained as possible, we start with an introduction to probability theory and probabilistic graphical models. The paper concludes with a short discussion on how these techniques can be applied to the problem of learning causal relationships between variables in a domain of interest.",
"title": ""
},
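The record above discusses revising initial parameter estimates of a probabilistic network as data arrive. For a single binary variable with a conjugate Beta prior, that revision reduces to simple count updates; the sketch below shows only this one-variable case as an assumed illustration, not the full network-learning machinery.

```python
# Hypothetical sketch: Bayesian updating of one Bernoulli parameter with a Beta prior.
def update_beta(alpha, beta, observations):
    # observations: list of 0/1 outcomes for the variable of interest.
    successes = sum(observations)
    failures = len(observations) - successes
    return alpha + successes, beta + failures

alpha0, beta0 = 2.0, 2.0          # prior pseudo-counts encoding expert knowledge
alpha1, beta1 = update_beta(alpha0, beta0, [1, 1, 0, 1, 1])
posterior_mean = alpha1 / (alpha1 + beta1)
print(round(posterior_mean, 3))   # revised estimate of P(X = 1)
```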
{
"docid": "06fe4547495c597a0f7052efd78d5a04",
"text": "The American cockroach, Periplaneta americana, provides a successful model for the study of legged locomotion. Sensory regulation and the relative importance of sensory feedback vs. central control in animal locomotion are key aspects in our understanding of locomotive behavior. Here we introduce the cockroach model and describe the basic characteristics of the neural generation and control of walking and running in this insect. We further provide a brief overview of some recent studies, including mathematical modeling, which have contributed to our knowledge of sensory control in cockroach locomotion. We focus on two sensory mechanisms and sense organs, those providing information related to loading and unloading of the body and the legs, and leg-movement-related sensory receptors, and present evidence for the instrumental role of these sensory signals in inter-leg locomotion control. We conclude by identifying important open questions and indicate future perspectives.",
"title": ""
},
{
"docid": "a9775a1819327d4d9cf228a3371d784f",
"text": "Permissionless blockchains protocols such as Bitcoin are inherently limited in transaction throughput and latency. Current efforts to address this key issue focus on off-chain payment channels that can be combined in a Payment-Channel Network (PCN) to enable an unlimited number of payments without requiring to access the blockchain other than to register the initial and final capacity of each channel. While this approach paves the way for low latency and high throughput of payments, its deployment in practice raises several privacy concerns as well as technical challenges related to the inherently concurrent nature of payments that have not been sufficiently studied so far. In this work, we lay the foundations for privacy and concurrency in PCNs, presenting a formal definition in the Universal Composability framework as well as practical and provably secure solutions. In particular, we present Fulgor and Rayo. Fulgor is the first payment protocol for PCNs that provides provable privacy guarantees for PCNs and is fully compatible with the Bitcoin scripting system. However, Fulgor is a blocking protocol and therefore prone to deadlocks of concurrent payments as in currently available PCNs. Instead, Rayo is the first protocol for PCNs that enforces non-blocking progress (i.e., at least one of the concurrent payments terminates). We show through a new impossibility result that non-blocking progress necessarily comes at the cost of weaker privacy. At the core of Fulgor and Rayo is Multi-Hop HTLC, a new smart contract, compatible with the Bitcoin scripting system, that provides conditional payments while reducing running time and communication overhead with respect to previous approaches. Our performance evaluation of Fulgor and Rayo shows that a payment with 10 intermediate users takes as few as 5 seconds, thereby demonstrating their feasibility to be deployed in practice.",
"title": ""
},
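The payment channels in the record above rely on hash-locked conditional payments: each hop can claim funds only by revealing a preimage whose hash matches the agreed lock, and only before a timelock expires. The sketch below shows just that basic hash-lock check in Python; the Multi-Hop HTLC contract in the paper adds privacy machinery that is not reproduced here, and the one-hour timelock is an arbitrary placeholder.

```python
# Hypothetical sketch: the basic hash-lock condition behind an HTLC payment.
import hashlib
import os
import time

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# The receiver picks a secret preimage and shares only its hash with the sender.
preimage = os.urandom(32)
payment_hash = sha256(preimage)

def can_claim(revealed_preimage: bytes, lock_hash: bytes, expiry: float) -> bool:
    # Funds are claimable before expiry by whoever reveals the matching preimage;
    # after expiry the payment can instead be refunded to the sender.
    return time.time() < expiry and sha256(revealed_preimage) == lock_hash

expiry = time.time() + 3600                              # illustrative one-hour timelock
print(can_claim(preimage, payment_hash, expiry))         # True
print(can_claim(os.urandom(32), payment_hash, expiry))   # False
```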
{
"docid": "e7bf372840efea55c632afd96840212d",
"text": "The purpose of this systematic analysis of nursing simulation literature between 2000 -2007 was to determine how learning theory was used to design and assess learning that occurs in simulations. Out of the 120 articles in which designing nursing simulations was reported, 16 referenced learning or developmental theory as the basis of how and why they set up the simulation. Of the 16 articles that used a learning type of foundation, only two considered learning as a cognitive task. More research is needed that investigates the efficacy of simulation for improving student learning. The study concludes that most nursing faculty approach simulation from a teaching paradigm rather than a learning paradigm. For simulation to foster student learning there must be a fundamental shift from a teaching paradigm to a learning paradigm and a foundational learning theory to design and evaluate simulation should be used. Examples of how to match simulation with learning theory are included.",
"title": ""
},
{
"docid": "ed888adc25f012b9550fc53f30a9332d",
"text": "BACKGROUND\nThe PedsQL Measurement Model was designed to measure health-related quality of life (HRQOL) in children and adolescents. The PedsQL 4.0 Generic Core Scales were developed to be integrated with the PedsQL Disease-Specific Modules. The newly developed PedsQL Family Impact Module was designed to measure the impact of pediatric chronic health conditions on parents and the family. The PedsQL Family Impact Module measures parent self-reported physical, emotional, social, and cognitive functioning, communication, and worry. The Module also measures parent-reported family daily activities and family relationships.\n\n\nMETHODS\nThe 36-item PedsQL Family Impact Module was administered to 23 families of medically fragile children with complex chronic health conditions who either resided in a long-term care convalescent hospital or resided at home with their families.\n\n\nRESULTS\nInternal consistency reliability was demonstrated for the PedsQL Family Impact Module Total Scale Score (alpha = 0.97), Parent HRQOL Summary Score (alpha = 0.96), Family Functioning Summary Score (alpha = 0.90), and Module Scales (average alpha = 0.90, range = 0.82 - 0.97). The PedsQL Family Impact Module distinguished between families with children in a long-term care facility and families whose children resided at home.\n\n\nCONCLUSIONS\nThe results demonstrate the preliminary reliability and validity of the PedsQL Family Impact Module in families with children with complex chronic health conditions. The PedsQL Family Impact Module will be further field tested to determine the measurement properties of this new instrument with other pediatric chronic health conditions.",
"title": ""
}
] |
scidocsrr
|
a5699cdf23fcb656365d3a438d29012b
|
Sim-to-Real Robot Learning from Pixels with Progressive Nets
|
[
{
"docid": "9ec7b122117acf691f3bee6105deeb81",
"text": "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.",
"title": ""
},
{
"docid": "060cf7fd8a97c1ddf852373b63fe8ae1",
"text": "Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.",
"title": ""
}
] |
[
{
"docid": "c768af4d39f6cafb85e4c89440b27047",
"text": "Interruptions can have a significant impact on users working to complete a task. When people are collaborating, either with other users or with systems, coordinating interruptions is an important factor in maintaining efficiency and preventing information overload. Computer systems can observe user behavior, model it, and use this to optimize the interruptions to minimize disruption. However, current techniques often require long training periods that make them unsuitable for online collaborative environments where new users frequently participate.\n In this paper, we present a novel synthesis between Collaborative Filtering methods and machine learning classification algorithms to create a fast learning algorithm, CRISP. CRISP exploits the similarities between users in order to apply data from known users to new users, therefore requiring less information on each person. Results from user studies indicate the algorithm significantly improves users' performances in completing the task and their perception of how long it took to complete each task.",
"title": ""
},
{
"docid": "8d8e7c9777f02c6a4a131f21a66ee870",
"text": "Teaching agile practices is becoming a priority in Software engineering curricula as a result of the increasing use of agile methods (AMs) such as Scrum in the software industry. Limitations in time, scope, and facilities within academic contexts hinder students’ hands-on experience in the use of professional AMs. To enhance students’ exposure to Scrum, we have developed Virtual Scrum, an educational virtual world that simulates a Scrum-based team room through virtual elements such as blackboards, a Web browser, document viewers, charts, and a calendar. A preliminary version of Virtual Scrum was tested with a group of 45 students running a capstone project with and without Virtual Scrum support. Students’ feedback showed that Virtual Scrum is a viable and effective tool to implement the different elements in a Scrum team room and to perform activities throughout the Scrum process. 2013 Wiley Periodicals, Inc. Comput Appl Eng Educ 23:147–156, 2015; View this article online at wileyonlinelibrary.com/journal/cae; DOI 10.1002/cae.21588",
"title": ""
},
{
"docid": "9f4db80a3474bf5651ff47057a4b2ae5",
"text": "With the emergence of free and open source software (F/OSS) projects (e.g. Linux) as serious contenders to well-established proprietary software, advocates of F/OSS are quick to generalize the superiority of this approach to software development. On the other hand, some wellestablished software development firms view F/OSS as a threat and vociferously refute the claims of F/OSS advocates. This article represents a tutorial on F/OSS that tries objectively to identify and present open source software’s concepts, benefits, and challenges. From our point of view, F/OSS is more than just software. We conceptualize it as an IPO system that consists of the license as the boundary of the system, the community that provides the input, the development process, and the software as the output. After describing the evolution and definition of F/OSS, we identify three approaches to benefiting from F/OSS that center on (1) the software, (2) the community, and (3) the license respectively. Each approach is fit for a specific situation and provides a unique set of benefits and challenges. We further illustrate our points by refuting common misconceptions associated with F/OSS based upon our conceptual framework.",
"title": ""
},
{
"docid": "ff50146989f30807463aee9af97ae71f",
"text": "The paper overviews novel technique for wireless charging system of electric vehicle in which verifies the developed theory using battery charger application of electric vehicle. In electric vehicle charging of battery through charger and wire is inconvenient, hazardous and expensive. The existing gasoline and petrol engine technology vehicles are responsible for air, noise pollution as well as for greenhouse gases. The implemented wireless charging system of battery for Electric vehicle by inductive coupling method has been presented in this paper. The driving circuit is used between the transmitter coil & receiver coil where MOSFET is used for switching operation. The transmitter coil circuit is turn ON and OFF whenever the vehicle is present and absent respectively. The system is achieves 67% efficiency level while providing safety, reliability, low maintenance and long product life.",
"title": ""
},
{
"docid": "1edd6cb3c6ed4657021b6916efbc23d9",
"text": "Siamese-like networks, Streetscore-CNN (SS-CNN) and Ranking SS-CNN, to predict pairwise comparisons Figure 1: User Interface for Crowdsourced Online Game Performance Analysis • SS-CNN: We calculate the % of pairwise comparisons in test set predicted correctly by (1) Softmax of output neurons in final layer (2) comparing TrueSkill scores [2] obtained from synthetic pairwise comparisons from the CNN (3) extracting features from penultimate layer of CNN and feeding pairwise feature representations to a RankSVM [3] • RSS-CNN: We compare the ranking function outputs for both images in a test pair to decide which image wins, and calculate the binary prediction accuracy.",
"title": ""
},
{
"docid": "8a243d17a61f75ef9a881af120014963",
"text": "This paper presents a Deep Mayo Predictor model for predicting the outcomes of the matches in IPL 9 being played in April – May, 2016. The model has three components which are based on multifarious considerations emerging out of a deeper analysis of T20 cricket. The models are created using Data Analytics methods from machine learning domain. The prediction accuracy obtained is high as the Mayo Predictor Model is able to correctly predict the outcomes of 39 matches out of the 56 matches played in the league stage of the IPL IX tournament. Further improvement in the model can be attempted by using a larger training data set than the one that has been utilized in this work. No such effort at creating predictor models for cricket matches has been reported in the literature.",
"title": ""
},
{
"docid": "5539885c88d11eb6a9c4e54b6e399863",
"text": "Deep neural network is difficult to train and this predicament becomes worse as the depth increases. The essence of this problem exists in the magnitude of backpropagated errors that will result in gradient vanishing or exploding phenomenon. We show that a variant of regularizer which utilizes orthonormality among different filter banks can alleviate this problem. Moreover, we design a backward error modulation mechanism based on the quasi-isometry assumption between two consecutive parametric layers. Equipped with these two ingredients, we propose several novel optimization solutions that can be utilized for training a specific-structured (repetitively triple modules of Conv-BNReLU) extremely deep convolutional neural network (CNN) WITHOUT any shortcuts/ identity mappings from scratch. Experiments show that our proposed solutions can achieve distinct improvements for a 44-layer and a 110-layer plain networks on both the CIFAR-10 and ImageNet datasets. Moreover, we can successfully train plain CNNs to match the performance of the residual counterparts. Besides, we propose new principles for designing network structure from the insights evoked by orthonormality. Combined with residual structure, we achieve comparative performance on the ImageNet dataset.",
"title": ""
},
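The regularizer in the record above encourages the rows of each reshaped filter bank to be orthonormal, typically by penalizing the Frobenius distance between the filters' Gram matrix and the identity. A minimal numpy sketch of that penalty and its gradient, under that assumed formulation, is:

```python
# Hypothetical sketch: orthonormality penalty ||W W^T - I||_F^2 for a filter bank.
import numpy as np

def orthonormality_penalty(weights):
    # weights: (n_filters, fan_in) matrix, e.g. conv filters reshaped to rows.
    gram = weights @ weights.T
    residual = gram - np.eye(weights.shape[0])
    penalty = np.sum(residual ** 2)
    grad = 4.0 * residual @ weights        # gradient of ||W W^T - I||_F^2 w.r.t. W
    return penalty, grad

w = np.random.randn(8, 27) * 0.1           # e.g. eight 3x3x3 filters as rows
penalty, grad = orthonormality_penalty(w)
w -= 1e-2 * grad                            # one gradient step toward orthonormal rows
```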
{
"docid": "4a6e382b9db87bf5915fec8de4a67b55",
"text": "BACKGROUND\nThe aim of the study is to analyze the nature, extensions, and dural relationships of hormonally inactive giant pituitary tumors. The relevance of the anatomic relationships to surgery is analyzed.\n\n\nMETHODS\nThere were 118 cases of hormonally inactive pituitary tumors analyzed with the maximum dimension of more than 4 cm. These cases were surgically treated in our neurosurgical department from 1995 to 2002. Depending on the anatomic extensions and the nature of their meningeal coverings, these tumors were divided into 4 grades. The grades reflected an increasing order of invasiveness of adjacent dural and arachnoidal compartments. The strategy and outcome of surgery and radiotherapy was analyzed for these 4 groups. Average duration of follow-up was 31 months.\n\n\nRESULTS\nThere were 54 giant pituitary tumors, which remained within the confines of sellar dura and under the diaphragma sellae and did not enter into the compartment of cavernous sinus (Grade I). Transgression of the medial wall and invasion into the compartment of the cavernous sinus (Grade II) was seen in 38 cases. Elevation of the dura of the superior wall of the cavernous sinus and extension of this elevation into various compartments of brain (Grade III) was observed in 24 cases. Supradiaphragmatic-subarachnoid extension (Grade IV) was seen in 2 patients. The majority of patients were treated by transsphenoidal route.\n\n\nCONCLUSIONS\nGiant pituitary tumors usually have a meningeal cover and extend into well-defined anatomic pathways. Radical surgery by a transsphenoidal route is indicated and possible in Grade I-III pituitary tumors. Such a strategy offers a reasonable opportunity for recovery in vision and a satisfactory postoperative and long-term outcome. Biopsy of the tumor followed by radiotherapy could be suitable for Grade IV pituitary tumors.",
"title": ""
},
{
"docid": "57ff834b30f5e0f31c3382fed9c2a8ee",
"text": "Today's vehicles are becoming cyber-physical systems that not only communicate with other vehicles but also gather various information from hundreds of sensors within them. These developments help create smart and connected (e.g., self-driving) vehicles that will introduce significant information to drivers, manufacturers, insurance companies, and maintenance service providers for various applications. One such application that is becoming crucial with the introduction of self-driving cars is forensic analysis of traffic accidents. The utilization of vehicle-related data can be instrumental in post-accident scenarios to discover the faulty party, particularly for self-driving vehicles. With the opportunity of being able to access various information in cars, we propose a permissioned blockchain framework among the various elements involved to manage the collected vehicle-related data. Specifically, we first integrate vehicular public key infrastructure (VPKI) to the proposed blockchain to provide membership establishment and privacy. Next, we design a fragmented ledger that will store detailed data related to vehicles such as maintenance information/ history, car diagnosis reports, and so on. The proposed forensic framework enables trustless, traceable, and privacy-aware post-accident analysis with minimal storage and processing overhead.",
"title": ""
},
{
"docid": "a017ab9f310f9f36f88bf488ac833f05",
"text": "Wireless data communication technology has eliminated wired connections for data transfer to portable devices. Wireless power technology offers the possibility of eliminating the remaining wired connection: the power cord. For ventricular assist devices (VADs), wireless power technology will eliminate the complications and infections caused by the percutaneous wired power connection. Integrating wireless power technology into VADs will enable VAD implants to become a more viable option for heart failure patients (of which there are 80 000 in the United States each year) than heart transplants. Previous transcutaneous energy transfer systems (TETS) have attempted to wirelessly power VADs ; however, TETS-based technologies are limited in range to a few millimeters, do not tolerate angular misalignment, and suffer from poor efficiency. The free-range resonant electrical delivery (FREE-D) wireless power system aims to use magnetically coupled resonators to efficiently transfer power across a distance to a VAD implanted in the human body, and to provide robustness to geometric changes. Multiple resonator configurations are implemented to improve the range and efficiency of wireless power transmission to both a commercially available axial pump and a VentrAssist centrifugal pump [3]. An adaptive frequency tuning method allows for maximum power transfer efficiency for nearly any angular orientation over a range of separation distances. Additionally, laboratory results show the continuous operation of both pumps using the FREE-D system with a wireless power transfer efficiency upwards of 90%.",
"title": ""
},
{
"docid": "88a21d973ec80ee676695c95f6b20545",
"text": "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.",
"title": ""
},
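The map described in the record above stores per-voxel occupancy probabilities and updates them with log-odds arithmetic as new range measurements arrive. The sketch below shows that update on a flat dictionary of voxels; the real system organizes voxels in an octree and adds ray casting and compression, which are omitted here, and the sensor-model and clamping constants are assumed values.

```python
# Hypothetical sketch: log-odds occupancy updates for a voxel map.
import math
from collections import defaultdict

L_HIT, L_MISS = math.log(0.7 / 0.3), math.log(0.4 / 0.6)   # assumed sensor model
L_MIN, L_MAX = -2.0, 3.5                                     # clamping bounds

log_odds = defaultdict(float)       # unknown space starts at log-odds 0 (p = 0.5)

def integrate(voxel, hit):
    update = L_HIT if hit else L_MISS
    log_odds[voxel] = min(max(log_odds[voxel] + update, L_MIN), L_MAX)

def occupancy(voxel):
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds[voxel]))

integrate((1, 2, 3), hit=True)      # endpoint of a range beam: evidence of occupancy
integrate((1, 2, 2), hit=False)     # a voxel traversed by the beam: evidence of free space
print(occupancy((1, 2, 3)), occupancy((1, 2, 2)), occupancy((0, 0, 0)))
```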
{
"docid": "3cb2bfb076e9c21526ec82c43188def5",
"text": "Voice is projected to be the next input interface for portable devices. The increased use of audio interfaces can be mainly attributed to the success of speech and speaker recognition technologies. With these advances comes the risk of criminal threats where attackers are reportedly trying to access sensitive information using diverse voice spoofing techniques. Among them, replay attacks pose a real challenge to voice biometrics. This paper addresses the problem by proposing a deep learning architecture in tandem with low-level cepstral features. We investigate the use of a deep neural network (DNN) to discriminate between the different channel conditions available in the ASVSpoof 2017 dataset, namely recording, playback and session conditions. The high-level feature vectors derived from this network are used to discriminate between genuine and spoofed audio. Two kinds of low-level features are utilized: state-ofthe-art constant-Q cepstral coefficients (CQCC), and our proposed high-frequency cepstral coefficients (HFCC) that derive from the high-frequency spectrum of the audio. The fusion of both features proved to be effective in generalizing well across diverse replay attacks seen in the evaluation of the ASVSpoof 2017 challenge, with an equal error rate of 11.5%, that is 53% better than the baseline Gaussian Mixture Model (GMM) applied on CQCC.",
"title": ""
},
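The detector in the record above feeds cepstral features (CQCC and the proposed high-frequency variant) to a DNN. As a rough illustration of what a cepstral front end does, the sketch below computes plain log-power-spectrum cepstra for one frame with numpy and scipy; it is not the CQCC or HFCC algorithm from the paper, and the frame length and coefficient count are arbitrary.

```python
# Hypothetical sketch: simple frame-level cepstral coefficients (not CQCC/HFCC).
import numpy as np
from scipy.fftpack import dct

def frame_cepstrum(frame, n_coeffs=20, eps=1e-10):
    windowed = frame * np.hamming(len(frame))
    power = np.abs(np.fft.rfft(windowed)) ** 2
    log_power = np.log(power + eps)
    return dct(log_power, type=2, norm='ortho')[:n_coeffs]

frame = np.random.randn(400)             # stand-in for 25 ms of 16 kHz audio
features = frame_cepstrum(frame)
print(features.shape)                     # (20,) coefficients for a downstream classifier
```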
{
"docid": "e3699de3c4450eb2988cb50d5d75c44e",
"text": "Biomarkers of Alzheimer's disease (AD) are increasingly important. All modern AD therapeutic trials employ AD biomarkers in some capacity. In addition, AD biomarkers are an essential component of recently updated diagnostic criteria for AD from the National Institute on Aging--Alzheimer's Association. Biomarkers serve as proxies for specific pathophysiological features of disease. The 5 most well established AD biomarkers include both brain imaging and cerebrospinal fluid (CSF) measures--cerebrospinal fluid Abeta and tau, amyloid positron emission tomography (PET), fluorodeoxyglucose (FDG) positron emission tomography, and structural magnetic resonance imaging (MRI). This article reviews evidence supporting the position that MRI is a biomarker of neurodegenerative atrophy. Topics covered include methods of extracting quantitative and semiquantitative information from structural MRI; imaging-autopsy correlation; and evidence supporting diagnostic and prognostic value of MRI measures. Finally, the place of MRI in a hypothetical model of temporal ordering of AD biomarkers is reviewed.",
"title": ""
},
{
"docid": "4164774428ce68c4c61039eafeae03ea",
"text": "Images of a human iris contain rich texture information useful for identity authentication. A key and still open issue in iris recognition is how best to represent such textural information using a compact set of features (iris features). In this paper, we propose using ordinal measures for iris feature representation with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures. Such a representation may lose some image-specific information, but it achieves a good trade-off between distinctiveness and robustness. We show that ordinal measures are intrinsic features of iris patterns and largely invariant to illumination changes. Moreover, compactness and low computational complexity of ordinal measures enable highly efficient iris recognition. Ordinal measures are a general concept useful for image analysis and many variants can be derived for ordinal feature extraction. In this paper, we develop multilobe differential filters to compute ordinal measures with flexible intralobe and interlobe parameters such as location, scale, orientation, and distance. Experimental results on three public iris image databases demonstrate the effectiveness of the proposed ordinal feature models.",
"title": ""
},
{
"docid": "62c3c062bbea8151543e0491190cf02d",
"text": "In this article, we present a survey of recent advances in passive human behaviour recognition in indoor areas using the channel state information (CSI) of commercial WiFi systems. Movement of human body causes a change in the wireless signal reflections, which results in variations in the CSI. By analyzing the data streams of CSIs for different activities and comparing them against stored models, human behaviour can be recognized. This is done by extracting features from CSI data streams and using machine learning techniques to build models and classifiers. The techniques from the literature that are presented herein have great performances, however, instead of the machine learning techniques employed in these works, we propose to use deep learning techniques such as long-short term memory (LSTM) recurrent neural network (RNN), and show the improved performance. We also discuss about different challenges such as environment change, frame rate selection, and multi-user scenario, and suggest possible directions for future work.",
"title": ""
},
{
"docid": "a150300014315dd7d486c61c391228a9",
"text": "Roles in a movie form a small society and their interrelationship provides clues for movie understanding. Based on this observation, we present a new viewpoint to perform semantic movie analysis. Through checking the co-occurrence of roles in different scenes, we construct a roles' social network to describe their relationships. We introduce the concept of social network analysis to elaborately identify leading roles and the hidden communities. With the results of community identification, we perform storyline detection that facilitates more flexible movie browsing and higher-level movie analysis. The experimental results show that the proposed community identification method is accurate and is robust to errors.",
"title": ""
},
{
"docid": "7c58c2c72f0b3b83fef0e67da98297f8",
"text": "Distortion is a desirable effect for sound coloration in ele ctric guitar amplifiers and effect processors. At high sound level s, particularly at low frequencies, the loudspeakers used in clas sic style cabinets are also a source of distortion. This paper present s a case study of measurements and digital modeling of a typical guit ar loudspeaker as a real-time audio effect. It demonstrates th e complexity of the driver behavior, which cannot be efficiently m odeled in true physical detail. A model with linear transfer fu nctions and static nonlinearity characteristics to approximate th e measured behavior is derived based upon physical arguments. An effici nt method to simulate radiation directivity is also proposed.",
"title": ""
},
{
"docid": "986279f6f47189a6d069c0336fa4ba94",
"text": "Compared to the traditional single-phase-shift control, dual-phase-shift (DPS) control can greatly improve the performance of the isolated bidirectional dual-active-bridge dc-dc converter (IBDC). This letter points out some wrong knowledge about transmission power of IBDC under DPS control in the earlier studies. On this basis, this letter gives the detailed theoretical and experimental analyses of the transmission power of IBDC under DPS control. And the experimental results showed agreement with theoretical analysis.",
"title": ""
},
{
"docid": "6301ec034b04323bf0437cc7b829cfad",
"text": "Selective mutism (SM) is a relatively rare childhood disorder and is underdiagnosed and undertreated. The purpose of the retrospective naturalistic study was to examine the long-term outcome of children with SM who were treated with specifically designed modular cognitive behavioral therapy (MCBT). Parents of 36 children who met diagnostic criteria of SM that received MCBT treatment were invited for a follow-up evaluation. Parents were interviewed using structured scales and completed questionnaires regarding the child, including the Selective Mutism Questionnaire (SMQ). Twenty-four subjects were identified and evaluated. Their mean age ± SD of onset of SM symptoms, beginning of treatment, and age at follow-up were 3.4 ± 1.4, 6.4 ± 3.1, and 9.3 ± 3.4 years, respectively. There was robust improvement from beginning of treatment to follow-up evaluation in SM, social anxiety disorder, and specific phobia symptoms. The recovery rate from SM was 84.2 %. Conclusion: SM-focused MCBT is feasible in children and possibly effective in inducing long-term reduction of SM and comorbid anxiety symptoms. What is Known: • There are limited empirical data on selective mutism (SM) treatment outcome and specifically on cognitive-behavioral therapy, with the majority of studies being uncontrolled case reports of 1 to 2 cases each. • There is also limited data on the long-term outcome of children with SM following treatment. What is New: • Modular cognitive behavioral treatment is a feasible and possibly effective treatment for SM. Intervention at a younger age is more effective comparing to an older age. • Treatment for SM also decreases the rate of psychiatric comorbidities, including separation anxiety disorder and specific phobia.",
"title": ""
},
{
"docid": "b1b57467dff40b52822ff2406405b217",
"text": "Placement of attributes/methods within classes in an object-oriented system is usually guided by conceptual criteria and aided by appropriate metrics. Moving state and behavior between classes can help reduce coupling and increase cohesion, but it is nontrivial to identify where such refactorings should be applied. In this paper, we propose a methodology for the identification of Move Method refactoring opportunities that constitute a way for solving many common feature envy bad smells. An algorithm that employs the notion of distance between system entities (attributes/methods) and classes extracts a list of behavior-preserving refactorings based on the examination of a set of preconditions. In practice, a software system may exhibit such problems in many different places. Therefore, our approach measures the effect of all refactoring suggestions based on a novel entity placement metric that quantifies how well entities have been placed in system classes. The proposed methodology can be regarded as a semi-automatic approach since the designer will eventually decide whether a suggested refactoring should be applied or not based on conceptual or other design quality criteria. The evaluation of the proposed approach has been performed considering qualitative, metric, conceptual, and efficiency aspects of the suggested refactorings in a number of open-source projects.",
"title": ""
}
] |
scidocsrr
|
69ec0e4876b8ce7c5c7c8de81d4f5082
|
A Semantic Space for Music Derived from Social Tags
|
[
{
"docid": "4ba308bd5ff2196b8ca34d170acb8275",
"text": "This paper reviews the state-of-the-art in automatic genre classification of music collections through three main paradigms: expert systems, unsupervised classification, and supervised classification. The paper discusses the importance of music genres with their definitions and hierarchies. It also presents techniques to extract meaningful information from audio data to characterize musical excerpts. The paper also presents the results of new emerging research fields and techniques that investigate the proximity of music genres",
"title": ""
}
] |
[
{
"docid": "0b117f379a32b0ba4383c71a692405c8",
"text": "Today’s educational policies are largely devoted to fostering the development and implementation of computer applications in education. This paper analyses the skills and competences needed for the knowledgebased society and reveals the role and impact of using computer applications to the teaching and learning processes. Also, the aim of this paper is to reveal the outcomes of a study conducted in order to determine the impact of using computer applications in teaching and learning Management and to propose new opportunities for the process improvement. The findings of this study related to the teachers’ and students’ perceptions about using computer applications for teaching and learning could open further researches on computer applications in education and their educational and economic implications.",
"title": ""
},
{
"docid": "bc6f18cf559e120cdb40b3a6f6e708b3",
"text": "The disabling and painful disease osteoarthritis (OA) is the most common form of arthritis. Strong evidence suggests that a subpopulation of OA patients has a form of OA driven by inflammation. Consequently, understanding when inflammation is the driver of disease progression and which OA patients might benefit from anti-inflammatory treatment is a topic of intense research in the OA field. We have reviewed the current literature on OA, with an emphasis on inflammation in OA, biochemical markers of structural damage, and anti-inflammatory treatments for OA. The literature suggests that the OA patient population is diverse, consisting of several subpopulations, including one associated with inflammation. This inflammatory subpopulation may be identified by a combination of novel serological inflammatory biomarkers. Preliminary evidence from small clinical studies suggests that this subpopulation may benefit from anti-inflammatory treatment currently reserved for other inflammatory arthritides.",
"title": ""
},
{
"docid": "6c14243c49a2d119d768685b59f9548b",
"text": "Over the past decade, researchers have shown significant advances in the area of radio frequency identification (RFID) and metamaterials. RFID is being applied to a wide spectrum of industries and metamaterial-based antennas are beginning to perform just as well as existing larger printed antennas. This paper presents two novel metamaterial-based antennas for passive ultra-high frequency (UHF) RFID tags. It is shown that by implementing omega-like elements and split-ring resonators into the design of an antenna for an UHF RFID tag, the overall size of the antenna can be significantly reduced to dimensions of less than 0.15λ0, while preserving the performance of the antenna.",
"title": ""
},
{
"docid": "0618529a20e00174369a05077294de5b",
"text": "In this paper we present a case study of the steps leading up to the extraction of the spam bot payload found within a backdoor rootkit known as Backdoor.Rustock.B or Spam-Mailbot.c. Following the extraction of the spam module we focus our analysis on the steps necessary to decrypt the communications between the command and control server and infected hosts. Part of the discussion involves a method to extract the encryption key from within the malware binary and use that to decrypt the communications. The result is a better understanding of an advanced botnet communications scheme.",
"title": ""
},
{
"docid": "9b37cc1d96d9a24e500c572fa2cb339a",
"text": "Site-based or topic-specific search engines work with mixed success because of the general difficulty of the information retrieval task, and the lack of good link information to allow authorities to be identified. We are advocating an open source approach to the problem due to its scope and need for software components. We have adopted a topic-based search engine because it represents the next generation of capability. This paper outlines our scalable system for site-based or topic-specific search, and demonstrates the developing system on a small 250,000 document collection of EU and UN web pages.",
"title": ""
},
{
"docid": "3948cda9132e1dc4f2a99cd6d3da1bd0",
"text": "Health care is growing increasingly complex, and most clinical research focuses on new approaches to diagnosis and treatment. In contrast, relatively little effort has been targeted at the perfection of operational systems, which are partly responsible for the well-documented problems with medical safety. If medicine is to achieve major gains in quality, it must be transformed, and information technology will play a key part, especially with respect to safety.",
"title": ""
},
{
"docid": "2b44b0af4481acd273fbf99585bea383",
"text": "Behavioral targeting (BT), which aims to sell advertisers those behaviorally related user segments to deliver their advertisements, is facing a bottleneck in serving the rapid growth of long tail advertisers. Due to the small business nature of the tail advertisers, they generally expect to accurately reach a small group of audience, which is hard to be satisfied by classical BT solutions with large size user segments. In this paper, we propose a novel probabilistic generative model named Rank Latent Dirichlet Allocation (RANKLDA) to rank audience according to their ads click probabilities for the long tail advertisers to deliver their ads. Based on the basic assumption that users who clicked the same group of ads will have a higher probability of sharing similar latent search topical interests, RANKLDA combines topic discovery from users' search behaviors and learning to rank users from their ads click behaviors together. In computation, the topic learning could be enhanced by the supervised information of the rank learning and simultaneously, the rank learning could be better optimized by considering the discovered topics as features. This co-optimization scheme enhances each other iteratively. Experiments over the real click-through log of display ads in a public ad network show that the proposed RANKLDA model can effectively rank the audience for the tail advertisers.",
"title": ""
},
{
"docid": "756acd9371f7f0c30b10b55742d93730",
"text": "Pseudo-Relevance Feedback (PRF) is an important general technique for improving retrieval effectiveness without requiring any user effort. Several state-of-the-art PRF models are based on the language modeling approach where a query language model is learned based on feedback documents. In all these models, feedback documents are represented with unigram language models smoothed with a collection language model. While collection language model-based smoothing has proven both effective and necessary in using language models for retrieval, we use axiomatic analysis to show that this smoothing scheme inherently causes the feedback model to favor frequent terms and thus violates the IDF constraint needed to ensure selection of discriminative feedback terms. To address this problem, we propose replacing collection language model-based smoothing in the feedback stage with additive smoothing, which is analytically shown to select more discriminative terms. Empirical evaluation further confirms that additive smoothing indeed significantly outperforms collection-based smoothing methods in multiple language model-based PRF models.",
"title": ""
},
{
"docid": "f09bb6b62cde22b0b3607bf5e804e6e0",
"text": "The olive tree, Olea europaea, is native to the Mediterranean basin and parts of Asia Minor. The fruit and compression-extracted oil have a wide range of therapeutic and culinary applications. Olive oil also constitutes a major component of the \"Mediterranean diet.\" The chief active components of olive oil include oleic acid, phenolic constituents, and squalene. The main phenolics include hydroxytyrosol, tyrosol, and oleuropein, which occur in highest levels in virgin olive oil and have demonstrated antioxidant activity. Antioxidants are believed to be responsible for a number of olive oil's biological activities. Oleic acid, a monounsaturated fatty acid, has shown activity in cancer prevention, while squalene has also been identified as having anticancer effects. Olive oil consumption has benefit for colon and breast cancer prevention. The oil has been widely studied for its effects on coronary heart disease (CHD), specifically for its ability to reduce blood pressure and low-density lipoprotein (LDL) cholesterol. Antimicrobial activity of hydroxytyrosol, tyrosol, and oleuropein has been demonstrated against several strains of bacteria implicated in intestinal and respiratory infections. Although the majority of research has been conducted on the oil, consumption of whole olives might also confer health benefits.",
"title": ""
},
{
"docid": "053218d2f92ec623daa403a55aba8c74",
"text": "Yoga is an age-old traditional Indian psycho-philosophical-cultural method of leading one's life, that alleviates stress, induces relaxation and provides multiple health benefits to the person following its system. It is a method of controlling the mind through the union of an individual's dormant energy with the universal energy. Commonly practiced yoga methods are 'Pranayama' (controlled deep breathing), 'Asanas' (physical postures) and 'Dhyana' (meditation) admixed in varying proportions with differing philosophic ideas. A review of yoga in relation to epilepsy encompasses not only seizure control but also many factors dealing with overall quality-of-life issues (QOL). This paper reviews articles related to yoga and epilepsy, seizures, EEG, autonomic changes, neuro-psychology, limbic system, arousal, sleep, brain plasticity, motor performance, brain imaging studies, and rehabilitation. There is a dearth of randomized, blinded, controlled studies related to yoga and seizure control. A multi-centre, cross-cultural, preferably blinded (difficult for yoga), well-randomized controlled trial, especially using a single yogic technique in a homogeneous population such as Juvenile myoclonic epilepsy is justified to find out how yoga affects seizure control and QOL of the person with epilepsy.",
"title": ""
},
{
"docid": "15518edc9bde13f55df3192262c3a9bf",
"text": "Under the framework of the argumentation scheme theory (Walton, 1996), we developed annotation protocols for an argumentative writing task to support identification and classification of the arguments being made in essays. Each annotation protocol defined argumentation schemes (i.e., reasoning patterns) in a given writing prompt and listed questions to help evaluate an argument based on these schemes, to make the argument structure in a text explicit and classifiable. We report findings based on an annotation of 600 essays. Most annotation categories were applied reliably by human annotators, and some categories significantly contributed to essay score. An NLP system to identify sentences containing scheme-relevant critical questions was developed based on the human annotations.",
"title": ""
},
{
"docid": "c6645086397ba0825f5f283ba5441cbf",
"text": "Anomalies have broad patterns corresponding to their causes. In industry, anomalies are typically observed as equipment failures. Anomaly detection aims to detect such failures as anomalies. Although this is usually a binary classification task, the potential existence of unseen (unknown) failures makes this task difficult. Conventional supervised approaches are suitable for detecting seen anomalies but not for unseen anomalies. Although, unsupervised neural networks for anomaly detection now detect unseen anomalies well, they cannot utilize anomalous data for detecting seen anomalies even if some data have been made available. Thus, providing an anomaly detector that finds both seen and unseen anomalies well is still a tough problem. In this paper, we introduce a novel probabilistic representation of anomalies to solve this problem. The proposed model defines the normal and anomaly distributions using the analogy between a set and the complementary set. We applied these distributions to an unsupervised variational autoencoder (VAE)-based method and turned it into a supervised VAE-based method. We tested the proposed method with well-known data and real industrial data to show that the proposed method detects seen anomalies better than the conventional unsupervised method without degrading the detection performance for unseen anomalies.",
"title": ""
},
{
"docid": "4d95cf6e1d801721fa7f588b25388528",
"text": "Compression bandaging is the most common therapy used to treat venous ulceration. The bandages must be applied so that they generate a specific pressure profile in order for the treatment to be effective. No method currently exists to monitor the pressure applied by the bandage over a number of days outside of a laboratory setting. A portable device was developed that is capable of monitoring sub-bandage pressure as the user goes about their daily routine. The device consists of four Tekscan FlexiForce A401-series force sensors connected to an excitation circuit and PIC microcontroller circuit. It is capable of measuring pressures in the range of 0 - 96 mmHg. These sensors were chosen because they are cheap, thin, flexible and durable. Both circuits are housed in a protective case that attaches to the users leg. Preliminary results correspond with the pressure values stated in the literature and the device is capable of generating accurate sub-bandage pressure data.",
"title": ""
},
{
"docid": "9a1665cff530d93c84598e7df947099f",
"text": "The algorithmic Markov condition states that the most likely causal direction between two random variables X and Y can be identified as the direction with the lowest Kolmogorov complexity. This notion is very powerful as it can detect any causal dependency that can be explained by a physical process. However, due to the halting problem, it is also not computable. In this paper we propose an computable instantiation that provably maintains the key aspects of the ideal. We propose to approximate Kolmogorov complexity via the Minimum Description Length (MDL) principle, using a score that is mini-max optimal with regard to the model class under consideration. This means that even in an adversarial setting, the score degrades gracefully, and we are still maximally able to detect dependencies between the marginal and the conditional distribution. As a proof of concept, we propose CISC, a linear-time algorithm for causal inference by stochastic complexity, for pairs of univariate discrete variables. Experiments show that CISC is highly accurate on synthetic, benchmark, as well as real-world data, outperforming the state of the art by a margin, and scales extremely well with regard to sample and domain sizes.",
"title": ""
},
{
"docid": "a1d061eb47e1404d2160c5e830229dc1",
"text": "Recommendation techniques are very important in the fields of E-commerce and other web-based services. One of the main difficulties is dynamically providing high-quality recommendation on sparse data. In this paper, a novel dynamic personalized recommendation algorithm is proposed, in which information contained in both ratings and profile contents are utilized by exploring latent relations between ratings, a set of dynamic features are designed to describe user preferences in multiple phases, and finally, a recommendation is made by adaptively weighting the features. Experimental results on public data sets show that the proposed algorithm has satisfying performance.",
"title": ""
},
{
"docid": "bb626694bb0293522b991b9573981186",
"text": "Glaucoma is a disease in which the optic nerve is chronically damaged by the elevation of the intra-ocular pressure, resulting in visual field defect. Therefore, it is important to monitor and treat suspected patients before they are confirmed with glaucoma. In this paper, we propose a 2-stage ranking-CNN that classifies fundus images as normal, suspicious, and glaucoma. Furthermore, we propose a method of using the class activation map as a mask filter and combining it with the original fundus image as an intermediate input. Our results have improved the average accuracy by about 10% over the existing 3-class CNN and ranking-CNN, and especially improved the sensitivity of suspicious class by more than 20% over 3-class CNN. In addition, the extracted ROI was also found to overlap with the diagnostic criteria of the physician. The method we propose is expected to be efficiently applied to any medical data where there is a suspicious condition between normal and disease.",
"title": ""
},
{
"docid": "70e14966cf68874c82e4576543bb0c36",
"text": "Blockchain technology is a generic term for data organization structures, cryptography algorithms, distributed consensus mechanisms, and peer-to-peer communications that are used to implement distributed ledgers. Its core value lies in the establishment of mutual trust between non-coordinated participants. However, the current blockchain applications are inclined towards software and network, missing strong connection with the real world. The Internet of Things technology connects various smart devices and sensors to Internet to facilitate recognition and management of information. This paper presents a blockchain-based Internet of Things solution, where RFID chips with built-in asymmetric encryption algorithm and scanners uploading information directly to blockchain. Compared to traditional Internet of Things designs, the proposed solution combines the advantages of both decentralized blockchain, and the Waltonchain implementation demonstrates such advantages with both data security and application flexibility.",
"title": ""
},
{
"docid": "e5aa3c20ccd4b473142093e225fd314e",
"text": "BACKGROUND\nLong-term engagement in exercise and physical activity mitigates the progression of disability and increases quality of life in people with Parkinson disease (PD). Despite this, the vast majority of individuals with PD are sedentary. There is a critical need for a feasible, safe, acceptable, and effective method to assist those with PD to engage in active lifestyles. Peer coaching through mobile health (mHealth) may be a viable approach.\n\n\nOBJECTIVE\nThe purpose of this study was to develop a PD-specific peer coach training program and a remote peer-mentored walking program using mHealth technology with the goal of increasing physical activity in persons with PD. We set out to examine the feasibility, safety, and acceptability of the programs along with preliminary evidence of individual-level changes in walking activity, self-efficacy, and disability in the peer mentees.\n\n\nMETHODS\nA peer coach training program and a remote peer-mentored walking program using mHealth was developed and tested in 10 individuals with PD. We matched physically active persons with PD (peer coaches) with sedentary persons with PD (peer mentees), resulting in 5 dyads. Using both Web-based and in-person delivery methods, we trained the peer coaches in basic knowledge of PD, exercise, active listening, and motivational interviewing. Peer coaches and mentees wore FitBit Zip activity trackers and participated in daily walking over 8 weeks. Peer dyads interacted daily via the FitBit friends mobile app and weekly via telephone calls. Feasibility was determined by examining recruitment, participation, and retention rates. Safety was assessed by monitoring adverse events during the study period. Acceptability was assessed via satisfaction surveys. Individual-level changes in physical activity were examined relative to clinically important differences.\n\n\nRESULTS\nFour out of the 5 peer pairs used the FitBit activity tracker and friends function without difficulty. A total of 4 of the 5 pairs completed the 8 weekly phone conversations. There were no adverse events over the course of the study. All peer coaches were \"satisfied\" or \"very satisfied\" with the training program, and all participants were \"satisfied\" or \"very satisfied\" with the peer-mentored walking program. All participants would recommend this program to others with PD. Increases in average steps per day exceeding the clinically important difference occurred in 4 out of the 5 mentees.\n\n\nCONCLUSIONS\nRemote peer coaching using mHealth is feasible, safe, and acceptable for persons with PD. Peer coaching using mHealth technology may be a viable method to increase physical activity in individuals with PD. Larger controlled trials are necessary to examine the effectiveness of this approach.",
"title": ""
},
{
"docid": "517d6d154c53297192d64d19e23e1a09",
"text": "As computational work becomes more and more integral to many aspects of scientific research, computational reproducibility has become an issue of increasing importance to computer systems researchers and domain scientists alike. Though computational reproducibility seems more straight forward than replicating physical experiments, the complex and rapidly changing nature of computer environments makes being able to reproduce and extend such work a serious challenge. In this paper, I explore common reasons that code developed for one research project cannot be successfully executed or extended by subsequent researchers. I review current approaches to these issues, including virtual machines and workflow systems, and their limitations. I then examine how the popular emerging technology Docker combines several areas from systems research - such as operating system virtualization, cross-platform portability, modular re-usable elements, versioning, and a 'DevOps' philosophy, to address these challenges. I illustrate this with several examples of Docker use with a focus on the R statistical environment.",
"title": ""
},
{
"docid": "e7865d56e092376493090efc48a7e238",
"text": "Machine learning techniques are applied to the task of context awareness, or inferring aspects of the user's state given a stream of inputs from sensors worn by the person. We focus on the task of indoor navigation and show that, by integrating information from accelerometers, magnetometers and temperature and light sensors, we can collect enough information to infer the user's location. However, our navigation algorithm performs very poorly, with almost a 50% error rate, if we use only the raw sensor signals. Instead, we introduce a \"data cooking\" module that computes appropriate high-level features from the raw sensor data. By introducing these high-level features, we are able to reduce the error rate to 2% in our example environment.",
"title": ""
}
] |
scidocsrr
|
1a4109d23dffd67c388f61dfd7df6a46
|
Learning to rank relational objects and its application to web search
|
[
{
"docid": "04ba17b4fc6b506ee236ba501d6cb0cf",
"text": "We propose a family of learning algorithms based on a new form f regularization that allows us to exploit the geometry of the marginal distribution. We foc us on a semi-supervised framework that incorporates labeled and unlabeled data in a general-p u pose learner. Some transductive graph learning algorithms and standard methods including Suppor t Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize pr op rties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theor e ical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we ob tain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semiupervised algorithms are able to use unlabeled data effectively. Finally we have a brief discuss ion of unsupervised and fully supervised learning within our general framework.",
"title": ""
}
] |
[
{
"docid": "92d3c81c7be8ed591019edf2949015a4",
"text": "With the increasing popularity of Bitcoin, a digital decentralized currency and payment system, the number of malicious third parties attempting to steal bitcoins has grown substantially. Attackers have stolen bitcoins worth millions of dollars from victims by using malware to gain access to the private keys stored on the victims’ computers or smart phones. In order to protect the Bitcoin private keys, we propose the use of a hardware token for the authorization of transactions. We created a proof-of-concept Bitcoin hardware token: BlueWallet. The device communicates using Bluetooth Low Energy and is able to securely sign Bitcoin transactions. The device can also be used as an electronic wallet in combination with a point of sale and serves as an alternative to cash and credit cards.",
"title": ""
},
{
"docid": "bb0b9b679444291bceecd68153f6f480",
"text": "Path planning is one of the most significant and challenging subjects in robot control field. In this paper, a path planning method based on an improved shuffled frog leaping algorithm is proposed. In the proposed approach, a novel updating mechanism based on the median strategy is used to avoid local optimal solution problem in the general shuffled frog leaping algorithm. Furthermore, the fitness function is modified to make the path generated by the shuffled frog leaping algorithm smoother. In each iteration, the globally best frog is obtained and its position is used to lead the movement of the robot. Finally, some simulation experiments are carried out. The experimental results show the feasibility and effectiveness of the proposed algorithm in path planning for mobile robots.",
"title": ""
},
{
"docid": "d4cf614c352b3bbef18d7f219a3da2d1",
"text": "In recent years there has been growing interest on the occurrence and the fate of pharmaceuticals in the aquatic environment. Nevertheless, few data are available covering the fate of the pharmaceuticals in the water/sediment compartment. In this study, the environmental fate of 10 selected pharmaceuticals and pharmaceutical metabolites was investigated in water/sediment systems including both the analysis of water and sediment. The experiments covered the application of four 14C-labeled pharmaceuticals (diazepam, ibuprofen, iopromide, and paracetamol) for which radio-TLC analysis was used as well as six nonlabeled compounds (carbamazepine, clofibric acid, 10,11-dihydro-10,11-dihydroxycarbamazepine, 2-hydroxyibuprofen, ivermectin, and oxazepam), which were analyzed via LC-tandem MS. Ibuprofen, 2-hydroxyibuprofen, and paracetamol displayed a low persistence with DT50 values in the water/sediment system < or =20 d. The sediment played a key role in the elimination of paracetamol due to the rapid and extensive formation of bound residues. A moderate persistence was found for ivermectin and oxazepam with DT50 values of 15 and 54 d, respectively. Lopromide, for which no corresponding DT50 values could be calculated, also exhibited a moderate persistence and was transformed into at least four transformation products. For diazepam, carbamazepine, 10,11-dihydro-10,11-dihydroxycarbamazepine, and clofibric acid, system DT90 values of >365 d were found, which exhibit their high persistence in the water/sediment system. An elevated level of sorption onto the sediment was observed for ivermectin, diazepam, oxazepam, and carbamazepine. Respective Koc values calculated from the experimental data ranged from 1172 L x kg(-1) for ivermectin down to 83 L x kg(-1) for carbamazepine.",
"title": ""
},
{
"docid": "0a432546553ffbb06690495d5c858e19",
"text": "Since the first reported death in 1977, scores of seemingly healthy Hmong refugees have died mysteriously and without warning from what has come to be known as Sudden Unexpected Nocturnal Death Syndrome (SUNDS). To date medical research has provided no adequate explanation for these sudden deaths. This study is an investigation into the changing impact of traditional beliefs as they manifest during the stress of traumatic relocation. In Stockton, California, 118 Hmong men and women were interviewed regarding their awareness of and personal experience with a traditional nocturnal spirit encounter. An analysis of this data reveals that the supranormal attack acts as a trigger for Hmong SUNDS.",
"title": ""
},
{
"docid": "49a6de5759f4e760f68939e9292928d8",
"text": "An ongoing controversy exists in the prototyping community about how closely in form and function a user-interface prototype should represent the final product. This dispute is referred to as the \" Low-versus High-Fidelity Prototyping Debate.'' In this article, we discuss arguments for and against low-and high-fidelity prototypes , guidelines for the use of rapid user-interface proto-typing, and the implications for user-interface designers.",
"title": ""
},
{
"docid": "f2e94643b8896614c3538e7b694b2253",
"text": "Training and adaption of employees are time and money consuming. Employees’ turnover can be predicted by their organizational and personal historical data in order to reduce probable loss of organizations. Prediction methods are highly related to human resource management to obtain patterns by historical data. This article implements knowledge discovery steps on real data of a manufacturing plant. We consider many characteristics of employees such as age, technical skills and work experience. Different data mining methods are compared based on their accuracy, calculation time and user friendliness. Furthermore the importance of data features is measured by Pearson ChiSquare test. In order to reach the desired user friendliness, a graphical user interface is designed specifically for the case study to handle knowledge discovery life cycle.",
"title": ""
},
{
"docid": "ba94bfaa5dc669877deedfaee057c93d",
"text": "Bayesian networks have become a widely used method in the modelling of uncertain knowledge. Owing to the difficulty domain experts have in specifying them, techniques that learn Bayesian networks from data have become indispensable. Recently, however, there have been many important new developments in this field. This work takes a broad look at the literature on learning Bayesian networks—in particular their structure—from data. Specific topics are not focused on in detail, but it is hoped that all the major fields in the area are covered. This article is not intended to be a tutorial—for this, there are many books on the topic, which will be presented. However, an effort has been made to locate all the relevant publications, so that this paper can be used as a ready reference to find the works on particular sub-topics.",
"title": ""
},
{
"docid": "6aaabe17947bc455d940047745ed7962",
"text": "In this paper, we want to study how natural and engineered systems could perform complex optimizations with limited computational and communication capabilities. We adopt a continuous-time dynamical system view rooted in early work on optimization and more recently in network protocol design, and merge it with the dynamic view of distributed averaging systems. We obtain a general approach, based on the control system viewpoint, that allows to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems. The control system viewpoint provides many insights and new directions of research. We apply the framework to a distributed optimal location problem and demonstrate the natural tracking and adaptation capabilities of the system to changing constraints.",
"title": ""
},
{
"docid": "c6a17677f0020c9f530a3d4236665b64",
"text": "In medicine, visualizing chromosomes is important for medical diagnostics, drug development, and biomedical research. Unfortunately, chromosomes often overlap and it is necessary to identify and distinguish between the overlapping chromosomes. A segmentation solution that is fast and automated will enable scaling of cost effective medicine and biomedical research. We apply neural network-based image segmentation to the problem of distinguishing between partially overlapping DNA chromosomes. A convolutional neural network is customized for this problem. The results achieved intersection over union (IOU) scores of 94.7% for the overlapping region and 88-94% on the non-overlapping chromosome regions.",
"title": ""
},
{
"docid": "868c0627cc309c8029fa0edc7f9d24b3",
"text": "Aspect-based opinion mining is widely applied to review data to aggregate or summarize opinions of a product, and the current state-of-the-art is achieved with Latent Dirichlet Allocation (LDA)-based model. Although social media data like tweets are laden with opinions, their \"dirty\" nature (as natural language) has discouraged researchers from applying LDA-based opinion model for product review mining. Tweets are often informal, unstructured and lacking labeled data such as categories and ratings, making it challenging for product opinion mining. In this paper, we propose an LDA-based opinion model named Twitter Opinion Topic Model (TOTM) for opinion mining and sentiment analysis. TOTM leverages hashtags, mentions, emoticons and strong sentiment words that are present in tweets in its discovery process. It improves opinion prediction by modeling the target-opinion interaction directly, thus discovering target specific opinion words, neglected in existing approaches. Moreover, we propose a new formulation of incorporating sentiment prior information into a topic model, by utilizing an existing public sentiment lexicon. This is novel in that it learns and updates with the data. We conduct experiments on 9 million tweets on electronic products, and demonstrate the improved performance of TOTM in both quantitative evaluations and qualitative analysis. We show that aspect-based opinion analysis on massive volume of tweets provides useful opinions on products.",
"title": ""
},
{
"docid": "fceb43462f77cf858ef9747c1c5f0728",
"text": "MapReduce has become a dominant parallel computing paradigm for big data, i.e., colossal datasets at the scale of tera-bytes or higher. Ideally, a MapReduce system should achieve a high degree of load balancing among the participating machines, and minimize the space usage, CPU and I/O time, and network transfer at each machine. Although these principles have guided the development of MapReduce algorithms, limited emphasis has been placed on enforcing serious constraints on the aforementioned metrics simultaneously. This paper presents the notion of minimal algorithm, that is, an algorithm that guarantees the best parallelization in multiple aspects at the same time, up to a small constant factor. We show the existence of elegant minimal algorithms for a set of fundamental database problems, and demonstrate their excellent performance with extensive experiments.",
"title": ""
},
{
"docid": "381509845636d016eb716540980cb291",
"text": "Germinal centers (GCs) are the site of antibody diversification and affinity maturation and as such are vitally important for humoral immunity. The study of GC biology has undergone a renaissance in the past 10 years, with a succession of findings that have transformed our understanding of the cellular dynamics of affinity maturation. In this review, we discuss recent developments in the field, with special emphasis on how GC cellular and clonal dynamics shape antibody affinity and diversity during the immune response.",
"title": ""
},
{
"docid": "a0fb601da8e6b79d4a876730cfee4271",
"text": "Social media platforms provide an inexpensive communication medium that allows anyone to publish content and anyone interested in the content can obtain it. However, this same potential of social media provide space for discourses that are harmful to certain groups of people. Examples of these discourses include bullying, offensive content, and hate speech. Out of these discourses hate speech is rapidly recognized as a serious problem by authorities of many countries. In this paper, we provide the first of a kind systematic large-scale measurement and analysis study of explicit expressions of hate speech in online social media. We aim to understand the abundance of hate speech in online social media, the most common hate expressions, the effect of anonymity on hate speech, the sensitivity of hate speech and the most hated groups across regions. In order to achieve our objectives, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both of these systems. Our results identify hate speech forms and unveil a set of important patterns, providing not only a broader understanding of online hate speech, but also offering directions for detection and prevention approaches.",
"title": ""
},
{
"docid": "494b375064fbbe012b382d0ad2db2900",
"text": "You are smart to question how different medications interact when used concurrently. Champix, called Chantix in the United States and globally by its generic name varenicline [2], is a prescription medication that can help individuals quit smoking by partially stimulating nicotine receptors in cells throughout the body. Nicorette gum, a type of nicotine replacement therapy (NRT), is also a tool to help smokers quit by providing individuals with the nicotine they crave by delivering the substance in controlled amounts through the lining of the mouth. NRT is available in many other forms including lozenges, patches, inhalers, and nasal sprays. The short answer is that there is disagreement among researchers about whether or not there are negative consequences to chewing nicotine gum while taking varenicline. While some studies suggest no harmful side effects to using them together, others have found that adverse effects from using both at the same time. So, what does the current evidence say?",
"title": ""
},
{
"docid": "a8478fa2a7088c270f1b3370bb06d862",
"text": "Sodium-ion batteries (SIBs) are prospective alternative to lithium-ion batteries for large-scale energy-storage applications, owing to the abundant resources of sodium. Metal sulfides are deemed to be promising anode materials for SIBs due to their low-cost and eco-friendliness. Herein, for the first time, series of copper sulfides (Cu2S, Cu7S4, and Cu7KS4) are controllably synthesized via a facile electrochemical route in KCl-NaCl-Na2S molten salts. The as-prepared Cu2S with micron-sized flakes structure is first investigated as anode of SIBs, which delivers a capacity of 430 mAh g-1 with a high initial Coulombic efficiency of 84.9% at a current density of 100 mA g-1. Moreover, the Cu2S anode demonstrates superior capability (337 mAh g-1 at 20 A g-1, corresponding to 50 C) and ultralong cycle performance (88.2% of capacity retention after 5000 cycles at 5 A g-1, corresponding to 0.0024% of fade rate per cycle). Meanwhile, the pseudocapacitance contribution and robust porous structure in situ formed during cycling endow the Cu2S anodes with outstanding rate capability and enhanced cyclic performance, which are revealed by kinetics analysis and ex situ characterization.",
"title": ""
},
{
"docid": "922e4d742d4fc800ac7e212dda92c7a9",
"text": "Maintaining the stability of tracks on multiple targets in video over extended time periods remains a challenging problem. A few methods which have recently shown encouraging results in this direction rely on learning context models or the availability of training data. However, this may not be feasible in many application scenarios. Moreover, tracking methods should be able to work across different scenarios (e.g. multiple resolutions of the video) making such context models hard to obtain. In this paper, we consider the problem of long-term tracking in video in application domains where context information is not available a priori, nor can it be learned online. We build our solution on the hypothesis that most existing trackers can obtain reasonable short-term tracks (tracklets). By analyzing the statistical properties of these tracklets, we develop associations between them so as to come up with longer tracks. This is achieved through a stochastic graph evolution step that considers the statistical properties of individual tracklets, as well as the statistics of the targets along each proposed long-term track. On multiple real-life video sequences spanning low and high resolution data, we show the ability to accurately track over extended time periods (results are shown on many minutes of continuous video).",
"title": ""
},
{
"docid": "732d6bd47a4ab7b77d1c192315a1577c",
"text": "In this paper, we address the problem of classifying image sets, each of which contains images belonging to the same class but covering large variations in, for instance, viewpoint and illumination. We innovatively formulate the problem as the computation of Manifold-Manifold Distance (MMD), i.e., calculating the distance between nonlinear manifolds each representing one image set. To compute MMD, we also propose a novel manifold learning approach, which expresses a manifold by a collection of local linear models, each depicted by a subspace. MMD is then converted to integrating the distances between pair of subspaces respectively from one of the involved manifolds. The proposed MMD method is evaluated on the task of Face Recognition based on Image Set (FRIS). In FRIS, each known subject is enrolled with a set of facial images and modeled as a gallery manifold, while a testing subject is modeled as a probe manifold, which is then matched against all the gallery manifolds by MMD. Identification is achieved by seeking the minimum MMD. Experimental results on two public face databases, Honda/UCSD and CMU MoBo, demonstrate that the proposed MMD method outperforms the competing methods.",
"title": ""
},
{
"docid": "080a7cd58682a156bcddcaad2031fe14",
"text": "In this paper, we present new models and algorithms for object-level video advertising. A framework that aims to embed content-relevant ads within a video stream is investigated in this context. First, a comprehensive optimization model is designed to minimize intrusiveness to viewers when ads are inserted in a video. For human clothing advertising, we design a deep convolutional neural network using face features to recognize human genders in a video stream. Human parts alignment is then implemented to extract human part features that are used for clothing retrieval. Second, we develop a heuristic algorithm to solve the proposed optimization problem. For comparison, we also employ the genetic algorithm to find solutions approaching the global optimum. Our novel framework is examined in various types of videos. Experimental results demonstrate the effectiveness of the proposed method for object-level video advertising.",
"title": ""
},
{
"docid": "c3f1a534afe9f5c48aac88812a51ab09",
"text": "We propose a novel method MultiModal Pseudo Relevance Feedback (MMPRF) for event search in video, which requires no search examples from the user. Pseudo Relevance Feedback has shown great potential in retrieval tasks, but previous works are limited to unimodal tasks with only a single ranked list. To tackle the event search task which is inherently multimodal, our proposed MMPRF takes advantage of multiple modalities and multiple ranked lists to enhance event search performance in a principled way. The approach is unique in that it leverages not only semantic features, but also non-semantic low-level features for event search in the absence of training data. Evaluated on the TRECVID MEDTest dataset, the approach improves the baseline by up to 158% in terms of the mean average precision. It also significantly contributes to CMU Team's final submission in TRECVID-13 Multimedia Event Detection.",
"title": ""
},
{
"docid": "038064c2998a5da8664be1ba493a0326",
"text": "The bandit problem is revisited and considered under the PAC model. Our main contribution in this part is to show that given n arms, it suffices to pull the arms O( n 2 log 1 δ ) times to find an -optimal arm with probability of at least 1 − δ. This is in contrast to the naive bound of O( n 2 log n δ ). We derive another algorithm whose complexity depends on the specific setting of the rewards, rather than the worst case setting. We also provide a matching lower bound. We show how given an algorithm for the PAC model Multi-Armed Bandit problem, one can derive a batch learning algorithm for Markov Decision Processes. This is done essentially by simulating Value Iteration, and in each iteration invoking the multi-armed bandit algorithm. Using our PAC algorithm for the multi-armed bandit problem we improve the dependence on the number of actions.",
"title": ""
}
] |
scidocsrr
|
d65de0dda2d77e708ff603d1527f02a4
|
Large-scale Isolated Gesture Recognition using pyramidal 3D convolutional networks
|
[
{
"docid": "92da117d31574246744173b339b0d055",
"text": "We present a method for gesture detection and localization based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at two temporal scales. Key to our technique is a training strategy which exploits i) careful initialization of individual modalities; and ii) gradual fusion of modalities from strongest to weakest cross-modality structure. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams.",
"title": ""
},
{
"docid": "44a1c6ebc90e57398ee92a137a5a54f8",
"text": "Most of human actions consist of complex temporal compositions of more simple actions. Action recognition tasks usually relies on complex handcrafted structures as features to represent the human action model. Convolutional Neural Nets (CNN) have shown to be a powerful tool that eliminate the need for designing handcrafted features. Usually, the output of the last layer in CNN (a layer before the classification layer -known as fc7) is used as a generic feature for images. In this paper, we show that fc7 features, per se, can not get a good performance for the task of action recognition, when the network is trained only on images. We present a feature structure on top of fc7 features, which can capture the temporal variation in a video. To represent the temporal components, which is needed to capture motion information, we introduced a hierarchical structure. The hierarchical model enables to capture sub-actions from a complex action. At the higher levels of the hierarchy, it represents a coarse capture of action sequence and lower levels represent fine action elements. Furthermore, we introduce a method for extracting key-frames using binary coding of each frame in a video, which helps to improve the performance of our hierarchical model. We experimented our method on several action datasets and show that our method achieves superior results compared to other stateof-the-arts methods.",
"title": ""
},
{
"docid": "595a31e82d857cedecd098bf4c910e99",
"text": "Human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the involved humans and objects. Inspired by the success of convolutional neural networks (CNN) for image classification, recent attempts have been made to learn 3D CNNs for recognizing human actions in videos. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. This has triggered us to investigate in this paper a new deep architecture which can handle 3D signals more effectively. Specifically, we propose factorized spatio-temporal convolutional networks (FstCN) that factorize the original 3D convolution kernel learning as a sequential process of learning 2D spatial kernels in the lower layers (called spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (called temporal convolutional layers). We introduce a novel transformation and permutation operator to make factorization in FstCN possible. Moreover, to address the issue of sequence alignment, we propose an effective training and inference strategy based on sampling multiple video clips from a given action video sequence. We have tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51). Without using auxiliary training videos to boost the performance, FstCN outperforms existing CNN based methods and achieves comparable performance with a recent method that benefits from using auxiliary training videos.",
"title": ""
}
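The factorized convolution idea summarized in the entry above can be illustrated with a small sketch. This is not the authors' FstCN architecture (which also includes a transformation and permutation operator and a multi-clip sampling strategy); it only shows, under assumed layer sizes, how one 3D kernel can be replaced by a 2D spatial convolution followed by a 1D temporal convolution, written here in PyTorch.

```python
# Hedged sketch of a factorized spatio-temporal block in the spirit of FstCN.
# Layer sizes, kernel sizes and the toy input are illustrative assumptions.
import torch
import torch.nn as nn

class FactorizedSpatioTemporalBlock(nn.Module):
    def __init__(self, in_ch, mid_ch, out_ch, t_kernel=3):
        super().__init__()
        # 2D spatial convolution applied to every frame (kernel 1 x k x k)
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # 1D temporal convolution along the time axis only (kernel t x 1 x 1)
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(t_kernel, 1, 1),
                                  padding=(t_kernel // 2, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                # x: (batch, channels, time, height, width)
        x = self.relu(self.spatial(x))   # learn appearance filters first
        x = self.relu(self.temporal(x))  # then learn motion filters
        return x

# toy usage on an 8-frame 112x112 clip
clip = torch.randn(2, 3, 8, 112, 112)
out = FactorizedSpatioTemporalBlock(3, 16, 32)(clip)
print(out.shape)
```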
] |
[
{
"docid": "3be97e1d93bc9313acce69c14c7f88b9",
"text": "Product positioning in stores is of importance for the manufacturers. With this aim, the placement on shelves is done according to some documents named planograms. To check whether the placement is in compliance with the planogram, manufacturers need tools to gather information easily. In this work, using existing image processing techniques, we aim for the automation of planogram compliance control. We propose a novel technique for inventory management with the assumption that we can extract meaningful information from images using planogram context. The controller will be able to capture a photograph of the shelves and upload to the server to be processed by our system. The system is composed of three modules: 1) shelf detection, 2) product detection and 3) brand recognition. Basically, we make use of Hough Transform, Cascade Object Detection algorithm, and Support Vector Machines for the modules respectively. We test each module separately on a dataset we collected with cigarette products.",
"title": ""
},
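A rough sketch of the three-module pipeline outlined in the entry above, using OpenCV and scikit-learn. The file names, the pre-trained cascade model, the HOG descriptor choice and all thresholds are assumptions for illustration, not the paper's actual configuration.

```python
# Hedged sketch: shelf detection via Hough transform, product detection via a
# cascade detector, brand recognition via an SVM. Files and models are assumed.
import cv2
import numpy as np
from sklearn.svm import SVC

img = cv2.imread("shelf_photo.jpg")                     # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1) shelf detection: long near-horizontal lines found with the Hough transform
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=img.shape[1] // 2, maxLineGap=20)

# 2) product detection: a cascade detector trained offline on product images
cascade = cv2.CascadeClassifier("product_cascade.xml")  # assumed pre-trained model
boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

# 3) brand recognition: an SVM over fixed-size patch descriptors (HOG here)
hog = cv2.HOGDescriptor()
svm = SVC(kernel="rbf")   # assume svm.fit(...) was run on labelled patches beforehand
for (x, y, w, h) in boxes:
    patch = cv2.resize(gray[y:y + h, x:x + w], (64, 128))
    feat = hog.compute(patch).reshape(1, -1)
    # brand = svm.predict(feat)  # enable once the SVM has been trained
```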
{
"docid": "c4912e6187e5e64ec70dd4423f85474a",
"text": "Communication technologies are becoming increasingly diverse in form and functionality, making it important to identify which aspects of these technologies actually improve geographically distributed communication. Our study examines two potentially important aspects of communication technologies which appear in robot-mediated communication - physical embodiment and control of this embodiment. We studied the impact of physical embodiment and control upon interpersonal trust in a controlled laboratory experiment using three different videoconferencing settings: (1) a handheld tablet controlled by a local user, (2) an embodied system controlled by a local user, and (3) an embodied system controlled by a remote user (n = 29 dyads). We found that physical embodiment and control by the local user increased the amount of trust built between partners. These results suggest that both physical embodiment and control of the system influence interpersonal trust in mediated communication and have implications for future system designs.",
"title": ""
},
{
"docid": "5d447d516e8f2db2e9d9943972b4b0d1",
"text": "Autonomous robot manipulation often involves both estimating the pose of the object to be manipulated and selecting a viable grasp point. Methods using RGB-D data have shown great success in solving these problems. However, there are situations where cost constraints or the working environment may limit the use of RGB-D sensors. When limited to monocular camera data only, both the problem of object pose estimation and of grasp point selection are very challenging. In the past, research has focused on solving these problems separately. In this work, we introduce a novel method called SilhoNet that bridges the gap between these two tasks. We use a Convolutional Neural Network (CNN) pipeline that takes in region of interest (ROI) proposals to simultaneously predict an intermediate silhouette representation for objects with an associated occlusion mask. The 3D pose is then regressed from the predicted silhouettes. Grasp points from a precomputed database are filtered by back-projecting them onto the occlusion mask to find which points are visible in the scene. We show that our method achieves better overall performance than the state-of-the art PoseCNN network for 3D pose estimation on the YCB-video dataset.",
"title": ""
},
{
"docid": "d139fdffdc8fbfc5cc7d0490e6aa518c",
"text": "We propose a task independent neural networks model, based on a Siamese-twin architecture. Our model specifically benefits from two forms of attention scheme, which we use to extract high level feature representation of the underlying texts, both at word level (intra-attention) as well as sentence level (inter-attention). The inter attention scheme uses one of the text to create a contextual interlock with the other text, thus paying attention to mutually important parts. We evaluate our system on three tasks, i.e. Textual Entailment, Paraphrase Detection and Answer-Sentence selection. We set a near state-of-the-art result on the textual entailment task with the SNLI corpus while obtaining strong performance across the other tasks that we evaluate our model on.",
"title": ""
},
{
"docid": "d54e33049b3f5170ec8bd09d8f17c05c",
"text": "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higherlevel representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P (x) is structurally related to some task of interest, say predicting P (y|x). This paper focusses on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.",
"title": ""
},
{
"docid": "3e0a52bc1fdf84279dee74898fcd93bf",
"text": "A variety of abnormal imaging findings of the petrous apex are encountered in children. Many petrous apex lesions are identified incidentally while images of the brain or head and neck are being obtained for indications unrelated to the temporal bone. Differential considerations of petrous apex lesions in children include “leave me alone” lesions, infectious or inflammatory lesions, fibro-osseous lesions, neoplasms and neoplasm-like lesions, as well as a few rare miscellaneous conditions. Some lesions are similar to those encountered in adults, and some are unique to children. Langerhans cell histiocytosis (LCH) and primary and metastatic pediatric malignancies such as neuroblastoma, rhabomyosarcoma and Ewing sarcoma are more likely to be encountered in children. Lesions such as petrous apex cholesterol granuloma, cholesteatoma and chondrosarcoma are more common in adults and are rarely a diagnostic consideration in children. We present a comprehensive pictorial review of CT and MRI appearances of pediatric petrous apex lesions.",
"title": ""
},
{
"docid": "4569526ff0e03e01264a6e1e566a88c9",
"text": "Trust management is a fundamental and critical aspect of any serious application in ITS. However, only a few studies have addressed this important problem. In this paper, we present a survey on trust management for ITS. We first describe the properties of trust, trust metrics and potential attacks against trust management schemes. Existing related works are then reviewed based on the way in which trust management is implemented. Along with the review, we also identify some open research questions for future work, and consequently present a novel idea of trust management implementation.",
"title": ""
},
{
"docid": "ff8dec3914e16ae7da8801fe67421760",
"text": "A hypothesized need to form and maintain strong, stable interpersonal relationships is evaluated in light of the empirical literature. The need is for frequent, nonaversive interactions within an ongoing relational bond. Consistent with the belongingness hypothesis, people form social attachments readily under most conditions and resist the dissolution of existing bonds. Belongingness appears to have multiple and strong effects on emotional patterns and on cognitive processes. Lack of attachments is linked to a variety of ill effects on health, adjustment, and well-being. Other evidence, such as that concerning satiation, substitution, and behavioral consequences, is likewise consistent with the hypothesized motivation. Several seeming counterexamples turned out not to disconfirm the hypothesis. Existing evidence supports the hypothesis that the need to belong is a powerful, fundamental, and extremely pervasive motivation.",
"title": ""
},
{
"docid": "802af4a1179602c086c4bbf73208ce16",
"text": "BACKGROUND\nWe undertook a feasibility study to evaluate feasibility and utility of short message services (SMSs) to support Iraqi adults with newly diagnosed type 2 diabetes.\n\n\nSUBJECTS AND METHODS\nFifty patients from a teaching hospital clinic in Basrah in the first year after diagnosis were recruited to receive weekly SMSs relating to diabetes self-management over 29 weeks. Numbers of messages received, acceptability, cost, effect on glycated hemoglobin (HbA1c), and diabetes knowledge were documented.\n\n\nRESULTS\nForty-two patients completed the study, receiving an average 22 of 28 messages. Mean knowledge score rose from 8.6 (SD 1.5) at baseline to 9.9 (SD 1.4) 6 months after receipt of SMSs (P=0.002). Baseline and 6-month knowledge scores correlated (r=0.297, P=0.049). Mean baseline HbA1c was 79 mmol/mol (SD 14 mmol/mol) (9.3% [SD 1.3%]) and decreased to 70 mmol/mol (SD 13 mmol/mol) (8.6% [SD 1.2%]) (P=0.001) 6 months after the SMS intervention. Baseline and 6-month values were correlated (r=0.898, P=0.001). Age, gender, and educational level showed no association with changes in HbA1c or knowledge score. Changes in knowledge score were correlated with postintervention HbA1c (r=-0.341, P=0.027). All patients were satisfied with text messages and wished the service to be continued after the study. The cost of SMSs was €0.065 per message.\n\n\nCONCLUSIONS\nThis study demonstrates SMSs are acceptable, cost-effective, and feasible in supporting diabetes care in the challenging, resource-poor environment of modern-day Iraq. This study is the first in Iraq to demonstrate similar benefits of this technology on diabetes education and management to those seen from its use in better-resourced parts of the world. A randomized controlled trial is needed to assess precise benefits on self-care and knowledge.",
"title": ""
},
{
"docid": "1d9b1ce73d8d2421092bb5a70016a142",
"text": "Social networks have the surprising property of being \"searchable\": Ordinary people are capable of directing messages through their network of acquaintances to reach a specific but distant target person in only a few steps. We present a model that offers an explanation of social network searchability in terms of recognizable personal identities: sets of characteristics measured along a number of social dimensions. Our model defines a class of searchable networks and a method for searching them that may be applicable to many network search problems, including the location of data files in peer-to-peer networks, pages on the World Wide Web, and information in distributed databases.",
"title": ""
},
{
"docid": "50471274efcc7fd7547dc6c0a1b3d052",
"text": "Recently, the UAS has been extensively exploited for data collection from remote and dangerous or inaccessible areas. While most of its existing applications have been directed toward surveillance and monitoring tasks, the UAS can play a significant role as a communication network facilitator. For example, the UAS may effectively extend communication capability to disaster-affected people (who have lost cellular and Internet communication infrastructures on the ground) by quickly constructing a communication relay system among a number of UAVs. However, the distance between the centers of trajectories of two neighboring UAVs, referred to as IUD, plays an important role in the communication delay and throughput. For instance, the communication delay increases rapidly while the throughput is degraded when the IUD increases. In order to address this issue, in this article, we propose a simple but effective dynamic trajectory control algorithm for UAVs. Our proposed algorithm considers that UAVs with queue occupancy above a threshold are experiencing congestion resulting in communication delay. To alleviate the congestion at UAVs, our proposal adjusts their center coordinates and also, if needed, the radius of their trajectory. The performance of our proposal is evaluated through computer-based simulations. In addition, we conduct several field experiments in order to verify the effectiveness of UAV-aided networks.",
"title": ""
},
{
"docid": "7706afde38a6445ef0b0858e8e500159",
"text": "Clustering is a problem of great practical importance in numerous applications. The problem of clustering becomes more challenging when the data is categorical, that is, when there is no inherent distance measure between data values. We introduce LIMBO, a scalable hierarchical categorical clustering algorithm that builds on the Information Bottleneck (IB) framework for quantifying the relevant information preserved when clustering. As a hierarchical algorithm, LIMBO has the advantage that it can produce clusterings of different sizes in a single execution. We use the IB framework to define a distance measure for categorical tuples and we also present a novel distance measure for categorical attribute values. We show how the LIMBO algorithm can be used to cluster both tuples and values. LIMBO handles large data sets by producing a memory bounded summary model for the data. We present an experimental evaluation of LIMBO, and we study how clustering quality compares to other categorical clustering algorithms. LIMBO supports a trade-off between efficiency (in terms of space and time) and quality. We quantify this trade-off and demonstrate that LIMBO allows for substantial improvements in efficiency with negligible decrease in quality.",
"title": ""
},
{
"docid": "37b3b7a5af646fbc00708f136641f617",
"text": "Recent advances of 3D acquisition devices have enabled large-scale acquisition of 3D scene data. Such data, if completely and well annotated, can serve as useful ingredients for a wide spectrum of computer vision and graphics works such as data-driven modeling and scene understanding, object detection and recognition. However, annotating a vast amount of 3D scene data remains challenging due to the lack of an effective tool and/or the complexity of 3D scenes (e.g., clutter, varying illumination conditions). This paper aims to build a robust annotation tool that effectively and conveniently enables the segmentation and annotation of massive 3D data. Our tool works by coupling 2D and 3D information via an interactive framework, through which users can provide high-level semantic annotation for objects. We have experimented our tool and found that a typical indoor scene could be well segmented and annotated in less than 30 minutes by using the tool, as opposed to a few hours if done manually. Along with the tool, we created a dataset of over a hundred 3D scenes associated with complete annotations using our tool. Both the tool and dataset are available at http://scenenn.net.",
"title": ""
},
{
"docid": "bbc565d8cc780a1d68bf5384283f59db",
"text": "The physiological requirements of performing exercise above the anaerobic threshold are considerably more demanding than for lower work rates. Lactic acidosis develops at a metabolic rate that is specific to the individual and the task being performed. Although numerous pyruvate-dependent mechanisms can lead to an elevated blood lactate, the increase in lactate during muscular exercise is accompanied by an increase in lactate/pyruvate ratio (i.e., increased NADH/NAD ratio). This is typically caused by an inadequate O2 supply to the mitochondria. Thus, the anaerobic threshold can be considered to be an important assessment of the ability of the cardiovascular system to supply O2 at a rate adequate to prevent muscle anaerobiosis during exercise testing. In this paper, we demonstrate, with statistical justification, that the pattern of arterial lactate and lactate/pyruvate ratio increase during exercise evidences threshold dynamics rather than the continuous exponential increase proposed by some investigators. The pattern of change in arterial bicarbonate (HCO3-) and pulmonary gas exchange supports this threshold concept. To estimate the anaerobic threshold by gas exchange methods, we measure CO2 output (VCO2) as a continuous function of O2 uptake (VO2) (V-slope analysis) as work rate is increased. The break-point in this plot reflects the obligate buffering of increasing lactic acid production by HCO3-. The anaerobic threshold measured by the V-slope analysis appears to be a sensitive index of the development of metabolic acidosis even in subjects in whom other gas exchange indexes are insensitive, owing to irregular breathing, reduced chemoreceptor sensitivity, impaired respiratory mechanics, or all of these occurrences.",
"title": ""
},
{
"docid": "5e7e74966751bba22ca66b02c4c91642",
"text": "To deal with the defects of BP neural networks used in balance control of inverted pendulum, such as longer train time and converging in partial minimum, this article reaLizes the control of double inverted pendulum with improved BP algorithm of artificial neural networks(ANN), builds up a training model of test simulation and the BP network is 6-10-1 structure. Tansig function is used in hidden layer and PureLin function is used in output layer, LM is used in training algorithm. The training data is acquried by three-loop PID algorithm. The model is learned and trained with Matlab calculating software, and the simuLink simulation experiment results prove that improved BP algorithm for inverted pendulum control has higher precision, better astringency and lower calculation. This algorithm has wide appLication on nonLinear control and robust control field in particular.",
"title": ""
},
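A minimal sketch of a 6-10-1 network with a tanh (tansig-like) hidden layer and a linear (purelin-like) output, approximating the setup described in the entry above in Python rather than MATLAB. The LBFGS solver stands in for Levenberg-Marquardt, and the training data below are synthetic placeholders for the PID-generated trajectories.

```python
# Hedged sketch of a 6-10-1 regression network; data and solver are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))        # 6 state inputs (angles, rates, cart state)
y = X @ rng.normal(size=6)            # placeholder for the PID-generated control output

net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=2000)
net.fit(X, y)                         # MLPRegressor's output layer is linear (purelin-like)
print(net.score(X, y))
```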
{
"docid": "e28f2a2d5f3a0729943dca52da5d45b6",
"text": "Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. We propose a novel, accurate tightly-coupled visual-inertial odometry pipeline for such cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high dynamic range scenes. The method tracks a set of features (extracted on the image plane) through time. To achieve that, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames. We then combine these feature tracks in a keyframebased, visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera’s 6-DOF pose, velocity, and IMU biases. The proposed method is evaluated quantitatively on the public Event Camera Dataset [19] and significantly outperforms the state-of-the-art [28], while being computationally much more efficient: our pipeline can run much faster than real-time on a laptop and even on a smartphone processor. Furthermore, we demonstrate qualitatively the accuracy and robustness of our pipeline on a large-scale dataset, and an extremely high-speed dataset recorded by spinning an event camera on a leash at 850 deg/s.",
"title": ""
},
{
"docid": "b4dd76179734fb43e74c9c1daef15bbf",
"text": "Breast cancer represents one of the diseases that make a high number of deaths every year. It is the most common type of all cancers and the main cause of women’s deaths worldwide. Classification and data mining methods are an effective way to classify data. Especially in medical field, where those methods are widely used in diagnosis and analysis to make decisions. In this paper, a performance comparison between different machine learning algorithms: Support Vector Machine (SVM), Decision Tree (C4.5), Naive Bayes (NB) and k Nearest Neighbors (k-NN) on the Wisconsin Breast Cancer (original) datasets is conducted. The main objective is to assess the correctness in classifying data with respect to efficiency and effectiveness of each algorithm in terms of accuracy, precision, sensitivity and specificity. Experimental results show that SVM gives the highest accuracy (97.13%) with lowest error rate. All experiments are executed within a simulation environment and conducted in WEKA data mining tool. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the Conference Program Chairs.",
"title": ""
},
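The comparison described in the entry above can be reproduced in outline with scikit-learn. Note that this sketch loads sklearn's diagnostic Wisconsin dataset rather than the original dataset used in the paper, and DecisionTreeClassifier only approximates C4.5, so the accuracies will differ from the reported 97.13%.

```python
# Hedged sketch of a 10-fold cross-validated comparison of the four classifiers.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)   # diagnostic variant, not the original set
models = {
    "SVM": SVC(kernel="rbf", gamma="scale"),
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
    "Naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name}: {acc:.4f}")
```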
{
"docid": "cabf420400bc46a00ee062c5d6a850a7",
"text": "In the last years, automotive systems evolved to be more and more software-intensive systems. As a result, consider able attention has been paid to establish an efficient softwa re development process of such systems, where reliability is an important criterion. Hence, model-driven development (MDD), software engineering and requirements engineering (amongst others) found their way into the systems engineering domain. However, one important aspect regarding the reliability of such systems, has been largely neglected on a holistic level: the IT security. In this paper, we introduce a potential approach for integrating IT security in the requirements engineering process of automotive software development using function net modeling.",
"title": ""
},
{
"docid": "27433f4fdfacc748bc711f17640dc3be",
"text": "Redox modulation has been recognized to be an important mechanism of regulation for the N-methyl-d-aspartate (NMDA) receptor. Sulfhydryl reducing agents enhance, whereas oxidizing agents decrease, NMDA-evoked currents. Multiple cysteine residues located in different NMDA receptor subunits have been identified as molecular determinants underlying redox modulation. The NMDA receptor is also regulated by nitric oxide (NO)-related species directly, not involving cyclic GMP, but the molecular mechanism of this action has heretofore not been entirely clear. The confusion arose at least partly due to the fact that various redox forms of NO (NO+, NO•, NO−, each having an additional electron compared with the previous) have distinct mechanisms of action. Recently, a critical cysteine residue (Cys 399) on the NR2A subunit has been shown to react under physiological conditions with NO by S-nitrosylation (transfer of the NO+ to cysteine thiol) or by reaction with NO− (nitroxyl anion) to underlie this form of modulation.",
"title": ""
},
{
"docid": "e0f56e20d509234a45b0a91f8d6b91cb",
"text": "This paper describes recent research findings on resource sharing between trees and crops in the semiarid tropics and attempts to reconcile this information with current knowledge of the interactions between savannah trees and understorey vegetation by examining agroforestry systems from the perspective of succession. In general, productivity of natural vegetation under savannah trees increases as rainfall decreases, while the opposite occurs in agroforestry. One explanation is that in the savannah, the beneficial effects of microclimatic improvements (e.g. lower temperatures and evaporation losses) are greater in more xeric environments. Mature savannah trees have a high proportion of woody above-ground structure compared to foliage, so that the amount of water 'saved' (largely by reduction in soil evaporation) is greater than water 'lost' through transpiration by trees. By contrast, in agroforestry practices such as alley cropping where tree density is high, any beneficial effects of the trees on microclimate are negated by reductions in soil moisture due to increasing interception losses and tree transpiration. While investment in woody structure can improve the water economy beneath agroforestry trees, it inevitably reduces the growth rate of the trees and thus increases the time required for improved understorey productivity. Therefore, agroforesters prefer trees with more direct and immediate benefits to farmers. The greatest opportunity for simultaneous agroforestry practices is therefore to fill niches within the landscape where resources are currently under-utilised by crops. In this way, agroforestry can mimic the large scale patch dynamics and successional progression of a natural ecosystem.",
"title": ""
}
] |
scidocsrr
|
208dfe5455ac851725ebd0d58f986730
|
Identification of multiple intelligences with the Multiple Intelligence Profiling Questionnaire III
|
[
{
"docid": "ec788f48207b0a001810e1eabf6b2312",
"text": "Maximum likelihood factor analysis provides an effective method for estimation of factor matrices and a useful test statistic in the likelihood ratio for rejection of overly simple factor models. A reliability coefficient is proposed to indicate quality of representation of interrelations among attributes in a battery by a maximum likelihood factor analysis. Usually, for a large sample of individuals or objects, the likelihood ratio statistic could indicate that an otherwise acceptable factor model does not exactly represent the interrelations among the attributes for a population. The reliability coefficient could indicate a very close representation in this case and be a better indication as to whether to accept or reject the factor solution.",
"title": ""
}
] |
[
{
"docid": "befd91b3e6874b91249d101f8373db01",
"text": "Today's biomedical research has become heavily dependent on access to the biological knowledge encoded in expert curated biological databases. As the volume of biological literature grows rapidly, it becomes increasingly difficult for biocurators to keep up with the literature because manual curation is an expensive and time-consuming endeavour. Past research has suggested that computer-assisted curation can improve efficiency, but few text-mining systems have been formally evaluated in this regard. Through participation in the interactive text-mining track of the BioCreative 2012 workshop, we developed PubTator, a PubMed-like system that assists with two specific human curation tasks: document triage and bioconcept annotation. On the basis of evaluation results from two external user groups, we find that the accuracy of PubTator-assisted curation is comparable with that of manual curation and that PubTator can significantly increase human curatorial speed. These encouraging findings warrant further investigation with a larger number of publications to be annotated. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/",
"title": ""
},
{
"docid": "4539b6dda3a8b85dfb1ba0f5da6e7c8c",
"text": "3D Printing promises to produce complex biomedical devices according to computer design using patient-specific anatomical data. Since its initial use as pre-surgical visualization models and tooling molds, 3D Printing has slowly evolved to create one-of-a-kind devices, implants, scaffolds for tissue engineering, diagnostic platforms, and drug delivery systems. Fueled by the recent explosion in public interest and access to affordable printers, there is renewed interest to combine stem cells with custom 3D scaffolds for personalized regenerative medicine. Before 3D Printing can be used routinely for the regeneration of complex tissues (e.g. bone, cartilage, muscles, vessels, nerves in the craniomaxillofacial complex), and complex organs with intricate 3D microarchitecture (e.g. liver, lymphoid organs), several technological limitations must be addressed. In this review, the major materials and technology advances within the last five years for each of the common 3D Printing technologies (Three Dimensional Printing, Fused Deposition Modeling, Selective Laser Sintering, Stereolithography, and 3D Plotting/Direct-Write/Bioprinting) are described. Examples are highlighted to illustrate progress of each technology in tissue engineering, and key limitations are identified to motivate future research and advance this fascinating field of advanced manufacturing.",
"title": ""
},
{
"docid": "32c3c226186b5d10b50ce4bac8f20630",
"text": "A sub-1 V CMOS low-dropout (LDO) voltage regulator with 103 nA low-quiescent current is presented in this paper. The proposed LDO uses a digital error amplifier that can make the quiescent current lower than other LDOs with the traditional error amplifier. Besides, the LDO can be stable even without the output capacitor. With a 0.9 V power supply, the output voltage is designed as 0.5 V. The maximum output current of the LDO is 50 mA at an output of 0.5 V. The prototype of the LDO is fabricated with TSMC 0.35 mum CMOS processes. The active area without pads is only 240 mum times 400 mum.",
"title": ""
},
{
"docid": "dbc468368059e6b676c8ece22b040328",
"text": "In medical diagnoses and treatments, e.g., endoscopy, dosage transition monitoring, it is often desirable to wirelessly track an object that moves through the human GI tract. In this paper, we propose a magnetic localization and orientation system for such applications. This system uses a small magnet enclosed in the object to serve as excitation source, so it does not require the connection wire and power supply for the excitation signal. When the magnet moves, it establishes a static magnetic field around, whose intensity is related to the magnet's position and orientation. With the magnetic sensors, the magnetic intensities in some predetermined spatial positions can be detected, and the magnet's position and orientation parameters can be computed based on an appropriate algorithm. Here, we propose a real-time tracking system developed by a cubic magnetic sensor array made of Honeywell 3-axis magnetic sensors, HMC1043. Using some efficient software modules and calibration methods, the system can achieve satisfactory tracking accuracy if the cubic sensor array has enough number of 3-axis magnetic sensors. The experimental results show that the average localization error is 1.8 mm.",
"title": ""
},
{
"docid": "73325d08f17701942e6b63adfd1a521f",
"text": "BACKGROUND\nThe presence of the G-spot (an assumed erotic sensitive area in the anterior wall of the vagina) remains controversial. We explored the histomorphological basis of the G-spot.\n\n\nMETHODS\nBiopsies were drawn from a 12 o'clock direction in the distal- and proximal-third areas of the anterior vagina of 32 Chinese subjects. The total number of protein gene product 9.5-immunoreactive nerves and smooth muscle actin-immunoreactive blood vessels in each specimen was quantified using the avidin-biotin-peroxidase assay.\n\n\nRESULTS\nVaginal innervation was observed in the lamina propria and muscle layer of the anterior vaginal wall. The distal-third of the anterior vaginal wall had significantly richer small-nerve-fiber innervation in the lamina propria than the proximal-third (p = 0.000) and in the vaginal muscle layer (p = 0.006). There were abundant microvessels in the lamina propria and muscle layer, but no small vessels in the lamina propria and few in the muscle layer. Significant differences were noted in the number of microvessels when comparing the distal- with proximal-third parts in the lamina propria (p = 0.046) and muscle layer (p = 0.002).\n\n\nCONCLUSIONS\nSignificantly increased density of nerves and microvessels in the distal-third of the anterior vaginal wall could be the histomorphological basis of the G-spot. Distal anterior vaginal repair could disrupt the normal anatomy, neurovascular supply and function of the G-spot, and cause sexual dysfunction.",
"title": ""
},
{
"docid": "da4699d1e358bebc822b059b568916a8",
"text": "An InterCloud is an interconnected global “cloud of clouds” that enables each cloud to tap into resources of other clouds. This is the earliest work to devise an agent-based InterCloud economic model for analyzing consumer-to-cloud and cloud-to-cloud interactions. While economic encounters between consumers and cloud providers are modeled as a many-to-many negotiation, economic encounters among clouds are modeled as a coalition game. To bolster many-to-many consumer-to-cloud negotiations, this work devises a novel interaction protocol and a novel negotiation strategy that is characterized by both 1) adaptive concession rate (ACR) and 2) minimally sufficient concession (MSC). Mathematical proofs show that agents adopting the ACR-MSC strategy negotiate optimally because they make minimum amounts of concession. By automatically controlling concession rates, empirical results show that the ACR-MSC strategy is efficient because it achieves significantly higher utilities than the fixed-concession-rate time-dependent strategy. To facilitate the formation of InterCloud coalitions, this work devises a novel four-stage cloud-to-cloud interaction protocol and a set of novel strategies for InterCloud agents. Mathematical proofs show that these InterCloud coalition formation strategies 1) converge to a subgame perfect equilibrium and 2) result in every cloud agent in an InterCloud coalition receiving a payoff that is equal to its Shapley value.",
"title": ""
},
{
"docid": "785ee9de92bdcef648b5e43dd32e25f5",
"text": "A voltage reference using a depletion-mode device is designed in a 0.13µm CMOS process and achieves ultra-low power consumption and sub-1V operation without sacrificing temperature and supply voltage insensitivity. Measurements show a temperature coefficient of 19.4ppm/° (3.4 µV/°), line sensitivity of 0.033%/V, power supply rejection ratio of−67dB, and power consumption of 2.2pW. It requires only two devices and functions down to V<inf>dd</inf>=0.5V with an area of 1350µm<sup>2</sup>. A variant for higher Vout is also demonstrated.",
"title": ""
},
{
"docid": "a921c4eba2d9590b9b8f4679349c985b",
"text": "Advances in micro-electro-mechanical (MEMS) techniques enable inertial measurements units (IMUs) to be small, cheap, energy efficient, and widely used in smartphones, robots, and drones. Exploiting inertial data for accurate and reliable navigation and localization has attracted significant research and industrial interest, as IMU measurements are completely ego-centric and generally environment agnostic. Recent studies have shown that the notorious issue of drift can be significantly alleviated by using deep neural networks (DNNs) [1]. However, the lack of sufficient labelled data for training and testing various architectures limits the proliferation of adopting DNNs in IMU-based tasks. In this paper, we propose and release the Oxford Inertial Odometry Dataset (OxIOD), a first-of-its-kind data collection for inertial-odometry research, with all sequences having ground-truth labels. Our dataset contains 158 sequences totalling more than 42 km in total distance, much larger than previous inertial datasets. Another notable feature of this dataset lies in its diversity, which can reflect the complex motions of phone-based IMUs in various everyday usage. The measurements were collected with four different attachments (handheld, in the pocket, in the handbag and on the trolley), four motion modes (halting, walking slowly, walking normally, and running), five different users, four types of off-the-shelf consumer phones, and large-scale localization from office buildings. Deep inertial tracking experiments were conducted to show the effectiveness of our dataset in training deep neural network models and evaluate learning-based and model-based algorithms. The OxIOD Dataset is available at: http://deepio.cs.ox.ac.uk",
"title": ""
},
{
"docid": "d8eab1f244bd5f9e05eb706bb814d299",
"text": "Private participation in road projects is increasing around the world. The most popular franchising mechanism is a concession contract, which allows a private firm to charge tolls to road users during a pre-determined period in order to recover its investments. Concessionaires are usually selected through auctions at which candidates submit bids for tolls, payments to the government, or minimum term to hold the contract. This paper discusses, in the context of road franchising, how this mechanism does not generally yield optimal outcomes and it induces the frequent contract renegotiations observed in road projects. A new franchising mechanism is proposed, based on flexible-term contracts and auctions with bids for total net revenue and maintenance costs. This new mechanism improves outcomes compared to fixed-term concessions, by eliminating traffic risk and promoting the selection of efficient concessionaires.",
"title": ""
},
{
"docid": "30c6829427aaa8d23989afcd666372f7",
"text": "We developed an optimizing compiler for intrusion detection rules popularized by an open-source Snort Network Intrusion Detection System (www.snort.org). While Snort and Snort-like rules are usually thought of as a list of independent patterns to be tested in a sequential order, we demonstrate that common compilation techniques are directly applicable to Snort rule sets and are able to produce high-performance matching engines. SNORTRAN combines several compilation techniques, including cost-optimized decision trees, pattern matching precompilation, and string set clustering. Although all these techniques have been used before in other domain-specific languages, we believe their synthesis in SNORTRAN is original and unique. Introduction Snort [RO99] is a popular open-source Network Intrusion Detection System (NIDS). Snort is controlled by a set of pattern/action rules residing in a configuration file of a specific format. Due to Snort’s popularity, Snort-like rules are accepted by several other NIDS [FSTM, HANK]. In this paper we describe an optimizing compiler for Snort rule sets called SNORTRAN that incorporates ideas of pattern matching compilation based on cost-optimized decision trees [DKP92, KS88] with setwise string search algorithms popularized by recent research in highperformance NIDS detection engines [FV01, CC01, GJMP]. The two main design goals were performance and compatibility with the original Snort rule interpreter. The primary application area for NIDS is monitoring IP traffic inside and outside of firewalls, looking for unusual activities that can be attributed to external attacks or internal misuse. Most NIDS are designed to handle T1/partial T3 traffic, but as the number of the known vulnerabilities grows and more and more weight is given to internal misuse monitoring on high-throughput networks (100Mbps/1Gbps), it gets harder to keep up with the traffic without dropping too many packets to make detection ineffective. Throwing hardware at the problem is not always possible because of growing maintenance and support costs, let alone the fact that the problem of making multi-unit system work in realistic environment is as hard as the original performance problem. Bottlenecks of the detection process were identified by many researchers and practitioners [FV01, ND02, GJMP], and several approaches were proposed [FV01, CC01]. Our benchmarking supported the performance analysis made by M. Fisk and G. Varghese [FV01], adding some interesting findings on worst-case behavior of setwise string search algorithms in practice. Traditionally, NIDS are designed around a packet grabber (system-specific or libcap) getting traffic packets off the wire, combined with preprocessors, packet decoders, and a detection engine looking for a static set of signatures loaded from a rule file at system startup. Snort [SNORT] and",
"title": ""
},
{
"docid": "38f6aaf5844ddb6e4ed0665559b7f813",
"text": "A novel dual-broadband multiple-input-multiple-output (MIMO) antenna system is developed. The MIMO antenna system consists of two dual-broadband antenna elements, each of which comprises two opened loops: an outer loop and an inner loop. The opened outer loop acts as a half-wave dipole and is excited by electromagnetic coupling from the inner loop, leading to a broadband performance for the lower band. The opened inner loop serves as two monopoles. A combination of the two monopoles and the higher modes from the outer loop results in a broadband performance for the upper band. The bandwidths (return loss >;10 dB) achieved for the dual-broadband antenna element are 1.5-2.8 GHz (~ 60%) for the lower band and 4.7-8.5 GHz (~ 58\\%) for the upper band. Two U-shaped slots are introduced to reduce the coupling between the two dual-broadband antenna elements. The isolation achieved is higher than 15 dB in the lower band and 20 dB in the upper band, leading to an envelope correlation coefficient of less than 0.01. The dual-broadband MIMO antenna system has a compact volume of 50×17×0.8 mm3, suitable for GSM/UMTS/LTE and WLAN communication handsets.",
"title": ""
},
{
"docid": "6215c6ca6826001291314405ea936dda",
"text": "This paper describes a text mining tool that performs two tasks, namely document clustering and text summarization. These tasks have, of course, their corresponding counterpart in “conventional” data mining. However, the textual, unstructured nature of documents makes these two text mining tasks considerably more difficult than their data mining counterparts. In our system document clustering is performed by using the Autoclass data mining algorithm. Our text summarization algorithm is based on computing the value of a TF-ISF (term frequency – inverse sentence frequency) measure for each word, which is an adaptation of the conventional TF-IDF (term frequency – inverse document frequency) measure of information retrieval. Sentences with high values of TF-ISF are selected to produce a summary of the source text. The system has been evaluated on real-world documents, and the results are satisfactory.",
"title": ""
},
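A minimal sketch of the TF-ISF scoring idea described in the entry above: each word is scored by its in-sentence frequency weighted by an inverse sentence frequency, each sentence by the average score of its words, and the top-scoring sentences form the summary. The exact weighting, preprocessing and selection threshold used by the tool may differ.

```python
# Hedged sketch of TF-ISF sentence scoring; tokenization and weighting are assumptions.
import math
import re

def summarize(text, n_keep=2):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    tokenized = [re.findall(r"\w+", s.lower()) for s in sentences]
    n = len(sentences)
    sent_freq = {}                        # number of sentences containing each word
    for words in tokenized:
        for w in set(words):
            sent_freq[w] = sent_freq.get(w, 0) + 1
    scores = []
    for words in tokenized:
        tf = {w: words.count(w) / len(words) for w in set(words)}
        tfisf = [tf[w] * math.log(n / sent_freq[w]) for w in set(words)]
        scores.append(sum(tfisf) / max(len(tfisf), 1))
    top = sorted(range(n), key=lambda i: scores[i], reverse=True)[:n_keep]
    return " ".join(sentences[i] for i in sorted(top))

print(summarize("Text mining is useful. It groups documents. Summaries pick key sentences. Cats sleep."))
```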
{
"docid": "466c0d9436e1f1878aaafa2297022321",
"text": "Acetic acid was used topically at concentrations of between 0.5% and 5% to eliminate Pseudomonas aeruginosa from the burn wounds or soft tissue wounds of 16 patients. In-vitro studies indicated the susceptibility of P. aeruginosa to acetic acid; all strains exhibited a minimum inhibitory concentration of 2 per cent. P. aeruginosa was eliminated from the wounds of 14 of the 16 patients within two weeks of treatment. Acetic acid was shown to be an inexpensive and efficient agent for the elimination of P. aeruginosa from burn and soft tissue wounds.",
"title": ""
},
{
"docid": "9869f2a28b11a5f0a83127937408b0ac",
"text": "With the advent of the Semantic Web, the field of domain ontology engineering has gained more and more importance. This innovative field may have a big impact on computer-based education and will certainly contribute to its development. This paper presents a survey on domain ontology engineering and especially domain ontology learning. The paper focuses particularly on automatic methods for ontology learning from texts. It summarizes the state of the art in natural language processing techniques and statistical and machine learning techniques for ontology extraction. It also explains how intelligent tutoring systems may benefit from this engineering and talks about the challenges that face the field.",
"title": ""
},
{
"docid": "1e0eade3cc92eb79160aeac35a3a26d1",
"text": "Global environmental concerns and the escalating demand for energy, coupled with steady progress in renewable energy technologies, are opening up new opportunities for utilization of renewable energy vailable online 12 January 2011",
"title": ""
},
{
"docid": "e9b3ddc114998e25932819e3281e2e0c",
"text": "We study the problem of jointly aligning sentence constituents and predicting their similarities. While extensive sentence similarity data exists, manually generating reference alignments and labeling the similarities of the aligned chunks is comparatively onerous. This prompts the natural question of whether we can exploit easy-to-create sentence level data to train better aligners. In this paper, we present a model that learns to jointly align constituents of two sentences and also predict their similarities. By taking advantage of both sentence and constituent level data, we show that our model achieves state-of-the-art performance at predicting alignments and constituent similarities.",
"title": ""
},
{
"docid": "d308842b1684f4c7c6c499376c6b5d02",
"text": "A picture is worth one thousand words, but what words should be used to describe the sentiment and emotions conveyed in the increasingly popular social multimedia? We demonstrate a novel system which combines sound structures from psychology and the folksonomy extracted from social multimedia to develop a large visual sentiment ontology consisting of 1,200 concepts and associated classifiers called SentiBank. Each concept, defined as an Adjective Noun Pair (ANP), is made of an adjective strongly indicating emotions and a noun corresponding to objects or scenes that have a reasonable prospect of automatic detection. We believe such large-scale visual classifiers offer a powerful mid-level semantic representation enabling high-level sentiment analysis of social multimedia. We demonstrate novel applications made possible by SentiBank including live sentiment prediction of social media and visualization of visual content in a rich intuitive semantic space.",
"title": ""
},
{
"docid": "1d7bbd7aaa65f13dd72ffeecc8499cb6",
"text": "Due to the 60Hz or higher LCD refresh operations, display controller (DC) reads the pixels out from frame buffer at fixed rate. Accessing frame buffer consumes not only memory bandwidth, but power as well. Thus frame buffer compression (FBC) can contribute to alleviating both bandwidth and power consumption. A conceptual frame buffer compression model is proposed, and to the best of our knowledge, an arithmetic expression concerning the compression ratio and the read/update ratio of frame buffer is firstly presented, which reveals the correlation between frame buffer compression and target applications. Moreover, considering the linear access feature of frame buffer, we investigate a frame buffer compression without color information loss, named LFBC (Loss less Frame-Buffer Compression). LFBC defines new frame buffer compression data format, and employs run-length encoding (RLE) to implement the compression. For the applications suitable for frame buffer compression, LFBC reduces 50%90% bandwidth consumption and memory accesses caused by LCD refresh operations.",
"title": ""
},
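A minimal sketch of the run-length encoding primitive that LFBC builds on, applied to one scanline of pixel values. The paper's actual compressed data format (headers, block layout, pixel packing) is not reproduced here.

```python
# Hedged sketch of RLE over a scanline; pixel values and line layout are illustrative.
def rle_encode(pixels):
    runs = []
    run_value, run_len = pixels[0], 1
    for p in pixels[1:]:
        if p == run_value:
            run_len += 1
        else:
            runs.append((run_value, run_len))
            run_value, run_len = p, 1
    runs.append((run_value, run_len))
    return runs

def rle_decode(runs):
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

line = [0x000000] * 120 + [0xFFFFFF] * 8 + [0x000000] * 128   # mostly-uniform scanline
encoded = rle_encode(line)
assert rle_decode(encoded) == line
print(len(line), "pixels ->", len(encoded), "runs")
```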
{
"docid": "1947a704719aa9fe5311eccdea52aecc",
"text": "Based on the observation that the correlation between observed traffic at two measurement points or traffic stations may be time-varying, attributable to the time-varying speed which subsequently causes variations in the time required to travel between the two points, in this paper, we develop a modified Space-Time Autoregressive Integrated Moving Average (STARIMA) model with time-varying lags for short-term traffic flow prediction. Particularly, the temporal lags in the modified STARIMA change with the time-varying speed at different time of the day or equivalently change with the (time-varying) time required to travel between two measurement points. Firstly, a technique is developed to evaluate the temporal lag in the STARIMA model, where the temporal lag is formulated as a function of the spatial lag (spatial distance) and the average speed. Secondly, an unsupervised classification algorithm based on ISODATA algorithm is designed to classify different time periods of the day according to the variation of the speed. The classification helps to determine the appropriate time lag to use in the STARIMA model. Finally, a STARIMA-based model with time-varying lags is developed for short-term traffic prediction. Experimental results using real traffic data show that the developed STARIMA-based model with time-varying lags has superior accuracy compared with its counterpart developed using the traditional cross-correlation function and without employing time-varying lags.",
"title": ""
}
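A minimal sketch of the time-varying temporal lag idea described in the entry above: the lag between two stations, in sampling intervals, is taken as the travel time implied by the spatial distance and the period's average speed, so congested periods use longer lags than free-flow periods. The distances, speeds and sampling interval below are illustrative values only, not values from the paper.

```python
# Hedged sketch: temporal lag as distance / average speed, rounded to sampling steps.
def temporal_lag(distance_km, avg_speed_kmh, sample_minutes=5):
    travel_time_min = 60.0 * distance_km / avg_speed_kmh
    return max(1, round(travel_time_min / sample_minutes))

# same pair of stations, different times of day
print(temporal_lag(5.0, avg_speed_kmh=80))  # free flow  -> small lag
print(temporal_lag(5.0, avg_speed_kmh=20))  # congestion -> larger lag
```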
] |
scidocsrr
|
b89924bf219db0d4aabcd5bae5d85923
|
6D hands: markerless hand-tracking for computer aided design
|
[
{
"docid": "24f141bd7a29bb8922fa010dd63181a6",
"text": "This paper reports on the development of a hand to machine interface device that provides real-time gesture, position and orientation information. The key element is a glove and the device as a whole incorporates a collection of technologies. Analog flex sensors on the glove measure finger bending. Hand position and orientation are measured either by ultrasonics, providing five degrees of freedom, or magnetic flux sensors, which provide six degrees of freedom. Piezoceramic benders provide the wearer of the glove with tactile feedback. These sensors are mounted on the light-weight glove and connected to the driving hardware via a small cable.\nApplications of the glove and its component technologies include its use in conjunction with a host computer which drives a real-time 3-dimensional model of the hand allowing the glove wearer to manipulate computer-generated objects as if they were real, interpretation of finger-spelling, evaluation of hand impairment in addition to providing an interface to a visual programming language.",
"title": ""
}
] |
[
{
"docid": "f7ff248c209049ae6ab4725fdab38c2b",
"text": "This paper investigates Foveons X3 sensor [6] , which vertically stacks three sensor layer that absorb light of different colors, talks about the operating principle of this type of sensors, and the advantages and disadvantages of such design.",
"title": ""
},
{
"docid": "c938e7651f8dc41c9d76c1866bd3a4a7",
"text": "Many biological monitoring projects rely on acoustic detection of birds. Despite increasingly large datasets, this detection is often manual or semi-automatic, requiring manual tuning/postprocessing. We review the state of the art in automatic bird sound detection, and identify a widespread need for tuning-free and species-agnostic approaches. We introduce new datasets and an IEEE research challenge to address this need, to make possible the development of fully automatic algorithms for bird sound detection.",
"title": ""
},
{
"docid": "273bd65511ef2f7ef61e75e6272079b6",
"text": "The capacity of Mobile Health (mHealth) technologies to propel healthcare forward is directly linked to the quality of mobile interventions developed through careful mHealth research. mHealth research entails several unique characteristics, including collaboration with technologists at all phases of a project, reliance on regional telecommunication infrastructure and commercial mobile service providers, and deployment and evaluation of interventions “in the wild”, with participants using mobile tools in uncontrolled environments. In the current paper, we summarize the lessons our multi-institutional/multi-disciplinary team has learned conducting a range of mHealth projects using mobile phones with diverse clinical populations. First, we describe three ongoing projects that we draw from to illustrate throughout the paper. We then provide an example for multidisciplinary teamwork and conceptual mHealth intervention development that we found to be particularly useful. Finally, we discuss mHealth research challenges (i.e. evolving technology, mobile phone selection, user characteristics, the deployment environment, and mHealth system “bugs and glitches”), and provide recommendations for identifying and resolving barriers, or preventing their occurrence altogether.",
"title": ""
},
{
"docid": "59bb9a006844dcf7c5f1769a4b208744",
"text": "3rd Generation Partnership Project (3GPP) has recently completed the specification of the Long Term Evolution (LTE) standard. Majority of the world’s operators and vendors are already committed to LTE deployments and developments, making LTE the market leader in the upcoming evolution to 4G wireless communication systems. Multiple input multiple output (MIMO) technologies introduced in LTE such as spatial multiplexing, transmit diversity, and beamforming are key components for providing higher peak rate at a better system efficiency, which are essential for supporting future broadband data service over wireless links. Further extension of LTE MIMO technologies is being studied under the 3GPP study item “LTE-Advanced” to meet the requirement of IMT-Advanced set by International Telecommunication Union Radiocommunication Sector (ITU-R). In this paper, we introduce various MIMO technologies employed in LTE and provide a brief overview on the MIMO technologies currently discussed in the LTE-Advanced forum.",
"title": ""
},
{
"docid": "6614eeffe9fb332a028b1e80aa24016a",
"text": "Advances in microelectronics, array processing, and wireless networking, have motivated the analysis and design of low-cost integrated sensing, computating, and communicating nodes capable of performing various demanding collaborative space-time processing tasks. In this paper, we consider the problem of coherent acoustic sensor array processing and localization on distributed wireless sensor networks. We first introduce some basic concepts of beamforming and localization for wideband acoustic sources. A review of various known localization algorithms based on time-delay followed by LS estimations as well as maximum likelihood method is given. Issues related to practical implementation of coherent array processing including the need for fine-grain time synchronization are discussed. Then we describe the implementation of a Linux-based wireless networked acoustic sensor array testbed, utilizing commercially available iPAQs with built in microphones, codecs, and microprocessors, plus wireless Ethernet cards, to perform acoustic source localization. Various field-measured results using two localization algorithms show the effectiveness of the proposed testbed. An extensive list of references related to this work is also included. Keywords— Beamforming, Source Localization, Distributed Sensor Network, Wireless Network, Ad Hoc Network, Microphone Array, Time Synchronization.",
"title": ""
},
{
"docid": "1189c3648c2cce0c716ec7c0eca214d7",
"text": "This article considers the application of variational Bayesian methods to joint recursive estimation of the dynamic state and the time-varying measurement noise parameters in linear state space models. The proposed adaptive Kalman filtering method is based on forming a separable variational approximation to the joint posterior distribution of states and noise parameters on each time step separately. The result is a recursive algorithm, where on each step the state is estimated with Kalman filter and the sufficient statistics of the noise variances are estimated with a fixed-point iteration. The performance of the algorithm is demonstrated with simulated data.",
"title": ""
},
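A scalar sketch in the spirit of the method described in the entry above: a Kalman filter for a random-walk state whose unknown measurement-noise variance carries an inverse-gamma factor, re-estimated by a short fixed-point iteration at each step. The constants (process noise, forgetting factor, iteration count) are illustrative choices, not values from the paper.

```python
# Hedged scalar sketch of a variational Bayesian adaptive Kalman filter.
import numpy as np

rng = np.random.default_rng(1)
true_x, Q, true_R = 0.0, 0.01, 0.5   # random-walk state, process and measurement noise
m, P = 0.0, 1.0                      # state mean and variance
alpha, beta = 1.0, 1.0               # inverse-gamma parameters of the noise variance
rho = 0.98                           # forgetting factor for the noise-variance estimate

for _ in range(200):
    true_x += rng.normal(scale=np.sqrt(Q))
    y = true_x + rng.normal(scale=np.sqrt(true_R))

    m_pred, P_pred = m, P + Q                       # Kalman prediction step
    alpha_pred, beta_pred = rho * alpha, rho * beta # diffuse the noise-variance estimate
    alpha = alpha_pred + 0.5                        # alpha update needs no iteration
    beta = beta_pred                                # initial guess, refined below
    for _ in range(3):                              # fixed-point iteration
        R_hat = beta / alpha                        # current measurement-noise estimate
        K = P_pred / (P_pred + R_hat)               # Kalman gain using estimated noise
        m = m_pred + K * (y - m_pred)
        P = (1.0 - K) * P_pred
        beta = beta_pred + 0.5 * ((y - m) ** 2 + P)

print("estimated measurement-noise variance:", beta / alpha)
```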
{
"docid": "81312e4811dfce560ced2e2840953e59",
"text": "A method for automatically assessing the quality of retinal images is presented. It is based on the idea that images of good quality possess some common features that should help define a model of what a good ophthalmic image is. The proposed features are the histogram of the edge magnitude distribution in the image as well as the local histograms of pixel gray-scale values. Histogram matching functions are proposed and experiments show that these features help discriminate between good and bad images.",
"title": ""
},
{
"docid": "da1589675bd8ce0be599ce9d1f2d8975",
"text": "Mobile devices and applications provide significant advantages to their users, in terms of portability, location awareness, and accessibility. A number of studies have examined usability challenges in the mobile context, and proposed definitions of mobile application usability and methods to evaluate it. This paper presents the state of the art of the evaluation and measurement of mobile application usability.",
"title": ""
},
{
"docid": "aea4b65d1c30e80e7f60a52dbecc78f3",
"text": "The aim of this paper is to automate the car and the car parking as well. It discusses a project which presents a miniature model of an automated car parking system that can regulate and manage the number of cars that can be parked in a given space at any given time based on the availability of parking spot. Automated parking is a method of parking and exiting cars using sensing devices. The entering to or leaving from the parking lot is commanded by an Android based application. We have studied some of the existing systems and it shows that most of the existing systems aren't completely automated and require a certain level of human interference or interaction in or with the system. The difference between our system and the other existing systems is that we aim to make our system as less human dependent as possible by automating the cars as well as the entire parking lot, on the other hand most existing systems require human personnel (or the car owner) to park the car themselves. To prove the effectiveness of the system proposed by us we have developed and presented a mathematical model which will be discussed in brief further in the paper.",
"title": ""
},
{
"docid": "c61107e9c5213ddb8c5e3b1b14dca661",
"text": "In advanced driving assistance systems, it is important to be able to detect the region covered by the road in the images. This paper proposes a method for estimating the road region in images captured by a vehicle-mounted monocular camera. Our proposed method first estimates all of relevant parameters for the camera motion and the 3D road plane from correspondence points between successive images. By calculating a homography matrix from the estimated camera motion and the estimated road plane parameters, and then warping the image at the previous frame, the road region can be determined. To achieve robustness in various road scenes, our method selects the threshold for determining the road region adaptively and incorporates the assumption of a simple road region boundary. In our experiments, it has been shown that the proposed method is able to estimate the road region in real road environments.",
"title": ""
},
{
"docid": "d3e51c3f9ece671cf5e8e1f630c83a8c",
"text": "Bayesian (machine) learning has been playing a significant role in machine learning for a long time due to its particular ability to embrace uncertainty, encode prior knowledge, and endow interpretability. On the back of Bayesian learning’s great success, Bayesian nonparametric learning (BNL) has emerged as a force for further advances in this field due to its greater modelling flexibility and representation power. Instead of playing with the fixed-dimensional probabilistic distributions of Bayesian learning, BNL creates a new “game” with infinite-dimensional stochastic processes. BNL has long been recognised as a research subject in statistics, and, to date, several state-of-the-art pilot studies have demonstrated that BNL has a great deal of potential to solve real-world machine-learning tasks. However, despite these promising results, BNL has not created a huge wave in the machine-learning community. Esotericism may account for this. The books and surveys on BNL written by statisticians are overcomplicated and filled with tedious theories and proofs. Each is certainly meaningful but may scare away new researchers, especially those with computer science backgrounds. Hence, the aim of this article is to provide a plain-spoken, yet comprehensive, theoretical survey of BNL in terms that researchers in the machine-learning community can understand. It is hoped this survey will serve as a starting point for understanding and exploiting the benefits of BNL in our current scholarly endeavours. To achieve this goal, we have collated the extant studies in this field and aligned them with the steps of a standard BNL procedure—from selecting the appropriate stochastic processes through manipulation to executing the model inference algorithms. At each step, past efforts have been thoroughly summarised and discussed. In addition, we have reviewed the common methods for implementing BNL in various machine-learning tasks along with its diverse applications in the real world as examples to motivate future studies.",
"title": ""
},
{
"docid": "280c39aea4584e6f722607df68ee28dc",
"text": "Statistical parametric speech synthesis (SPSS) using deep neural networks (DNNs) has shown its potential to produce naturally-sounding synthesized speech. However, there are limitations in the current implementation of DNN-based acoustic modeling for speech synthesis, such as the unimodal nature of its objective function and its lack of ability to predict variances. To address these limitations, this paper investigates the use of a mixture density output layer. It can estimate full probability density functions over real-valued output features conditioned on the corresponding input features. Experimental results in objective and subjective evaluations show that the use of the mixture density output layer improves the prediction accuracy of acoustic features and the naturalness of the synthesized speech.",
"title": ""
},
{
"docid": "4bac5fa3b753c6da269a8c9d6d6ecb5a",
"text": "The use of antimicrobial compounds in food animal production provides demonstrated benefits, including improved animal health, higher production and, in some cases, reduction in foodborne pathogens. However, use of antibiotics for agricultural purposes, particularly for growth enhancement, has come under much scrutiny, as it has been shown to contribute to the increased prevalence of antibiotic-resistant bacteria of human significance. The transfer of antibiotic resistance genes and selection for resistant bacteria can occur through a variety of mechanisms, which may not always be linked to specific antibiotic use. Prevalence data may provide some perspective on occurrence and changes in resistance over time; however, the reasons are diverse and complex. Much consideration has been given this issue on both domestic and international fronts, and various countries have enacted or are considering tighter restrictions or bans on some types of antibiotic use in food animal production. In some cases, banning the use of growth-promoting antibiotics appears to have resulted in decreases in prevalence of some drug resistant bacteria; however, subsequent increases in animal morbidity and mortality, particularly in young animals, have sometimes resulted in higher use of therapeutic antibiotics, which often come from drug families of greater relevance to human medicine. While it is clear that use of antibiotics can over time result in significant pools of resistance genes among bacteria, including human pathogens, the risk posed to humans by resistant organisms from farms and livestock has not been clearly defined. As livestock producers, animal health experts, the medical community, and government agencies consider effective strategies for control, it is critical that science-based information provide the basis for such considerations, and that the risks, benefits, and feasibility of such strategies are fully considered, so that human and animal health can be maintained while at the same time limiting the risks from antibiotic-resistant bacteria.",
"title": ""
},
{
"docid": "8cf02bf19145df237e77273e70babc1d",
"text": "Micro-facial expressions are spontaneous, involuntary movements of the face when a person experiences an emotion but attempts to hide their facial expression, most likely in a high-stakes environment. Recently, research in this field has grown in popularity, however publicly available datasets of micro-expressions have limitations due to the difficulty of naturally inducing spontaneous micro-expressions. Other issues include lighting, low resolution and low participant diversity. We present a newly developed spontaneous micro-facial movement dataset with diverse participants and coded using the Facial Action Coding System. The experimental protocol addresses the limitations of previous datasets, including eliciting emotional responses from stimuli tailored to each participant. Dataset evaluation was completed by running preliminary experiments to classify micro-movements from non-movements. Results were obtained using a selection of spatio-temporal descriptors and machine learning. We further evaluate the dataset on emerging methods of feature difference analysis and propose an Adaptive Baseline Threshold that uses individualised neutral expression to improve the performance of micro-movement detection. In contrast to machine learning approaches, we outperform the state of the art with a recall of 0.91. The outcomes show the dataset can become a new standard for micro-movement data, with future work expanding on data representation and analysis.",
"title": ""
},
{
"docid": "080a097ddc53effd838494f40b7d39c2",
"text": "This paper surveys research on applying neuroevolution (NE) to games. In neuroevolution, artificial neural networks are trained through evolutionary algorithms, taking inspiration from the way biological brains evolved. We analyze the application of NE in games along five different axes, which are the role NE is chosen to play in a game, the different types of neural networks used, the way these networks are evolved, how the fitness is determined and what type of input the network receives. The paper also highlights important open research challenges in the field.",
"title": ""
},
{
"docid": "3a4d51387f8fcb4add9c5662dcc08c41",
"text": "Pulse transformer is always used to be the isolator between gate driver and power MOSFET. There are many topologies about the peripheral circuit. This paper proposes a new topology circuit that uses pulse transformer to transfer driving signal and driving power, energy storage capacitor to supply secondary side power and negative voltage. Without auxiliary power source, it can realize rapidly switch and off state with negative voltage. And a simulation model has been used to verify it. The simulation results prove that the new driver has a better anti-interference, faster switching speed, lower switching loss, and higher reliability than the current drive circuits.",
"title": ""
},
{
"docid": "28389a4db6c8a8167eac71511873f1b4",
"text": "In recent years, maritime safety and efficiency become very important across the world. Automatic Identification System (AIS) tracks vessel movement by onboard transceiver and terrestrial and/or satellite base stations. The data collected by AIS contain broadcast kinematic information and static information. Both of them are useful for maritime anomaly detection and vessel route prediction which are key techniques in maritime intelligence. This paper is devoted to construct a standard AIS database for maritime trajectory learning, prediction and data mining. A path prediction method based on Extreme Learning Machine (ELM) is tested on this AIS database and the testing results show this database can be used as a standardized training resource for different trajectory prediction algorithms and other AIS data based mining applications.",
"title": ""
},
{
"docid": "1dbdd4a6d39fe973b5c6f860ec9873a2",
"text": "Meaningful facial parts can convey key cues for both facial action unit detection and expression prediction. Textured 3D face scan can provide both detailed 3D geometric shape and 2D texture appearance cues of the face which are beneficial for Facial Expression Recognition (FER). However, accurate facial parts extraction as well as their fusion are challenging tasks. In this paper, a novel system for 3D FER is designed based on accurate facial parts extraction and deep feature fusion of facial parts. Experiments are conducted on the BU-3DFE database, demonstrating the effectiveness of combing different facial parts, texture and depth cues and reporting the state-of-the-art results in comparison with all existing methods under the same setting.",
"title": ""
},
{
"docid": "272281eafb06f6c9dd030897e846fd00",
"text": "Cloud computing is emerging as a new paradigm of large-scale distributed computing. It is a framework for enabling convenient, on-demand network access to a shared pool of computing resources. Load balancing is one of the main challenges in cloud computing which is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overwhelmed. It helps in optimal utilization of resources and hence in enhancing the performance of the system. The goal of load balancing is to minimize the resource consumption which will further reduce energy consumption and carbon emission rate that is the dire need of cloud computing. This determines the need of new metrics, energy consumption and carbon emission for energy-efficient load balancing in cloud computing. This paper discusses the existing load balancing techniques in cloud computing and further compares them based on various parameters like performance, scalability, associated overhead etc. that are considered in different techniques. It further discusses these techniques from energy consumption and carbon emission perspective.",
"title": ""
},
{
"docid": "72c4ba6c7ffde3ad8c5aab9932aaa3fc",
"text": "24 25 26 27 28 29 30 31 32 33 34 35 Article history: Received 22 June 2011 Received in revised form 5 September 2012 Accepted 20 September 2012 Available online xxxx",
"title": ""
}
] |
scidocsrr
|
18a1e0ea12fa184e040ad23a909e65c7
|
Distributed TensorFlow with MPI
|
[
{
"docid": "4a684a0a590f326894416d5afc31b63c",
"text": "Collisions at high-energy particle colliders are a traditionally fruitful source of exotic particle discoveries. Finding these rare particles requires solving difficult signal-versus-background classification problems, hence machine-learning approaches are often used. Standard approaches have relied on 'shallow' machine-learning models that have a limited capacity to learn complex nonlinear functions of the inputs, and rely on a painstaking search through manually constructed nonlinear features. Progress on this problem has slowed, as a variety of techniques have shown equivalent performance. Recent advances in the field of deep learning make it possible to learn more complex functions and better discriminate between signal and background classes. Here, using benchmark data sets, we show that deep-learning methods need no manually constructed inputs and yet improve the classification metric by as much as 8% over the best current approaches. This demonstrates that deep-learning approaches can improve the power of collider searches for exotic particles.",
"title": ""
},
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
},
{
"docid": "6104736f53363991d675c2a03ada8c82",
"text": "The term machine learning refers to a set of topics dealing with the creation and evaluation of algorithms that facilitate pattern recognition, classification, and prediction, based on models derived from existing data. Two facets of mechanization should be acknowledged when considering machine learning in broad terms. Firstly, it is intended that the classification and prediction tasks can be accomplished by a suitably programmed computing machine. That is, the product of machine learning is a classifier that can be feasibly used on available hardware. Secondly, it is intended that the creation of the classifier should itself be highly mechanized, and should not involve too much human input. This second facet is inevitably vague, but the basic objective is that the use of automatic algorithm construction methods can minimize the possibility that human biases could affect the selection and performance of the algorithm. Both the creation of the algorithm and its operation to classify objects or predict events are to be based on concrete, observable data. The history of relations between biology and the field of machine learning is long and complex. An early technique [1] for machine learning called the perceptron constituted an attempt to model actual neuronal behavior, and the field of artificial neural network (ANN) design emerged from this attempt. Early work on the analysis of translation initiation sequences [2] employed the perceptron to define criteria for start sites in Escherichia coli. Further artificial neural network architectures such as the adaptive resonance theory (ART) [3] and neocognitron [4] were inspired from the organization of the visual nervous system. In the intervening years, the flexibility of machine learning techniques has grown along with mathematical frameworks for measuring their reliability, and it is natural to hope that machine learning methods will improve the efficiency of discovery and understanding in the mounting volume and complexity of biological data. This tutorial is structured in four main components. Firstly, a brief section reviews definitions and mathematical prerequisites. Secondly, the field of supervised learning is described. Thirdly, methods of unsupervised learning are reviewed. Finally, a section reviews methods and examples as implemented in the open source data analysis and visualization language R (http://www.r-project.org).",
"title": ""
}
] |
[
{
"docid": "0e60cb8f9147f5334c3cfca2880c2241",
"text": "The quest for automatic Programming is the holy grail of artificial intelligence. The dream of having computer programs write other useful computer programs has haunted researchers since the nineteen fifties. In Genetic Progvamming III Darwinian Invention and Problem Solving (GP?) by John R. Koza, Forest H. Bennet 111, David Andre, and Martin A. Keane, the authors claim that the first inscription on this trophy should be the name Genetic Programming (GP). GP is about applying evolutionary algorithms to search the space of computer programs. The authors paraphrase Arthur Samuel of 1959 and argue that with this method it is possible to tell the computer what to do without telling it explicitly how t o do it.",
"title": ""
},
{
"docid": "e6b27bb9f2b74791af5e74c16c7c47da",
"text": "Due to the storage and retrieval efficiency, hashing has been widely deployed to approximate nearest neighbor search for large-scale multimedia retrieval. Supervised hashing, which improves the quality of hash coding by exploiting the semantic similarity on data pairs, has received increasing attention recently. For most existing supervised hashing methods for image retrieval, an image is first represented as a vector of hand-crafted or machine-learned features, followed by another separate quantization step that generates binary codes. However, suboptimal hash coding may be produced, because the quantization error is not statistically minimized and the feature representation is not optimally compatible with the binary coding. In this paper, we propose a novel Deep Hashing Network (DHN) architecture for supervised hashing, in which we jointly learn good image representation tailored to hash coding and formally control the quantization error. The DHN model constitutes four key components: (1) a subnetwork with multiple convolution-pooling layers to capture image representations; (2) a fully-connected hashing layer to generate compact binary hash codes; (3) a pairwise crossentropy loss layer for similarity-preserving learning; and (4) a pairwise quantization loss for controlling hashing quality. Extensive experiments on standard image retrieval datasets show the proposed DHN model yields substantial boosts over latest state-of-the-art hashing methods.",
"title": ""
},
{
"docid": "89e9d32e14da1acd74e23f8cecea5d8e",
"text": "BACKGROUND\nDespite considerable progress in the treatment of post-traumatic stress disorder (PTSD), a large percentage of individuals remain symptomatic following gold-standard therapies. One route to improving care is examining affective disturbances that involve other emotions beyond fear and threat. A growing body of research has implicated shame in PTSD's development and course, although to date no review of this specific literature exists. This scoping review investigated the link between shame and PTSD and sought to identify research gaps.\n\n\nMETHODS\nA systematic database search of PubMed, PsycInfo, Embase, Cochrane, and CINAHL was conducted to find original quantitative research related to shame and PTSD.\n\n\nRESULTS\nForty-seven studies met inclusion criteria. Review found substantial support for an association between shame and PTSD as well as preliminary evidence suggesting its utility as a treatment target. Several design limitations and under-investigated areas were recognized, including the need for a multimodal assessment of shame and more longitudinal and treatment-focused research.\n\n\nCONCLUSION\nThis review provides crucial synthesis of research to date, highlighting the prominence of shame in PTSD, and its likely relevance in successful treatment outcomes. The present review serves as a guide to future work into this critical area of study.",
"title": ""
},
{
"docid": "0de0093ab3720901d4704bfeb7be4093",
"text": "Big Data analytics can revolutionize the healthcare industry. It can improve operational efficiencies, help predict and plan responses to disease epidemics, improve the quality of monitoring of clinical trials, and optimize healthcare spending at all levels from patients to hospital systems to governments. This paper provides an overview of Big Data, applicability of it in healthcare, some of the work in progress and a future outlook on how Big Data analytics can improve overall quality in healthcare systems.",
"title": ""
},
{
"docid": "e16f1b1d4b583f5d198eac8d01d12c48",
"text": "Mathematical models have been widely used in the studies of biological signaling pathways. Among these studies, two systems biology approaches have been applied: top-down and bottom-up systems biology. The former approach focuses on X-omics researches involving the measurement of experimental data in a large scale, for example proteomics, metabolomics, or fluxomics and transcriptomics. In contrast, the bottom-up approach studies the interaction of the network components and employs mathematical models to gain some insights about the mechanisms and dynamics of biological systems. This chapter introduces how to use the bottom-up approach to establish mathematical models for cell signaling studies.",
"title": ""
},
{
"docid": "6f53e2f4827995a2164513961f2776d2",
"text": "—s the popularity of wireless networks increases, so does the need to protect them. Encryption algorithms play a main role in information security systems. On the other side, those algorithms consume a significant amount of computing resources such as CPU time, memory, and battery power. This paper illustrates the key concepts of security, wireless networks, and security over wireless networks. Wireless security is demonstrated by applying the common security standards like (802.11 WEP and 802.11i WPA,WPA2) and provides evaluation of six of the most common encryption algorithms on power consumption for wireless devices namely: AES comparison has been conducted for those encryption algorithms at different settings for each algorithm such as different sizes of data blocks, different data types, battery power consumption, date transmission through wireless network and finally encryption/decryption speed. Experimental results are given to demonstrate the effectiveness of each algorithm.",
"title": ""
},
{
"docid": "86c998f5ffcddb0b74360ff27b8fead4",
"text": "Attention networks in multimodal learning provide an efficient way to utilize given visual information selectively. However, the computational cost to learn attention distributions for every pair of multimodal input channels is prohibitively expensive. To solve this problem, co-attention builds two separate attention distributions for each modality neglecting the interaction between multimodal inputs. In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly. BAN considers bilinear interactions among two groups of input channels, while low-rank bilinear pooling extracts the joint representations for each pair of channels. Furthermore, we propose a variant of multimodal residual networks to exploit eight-attention maps of the BAN efficiently. We quantitatively and qualitatively evaluate our model on visual question answering (VQA 2.0) and Flickr30k Entities datasets, showing that BAN significantly outperforms previous methods and achieves new state-of-the-arts on both datasets.",
"title": ""
},
{
"docid": "d399e142488766759abf607defd848f0",
"text": "The high penetration of cell phones in today's global environment offers a wide range of promising mobile marketing activities, including mobile viral marketing campaigns. However, the success of these campaigns, which remains unexplored, depends on the consumers' willingness to actively forward the advertisements that they receive to acquaintances, e.g., to make mobile referrals. Therefore, it is important to identify and understand the factors that influence consumer referral behavior via mobile devices. The authors analyze a three-stage model of consumer referral behavior via mobile devices in a field study of a firm-created mobile viral marketing campaign. The findings suggest that consumers who place high importance on the purposive value and entertainment value of a message are likely to enter the interest and referral stages. Accounting for consumers' egocentric social networks, we find that tie strength has a negative influence on the reading and decision to refer stages and that degree centrality has no influence on the decision-making process. © 2013 Direct Marketing Educational Foundation, Inc. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a4e1f420dfc3b1b30a58ec3e60288761",
"text": "Despite recent advances in uncovering the quantitative features of stationary human activity patterns, many applications, from pandemic prediction to emergency response, require an understanding of how these patterns change when the population encounters unfamiliar conditions. To explore societal response to external perturbations we identified real-time changes in communication and mobility patterns in the vicinity of eight emergencies, such as bomb attacks and earthquakes, comparing these with eight non-emergencies, like concerts and sporting events. We find that communication spikes accompanying emergencies are both spatially and temporally localized, but information about emergencies spreads globally, resulting in communication avalanches that engage in a significant manner the social network of eyewitnesses. These results offer a quantitative view of behavioral changes in human activity under extreme conditions, with potential long-term impact on emergency detection and response.",
"title": ""
},
{
"docid": "ffa5989436b8783314d60f7fb47c447a",
"text": "A long-term goal of machine learning research is to build an intelligent dialog agent. Most research in natural language understanding has focused on learning from fixed training sets of labeled data, with supervision either at the word level (tagging, parsing tasks) or sentence level (question answering, machine translation). This kind of supervision is not realistic of how humans learn, where language is both learned by, and used for, communication. In this work, we study dialog-based language learning, where supervision is given naturally and implicitly in the response of the dialog partner during the conversation. We study this setup in two domains: the bAbI dataset of [30] and large-scale question answering from [4]. We evaluate a set of baseline learning strategies on these tasks, and show that a novel model incorporating predictive lookahead is a promising approach for learning from a teacher’s response. In particular, a surprising result is that it can learn to answer questions correctly without any reward-based supervision at all.",
"title": ""
},
{
"docid": "8401deada9010f05e3c9907a421d6760",
"text": "Heuristics evaluation is one of the common techniques being used for usability evaluation. The potential of HE has been explored in games design and development and later playability heuristics evaluation (PHE) is generated. PHE has been used in evaluating games. Issues in games usability covers forms of game usability, game interface, game mechanics, game narrative and game play. This general heuristics has the potential to be further explored in specific domain of games that is educational games. Combination of general heuristics of games (tailored based on specific domain) and education heuristics seems to be an excellent focus in order to evaluate the usability issues in educational games especially educational games produced in Malaysia.",
"title": ""
},
{
"docid": "26e81c8256df36fb02bffe2b17140d3a",
"text": "BACKGROUND\nBruton's tyrosine kinase (BTK) is a mediator of the B-cell-receptor signaling pathway implicated in the pathogenesis of B-cell cancers. In a phase 1 study, ibrutinib, a BTK inhibitor, showed antitumor activity in several types of non-Hodgkin's lymphoma, including mantle-cell lymphoma.\n\n\nMETHODS\nIn this phase 2 study, we investigated oral ibrutinib, at a daily dose of 560 mg, in 111 patients with relapsed or refractory mantle-cell lymphoma. Patients were enrolled into two groups: those who had previously received at least 2 cycles of bortezomib therapy and those who had received less than 2 complete cycles of bortezomib or had received no prior bortezomib therapy. The primary end point was the overall response rate. Secondary end points were duration of response, progression-free survival, overall survival, and safety.\n\n\nRESULTS\nThe median age was 68 years, and 86% of patients had intermediate-risk or high-risk mantle-cell lymphoma according to clinical prognostic factors. Patients had received a median of three prior therapies. The most common treatment-related adverse events were mild or moderate diarrhea, fatigue, and nausea. Grade 3 or higher hematologic events were infrequent and included neutropenia (in 16% of patients), thrombocytopenia (in 11%), and anemia (in 10%). A response rate of 68% (75 patients) was observed, with a complete response rate of 21% and a partial response rate of 47%; prior treatment with bortezomib had no effect on the response rate. With an estimated median follow-up of 15.3 months, the estimated median response duration was 17.5 months (95% confidence interval [CI], 15.8 to not reached), the estimated median progression-free survival was 13.9 months (95% CI, 7.0 to not reached), and the median overall survival was not reached. The estimated rate of overall survival was 58% at 18 months.\n\n\nCONCLUSIONS\nIbrutinib shows durable single-agent efficacy in relapsed or refractory mantle-cell lymphoma. (Funded by Pharmacyclics and others; ClinicalTrials.gov number, NCT01236391.)",
"title": ""
},
{
"docid": "f0b522d7f3a0eeb6cb951356407cf15a",
"text": "Today, the most accurate steganalysis methods for digital media are built as supervised classifiers on feature vectors extracted from the media. The tool of choice for the machine learning seems to be the support vector machine (SVM). In this paper, we propose an alternative and well-known machine learning tool-ensemble classifiers implemented as random forests-and argue that they are ideally suited for steganalysis. Ensemble classifiers scale much more favorably w.r.t. the number of training examples and the feature dimensionality with performance comparable to the much more complex SVMs. The significantly lower training complexity opens up the possibility for the steganalyst to work with rich (high-dimensional) cover models and train on larger training sets-two key elements that appear necessary to reliably detect modern steganographic algorithms. Ensemble classification is portrayed here as a powerful developer tool that allows fast construction of steganography detectors with markedly improved detection accuracy across a wide range of embedding methods. The power of the proposed framework is demonstrated on three steganographic methods that hide messages in JPEG images.",
"title": ""
},
{
"docid": "d045e59441a16874f3ccb1d8068e4e6d",
"text": "In two experiments, we tested the hypotheses that (a) the difference between liars and truth tellers will be greater when interviewees report their stories in reverse order than in chronological order, and (b) instructing interviewees to recall their stories in reverse order will facilitate detecting deception. In Experiment 1, 80 mock suspects told the truth or lied about a staged event and did or did not report their stories in reverse order. The reverse order interviews contained many more cues to deceit than the control interviews. In Experiment 2, 55 police officers watched a selection of the videotaped interviews of Experiment 1 and made veracity judgements. Requesting suspects to convey their stories in reverse order improved police observers' ability to detect deception and did not result in a response bias.",
"title": ""
},
{
"docid": "a0a9785ee7688a601e678b4b8d40cb91",
"text": "We present a light-weight machine learning tool for NLP research. The package supports operations on both discrete and dense vectors, facilitating implementation of linear models as well as neural models. It provides several basic layers which mainly aims for single-layer linear and non-linear transformations. By using these layers, we can conveniently implement linear models and simple neural models. Besides, this package also integrates several complex layers by composing those basic layers, such as RNN, Attention Pooling, LSTM and gated RNN. Those complex layers can be used to implement deep neural models directly.",
"title": ""
},
{
"docid": "6952a28e63c231c1bfb43391a21e80fd",
"text": "Deep learning has attracted tremendous attention from researchers in various fields of information engineering such as AI, computer vision, and language processing [Kalchbrenner and Blunsom, 2013; Krizhevsky et al., 2012; Mnih et al., 2013], but also from more traditional sciences such as physics, biology, and manufacturing [Anjos et al., 2015; Baldi et al., 2014; Bergmann et al., 2014]. Neural networks, image processing tools such as convolutional neural networks, sequence processing models such as recurrent neural networks, and regularisation tools such as dropout, are used extensively. However, fields such as physics, biology, and manufacturing are ones in which representing model uncertainty is of crucial importance [Ghahramani, 2015; Krzywinski and Altman, 2013]. With the recent shift in many of these fields towards the use of Bayesian uncertainty [Herzog and Ostwald, 2013; Nuzzo, 2014; Trafimow and Marks, 2015], new needs arise from deep learning. In this work we develop tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation. In the first part of this thesis we develop the theory for such tools, providing applications and illustrative examples. We tie approximate inference in Bayesian models to dropout and other stochastic regularisation techniques, and assess the approximations empirically. We give example applications arising from this connection between modern deep learning and Bayesian modelling such as active learning of image data and data efficient deep reinforcement learning. We further demonstrate the method’s practicality through a survey of recent applications making use of the suggested tools in language applications, medical diagnostics, bioinformatics, image processing, and autonomous driving. In the second part of the thesis we explore its theoretical implications, and the insights stemming from the link between Bayesian modelling and deep learning. We discuss what determines model uncertainty properties, analyse the approximate inference analytically in the linear case, and theoretically examine various priors such as spike and slab priors.",
"title": ""
},
{
"docid": "5c5225b5e66d49f17a881ed1843e944c",
"text": "The organic-inorganic hybrid perovskites methylammonium lead iodide (CH3NH3PbI3) and the partially chlorine-substituted mixed halide CH3NH3PbI3-xClx emit strong and broad photoluminescence (PL) around their band gap energy of ∼1.6 eV. However, the nature of the radiative decay channels behind the observed emission and, in particular, the spectral broadening mechanisms are still unclear. Here we investigate these processes for high-quality vapor-deposited films of CH3NH3PbI3-xClx using time- and excitation-energy dependent photoluminescence spectroscopy. We show that the PL spectrum is homogenously broadened with a line width of 103 meV most likely as a consequence of phonon coupling effects. Further analysis reveals that defects or trap states play a minor role in radiative decay channels. In terms of possible lasing applications, the emission spectrum of the perovskite is sufficiently broad to have potential for amplification of light pulses below 100 fs pulse duration.",
"title": ""
},
{
"docid": "f8410df5746c3271cd5b495b91a1c316",
"text": "Cognitive control supports flexible behavior by selecting actions that are consistent with our goals and appropriate for our environment. The prefrontal cortex (PFC) has an established role in cognitive control, and research on the functional organization of PFC promises to contribute to our understanding of the architecture of control. A recently popular hypothesis is that the rostro-caudal axis of PFC supports a control hierarchy whereby posterior-to-anterior PFC mediates progressively abstract, higher-order control. This review discusses evidence for a rostro-caudal gradient of function in PFC and the theories proposed to account for these results, including domain generality in working memory, relational complexity, the temporal organization of behavior and abstract representational hierarchy. Distinctions among these frameworks are considered as a basis for future research.",
"title": ""
},
{
"docid": "fa2c5de925a28a26e8ff031d7918ebd3",
"text": "Penetration testing is a series of activities undertaken to identify and exploit security vulnerabilities. It helps confirm the effectiveness or ineffectiveness of the security measures that have been implemented. This paper provides an overview of penetration testing. It discusses the benefits, the strategies and the methodology of conducting penetration testing. The methodology of penetration testing includes three phases: test preparation, test and test analysis. The test phase involves the following steps: information gathering, vulnerability analysis, and vulnerability exploit. This paper further illustrates how to apply this methodology to conduct penetration testing on two example web applications.",
"title": ""
},
{
"docid": "f39abb67a6c392369c5618f5c33d93cf",
"text": "In our research, we view human behavior as a structured sequence of context-sensitive decisions. We develop a conditional probabilistic model for predicting human decisions given the contextual situation. Our approach employs the principle of maximum entropy within the Markov Decision Process framework. Modeling human behavior is reduced to recovering a context-sensitive utility function that explains demonstrated behavior within the probabilistic model. In this work, we review the development of our probabilistic model (Ziebart et al. 2008a) and the results of its application to modeling the context-sensitive route preferences of drivers (Ziebart et al. 2008b). We additionally expand the approach’s applicability to domains with stochastic dynamics, present preliminary experiments on modeling time-usage, and discuss remaining challenges for applying our approach to other human behavior modeling problems.",
"title": ""
}
] |
scidocsrr
|
763926387611f248936a8f858abbe353
|
Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection
|
[
{
"docid": "7bb04f2163e253068ac665f12a5dd35c",
"text": "Automatic segmentation of the liver and hepatic lesions is an important step towards deriving quantitative biomarkers for accurate clinical diagnosis and computer-aided decision support systems. This paper presents a method to automatically segment liver and lesions in CT and MRI abdomen images using cascaded fully convolutional neural networks (CFCNs) enabling the segmentation of large-scale medical trials and quantitative image analyses. We train and cascade two FCNs for the combined segmentation of the liver and its lesions. As a first step, we train an FCN to segment the liver as ROI input for a second FCN. The second FCN solely segments lesions within the predicted liver ROIs of step 1. CFCN models were trained on an abdominal CT dataset comprising 100 hepatic tumor volumes. Validation results on further datasets show that CFCN-based semantic liver and lesion segmentation achieves Dice scores over 94% for the liver with computation times below 100s per volume. We further experimentally demonstrate the robustness of the proposed method on 38 MRI liver tumor volumes and the public 3DIRCAD dataset.",
"title": ""
},
{
"docid": "23cdfc1c46b83b02bd0f1a6ef49a6d2d",
"text": "In this work we explore a fully convolutional network (FCN) for the task of liver segmentation and liver metastases detection in computed tomography (CT) examinations. FCN has proven to be a very powerful tool for semantic segmentation. We explore the FCN performance on a relatively small dataset and compare it to patch based CNN and sparsity based classification schemes. Our data contains CT examinations from 20 patients with overall 68 lesions and 43 livers marked in one slice and 20 different patients with a full 3D liver segmentation. We ran 3-fold cross-validation and results indicate superiority of the FCN over all other methods tested. Using our fully automatic algorithm we achieved true positive rate of 0.86 and 0.6 false positive per case which are very promising and clinically relevant results.",
"title": ""
}
] |
[
{
"docid": "1423deed29f33cc6e81760b8306ffd15",
"text": "In this paper, we describe RavenClaw, a plan-based, task-independent dialog management framework. RavenClaw isolates the domain-specific aspects of the dialog control logic from domain-independent conversational skills, and in the process facilitates rapid development of mixed-initiative systems operating in complex, task-oriented domains. System developers can focus exclusively on describing the dialog task control logic, while a large number of domain-independent conversational skills such as error handling, timing and turn-taking are transparently supported and enforced by the RavenClaw dialog engine. To date, RavenClaw has been used to construct and deploy a large number of systems, spanning different domains and interaction styles, such as information access, guidance through procedures, command-and-control, medical diagnosis, etc. The framework has easily adapted to all of these domains, indicating a high degree of versatility and scalability. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8621fff78e92e1e0e9ba898d5e2433ca",
"text": "This paper aims at providing insight on the transferability of deep CNN features to unsupervised problems. We study the impact of different pretrained CNN feature extractors on the problem of image set clustering for object classification as well as fine-grained classification. We propose a rather straightforward pipeline combining deep-feature extraction using a CNN pretrained on ImageNet and a classic clustering algorithm to classify sets of images. This approach is compared to state-of-the-art algorithms in image-clustering and provides better results. These results strengthen the belief that supervised training of deep CNN on large datasets, with a large variability of classes, extracts better features than most carefully designed engineering approaches, even for unsupervised tasks. We also validate our approach on a robotic application, consisting in sorting and storing objects smartly based on clustering.",
"title": ""
},
{
"docid": "fd641bf2d7e783e8e774fedcb9f6892b",
"text": "Since the Netflix $1 million Prize, announced in 2006, our company has been known to have personalization at the core of our product. Even at that point in time, the dataset that we released was considered \"large\", and we stirred innovation in the (Big) Data Mining research field. Our current product offering is now focused around instant video streaming, and our data is now many orders of magnitude larger. Not only do we have many more users in many more countries, but we also receive many more streams of data. Besides the ratings, we now also use information such as what our members play, browse, or search.\n In this paper, we will discuss the different approaches we follow to deal with these large streams of data in order to extract information for personalizing our service. We will describe some of the machine learning models used, as well as the architectures that allow us to combine complex offline batch processes with real-time data streams.",
"title": ""
},
{
"docid": "60f9a34771b844228e1d8da363e89359",
"text": "3-mercaptopyruvate sulfurtransferase (3-MST) was a novel hydrogen sulfide (H2S)-synthesizing enzyme that may be involved in cyanide degradation and in thiosulfate biosynthesis. Over recent years, considerable attention has been focused on the biochemistry and molecular biology of H2S-synthesizing enzyme. In contrast, there have been few concerted attempts to investigate the changes in the expression of the H2S-synthesizing enzymes with disease states. To investigate the changes of 3-MST after traumatic brain injury (TBI) and its possible role, mice TBI model was established by controlled cortical impact system, and the expression and cellular localization of 3-MST after TBI was investigated in the present study. Western blot analysis revealed that 3-MST was present in normal mice brain cortex. It gradually increased, reached a peak on the first day after TBI, and then reached a valley on the third day. Importantly, 3-MST was colocalized with neuron. In addition, Western blot detection showed that the first day post injury was also the autophagic peak indicated by the elevated expression of LC3. Importantly, immunohistochemistry analysis revealed that injury-induced expression of 3-MST was partly colabeled by LC3. However, there was no colocalization of 3-MST with propidium iodide (cell death marker) and LC3 positive cells were partly colocalized with propidium iodide. These data suggested that 3-MST was mainly located in living neurons and may be implicated in the autophagy of neuron and involved in the pathophysiology of brain after TBI.",
"title": ""
},
{
"docid": "c1eb1bded65ad62c395183318622ab76",
"text": "The CHiME challenge series aims to advance far field speech recognition technology by promoting research at the interface of signal processing and automatic speech recognition. This paper presents the design and outcomes of the 3rd CHiME Challenge, which targets the performance of automatic speech recognition in a real-world, commercially-motivated scenario: a person talking to a tablet device that has been fitted with a six-channel microphone array. The paper describes the data collection, the task definition and the baseline systems for data simulation, enhancement and recognition. The paper then presents an overview of the 26 systems that were submitted to the challenge focusing on the strategies that proved to be most successful relative to the MVDR array processing and DNN acoustic modeling reference system. Challenge findings related to the role of simulated data in system training and evaluation are discussed.",
"title": ""
},
{
"docid": "274373d46b748d92e6913496507353b1",
"text": "This paper introduces a blind watermarking based on a convolutional neural network (CNN). We propose an iterative learning framework to secure robustness of watermarking. One loop of learning process consists of the following three stages: Watermark embedding, attack simulation, and weight update. We have learned a network that can detect a 1-bit message from a image sub-block. Experimental results show that this learned network is an extension of the frequency domain that is widely used in existing watermarking scheme. The proposed scheme achieved robustness against geometric and signal processing attacks with a learning time of one day.",
"title": ""
},
{
"docid": "d03cda3a3e4deb5e249af7f3bcec0bee",
"text": "In this research, we investigate the process of producing allicin in garlic. With regard to the chemical compositions of garlic (Allium Sativum L.), allicin is among the active sulfuric materials in garlic that has a lot of benefits such as anti-bacterial, anti-oxidant and deradicalizing properties.",
"title": ""
},
{
"docid": "5ca477445f70e051dbfa81a2e3a79b3d",
"text": "In the context of content-based multimedia indexing gender identification using speech signal is an important task. Existing techniques are dependent on the quality of the speech signal making them unsuitable for the video indexing problems. In this paper we introduce a novel gender identification approach based on a general audio classifier. The audio classifier models the audio signal by the first order spectrum’s statistics in 1s windows and uses a set of neural networks as classifiers. The presented technique shows robustness to adverse audio compression and it is language independent. We show how practical considerations about the speech in audio-visual data, such as the continuity of speech, can further improve the classification results which attain 92%.",
"title": ""
},
{
"docid": "9ca46f81c121866f6d8f3d9c8a102b64",
"text": "Assessment of age and size structure of marine populations is often used to detect and determine the effect of natural and anthropogenic factors, such as commercial fishing, upon marine communities. A primary tool in the characterisation of population structure is the distribution of the lengths or biomass of a large sample of individual specimens of a particular species. Rather than use relatively unreliable visual estimates by divers, an underwater stereo-video system has been developed to improve the accuracy of the measurement of lengths of highly indicative species such as reef fish. In common with any system used for accurate measurements, the design and calibration of the underwater stereo-video system are of paramount importance to realise the maximum possible accuracy from the system. Aspects of the design of the system, the calibration procedure and algorithm, the determination of the relative orientation of the two cameras, stereo-measurement and stereo-matching, and the tracking of individual specimens are discussed. Also addressed is the stability of the calibrations and relative orientation of the cameras during dives to capture video sequences of marine life.",
"title": ""
},
{
"docid": "9707365fac6490f52b328c2b039915b6",
"text": "Identification of protein–protein interactions often provides insight into protein function, and many cellular processes are performed by stable protein complexes. We used tandem affinity purification to process 4,562 different tagged proteins of the yeast Saccharomyces cerevisiae. Each preparation was analysed by both matrix-assisted laser desorption/ionization–time of flight mass spectrometry and liquid chromatography tandem mass spectrometry to increase coverage and accuracy. Machine learning was used to integrate the mass spectrometry scores and assign probabilities to the protein–protein interactions. Among 4,087 different proteins identified with high confidence by mass spectrometry from 2,357 successful purifications, our core data set (median precision of 0.69) comprises 7,123 protein–protein interactions involving 2,708 proteins. A Markov clustering algorithm organized these interactions into 547 protein complexes averaging 4.9 subunits per complex, about half of them absent from the MIPS database, as well as 429 additional interactions between pairs of complexes. The data (all of which are available online) will help future studies on individual proteins as well as functional genomics and systems biology.",
"title": ""
},
{
"docid": "4c0734c02f1a76545fcb61efcc6be84e",
"text": "This article offers a plausible domain-general explanation for why some concepts of processes are resistant to instructional remediation although other, apparently similar concepts are more easily understood. The explanation assumes that processes may differ in ontological ways: that some processes (such as the apparent flow in diffusion of dye in water) are emergent and other processes (such as the flow of blood in human circulation) are direct. Although precise definition of the two kinds of processes are probably impossible, attributes of direct and emergent processes are described that distinguish them in a domain-general way. Circulation and diffusion, which are used as examples of direct and emergent processes, are associated with different kinds of misconceptions. The claim is that students’ misconceptions for direct kinds of processes, such as blood circulation, are of the same ontological kind as the correct conception, suggesting that misconceptions of direct processes may be nonrobust. However, students’ misconceptions of emergent processes are robust because they misinterpret emergent processes as a kind of commonsense direct processes. To correct such a misconception requires a re-representation or a conceptual shift across ontological kinds. Therefore, misconceptions of emergent processes are robust because such a shift requires that students know about the emergent kind and can overcome their (perhaps even innate) predisposition to conceive of all processes as a direct kind. Such a domain-general explanation suggests that teaching students the causal structure underlying emergent processes may enable them to recognize and understand a variety of emergent processes for which they have robust misconceptions, such as concepts of electricity, heat and temperature, and evolution. THE JOURNAL OF THE LEARNING SCIENCES, 14(2), 161–199 Copyright © 2005, Lawrence Erlbaum Associates, Inc.",
"title": ""
},
{
"docid": "3994b51e9b9ed5aec98ed33e541a8e8c",
"text": "The development of relational database management systems served to focus the data management community for decades, with spectacular results. In recent years, however, the rapidly-expanding demands of \"data everywhere\" have led to a field comprised of interesting and productive efforts, but without a central focus or coordinated agenda. The most acute information management challenges today stem from organizations (e.g., enterprises, government agencies, libraries, \"smart\" homes) relying on a large number of diverse, interrelated data sources, but having no way to manage their dataspaces in a convenient, integrated, or principled fashion. This paper proposes dataspaces and their support systems as a new agenda for data management. This agenda encompasses much of the work going on in data management today, while posing additional research objectives.",
"title": ""
},
{
"docid": "fedfe025c82e41a7d1e8920a797edb4a",
"text": "We investigate the influence of channel estimation error on the achievable system-level throughput performance of our previous non-orthogonal multiple access (NOMA) scheme in a multiple-input multiple-output (MIMO) downlink. The NOMA scheme employs intra-beam superposition coding of a multiuser signal at the transmitter and spatial filtering of inter-beam interference followed by an intra-beam successive interference canceller (SIC) at the user terminal receiver. The intra-beam SIC cancels the inter-user interference within a beam. This configuration achieves reduced overhead for the downlink reference signaling for channel estimation at the user terminal in the case of non-orthogonal user multiplexing and enables the SIC receiver to be applied to the MIMO downlink. The channel estimation error in the NOMA scheme causes residual interference in the SIC process, which decreases the achievable user throughput. Furthermore, the channel estimation error causes error In the transmission rate control for the respective users, which may result in decoding error at not only the destination user terminal but also at other user terminals for the SIC process. However, we show that by using a simple transmission rate back-off algorithm, the impact of the channel estimation error is effectively abated and the NOMA scheme achieves clear average and cell-edge user throughput gains relative to orthogonal multiple access (OMA) similar to the case with perfect channel estimation.",
"title": ""
},
{
"docid": "3e5e9eecab5937dc1ec7ab835b045445",
"text": "Kombucha is a beverage of probable Manchurian origins obtained from fermented tea by a microbial consortium composed of several bacteria and yeasts. This mixed consortium forms a powerful symbiosis capable of inhibiting the growth of potentially contaminating bacteria. The fermentation process also leads to the formation of a polymeric cellulose pellicle due to the activity of certain strains of Acetobacter sp. The tea fermentation process by the microbial consortium was able to show an increase in certain biological activities which have been already studied; however, little information is available on the characterization of its active components and their evolution during fermentation. Studies have also reported that the use of infusions from other plants may be a promising alternative.\n\n\nPRACTICAL APPLICATION\nKombucha is a traditional fermented tea whose consumption has increased in the recent years due to its multiple functional properties such as anti-inflammatory potential and antioxidant activity. The microbiological composition of this beverage is quite complex and still more research is needed in order to fully understand its behavior. This study comprises the chemical and microbiological composition of the tea and the main factors that may affect its production.",
"title": ""
},
{
"docid": "18f47545d929fedb53588291faf65dee",
"text": "We present TimeLineCurator, a browser-based authoring tool that automatically extracts event data from temporal references in unstructured text documents using natural language processing and encodes them along a visual timeline. Our goal is to facilitate the timeline creation process for journalists and others who tell temporal stories online. Current solutions involve manually extracting and formatting event data from source documents, a process that tends to be tedious and error prone. With TimeLineCurator, a prospective timeline author can quickly identify the extent of time encompassed by a document, as well as the distribution of events occurring along this timeline. Authors can speculatively browse possible documents to quickly determine whether they are appropriate sources of timeline material. TimeLineCurator provides controls for curating and editing events on a timeline, the ability to combine timelines from multiple source documents, and export curated timelines for online deployment. We evaluate TimeLineCurator through a benchmark comparison of entity extraction error against a manual timeline curation process, a preliminary evaluation of the user experience of timeline authoring, a brief qualitative analysis of its visual output, and a discussion of prospective use cases suggested by members of the target author communities following its deployment.",
"title": ""
},
{
"docid": "a2fc7b5fbb88e45c84400b1fe15368ee",
"text": "There is increasing evidence from functional magnetic resonance imaging (fMRI) that visual awareness is not only associated with activity in ventral visual cortex but also with activity in the parietal cortex. However, due to the correlational nature of neuroimaging, it remains unclear whether this parietal activity plays a causal role in awareness. In the experiment presented here we disrupted activity in right or left parietal cortex by applying repetitive transcranial magnetic stimulation (rTMS) over these areas while subjects attempted to detect changes between two images separated by a brief interval (i.e. 1-shot change detection task). We found that rTMS applied over right parietal cortex but not left parietal cortex resulted in longer latencies to detect changes and a greater rate of change blindness compared with no TMS. These results suggest that the right parietal cortex plays a critical role in conscious change detection.",
"title": ""
},
{
"docid": "81fc9abd3e2ad86feff7bd713cff5915",
"text": "With the advance of the Internet, e-commerce systems have become extremely important and convenient to human being. More and more products are sold on the web, and more and more people are purchasing products online. As a result, an increasing number of customers post product reviews at merchant websites and express their opinions and experiences in any network space such as Internet forums, discussion groups, and blogs. So there is a large amount of data records related to products on the Web, which are useful for both manufacturers and customers. Mining product reviews becomes a hot research topic, and prior researches mostly base on product features to analyze the opinions. So mining product features is the first step to further reviews processing. In this paper, we present how to mine product features. The proposed extraction approach is different from the previous methods because we only mine the features of the product in opinion sentences which the customers have expressed their positive or negative experiences on. In order to find opinion sentence, a SentiWordNet-based algorithm is proposed. There are three steps to perform our task: (1) identifying opinion sentences in each review which is positive or negative via SentiWordNet; (2) mining product features that have been commented on by customers from opinion sentences; (3) pruning feature to remove those incorrect features. Compared to previous work, our experimental result achieves higher precision and recall.",
"title": ""
},
{
"docid": "29cceb730e663c08e20107b6d34ced8b",
"text": "Cumulative citation recommendation refers to the task of filtering a time-ordered corpus for documents that are highly relevant to a predefined set of entities. This task has been introduced at the TREC Knowledge Base Acceleration track in 2012, where two main families of approaches emerged: classification and ranking. In this paper we perform an experimental comparison of these two strategies using supervised learning with a rich feature set. Our main finding is that ranking outperforms classification on all evaluation settings and metrics. Our analysis also reveals that a ranking-based approach has more potential for future improvements.",
"title": ""
},
{
"docid": "aa8ea8624477a02790df66898f86657b",
"text": "Extensible Markup Language (XML) is an extremely simple dialect of SGML which is completely described in this document. The goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. For this reason, XML has been designed for ease of implementation, and for interoperability with both SGML and HTML. Note on status of this document: This is even more of a moving target than the typical W3C working draft. Several important decisions on the details of XML are still outstanding members of the W3C SGML Working Group will recognize these areas of particular volatility in the spec, but those who are not intimately familiar with the deliberative process should be careful to avoid actions based on the content of this document, until the notice you are now reading has been removed.",
"title": ""
},
{
"docid": "6f72afeb0a2c904e17dca27f53be249e",
"text": "With its three-term functionality offering treatment of both transient and steady-state responses, proportional-integral-derivative (PID) control provides a generic and efficient solution to real-world control problems. The wide application of PID control has stimulated and sustained research and development to \"get the best out of PID\", and \"the search is on to find the next key technology or methodology for PID tuning\". This article presents remedies for problems involving the integral and derivative terms. PID design objectives, methods, and future directions are discussed. Subsequently, a computerized simulation-based approach is presented, together with illustrative design results for first-order, higher order, and nonlinear plants. Finally, we discuss differences between academic research and industrial practice, so as to motivate new research directions in PID control.",
"title": ""
}
] |
scidocsrr
|
4030c4980c973a7ed27573035291cd93
|
Carboplatin and pemetrexed with or without pembrolizumab for advanced, non-squamous non-small-cell lung cancer: a randomised, phase 2 cohort of the open-label KEYNOTE-021 study.
|
[
{
"docid": "f17a6c34a7b3c6a7bf266f04e819af94",
"text": "BACKGROUND\nPatients with advanced squamous-cell non-small-cell lung cancer (NSCLC) who have disease progression during or after first-line chemotherapy have limited treatment options. This randomized, open-label, international, phase 3 study evaluated the efficacy and safety of nivolumab, a fully human IgG4 programmed death 1 (PD-1) immune-checkpoint-inhibitor antibody, as compared with docetaxel in this patient population.\n\n\nMETHODS\nWe randomly assigned 272 patients to receive nivolumab, at a dose of 3 mg per kilogram of body weight every 2 weeks, or docetaxel, at a dose of 75 mg per square meter of body-surface area every 3 weeks. The primary end point was overall survival.\n\n\nRESULTS\nThe median overall survival was 9.2 months (95% confidence interval [CI], 7.3 to 13.3) with nivolumab versus 6.0 months (95% CI, 5.1 to 7.3) with docetaxel. The risk of death was 41% lower with nivolumab than with docetaxel (hazard ratio, 0.59; 95% CI, 0.44 to 0.79; P<0.001). At 1 year, the overall survival rate was 42% (95% CI, 34 to 50) with nivolumab versus 24% (95% CI, 17 to 31) with docetaxel. The response rate was 20% with nivolumab versus 9% with docetaxel (P=0.008). The median progression-free survival was 3.5 months with nivolumab versus 2.8 months with docetaxel (hazard ratio for death or disease progression, 0.62; 95% CI, 0.47 to 0.81; P<0.001). The expression of the PD-1 ligand (PD-L1) was neither prognostic nor predictive of benefit. Treatment-related adverse events of grade 3 or 4 were reported in 7% of the patients in the nivolumab group as compared with 55% of those in the docetaxel group.\n\n\nCONCLUSIONS\nAmong patients with advanced, previously treated squamous-cell NSCLC, overall survival, response rate, and progression-free survival were significantly better with nivolumab than with docetaxel, regardless of PD-L1 expression level. (Funded by Bristol-Myers Squibb; CheckMate 017 ClinicalTrials.gov number, NCT01642004.).",
"title": ""
}
] |
[
{
"docid": "492bddbe966df723e8793712e6a15b1a",
"text": "This article introduces Hybreed, a software framework for building complex context-aware applications, together with a set of components that are specifically targeted at developing hybrid, context-aware recommender systems. Hybreed is based on a concept for processing context that we call dynamic contextualization. The underlying notion of context is very generic, enabling application developers to exploit sensor-based physical factors as well as factors derived from user models or user interaction. This approach is well aligned with context definitions that emphasize the dynamic and activity-oriented nature of context. As an extension of the generic framework, we describe Hybreed RecViews, a set of components facilitating the development of context-aware and hybrid recommender systems. With Hybreed and RecViews, developers can rapidly develop context-aware applications that generate recommendations for both individual users and groups. The framework provides a range of recommendation algorithms and strategies for producing group recommendations as well as templates for combining different methods into hybrid recommenders. Hybreed also provides means for integrating existing user or product data from external sources such as social networks. It combines aspects known from context processing frameworks with features of state-of-the-art recommender system frameworks, aspects that have been addressed only separately in previous research. To our knowledge, Hybreed is the first framework to cover all these aspects in an integrated manner. To evaluate the framework and its conceptual foundation, we verified its capabilities in three different use cases. The evaluation also comprises a comparative assessment of Hybreed’s functional features, a comparison to existing frameworks, and a user study assessing its usability for developers. The results of this study indicate that Hybreed is intuitive to use and extend by developers.",
"title": ""
},
{
"docid": "a47b13043f033f211779379274a69e2f",
"text": "Attack techniques based on code reuse continue to enable real-world exploits bypassing all current mitigations. Code randomization defenses greatly improve resilience against code reuse. Unfortunately, sophisticated modern attacks such as JITROP can circumvent randomization by discovering the actual code layout on the target and relocating the attack payload on the fly. Hence, effective code randomization additionally requires that the code layout cannot be leaked to adversaries. Previous approaches to leakage-resilient diversity have either relied on hardware features that are not available in all processors, particularly resource-limited processors commonly found in mobile devices, or they have had high memory overheads. We introduce a code randomization technique that avoids these limitations and scales down to mobile and embedded devices: Leakage-Resilient Layout Randomization (LR2). Whereas previous solutions have relied on virtualization, x86 segmentation, or virtual memory support, LR2 merely requires the underlying processor to enforce a W⊕X policy—a feature that is virtually ubiquitous in modern processors, including mobile and embedded variants. Our evaluation shows that LR2 provides the same security as existing virtualization-based solutions while avoiding design decisions that would prevent deployment on less capable yet equally vulnerable systems. Although we enforce execute-only permissions in software, LR2 is as efficient as the best-in-class virtualization-based solution.",
"title": ""
},
{
"docid": "98a647d378a06c0314a60e220d10976a",
"text": "Driven by the confluence between the need to collect data about people's physical, physiological, psychological, cognitive, and behavioral processes in spaces ranging from personal to urban and the recent availability of the technologies that enable this data collection, wireless sensor networks for healthcare have emerged in the recent years. In this review, we present some representative applications in the healthcare domain and describe the challenges they introduce to wireless sensor networks due to the required level of trustworthiness and the need to ensure the privacy and security of medical data. These challenges are exacerbated by the resource scarcity that is inherent with wireless sensor network platforms. We outline prototype systems spanning application domains from physiological and activity monitoring to large-scale physiological and behavioral studies and emphasize ongoing research challenges.",
"title": ""
},
{
"docid": "bb3d3530a2bb406e9b08888bec0644fa",
"text": "The number of power supplies connected to the public household power grid is constantly increasing. For a good mains voltage quality, the harmonic input currents are limited in standards. Many applications also require a lower output voltage with isolation. Today, this is mostly obtained via a multi-stage converter, consisting of a diode bridge rectifier, a boost PFC and a DC-DC converter with galvanic isolation. For higher efficiency, a bridgeless PFC-stage can be used, but it nevertheless requires a DC-DC converter for isolation. In order to further increase efficiency and decrease component count, a single-stage concept can be used. The Cuk “True Bridgeless PFC” rectifier system, for example, can perform all the requirements in a single stage. A simple application and realization of a converter topology is also quite important, especially in consumer electronics, where costs are crucial. In this paper, a new single stage topology with a different operation principle is presented, focusing on ease of use in hardware realization.",
"title": ""
},
{
"docid": "70261e842efa3b56df82403b1a8ae5e7",
"text": "Chronic respiratory diseases, including asthma, chronic obstructive pulmonary disease (COPD) and cystic fibrosis (CF), are among the leading causes of mortality and morbidity worldwide. In the past decade, the interest in the role of microbiome in maintaining lung health and in respiratory diseases has grown exponentially. The advent of sophisticated multiomics techniques has enabled the identification and characterisation of microbiota and their roles in respiratory health and disease. Furthermore, associations between the microbiome of the lung and gut, as well as the immune cells and mediators that may link these two mucosal sites, appear to be important in the pathogenesis of lung conditions. Here we review the recent evidence of the role of normal gastrointestinal and respiratory microbiome in health and how dysbiosis affects chronic pulmonary diseases. The potential implications of host and environmental factors such as age, gender, diet and use of antibiotics on the composition and overall functionality of microbiome are also discussed. We summarise how microbiota may mediate the dynamic process of immune development and/or regulation focusing on recent data from both clinical human studies and translational animal studies. This furthers the understanding of the pathogenesis of chronic pulmonary diseases and may yield novel avenues for the utilisation of microbiota as potential therapeutic interventions.",
"title": ""
},
{
"docid": "deaa86a5fe696d887140e29d0b2ae22c",
"text": "The high prevalence of spinal stenosis results in a large volume of MRI imaging, yet interpretation can be time-consuming with high inter-reader variability even among the most specialized radiologists. In this paper, we develop an efficient methodology to leverage the subject-matter-expertise stored in large-scale archival reporting and image data for a deep-learning approach to fully-automated lumbar spinal stenosis grading. Specifically, we introduce three major contributions: (1) a natural-language-processing scheme to extract level-by-level ground-truth labels from free-text radiology reports for the various types and grades of spinal stenosis (2) accurate vertebral segmentation and disc-level localization using a U-Net architecture combined with a spine-curve fitting method, and (3) a multiinput, multi-task, and multi-class convolutional neural network to perform central canal and foraminal stenosis grading on both axial and sagittal imaging series inputs with the extracted report-derived labels applied to corresponding imaging level segments. This study uses a large dataset of 22796 disc-levels extracted from 4075 patients. We achieve state-ofthe-art performance on lumbar spinal stenosis classification and expect the technique will increase both radiology workflow efficiency and the perceived value of radiology reports for referring clinicians and patients.",
"title": ""
},
{
"docid": "054fcf065915118bbfa3f12759cb6912",
"text": "Automatization of the diagnosis of any kind of disease is of great importance and its gaining speed as more and more deep learning solutions are applied to different problems. One of such computer-aided systems could be a decision support tool able to accurately differentiate between different types of breast cancer histological images – normal tissue or carcinoma (benign, in situ or invasive). In this paper authors present a deep learning solution, based on convolutional capsule network, for classification of four types of images of breast tissue biopsy when hematoxylin and eosin staining is applied. The crossvalidation accuracy, averaged over four classes, was achieved to be 87 % with equally high sensitivity.",
"title": ""
},
{
"docid": "57d8f78ac76925f17b28b78992b7a7b9",
"text": "The effects of long-term aerobic exercise on endothelial function in patients with essential hypertension remain unclear. To determine whether endothelial function relating to forearm hemodynamics in these patients differs from normotensive subjects and whether endothelial function can be modified by continued physical exercise, we randomized patients with essential hypertension into a group that engaged in 30 minutes of brisk walking 5 to 7 times weekly for 12 weeks (n=20) or a group that underwent no activity modifications (control group, n=7). Forearm blood flow was measured using strain-gauge plethysmography during reactive hyperemia to test for endothelium-dependent vasodilation and after sublingual nitroglycerin administration to test endothelium-independent vasodilation. Forearm blood flow in hypertensive patients during reactive hyperemia was significantly less than that in normotensive subjects (n=17). Increases in forearm blood flow after nitroglycerin were similar between hypertensive and normotensive subjects. Exercise lowered mean blood pressure from 115.7+/-5.3 to 110.2+/-5.1 mm Hg (P<0.01) and forearm vascular resistance from 25.6+/-3.2 to 23. 2+/-2.8 mm Hg/mL per minute per 100 mL tissue (P<0.01); no change occurred in controls. Basal forearm blood flow, body weight, and heart rate did not differ with exercise. After 12 weeks of exercise, maximal forearm blood flow response during reactive hyperemia increased significantly, from 38.4+/-4.6 to 47.1+/-4.9 mL/min per 100 mL tissue (P<0.05); this increase was not seen in controls. Changes in forearm blood flow after sublingual nitroglycerin administration were similar before and after 12 weeks of exercise. Intra-arterial infusion of the nitric oxide synthase inhibitor NG-monomethyl-L-arginine abolished the enhancement of reactive hyperemia induced by 12 weeks of exercise. These findings suggest that through increased release of nitric oxide, continued physical exercise alleviates impairment of reactive hyperemia in patients with essential hypertension.",
"title": ""
},
{
"docid": "ce1b4c5e15fd1d0777c26ca93a9cadbd",
"text": "In early studies on energy metabolism of tumor cells, it was proposed that the enhanced glycolysis was induced by a decreased oxidative phosphorylation. Since then it has been indiscriminately applied to all types of tumor cells that the ATP supply is mainly or only provided by glycolysis, without an appropriate experimental evaluation. In this review, the different genetic and biochemical mechanisms by which tumor cells achieve an enhanced glycolytic flux are analyzed. Furthermore, the proposed mechanisms that arguably lead to a decreased oxidative phosphorylation in tumor cells are discussed. As the O(2) concentration in hypoxic regions of tumors seems not to be limiting for the functioning of oxidative phosphorylation, this pathway is re-evaluated regarding oxidizable substrate utilization and its contribution to ATP supply versus glycolysis. In the tumor cell lines where the oxidative metabolism prevails over the glycolytic metabolism for ATP supply, the flux control distribution of both pathways is described. The effect of glycolytic and mitochondrial drugs on tumor energy metabolism and cellular proliferation is described and discussed. Similarly, the energy metabolic changes associated with inherent and acquired resistance to radiotherapy and chemotherapy of tumor cells, and those determined by positron emission tomography, are revised. It is proposed that energy metabolism may be an alternative therapeutic target for both hypoxic (glycolytic) and oxidative tumors.",
"title": ""
},
{
"docid": "3c1c89aeeae6bde84e338c15c44b20ce",
"text": "Using statistical machine learning for making security decisions introduces new vulnerabilities in large scale systems. This paper shows how an adversary can exploit statistical machine learning, as used in the SpamBayes spam filter, to render it useless—even if the adversary’s access is limited to only 1% of the training messages. We further demonstrate a new class of focused attacks that successfully prevent victims from receiving specific email messages. Finally, we introduce two new types of defenses against these attacks.",
"title": ""
},
{
"docid": "3eccedb5a9afc0f7bc8b64c3b5ff5434",
"text": "The design of a high impedance, high Q tunable load is presented with operating frequency between 400MHz and close to 6GHz. The bandwidth is made independently tunable of the carrier frequency by using an active inductor resonator with multiple tunable capacitances. The Q factor can be tuned from a value 40 up to 300. The circuit is targeted at 5G wideband applications requiring narrow band filtering where both centre frequency and bandwidth needs to be tunable. The circuit impedance is applied to the output stage of a standard CMOS cascode and results show that high Q factors can be achieved close to 6GHz with 11dB rejection at 20MHz offset from the centre frequency. The circuit architecture takes advantage of currently available low cost, low area tunable capacitors based on micro-electromechanical systems (MEMS) and Barium Strontium Titanate (BST).",
"title": ""
},
{
"docid": "29d02d7219cb4911ab59681e0c70a903",
"text": "As the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to heavy burden on the backhaul links and long latency. Therefore, new architectures, which bring network functions and contents to the network edge, are proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we make an exhaustive review on the state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including definition, architecture, and advantages. Next, a comprehensive survey of issues on computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are discussed. Subsequently, the key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices are discussed. Finally, open research challenges and future directions are presented as well.",
"title": ""
},
{
"docid": "4e142571b30a66dd9ff55dc0d28282cf",
"text": "Test models are needed to evaluate and benchmark algorithms and tools in model driven development. Most model generators randomly apply graph operations on graph representations of models. This approach leads to test models of poor quality. Some approaches do not guarantee the basic syntactic correctness of the created models. Even if so, it is almost impossible to guarantee, or even control, the creation of complex structures, e.g. a subgraph which implements an association between two classes. Such a subgraph consists of an association node, two association end nodes, and several edges, and is normally created by one user command. This paper presents the SiDiff Model Generator, which can generate models, or sets of models, which are syntactically correct, contain complex structures, and exhibit defined statistical characteristics.",
"title": ""
},
{
"docid": "29c32c8c447b498f43ec215633305923",
"text": "A growing body of evidence suggests that empathy for pain is underpinned by neural structures that are also involved in the direct experience of pain. In order to assess the consistency of this finding, an image-based meta-analysis of nine independent functional magnetic resonance imaging (fMRI) investigations and a coordinate-based meta-analysis of 32 studies that had investigated empathy for pain using fMRI were conducted. The results indicate that a core network consisting of bilateral anterior insular cortex and medial/anterior cingulate cortex is associated with empathy for pain. Activation in these areas overlaps with activation during directly experienced pain, and we link their involvement to representing global feeling states and the guidance of adaptive behavior for both self- and other-related experiences. Moreover, the image-based analysis demonstrates that depending on the type of experimental paradigm this core network was co-activated with distinct brain regions: While viewing pictures of body parts in painful situations recruited areas underpinning action understanding (inferior parietal/ventral premotor cortices) to a stronger extent, eliciting empathy by means of abstract visual information about the other's affective state more strongly engaged areas associated with inferring and representing mental states of self and other (precuneus, ventral medial prefrontal cortex, superior temporal cortex, and temporo-parietal junction). In addition, only the picture-based paradigms activated somatosensory areas, indicating that previous discrepancies concerning somatosensory activity during empathy for pain might have resulted from differences in experimental paradigms. We conclude that social neuroscience paradigms provide reliable and accurate insights into complex social phenomena such as empathy and that meta-analyses of previous studies are a valuable tool in this endeavor.",
"title": ""
},
{
"docid": "58e176bb818efed6de7224d7088f2487",
"text": "In the context of marketing, attribution is the process of quantifying the value of marketing activities relative to the final outcome. It is a topic rapidly growing in importance as acknowledged by the industry. However, despite numerous tools and techniques designed for its measurement, the absence of a comprehensive assessment and classification scheme persists. Thus, we aim to bridge this gap by providing an academic review to accumulate and comprehend current knowledge in attribution modeling, leading to a road map to guide future research, expediting new knowledge creation.",
"title": ""
},
{
"docid": "b3911204471f409cf243558f1a7c11db",
"text": "Process mining allows for the automated discovery of process models from event logs. These models provide insights and enable various types of model-based analysis. This paper demonstrates that the discovered process models can be extended with information to predict the completion time of running instances. There are many scenarios where it is useful to have reliable time predictions. For example, when a customer phones her insurance company for information about her insurance claim, she can be given an estimate for the remaining processing time. In order to do this, we provide a configurable approach to construct a process model, augment this model with time information learned from earlier instances, and use this to predict e.g. the completion time. To provide meaningful time predictions we use a configurable set of abstractions that allow for a good balance between “overfitting” and “underfitting”. The approach has been implemented in ProM and through several experiments using real-life event logs we demonstrate its applicability.",
"title": ""
},
{
"docid": "74fb666c47afc81b8e080f730e0d1fe0",
"text": "In current commercial Web search engines, queries are processed in the conjunctive mode, which requires the search engine to compute the intersection of a number of posting lists to determine the documents matching all query terms. In practice, the intersection operation takes a significant fraction of the query processing time, for some queries dominating the total query latency. Hence, efficient posting list intersection is critical for achieving short query latencies. In this work, we focus on improving the performance of posting list intersection by leveraging the compute capabilities of recent multicore systems. To this end, we consider various coarse-grained and fine-grained parallelization models for list intersection. Specifically, we present an algorithm that partitions the work associated with a given query into a number of small and independent tasks that are subsequently processed in parallel. Through a detailed empirical analysis of these alternative models, we demonstrate that exploiting parallelism at the finest-level of granularity is critical to achieve the best performance on multicore systems. On an eight-core system, the fine-grained parallelization method is able to achieve more than five times reduction in average query processing time while still exploiting the parallelism for high query throughput.",
"title": ""
},
{
"docid": "53633432216e383297e401753332b00a",
"text": "Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream. Results suggest that the lPCS in engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help to explain why past ASSR studies of auditory spatial attention yield seemingly contradictory results.",
"title": ""
},
{
"docid": "387ae92200526a650db269b4644238ba",
"text": "Graph Convolutional Networks (GCNs) have shown significant improvements in semi-supervised learning on graph-structured data. Concurrently, unsupervised learning of graph embeddings has benefited from the information contained in random walks. In this paper, we propose a model: Network of GCNs (N-GCN), which marries these two lines of work. At its core, N-GCN trains multiple instances of GCNs over node pairs discovered at different distances in random walks, and learns a combination of the instance outputs which optimizes the classification objective. Our experiments show that our proposed N-GCNmodel improves state-of-the-art baselines on all of the challenging node classification tasks we consider: Cora, Citeseer, Pubmed, and PPI. In addition, our proposed method has other desirable properties, including generalization to recently proposed semi-supervised learning methods such as GraphSAGE, allowing us to propose N-SAGE, and resilience to adversarial input perturbations.",
"title": ""
},
{
"docid": "b3998d818b12e9dc376afea3094ae23f",
"text": "1. Andrew Borthwick and Ralph Grishman. 1999. A maximum entropy approach to named entity recognition. Ph. D. Thesis, Dept. of Computer Science, New York University. 2. Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6645–6649. 3. Xuezhe Ma and Eduard Hovy. 2016. End-to-end se-quence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). The Ohio State University",
"title": ""
}
] |
scidocsrr
|
0157625d170f780556d057b0243dca85
|
Highlighting Diverse Concepts in Documents
|
[
{
"docid": "c8768e560af11068890cc097f1255474",
"text": "Abstract This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection.",
"title": ""
},
{
"docid": "64fc1433249bb7aba59e0a9092aeee5e",
"text": "In this paper, we propose two generic text summarization methods that create text summaries by ranking and extracting sentences from the original documents. The first method uses standard IR methods to rank sentence relevances, while the second method uses the latent semantic analysis technique to identify semantically important sentences, for summary creations. Both methods strive to select sentences that are highly ranked and different from each other. This is an attempt to create a summary with a wider coverage of the document's main content and less redundancy. Performance evaluations on the two summarization methods are conducted by comparing their summarization outputs with the manual summaries generated by three independent human evaluators. The evaluations also study the influence of different VSM weighting schemes on the text summarization performances. Finally, the causes of the large disparities in the evaluators' manual summarization results are investigated, and discussions on human text summarization patterns are presented.",
"title": ""
},
{
"docid": "da8e0706b5ca5b7d391a07d443edc0cf",
"text": "The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sources containing such opinions, e.g., product reviews, forums, discussion groups, and blogs. Techniques are now being developed to exploit these sources to help organizations and individuals to gain such important information easily and quickly. In this paper, we first discuss several aspects of the problem in the AI context, and then present some results of our existing work published in KDD-04 and WWW-05.",
"title": ""
}
] |
[
{
"docid": "89f1cec7c2999693805945c3c898c484",
"text": "Studies investigating the relationship between job satisfaction and turnover intention are abundant. Yet, this relationship has not been fully addressed in the IT field particularly in the developing countries. Moving from this point, this study aims at further probe this area by evaluating the levels of job satisfaction and turnover intention among a sample of IT employees in the Palestinian IT firms. Then, it attempts to examine the sources of job satisfaction and the causes of turnover intention among those employees. The findings show job security, work conditions, pay and benefits, work nature, coworkers, career advancement, supervision and management were all significantly correlated with overall job satisfaction. Only job security, pay, and coworkers were able to significantly influence turnover intention. Implications of the findings and future research directions are discussed",
"title": ""
},
{
"docid": "8854e27390246d8ef4ccd6b8183375f4",
"text": "Ducted fans that are popular choices in vertical take-off and landing (VTOL) unmanned aerial vehicles (UAV) offer a higher static thrust/power ratio for a given diameter than open propellers. Although ducted fans provide high performance in many VTOL applications, there are still unresolved problems associated with these systems. Fan rotor tip leakage flow is a significant source of aerodynamic loss for ducted fan VTOL UAVs and adversely affects the general aerodynamic performance of these vehicles. The present study utilized experimental and computational techniques in a 22” diameter ducted fan test system that has been custom designed and manufactured. Experimental investigation consisted of total pressure measurements using Kiel total pressure probes and real time six-component force and torque measurements. The computational technique used in this study included a 3D Reynolds-Averaged Navier Stokes (RANS) based CFD model of the ducted fan test system. RANS simulations of the flow around rotor blades and duct geometry in the rotating frame of reference provided a comprehensive description of the tip leakage and passage flow. The experimental and computational analysis performed for various tip clearances were utilized in understanding the effect of the tip leakage flow on aerodynamic performance of ducted fans used in VTOL UAVs. The aerodynamic measurements and results of the RANS simulations showed good agreement especially near the tip region. ∗Postdoctoral Research Fellow †Professor of Aerospace Engineering, corresponding author NOMENCLATURE",
"title": ""
},
{
"docid": "b7980ebc9634263729a0ac51cc148604",
"text": "Background: The popularity of the open source software development in the last decade, has brought about an increased interest from the industry on how to use open source components, participate in the open source community, build business models around this type of software development, and learn more about open source development methodologies. Aim: The aim of this study is to review research carried out on usage of open source components and development methodologies by the industry, as well as companies’ participation in the open source community. Method: Systematic review through searches in library databases and manual identification of articles from the open source conference. Results: 19 articles were identified. Conclusions: The articles could be divided into four categories: open source as part of component based software engineering, business models with open source in commercial organization, company participation in open source development communities, and usage of open source processes within a company.",
"title": ""
},
{
"docid": "0a340a2dc4d9a6acd90d3bedad07f84a",
"text": "BACKGROUND\nKhat (Catha edulis) contains a psychoactive substance, cathinone, which produces central nervous system stimulation analogous to amphetamine. It is believed that khat chewing has a negative impact on the physical and mental health of individuals as well as the socioeconomic condition of the family and the society at large. There is lack of community based studies regarding the link between khat use and poor mental health. The objective of this study was to evaluate the association between khat use and mental distress and to determine the prevalence of mental distress and khat use in Jimma City.\n\n\nMETHODS\nA cross-sectional community-based study was conducted in Jimma City from October 15 to November 15, 2009. The study used a structured questionnaire and Self Reporting Questionnaire-20 designed by WHO and which has been translated into Amharic and validated in Ethiopia. By multi stage sampling, 1200 individuals were included in the study. Data analysis was done using SPSS for window version 13.\n\n\nRESULTS\nThe Khat use prevalence was found to be 37.8% during the study period. Majority of the khat users were males (73.5%), age group 18-24 (41.1%), Muslims (46.6%), Oromo Ethnic group (47.2%), single (51.4%), high school students (46.8%) and employed (80%). Using cut-off point 7 out of 20 on the Self Reporting Questionnaire-20, 25.8% of the study population was found to have mental distress. Males (26.6%), persons older than 55 years (36.4%), Orthodox Christians (28.4%), Kefficho Ethnic groups (36.4%), widowed (44.8%), illiterates (43.8%) and farmers (40.0%) had higher rates of mental distress. We found that mental distress and khat use have significant association (34.7% Vs 20.5%, P<0.001). There was also significant association between mental distress and frequency of khat use (41% Vs 31.1%, P<0.001)\n\n\nCONCLUSION\nThe high rate of khat use among the young persons calls for public intervention to prevent more serious forms of substance use disorders. Our findings suggest that persons who use khat suffer from higher rates of mental distress. However, causal association could not be established due to cross-sectional study design.",
"title": ""
},
{
"docid": "979aa24afb8e77bc70ffd8b3e5665fe7",
"text": "and signals the end point. The color goes from a yellow to a brownish-yellow. The change can be detected most precisely when the color in the titration flask is compared to a reference color. A mixture containing CrO4 2indicator in a suspension of CaCO3, simulating AgCl precipitate, is used for this purpose. An indicator blank is prepared using a portion of this mixture. The blank provides a correction for the slight difference between the end point and the equivalence point, a systematic error.",
"title": ""
},
{
"docid": "0b5616a9e272183502e198886a251513",
"text": "Recently, Amazon Mechanical Turk has gained a lot of attention as a tool for conducting different kinds of relevance evaluations. In this paper we show a series of experiments on TREC data, evaluate the outcome, and discuss the results. Our position, supported by these preliminary experimental results, is that crowdsourcing is a viable alternative for relevance assessment.",
"title": ""
},
{
"docid": "0a40df0ef684f45278e92194b7c478b6",
"text": "Agency is the meta-concept associated with self-advancement in social hierarchies; communion is the partner concept associated with maintenance of positive relationships. Despite the wealth of data documenting the conceptual utility of agency and communion (A & C) as superordinate metaconcepts, no direct measures of global A & C value dimensions are currently available. The first part of this article presents structural analyses of data from 4 diverse data sets (3 archival and 1 new): Each included a broad inventory of values or life goals. All 4 data sets revealed higher order A & C dimensions that were either apparent or implicit. The second part details the development of the ACV, a 24-item questionnaire measuring global A and C values, and documents its psychometric properties. Four studies support their joint construct validity by positioning the value measures within a nomological network of interpersonal traits, self-favorability biases, ideology dimensions, gender, socio-sexuality, and religious attitudes. Potential applications of the new instrument are discussed.",
"title": ""
},
{
"docid": "359c5322961b43cec07c8a172ad043bb",
"text": "A deadlock-free routing algorithm can be generated for arbitrary interconnection networks using the concept of virtual channels. A necessary and sufficient condition for deadlock-free routing is the absence of cycles in a channel dependency graph. Given an arbitrary network and a routing function, the cycles of the channel dependency graph can be removed by splitting physical channels into groups of virtual channels. This method is used to develop deadlock-free routing algorithms for k-ary n-cubes, for cube-connected cycles, and for shuffle-exchange networks.",
"title": ""
},
{
"docid": "f6dbce178e428522c80743e735920875",
"text": "With the recent advancement in deep learning, we have witnessed a great progress in single image super-resolution. However, due to the significant information loss of the image downscaling process, it has become extremely challenging to further advance the state-of-theart, especially for large upscaling factors. This paper explores a new research direction in super resolution, called reference-conditioned superresolution, in which a reference image containing desired high-resolution texture details is provided besides the low-resolution image. We focus on transferring the high-resolution texture from reference images to the super-resolution process without the constraint of content similarity between reference and target images, which is a key difference from previous example-based methods. Inspired by recent work on image stylization, we address the problem via neural texture transfer. We design an end-to-end trainable deep model which generates detail enriched results by adaptively fusing the content from the low-resolution image with the texture patterns from the reference image. We create a benchmark dataset for the general research of reference-based super-resolution, which contains reference images paired with low-resolution inputs with varying degrees of similarity. Both objective and subjective evaluations demonstrate the great potential of using reference images as well as the superiority of our results over other state-of-the-art methods.",
"title": ""
},
{
"docid": "b95776a33ab5ff12d405523a90cbfb93",
"text": "In this paper, we introduce the splitter placement problem in wavelength-routed networks (SP-WRN). Given a network topology, a set of multicast sessions, and a fixed number of multicast-capable cross-connects, the SP-WRN problem entails the placement of the multicast-capable cross-connects so that the blocking probability is minimized. The SP-WRN problem is NP-complete as it includes as a subproblem the routing and wavelength assignment problem which is NP-complete. To gain a deeper insight into the computational complexity of the SP-WRN problem, we define a graph-theoretic version of the splitter placement problem (SPG), and show that even SPG is NP-complete. We develop three heuristics for the SP-WRN problem with different degrees of trade-off between computation time and quality of solution. The first heuristic uses the CPLEX general solver to solve an integer-linear program (ILP) of the problem. The second heuristic is based on a greedy approach and is called most-saturated node first (MSNF). The third heuristic employs simulated annealing (SA) with route-coordination. Through numerical examples on a wide variety of network topologies we demonstrate that: (1) no more than 50% of the cross-connects need to be multicast-capable, (2) the proposed SA heuristic provides fast near-optimal solutions, and (3) it is not practical to use general solvers such as CPLEX for solving the SP-WRN problem.",
"title": ""
},
{
"docid": "99b485dd4290c463b35867b98b51146c",
"text": "The term rhombencephalitis refers to inflammatory diseases affecting the hindbrain (brainstem and cerebellum). Rhombencephalitis has a wide variety of etiologies, including infections, autoimmune diseases, and paraneoplastic syndromes. Infection with bacteria of the genus Listeria is the most common cause of rhombencephalitis. Primary rhombencephalitis caused by infection with Listeria spp. occurs in healthy young adults. It usually has a biphasic time course with a flu-like syndrome, followed by brainstem dysfunction; 75% of patients have cerebrospinal fluid pleocytosis, and nearly 100% have an abnormal brain magnetic resonance imaging scan. However, other possible causes of rhombencephalitis must be borne in mind. In addition to the clinical aspects, the patterns seen in magnetic resonance imaging can be helpful in defining the possible cause. Some of the reported causes of rhombencephalitis are potentially severe and life threatening; therefore, an accurate initial diagnostic approach is important to establishing a proper early treatment regimen. This pictorial essay reviews the various causes of rhombencephalitis and the corresponding magnetic resonance imaging findings, by describing illustrative confirmed cases.",
"title": ""
},
{
"docid": "b103e091df051f4958317b3b7806fa71",
"text": "We present a static, precise, and scalable technique for finding CVEs (Common Vulnerabilities and Exposures) in stripped firmware images. Our technique is able to efficiently find vulnerabilities in real-world firmware with high accuracy. Given a vulnerable procedure in an executable binary and a firmware image containing multiple stripped binaries, our goal is to detect possible occurrences of the vulnerable procedure in the firmware image. Due to the variety of architectures and unique tool chains used by vendors, as well as the highly customized nature of firmware, identifying procedures in stripped firmware is extremely challenging. Vulnerability detection requires not only pairwise similarity between procedures but also information about the relationships between procedures in the surrounding executable. This observation serves as the foundation for a novel technique that establishes a partial correspondence between procedures in the two binaries. We implemented our technique in a tool called FirmUp and performed an extensive evaluation over 40 million procedures, over 4 different prevalent architectures, crawled from public vendor firmware images. We discovered 373 vulnerabilities affecting publicly available firmware, 147 of them in the latest available firmware version for the device. A thorough comparison of FirmUp to previous methods shows that it accurately and effectively finds vulnerabilities in firmware, while outperforming the detection rate of the state of the art by 45% on average.",
"title": ""
},
{
"docid": "0211fc059cb4dbd76f58137bbf3dda0c",
"text": "This paper focuses on the integration of GIS and an extension of the analytical hierarchy process (AHP) using quantifier-guided ordered weighted averaging (OWA) procedure. AHP_OWA is a multicriteria combination operator. The nature of the AHP_OWA depends on some parameters, which are expressed by means of fuzzy linguistic quantifiers. By changing the linguistic terms, AHP_OWA can generate a wide range of decision strategies. We propose a GIS-multicriteria evaluation (MCE) system through implementation of AHP_OWA within ArcGIS, capable of integrating linguistic labels within conventional AHP for spatial decision making. We suggest that the proposed GIS-MCE would simplify the definition of decision strategies and facilitate an exploratory analysis of multiple criteria by incorporating qualitative information within the analysis. r 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cafad783a00d162320770648a18657ef",
"text": "What is the relation between the idea of the public sphere and computer-mediated interaction? I argue that the notion of the public sphere is not only inapplicable to the Net, but also and more importantly, that it is damaging to practices of democracy under conditions of contemporary technoculture, conditions Manuel Castells theorizes as capitalism in the information mode of development and which I refer to as communicative capitalism. 1 As an alternative to the public sphere, I consider the potential of a political architecture rooted in a notion of networks. To the extent that such an architecture can center democratic practice in conflict and contestation, so can it open up the democratic imagination in the networked societies of communicative capitalism.",
"title": ""
},
{
"docid": "f537ade1894e5cc7cbbc78b414d8b217",
"text": "The precious irrigation is great significance for arid and semiarid area. According to the special environment of greenhouse, a fuzzy control algorithm was proposed to make an optimal irrigation strategy based on the actual measured soil humidity during the whole plant growth process. The fuzzy control system had two inputs (soil humidity error and its rate) and one output (water level difference). In this paper, the fuzzy control algorithm was introduced in detail which included the setting of input and output, the selection of membership function and the setting of fuzzy rules. The fuzzy control system was meaningful to the smart water-saving irrigation in greenhouse.",
"title": ""
},
{
"docid": "eb377b18937389d70b2d1d0116361530",
"text": "We describe an approach to print composite polymers in high-resolution three-dimensional (3D) architectures that can be rapidly transformed to a new permanent configuration directly by heating. The permanent shape of a component results from the programmed time evolution of the printed shape upon heating via the design of the architecture and process parameters of a composite consisting of a glassy shape memory polymer and an elastomer that is programmed with a built-in compressive strain during photopolymerization. Upon heating, the shape memory polymer softens, releases the constraint on the strained elastomer, and allows the object to transform into a new permanent shape, which can then be reprogrammed into multiple subsequent shapes. Our key advance, the markedly simplified creation of high-resolution complex 3D reprogrammable structures, promises to enable myriad applications across domains, including medical technology, aerospace, and consumer products, and even suggests a new paradigm in product design, where components are simultaneously designed to inhabit multiple configurations during service.",
"title": ""
},
{
"docid": "e3051e92e84c69f999c09fe751c936f0",
"text": "Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be “compressed” to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on the compressed size. Combined with off-the-shelf compression algorithms, the bound leads to state of the art generalization guarantees; in particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. As additional evidence connecting compression and generalization, we show that compressibility of models that tend to overfit is limited: We establish an absolute limit on expected compressibility as a function of expected generalization error, where the expectations are over the random choice of training examples. The bounds are complemented by empirical results that show an increase in overfitting implies an increase in the number of bits required to describe a trained network.",
"title": ""
},
{
"docid": "d5666bfb1fcd82ac89da2cb893ba9fb7",
"text": "Ad-servers have to satisfy many different targeting criteria, and the combination can often result in no feasible solution. We hypothesize that advertisers may be defining these metrics to create a kind of \"proxy target\". We therefore reformulate the standard ad-serving problem to one where we attempt to get as close as possible to the advertiser's multi-dimensional target inclusive of delivery. We use a simple simulation to illustrate the behavior of this algorithm compared to Constraint and Pacing strategies. The system is then deployed in one of the largest video ad-servers in the United States and we show experimental results from live test ads, as well as 6 months of production performance across hundreds of ads. We find that the live ad-server tests match the simulation, and we report significant gains in multi-KPI performance from using the error minimization strategy.",
"title": ""
},
{
"docid": "a93e0e98e6367606a8bb72000b0bbe8a",
"text": "Programming by Demonstration: a Machine Learning Approach",
"title": ""
}
] |
scidocsrr
|
ab9cc4ceb41082e65ed2d122060559c2
|
Combining and measuring the benefits of bimanual pen and direct-touch interaction on horizontal interfaces
|
[
{
"docid": "5313d913c67668269bc95ccde8a48670",
"text": "A touchscreen can be overlaid on a tablet computer to support asymmetric two-handed interaction in which the preferred hand uses a stylus and the non-preferred hand operates the touchscreen. The result is a portable device that allows both hands to interact directly with the display, easily constructed from commonly available hardware. The method for tracking the independent motions of both hands is described. A wide variety of existing two-handed interaction techniques can be used on this platform, as well as some new ones that exploit the reconfigurability of touchscreen interfaces. Informal tests show that, when the non-preferred hand performs simple actions, users find direct manipulation on the display with both hands to be comfortable, natural, and efficient.",
"title": ""
}
] |
[
{
"docid": "4a2b00754ed6b4e90708f44f6ad6ccb2",
"text": "Virtual reality (VR) is useful for treating several psychological problems, including phobias such as fear of flying, agoraphobia, claustrophobia, and phobia to insects and small animals. We believe that augmented reality (AR) could also be used to treat some psychological disorders. AR and VR share some advantages over traditional treatments. However, AR gives a greater feeling of presence (the sensation of being there) and reality judgment (judging an experience as real) than VR because the environment and the elements the patient uses to interact with the application are real. Moreover, in AR users see their own hands, feet, and so on, whereas VR only simulates this experience. With these differences in mind, the question arises as to the kinds of psychological treatments AR and VR are most suited for. In our system, patients see their own hands, feet, and so on. They can touch the table that animals are crossing or seeing their feet while the animals are running on the floor. They can also hold a marker with a dead spider or cockroach or pick up a flyswatter, a can of insecticide, or a dustpan.",
"title": ""
},
{
"docid": "c15093ead030ba1aa020a99c312109fa",
"text": "Analysts report spending upwards of 80% of their time on problems in data cleaning. The data cleaning process is inherently iterative, with evolving cleaning workflows that start with basic exploratory data analysis on small samples of dirty data, then refine analysis with more sophisticated/expensive cleaning operators (i.e., crowdsourcing), and finally apply the insights to a full dataset. While an analyst often knows at a logical level what operations need to be done, they often have to manage a large search space of physical operators and parameters. We present Wisteria, a system designed to support the iterative development and optimization of data cleaning workflows, especially ones that utilize the crowd. Wisteria separates logical operations from physical implementations, and driven by analyst feedback, suggests optimizations and/or replacements to the analyst’s choice of physical implementation. We highlight research challenges in sampling, in-flight operator replacement, and crowdsourcing. We overview the system architecture and these techniques, then propose a demonstration designed to showcase how Wisteria can improve iterative data analysis and cleaning. The code is available at: http://www.sampleclean.org.",
"title": ""
},
{
"docid": "f03679aaf855457f7aaf5b1568e5db90",
"text": "An indispensable part of our modern life is scientific computing which is used in large-scale high-performance systems as well as in low-power smart cyber-physical systems. Hence, accelerators for scientific computing need to be fast and energy efficient. Therefore, partial differential equations (PDEs), as an integral component of many scientific computing tasks, require efficient implementation. In this regard, FPGAs are well suited for data-parallel computations as they occur in PDE solvers. However, including FPGAs in the programming flow is not trivial, as hardware description languages (HDLs) have to be exploited, which requires detailed knowledge of the underlying hardware. This issue is tackled by OpenCL, which allows to write standardized code in a C-like fashion, rendering experience with HDLs unnecessary. Yet, hiding the underlying hardware from the developer makes it challenging to implement solvers that exploit the full FPGA potential. Therefore, we propose in this work a comprehensive set of generic and specific optimization techniques for PDE solvers using OpenCL that improve the FPGA performance and energy efficiency by orders of magnitude. Based on these optimizations, our study shows that, despite the high abstraction level of OpenCL, very energy efficient PDE accelerators on the FPGA fabric can be designed, making the FPGA an ideal solution for power-constrained applications.",
"title": ""
},
{
"docid": "f1414fa3b4e4828489fd9da99892a795",
"text": "PERSON APPROACH The long-standing and widespread tradition of the person approach focuses on the unsafe acts—errors and procedural violations—of people on the front line: nurses, physicians, surgeons, anesthetists, pharmacists, and the like. It views these unsafe acts as arising primarily from aberrant mental processes such as forgetfulness, inattention, poor motivation, carelessness, negligence, and recklessness. The associated countermeasures are directed mainly at reducing unwanted variability in human behavior. These methods include poster campaigns that appeal to people’s fear, writing another procedure (or adding to existing ones), disciplinary measures, threat of litigation, retraining, naming, blaming, and shaming. Followers of these approaches tend to treat errors as moral issues, assuming that bad things happen to bad people—what psychologists have called the “just-world hypothesis.”",
"title": ""
},
{
"docid": "d579ed125d3a051069b69f634fffe488",
"text": "Culture can be thought of as a set of everyday practices and a core theme-individualism, collectivism, or honor-as well as the capacity to understand each of these themes. In one's own culture, it is easy to fail to see that a cultural lens exists and instead to think that there is no lens at all, only reality. Hence, studying culture requires stepping out of it. There are two main methods to do so: The first involves using between-group comparisons to highlight differences and the second involves using experimental methods to test the consequences of disruption to implicit cultural frames. These methods highlight three ways that culture organizes experience: (a) It shields reflexive processing by making everyday life feel predictable, (b) it scaffolds which cognitive procedure (connect, separate, or order) will be the default in ambiguous situations, and (c) it facilitates situation-specific accessibility of alternate cognitive procedures. Modern societal social-demographic trends reduce predictability and increase collectivism and honor-based go-to cognitive procedures.",
"title": ""
},
{
"docid": "da28960f4a5daeb80aa5c344db326c8d",
"text": "Adaptive traffic signal control, which adjusts traffic signal timing according to real-time traffic, has been shown to be an effective method to reduce traffic congestion. Available works on adaptive traffic signal control make responsive traffic signal control decisions based on human-crafted features (e.g. vehicle queue length). However, human-crafted features are abstractions of raw traffic data (e.g., position and speed of vehicles), which ignore some useful traffic information and lead to suboptimal traffic signal controls. In this paper, we propose a deep reinforcement learning algorithm that automatically extracts all useful features (machine-crafted features) from raw real-time traffic data and learns the optimal policy for adaptive traffic signal control. To improve algorithm stability, we adopt experience replay and target network mechanisms. Simulation results show that our algorithm reduces vehicle delay by up to 47% and 86% when compared to another two popular traffic signal control algorithms, longest queue first algorithm and fixed time control algorithm, respectively.",
"title": ""
},
{
"docid": "24f74b24c68d633ee74f0da78f6ec084",
"text": "This paper presents a fully integrated energy harvester that maintains >35% end-to-end efficiency when harvesting from a 0.84 mm 2 solar cell in low light condition of 260 lux, converting 7 nW input power from 250 mV to 4 V. Newly proposed self-oscillating switched-capacitor (SC) DC-DC voltage doublers are cascaded to form a complete harvester, with configurable overall conversion ratio from 9× to 23×. In each voltage doubler, the oscillator is completely internalized within the SC network, eliminating clock generation and level shifting power overheads. A single doubler has >70% measured efficiency across 1 nA to 0.35 mA output current ( >10 5 range) with low idle power consumption of 170 pW. In the harvester, each doubler has independent frequency modulation to maintain its optimum conversion efficiency, enabling optimization of harvester overall conversion efficiency. A leakage-based delay element provides energy-efficient frequency control over a wide range, enabling low idle power consumption and a wide load range with optimum conversion efficiency. The harvester delivers 5 nW-5 μW output power with >40% efficiency and has an idle power consumption 3 nW, in test chip fabricated in 0.18 μm CMOS technology.",
"title": ""
},
{
"docid": "4e6ff17d33aceaa63ec156fc90aed2ce",
"text": "Objective:\nThe aim of the present study was to translate and cross-culturally adapt the Functional Status Score for the intensive care unit (FSS-ICU) into Brazilian Portuguese.\n\n\nMethods:\nThis study consisted of the following steps: translation (performed by two independent translators), synthesis of the initial translation, back-translation (by two independent translators who were unaware of the original FSS-ICU), and testing to evaluate the target audience's understanding. An Expert Committee supervised all steps and was responsible for the modifications made throughout the process and the final translated version.\n\n\nResults:\nThe testing phase included two experienced physiotherapists who assessed a total of 30 critical care patients (mean FSS-ICU score = 25 ± 6). As the physiotherapists did not report any uncertainties or problems with interpretation affecting their performance, no additional adjustments were made to the Brazilian Portuguese version after the testing phase. Good interobserver reliability between the two assessors was obtained for each of the 5 FSS-ICU tasks and for the total FSS-ICU score (intraclass correlation coefficients ranged from 0.88 to 0.91).\n\n\nConclusion:\nThe adapted version of the FSS-ICU in Brazilian Portuguese was easy to understand and apply in an intensive care unit environment.",
"title": ""
},
{
"docid": "e637afb6ca079213192984a6c5b1731b",
"text": "This paper describes the implementation of a holographic subsurface radar for sounding at shallow depths. The radar uses continuous signal with frequency switching and records phases of reflected signal at several operating frequencies. Two versions of the radar with frequency bandwidths 6.4-6.8 and 13.8-14.6 GHz were designed and tested. The data acquisition is accomplished by manual scanning along parallel equidistant lines. Upon acquisition, this two-dimensional interference pattern, or hologram, can be focused by the outlined Fourier-based back propagation algorithm into an image that reflects distribution of sources. Experimentally obtained and demonstrated in the paper images with plan view resolution of the order of 1 cm suggest application of the radar in civil engineering and non-destructive testing.",
"title": ""
},
{
"docid": "dde2211bd3e9cceb20cce63d670ebc4c",
"text": "This paper presents the design of a 60 GHz phase shifter integrated with a low-noise amplifier (LNA) and power amplifier (PA) in a 65 nm CMOS technology for phased array systems. The 4-bit digitally controlled RF phase shifter is based on programmable weighted combinations of I/Q paths using digitally controlled variable gain amplifiers (VGAs). With the combination of an LNA, a phase shifter and part of a combiner, each receiver path achieves 7.2 dB noise figure, a 360° phase shift range in steps of approximately 22.5°, an average insertion gain of 12 dB at 61 GHz, a 3 dB-bandwidth of 5.5 GHz and dissipates 78 mW. Consisting of a phase shifter and a PA, one transmitter path achieves a maximum output power of higher than +8.3 dBm, a 360° phase shift range in 22.5° steps, an average insertion gain of 7.7 dB at 62 GHz, a 3 dB-bandwidth of 6.5 GHz and dissipates 168 mW.",
"title": ""
},
{
"docid": "534b98fdecd4e3c53eb2659b7a25b556",
"text": "Nitrite is known to accumulate in wastewater treatment plants (WWTPs) under certain environmental conditions. The protonated form of nitrite, free nitrous acid (FNA), has been found to cause severe inhibition to numerous bioprocesses at WWTPs. However, this inhibitory effect of FNA may possibly be gainfully exploited, such as repressing nitrite oxidizing bacteria (NOB) growth to achieve N removal via the nitrite shortcut. However, the inhibition threshold of FNA to repress NOB (∼0.02 mg HNO2-N/L) may also inhibit other bioprocesses. This paper reviews the inhibitory effects of FNA on nitrifiers, denitrifiers, anammox bacteria, phosphorus accumulating organisms (PAO), methanogens, and other microorganisms in populations used in WWTPs. The possible inhibition mechanisms of FNA on microorganisms are discussed and compared. It is concluded that a single inhibition mechanism is not sufficient to explain the negative impacts of FNA on microbial metabolisms and that multiple inhibitory effects can be generated from FNA. The review would suggest further research is necessary before the FNA inhibition mechanisms can be more effectively used to optimize WWTP bioprocesses. Perspectives on research directions, how the outcomes may be used to manipulate bioprocesses and the overall implications of FNA on WWTPs are also discussed.",
"title": ""
},
{
"docid": "fc2f99fff361e68f154d88da0739bac4",
"text": "Mondor's disease is characterized by thrombophlebitis of the superficial veins of the breast and the chest wall. The list of causes is long. Various types of clothing, mainly tight bras and girdles, have been postulated as causes. We report a case of a 34-year-old woman who referred typical symptoms and signs of Mondor's disease, without other possible risk factors, and showed the cutaneous findings of the tight bra. Therefore, after distinguishing benign causes of Mondor's disease from hidden malignant causes, the clinicians should consider this clinical entity.",
"title": ""
},
{
"docid": "ee4c6084527c6099ea5394aec66ce171",
"text": "Gualzru’s path to the Advertisement World . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Fernando Fernández, Moisés Mart́ınez, Ismael Garćıa-Varea, Jesús Mart́ınez-Gómez, Jose Pérez-Lorenzo, Raquel Viciana, Pablo Bustos, Luis J. Manso, Luis Calderita, Marco Antonio Gutiérrez Giraldo, Pedro Núñez, Antonio Bandera, Adrián Romero-Garcés, Juan Bandera and Rebeca Marfil",
"title": ""
},
{
"docid": "7a0dc88d05401c92581d6fed11aed9a1",
"text": "The technological advancement has been accompanied with many issues to the information: security, privacy, and integrity. Malware is one of the security issues that threaten computer system. Ransomware is a type of malicious software that threatens to publish the victim’s data or perpetually block access to it unless a ransom is paid. This paper investigates the intrusion of WannaCry ransomware and the possible detection of the ransomware using static and dynamic analysis. From the analysis, the features of the malware were extracted and detection has been done using those features. The intrusion detection technique used here in this study is Yara-rule based detection which involves an attempt to define a set of rules which comprises of unique strings which is decoded from the wannacry file.",
"title": ""
},
{
"docid": "277919545c003c0c2a266ace0d70de03",
"text": "Two single-pole, double-throw transmit/receive switches were designed and fabricated with different substrate resistances using a 0.18-/spl mu/m p/sup $/substrate CMOS process. The switch with low substrate resistances exhibits 0.8-dB insertion loss and 17-dBm P/sub 1dB/ at 5.825 GHz, whereas the switch with high substrate resistances has 1-dB insertion loss and 18-dBm P/sub 1dB/. These results suggest that the optimal insertion loss can be achieved with low substrate resistances and 5.8-GHz T/R switches with excellent insertion loss and reasonable power handling capability can be implemented in a 0.18-/spl mu/m CMOS process.",
"title": ""
},
{
"docid": "e0160911f70fa836f64c08f721f6409e",
"text": "Today’s openly available knowledge bases, such as DBpedia, Yago, Wikidata or Freebase, capture billions of facts about the world’s entities. However, even the largest among these (i) are still limited in up-to-date coverage of what happens in the real world, and (ii) miss out on many relevant predicates that precisely capture the wide variety of relationships among entities. To overcome both of these limitations, we propose a novel approach to build on-the-fly knowledge bases in a query-driven manner. Our system, called QKBfly, supports analysts and journalists as well as question answering on emerging topics, by dynamically acquiring relevant facts as timely and comprehensively as possible. QKBfly is based on a semantic-graph representation of sentences, by which we perform three key IE tasks, namely named-entity disambiguation, co-reference resolution and relation extraction, in a light-weight and integrated manner. In contrast to Open IE, our output is canonicalized. In contrast to traditional IE, we capture more predicates, including ternary and higher-arity ones. Our experiments demonstrate that QKBfly can build high-quality, on-the-fly knowledge bases that can readily be deployed, e.g., for the task of ad-hoc question answering. PVLDB Reference Format: D. B. Nguyen, A. Abujabal, N. K. Tran, M. Theobald, and G. Weikum. Query-Driven On-The-Fly Knowledge Base Construction. PVLDB, 11 (1): 66-7 , 2017. DOI: 10.14778/3136610.3136616",
"title": ""
},
{
"docid": "88e193c935a216ea21cb352921deaa71",
"text": "This overview paper outlines our views of actual security of biometric authentication and encryption systems. The attractiveness of some novel approaches like cryptographic key generation from biometric data is in some respect understandable, yet so far has lead to various shortcuts and compromises on security. Our paper starts with an introductory section that is followed by a section about variability of biometric characteristics, with a particular attention paid to biometrics used in large systems. The following sections then discuss the potential for biometric authentication systems, and for the use of biometrics in support of cryptographic applications as they are typically used in computer systems.",
"title": ""
},
{
"docid": "daf1be97c0e1f6d133b58ca899fbd5af",
"text": "Predicting traffic conditions has been recently explored as a way to relieve traffic congestion. Several pioneering approaches have been proposed based on traffic observations of the target location as well as its adjacent regions, but they obtain somewhat limited accuracy due to lack of mining road topology. To address the effect attenuation problem, we propose to take account of the traffic of surrounding locations. We propose an end-to-end framework called DeepTransport, in which Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) are utilized to obtain spatial-temporal traffic information within a transport network topology. In addition, attention mechanism is introduced to align spatial and temporal information. Moreover, we constructed and released a real-world large traffic condition dataset with 5-minute resolution. Our experiments on this dataset demonstrate our method captures the complex relationship in both temporal and spatial domain. It significantly outperforms traditional statistical methods and a state-of-the-art deep learning method.",
"title": ""
},
{
"docid": "35f2e6242ca33c7bb7127cf4111b088a",
"text": "We present a new algorithm for efficiently training n-gram language models on uncertain data, and illustrate its use for semisupervised language model adaptation. We compute the probability that an n-gram occurs k times in the sample of uncertain data, and use the resulting histograms to derive a generalized Katz back-off model. We compare three approaches to semisupervised adaptation of language models for speech recognition of selected YouTube video categories: (1) using just the one-best output from the baseline speech recognizer or (2) using samples from lattices with standard algorithms versus (3) using full lattices with our new algorithm. Unlike the other methods, our new algorithm provides models that yield solid improvements over the baseline on the full test set, and, further, achieves these gains without hurting performance on any of the set of video categories. We show that categories with the most data yielded the largest gains. The algorithm has been released as part of the OpenGrm n-gram library [1].",
"title": ""
},
{
"docid": "7f2403a849690fb12a184ec67b0a2872",
"text": "Deep reinforcement learning achieves superhuman performance in a range of video game environments, but requires that a designer manually specify a reward function. It is often easier to provide demonstrations of a target behavior than to design a reward function describing that behavior. Inverse reinforcement learning (IRL) algorithms can infer a reward from demonstrations in low-dimensional continuous control environments, but there has been little work on applying IRL to high-dimensional video games. In our CNN-AIRL baseline, we modify the state-of-the-art adversarial IRL (AIRL) algorithm to use CNNs for the generator and discriminator. To stabilize training, we normalize the reward and increase the size of the discriminator training dataset. We additionally learn a low-dimensional state representation using a novel autoencoder architecture tuned for video game environments. This embedding is used as input to the reward network, improving the sample efficiency of expert demonstrations. Our method achieves high-level performance on the simple Catcher video game, substantially outperforming the CNN-AIRL baseline. We also score points on the Enduro Atari racing game, but do not match expert performance, highlighting the need for further work.",
"title": ""
}
] |
scidocsrr
|
68f2bf965191c6c8fede96c83c3894a6
|
Interpretable VAEs for nonlinear group factor analysis
|
[
{
"docid": "db75809bcc029a4105dc12c63e2eca76",
"text": "Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal ‘fingerprint’ of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease.",
"title": ""
}
] |
[
{
"docid": "732e72f152075d47f6473910a2e98e9f",
"text": "In this paper we describe the ForSpec Temporal Logic (FTL), the new temporal property-specification logic of ForSpec, Intel’s new formal specification language. The key features of FTL are as follows: it is a l inear temporal logic, based on Pnueli’s LTL, it is based on a rich set of logic al and arithmetical operations on bit vectors to describe state properties, it enables the user to define temporal connectives over time windows, it enables th user to define regular events, which are regular sequences of Boolean events, and then relate such events via special connectives, it enables the user to expre ss roperties about the past, and it includes constructs that enable the user to mode l multiple clock and reset signals, which is useful in the verification of hardwar e design.",
"title": ""
},
{
"docid": "6ba537ef9dd306a3caaba63c2b48c222",
"text": "A lumped-element circuit is proposed to model a coplanar waveguide (CPW) interdigital capacitor (IDC). Closed-form expressions suitable for CAD purposes are given for each element in the circuit. The obtained results for the series capacitance are in good agreement with those available in the literature. In addition, the scattering parameters obtained from the circuit model are compared with those obtained using the full-wave method of moments (MoM) and good agreement is obtained. Moreover, a multilayer feed-forward artificial neural network (ANN) is developed to model the capacitance of the CPW IDC. It is shown that the developed ANN has successfully learned the required task of evaluating the capacitance of the IDC. © 2005 Wiley Periodicals, Inc. Int J RF and Microwave CAE 15: 551–559, 2005.",
"title": ""
},
{
"docid": "03fb57d2810ed42f7fe57f688db6fd57",
"text": "This paper reviews some of the accomplishments in the field of robot dynamics research, from the development of the recursive Newton-Euler algorithm to the present day. Equations and algorithms are given for the most important dynamics computations, expressed in a common notation to facilitate their presentation and comparison.",
"title": ""
},
{
"docid": "c2081b44d63490f2967517558065bdf0",
"text": "The add-on battery pack in plug-in hybrid electric vehicles can be charged from an AC outlet, feed power back to the grid, provide power for electric traction, and capture regenerative energy when braking. Conventionally, three-stage bidirectional converter interfaces are used to fulfil these functions. In this paper, a single stage integrated converter is proposed based on direct AC/DC conversion theory. The proposed converter eliminates the full bridge rectifier, reduces the number of semiconductor switches and high current inductors, and improves the conversion efficiency.",
"title": ""
},
{
"docid": "b8274589a145a94e19329b2640a08c17",
"text": "Since 2004, many nations have started issuing “e-passports” containing an RFID tag that, when powered, broadcast information. It is claimed that these passports are more secure and that our data will be protected from any possible unauthorised attempts to read it. In this paper we show that there is a flaw in one of the passport’s protocols that makes it possible to trace the movements of a particular passport, without having to break the passport’s cryptographic key. All an attacker has to do is to record one session between the passport and a legitimate reader, then by replaying a particular message, the attacker can distinguish that passport from any other. We have implemented our attack and tested it successfully against passports issued by a range of nations.",
"title": ""
},
{
"docid": "6ab38099b989f1d9bdc504c9b50b6bbe",
"text": "Users' search tactics often appear naïve. Much research has endeavored to understand the rudimentary query typically seen in log analyses and user studies. Researchers have tested a number of approaches to supporting query development, including information literacy training and interaction design these have tried and often failed to induce users to use more complex search strategies. To further investigate this phenomenon, we combined established HCI methods with models from cultural studies, and observed customers' mediated searches for books in bookstores. Our results suggest that sophisticated search techniques demand mental models that many users lack.",
"title": ""
},
{
"docid": "3d3c04826eafd366401231aba984419b",
"text": "INTRODUCTION\nDespite the known advantages of objective physical activity monitors (e.g., accelerometers), these devices have high rates of non-wear, which leads to missing data. Objective activity monitors are also unable to capture valuable contextual information about behavior. Adolescents recruited into physical activity surveillance and intervention studies will increasingly have smartphones, which are miniature computers with built-in motion sensors.\n\n\nMETHODS\nThis paper describes the design and development of a smartphone application (\"app\") called Mobile Teen that combines objective and self-report assessment strategies through (1) sensor-informed context-sensitive ecological momentary assessment (CS-EMA) and (2) sensor-assisted end-of-day recall.\n\n\nRESULTS\nThe Mobile Teen app uses the mobile phone's built-in motion sensor to automatically detect likely bouts of phone non-wear, sedentary behavior, and physical activity. The app then uses transitions between these inferred states to trigger CS-EMA self-report surveys measuring the type, purpose, and context of activity in real-time. The end of the day recall component of the Mobile Teen app allows users to interactively review and label their own physical activity data each evening using visual cues from automatically detected major activity transitions from the phone's built-in motion sensors. Major activity transitions are identified by the app, which cues the user to label that \"chunk,\" or period, of time using activity categories.\n\n\nCONCLUSION\nSensor-driven CS-EMA and end-of-day recall smartphone apps can be used to augment physical activity data collected by objective activity monitors, filling in gaps during non-wear bouts and providing additional real-time data on environmental, social, and emotional correlates of behavior. Smartphone apps such as these have potential for affordable deployment in large-scale epidemiological and intervention studies.",
"title": ""
},
{
"docid": "076ad699191bd3df87443f427268222a",
"text": "Robotic systems for disease detection in greenhouses are expected to improve disease control, increase yield, and reduce pesticide application. We present a robotic detection system for combined detection of two major threats of greenhouse bell peppers: Powdery mildew (PM) and Tomato spotted wilt virus (TSWV). The system is based on a manipulator, which facilitates reaching multiple detection poses. Several detection algorithms are developed based on principal component analysis (PCA) and the coefficient of variation (CV). Tests ascertain the system can successfully detect the plant and reach the detection pose required for PM (along the side of the plant), yet it has difficulties in reaching the TSWV detection pose (above the plant). Increasing manipulator work-volume is expected to solve this issue. For TSWV, PCA-based classification with leaf vein removal, achieved the highest classification accuracy (90%) while the accuracy of the CV methods was also high (85% and 87%). For PM, PCA-based pixel-level classification was high (95.2%) while leaf condition classification accuracy was low (64.3%) since it was determined based on the upper side of the leaf while disease symptoms start on its lower side. Exposure of the lower side of the leaf during detection is expected to improve PM condition detection.",
"title": ""
},
{
"docid": "77362cc72d7a09dbbb0f067c11fe8087",
"text": "The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention of academia, industries, and government bodies. Now, it has emerged as the backbone of modern economy by offering subscription-based services anytime, anywhere following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.",
"title": ""
},
{
"docid": "883be979cd5e7d43ded67da1a40427ce",
"text": "This paper reviews the first challenge on single image super-resolution (restoration of rich details in an low resolution image) with focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) but learnable through low and high res train images. Each competition had ∽100 registered participants and 20 teams competed in the final testing phase. They gauge the state-of-the-art in single image super-resolution.",
"title": ""
},
{
"docid": "50906e5d648b7598c307b09975daf2d8",
"text": "Digitization forces industries to adapt to changing market conditions and consumer behavior. Exponential advances in technology, increased consumer power and sharpened competition imply that companies are facing the menace of commoditization. To sustainably succeed in the market, obsolete business models have to be adapted and new business models can be developed. Differentiation and unique selling propositions through innovation as well as holistic stakeholder engagement help companies to master the transformation. To enable companies and start-ups facing the implications of digital change, a tool was created and designed specifically for this demand: the Business Model Builder. This paper investigates the process of transforming the Business Model Builder into a software-supported digitized version. The digital twin allows companies to simulate the iterative adjustment of business models to constantly changing market conditions as well as customer needs on an ongoing basis. The user can modify individual variables, understand interdependencies and see the impact on the result of the business case, i.e. earnings before interest and taxes (EBIT) or economic value added (EVA). The simulation of a business models accordingly provides the opportunity to generate a dynamic view of the business model where any changes of input variables are considered in the result, the business case. Thus, functionality, feasibility and profitability of a business model can be reviewed, tested and validated in the digital simulation tool.",
"title": ""
},
{
"docid": "ec3542685d1b6e71e523cdcafc59d849",
"text": "The goal of subspace segmentation is to partition a set of data drawn from a union of subspace into their underlying subspaces. The performance of spectral clustering based approaches heavily depends on learned data affinity matrices, which are usually constructed either directly from the raw data or from their computed representations. In this paper, we propose a novel method to simultaneously learn the representations of data and the affinity matrix of representation in a unified optimization framework. A novel Augmented Lagrangian Multiplier based algorithm is designed to effectively and efficiently seek the optimal solution of the problem. The experimental results on both synthetic and real data demonstrate the efficacy of the proposed method and its superior performance over the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "d1eb1b18105d79c44dc1b6b3b2c06ee2",
"text": "An implementation of high speed AES algorithm based on FPGA is presented in this paper in order to improve the safety of data in transmission. The mathematic principle, encryption process and logic structure of AES algorithm are introduced. So as to reach the porpose of improving the system computing speed, the pipelining and papallel processing methods were used. The simulation results show that the high-speed AES encryption algorithm implemented correctly. Using the method of AES encryption the data could be protected effectively.",
"title": ""
},
{
"docid": "42961b66e41a155edb74cc4ab5493c9c",
"text": "OBJECTIVE\nTo determine the preventive effect of manual lymph drainage on the development of lymphoedema related to breast cancer.\n\n\nDESIGN\nRandomised single blinded controlled trial.\n\n\nSETTING\nUniversity Hospitals Leuven, Leuven, Belgium.\n\n\nPARTICIPANTS\n160 consecutive patients with breast cancer and unilateral axillary lymph node dissection. The randomisation was stratified for body mass index (BMI) and axillary irradiation and treatment allocation was concealed. Randomisation was done independently from recruitment and treatment. Baseline characteristics were comparable between the groups.\n\n\nINTERVENTION\nFor six months the intervention group (n = 79) performed a treatment programme consisting of guidelines about the prevention of lymphoedema, exercise therapy, and manual lymph drainage. The control group (n = 81) performed the same programme without manual lymph drainage.\n\n\nMAIN OUTCOME MEASURES\nCumulative incidence of arm lymphoedema and time to develop arm lymphoedema, defined as an increase in arm volume of 200 mL or more in the value before surgery.\n\n\nRESULTS\nFour patients in the intervention group and two in the control group were lost to follow-up. At 12 months after surgery, the cumulative incidence rate for arm lymphoedema was comparable between the intervention group (24%) and control group (19%) (odds ratio 1.3, 95% confidence interval 0.6 to 2.9; P = 0.45). The time to develop arm lymphoedema was comparable between the two group during the first year after surgery (hazard ratio 1.3, 0.6 to 2.5; P = 0.49). The sample size calculation was based on a presumed odds ratio of 0.3, which is not included in the 95% confidence interval. This odds ratio was calculated as (presumed cumulative incidence of lymphoedema in intervention group/presumed cumulative incidence of no lymphoedema in intervention group)×(presumed cumulative incidence of no lymphoedema in control group/presumed cumulative incidence of lymphoedema in control group) or (10/90)×(70/30).\n\n\nCONCLUSION\nManual lymph drainage in addition to guidelines and exercise therapy after axillary lymph node dissection for breast cancer is unlikely to have a medium to large effect in reducing the incidence of arm lymphoedema in the short term. Trial registration Netherlands Trial Register No NTR 1055.",
"title": ""
},
{
"docid": "43b2721bb2fb4e50e855c69ea147ffd1",
"text": "Bladder tumours represent a heterogeneous group of cancers. The natural history of these bladder cancers is that of recurrence of disease and progression to higher grade and stage disease. Furthermore, recurrence and progression rates of superficial bladder cancer vary according to several tumour characteristics, mainly tumour grade and stage. The most recent World Health Organization (WHO) classification of tumours of the urinary system includes urothelial flat lesions: flat hyperplasia, dysplasia and carcinoma in situ. The papillary lesions are broadly subdivided into benign (papilloma and inverted papilloma), papillary urothelial neoplasia of low malignant potential (PUNLMP) and non-invasive papillary carcinoma (low or high grade). The initial proposal of the 2004 WHO has been achieved, with most reports supporting that categories are better defined than in previous classifications. An additional important issue is that PUNLMP, the most controversial proposal of the WHO in 2004, has lower malignant behaviour than low-grade carcinoma. Whether PUNLMP remains a clinically useful category, or whether this category should be expanded to include all low-grade, stage Ta lesions (PUNLMP and low-grade papillary carcinoma) as a wider category of less aggressive tumours not labelled as cancer, needs to be discussed in the near future. This article summarizes the recent literature concerning important issues in the pathology and the clinical management of patients with bladder urothelial carcinoma. Emphasis is placed on clinical presentation, the significance of haematuria, macroscopic appearance (papillary, solid or mixed, single or multiple) and synchronous or metachronous presentation (field disease vs monoclonal disease with seeding), classification and microscopic variations of bladder cancer with clinical significance, TNM distribution and the pathological grading according to the 2004 WHO proposal.",
"title": ""
},
{
"docid": "6b9663085968c5483c9a2871b4807524",
"text": "E-Commerce is one of the crucial trading methods worldwide. Hence, it is important to understand consumers’ online purchase intention. This research aims to examine factors that influence consumers’ online purchase intention among university students in Malaysia. Quantitative research approach has been adapted in this research by distributing online questionnaires to 250 Malaysian university students aged between 20-29 years old, who possess experience in online purchases. Findings of this research have discovered that trust, perceived usefulness and subjective norm are the significant factors in predicting online purchase intention. However, perceived ease of use and perceived enjoyment are not significant in predicting the variance in online purchase intention. The findings also revealed that subjective norm is the most significant predicting factor on online purchase intention among university students in Malaysia. Findings of this research will provide online marketers with a better understanding on online purchase intention which enable them to direct effective online marketing strategies.",
"title": ""
},
{
"docid": "e715b87fc145d80dbab179abcc85c14b",
"text": "This paper proposes an efficient multi-view 3D reconstruction method based on randomization and propagation scheme. Our method progressively refines a 3D model of a given scene by randomly perturbing the initial guess of 3D points and propagating photo-consistent ones to their neighbors. While finding local optima is an ordinary method for better photo-consistency, our randomization and propagation takes lucky matchings to spread better points replacing old ones for reducing the computational complexity. Experiments show favorable efficiency of the proposed method accompanied by competitive accuracy with the state-of-the-art methods.",
"title": ""
},
{
"docid": "4d405c1c2919be01209b820f61876d57",
"text": "This paper presents a single-pole eight-throw switch, based on an eight-way power divider, using substrate integrate waveguide(SIW) technology. Eight sectorial-lines are formed by inserting radial slot-lines on the top plate of SIW power divider. Each sectorial-line can be controlled independently with high level of isolation. The switching is accomplished by altering the capacitance of the varactor on the line, which causes different input impedances to be seen at a central probe to each sectorial line. The proposed structure works as a switching circuit and an eight-way power divider depending on the bias condition. The change in resonant frequency and input impedance are estimated by adapting a tapered transmission line model. The detailed design, fabrication, and measurement are discussed.",
"title": ""
},
{
"docid": "60c976cb53d5128039e752e5f797f110",
"text": "This essay presents and discusses the developing role of virtual and augmented reality technologies in education. Addressing the challenges in adapting such technologies to focus on improving students’ learning outcomes, the author discusses the inclusion of experiential modes as a vehicle for improving students’ knowledge acquisition. Stakeholders in the educational role of technology include students, faculty members, institutions, and manufacturers. While the benefits of such technologies are still under investigation, the technology landscape offers opportunities to enhance face-to-face and online teaching, including contributions in the understanding of abstract concepts and training in real environments and situations. Barriers to technology use involve limited adoption of augmented and virtual reality technologies, and, more directly, necessary training of teachers in using such technologies within meaningful educational contexts. The author proposes a six-step methodology to aid adoption of these technologies as basic elements within the regular education: training teachers; developing conceptual prototypes; teamwork involving the teacher, a technical programmer, and an educational architect; and producing the experience, which then provides results in the subsequent two phases wherein teachers are trained to apply augmentedand virtual-reality solutions within their teaching methodology using an available subject-specific experience and then finally implementing the use of the experience in a regular subject with students. The essay concludes with discussion of the business opportunities facing virtual reality in face-to-face education as well as augmented and virtual reality in online education.",
"title": ""
},
{
"docid": "7ef3829b1fab59c50f08265d7f4e0132",
"text": "Muscle glycogen is the predominant energy source for soccer match play, though its importance for soccer training (where lower loads are observed) is not well known. In an attempt to better inform carbohydrate (CHO) guidelines, we quantified training load in English Premier League soccer players (n = 12) during a one-, two- and three-game week schedule (weekly training frequency was four, four and two, respectively). In a one-game week, training load was progressively reduced (P < 0.05) in 3 days prior to match day (total distance = 5223 ± 406, 3097 ± 149 and 2912 ± 192 m for day 1, 2 and 3, respectively). Whilst daily training load and periodisation was similar in the one- and two-game weeks, total accumulative distance (inclusive of both match and training load) was higher in a two-game week (32.5 ± 4.1 km) versus one-game week (25.9 ± 2 km). In contrast, daily training total distance was lower in the three-game week (2422 ± 251 m) versus the one- and two-game weeks, though accumulative weekly distance was highest in this week (35.5 ± 2.4 km) and more time (P < 0.05) was spent in speed zones >14.4 km · h(-1) (14%, 18% and 23% in the one-, two- and three-game weeks, respectively). Considering that high CHO availability improves physical match performance but high CHO availability attenuates molecular pathways regulating training adaptation (especially considering the low daily customary loads reported here, e.g., 3-5 km per day), we suggest daily CHO intake should be periodised according to weekly training and match schedules.",
"title": ""
}
] |
scidocsrr
|
efa1a1abdec28d20e35262578d71ae34
|
Neighborhood Mixture Model for Knowledge Base Completion
|
[
{
"docid": "a5b7253f56a487552ba3b0ce15332dd1",
"text": "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (Socher et al., 2013) and TransE (Bordes et al., 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and/or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2% vs. 54.7% by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as BornInCitypa, bq ^ CityInCountrypb, cq ùñ Nationalitypa, cq. We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics, and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-ofthe-art confidence-based rule mining approach in mining horn rules that involve compositional reasoning.",
"title": ""
},
{
"docid": "95e2a8e2d1e3a1bbfbf44d20f9956cf0",
"text": "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to stateof-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: //github.com/mrlyk423/relation extraction.",
"title": ""
},
{
"docid": "8093219e7e2b4a7067f8d96118a5ea93",
"text": "We model knowledge graphs for their completion by encoding each entity and relation into a numerical space. All previous work including Trans(E, H, R, and D) ignore the heterogeneity (some relations link many entity pairs and others do not) and the imbalance (the number of head entities and that of tail entities in a relation could be different) of knowledge graphs. In this paper, we propose a novel approach TranSparse to deal with the two issues. In TranSparse, transfer matrices are replaced by adaptive sparse matrices, whose sparse degrees are determined by the number of entities (or entity pairs) linked by relations. In experiments, we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages. We evaluate our approach on triplet classification and link prediction tasks. Experimental results show that TranSparse outperforms Trans(E, H, R, and D) significantly, and achieves state-ofthe-art performance.",
"title": ""
},
{
"docid": "7072c7b94fc6376b13649ec748612705",
"text": "Performing link prediction in Knowledge Bases (KBs) with embedding-based models, like with the model TransE (Bordes et al., 2013) which represents relationships as translations in the embedding space, have shown promising results in recent years. Most of these works are focused on modeling single relationships and hence do not take full advantage of the graph structure of KBs. In this paper, we propose an extension of TransE that learns to explicitly model composition of relationships via the addition of their corresponding translation vectors. We show empirically that this allows to improve performance for predicting single relationships as well as compositions of pairs of them.",
"title": ""
}
] |
[
{
"docid": "b0840d44b7ec95922eeed4ef71b338f9",
"text": "Decoding DNA symbols using next-generation sequencers was a major breakthrough in genomic research. Despite the many advantages of next-generation sequencers, e.g., the high-throughput sequencing rate and relatively low cost of sequencing, the assembly of the reads produced by these sequencers still remains a major challenge. In this review, we address the basic framework of next-generation genome sequence assemblers, which comprises four basic stages: preprocessing filtering, a graph construction process, a graph simplification process, and postprocessing filtering. Here we discuss them as a framework of four stages for data analysis and processing and survey variety of techniques, algorithms, and software tools used during each stage. We also discuss the challenges that face current assemblers in the next-generation environment to determine the current state-of-the-art. We recommend a layered architecture approach for constructing a general assembler that can handle the sequences generated by different sequencing platforms.",
"title": ""
},
{
"docid": "b798103f64ec684a4d0f530c7add8eeb",
"text": "Self-adaptation is a key feature of evolutionary algorithms (EAs). Although EAs have been used successfully to solve a wide variety of problems, the performance of this technique depends heavily on the selection of the EA parameters. Moreover, the process of setting such parameters is considered a time-consuming task. Several research works have tried to deal with this problem; however, the construction of algorithms letting the parameters adapt themselves to the problem is a critical and open problem of EAs. This work proposes a novel ensemble machine learning method that is able to learn rules, solve problems in a parallel way and adapt parameters used by its components. A self-adaptive ensemble machine consists of simultaneously working extended classifier systems (XCSs). The proposed ensemble machine may be treated as a meta classifier system. A new self-adaptive XCS-based ensemble machine was compared with two other XCSbased ensembles in relation to one-step binary problems: Multiplexer, One Counts, Hidden Parity, and randomly generated Boolean functions, in a noisy version as well. Results of the experiments have shown the ability of the model to adapt the mutation rate and the tournament size. The results are analyzed in detail.",
"title": ""
},
{
"docid": "a856b4fc2ec126ee3709d21ff4c3c49c",
"text": "In this work, glass fiber reinforced epoxy composites were fabricated. Epoxy resin was used as polymer matrix material and glass fiber was used as reinforcing material. The main focus of this work was to fabricate this composite material by the cheapest and easiest way. For this, hand layup method was used to fabricate glass fiber reinforced epoxy resin composites and TiO2 material was used as filler material. Six types of compositions were made with and without filler material keeping the glass fiber constant and changing the epoxy resin with respect to filler material addition. Mechanical properties such as tensile, impact, hardness, compression and flexural properties were investigated. Additionally, microscopic analysis was done. The experimental investigations show that without filler material the composites exhibit overall lower value in mechanical properties than with addition of filler material in the composites. The results also show that addition of filler material increases the mechanical properties but highest values were obtained for different filler material addition. From the obtained results, it was observed that composites filled by 15wt% of TiO2 particulate exhibited maximum tensile strength, 20wt% of TiO2 particulate exhibited maximum impact strength, 25wt% of TiO2 particulate exhibited maximum hardness value, 25wt% of TiO2 particulate exhibited maximum compressive strength, 20wt% of TiO2 particulate exhibited maximum flexural strength.",
"title": ""
},
{
"docid": "9b7ca792de0889191567a47410cb2970",
"text": "P2P online lending platforms have become increasingly developed. However, these platforms may suffer a serious loss caused by default behaviors of borrowers. In this paper, we present an effective default behavior prediction model to reduce default risk in P2P lending. The proposed model uses mobile phone usage data, which are generated from widely used mobile phones. We extract features from five aspects, including consumption, social network, mobility, socioeconomic, and individual attribute. Based on these features, we propose a joint decision model, which makes a default risk judgment through combining Random Forests with Light Gradient Boosting Machine. Validated by a real-world dataset collected by a mobile carrier and a P2P lending company in China, the proposed model not only demonstrates satisfactory performance on the evaluation metrics but also outperforms the existing methods in this area. Based on these results, the proposed model implies the high feasibility and potential to be adopted in real-world P2P online lending platforms.",
"title": ""
},
{
"docid": "6f1e71399e5786eb9c3923a1e967cd8f",
"text": "This Working Paper should not be reported as representing the views of the IMF. The views expressed in this Working Paper are those of the author(s) and do not necessarily represent those of the IMF or IMF policy. Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Using a dataset which breaks down FDI flows into primary, secondary and tertiary sector investments and a GMM dynamic approach to address concerns about endogeneity, the paper analyzes various macroeconomic, developmental, and institutional/qualitative determinants of FDI in a sample of emerging market and developed economies. While FDI flows into the primary sector show little dependence on any of these variables, secondary and tertiary sector investments are affected in different ways by countries’ income levels and exchange rate valuation, as well as development indicators such as financial depth and school enrollment, and institutional factors such as judicial independence and labor market flexibility. Finally, we find that the effect of these factors often differs between advanced and emerging economies. JEL Classification Numbers: F21, F23",
"title": ""
},
{
"docid": "0537a00983f91942099d93a5a2c22195",
"text": "Conflicting evidence exists regarding the optimal treatment for abscess complicating acute appendicitis. The objective of this study is to compare immediate appendectomy (IMM APP) versus expectant management (EXP MAN) including percutaneous drainage with or without interval appendectomy to treat periappendiceal abscess. One hundred four patients with acute appendicitis complicated by periappendiceal abscess were identified. We compared 36 patients who underwent IMM APP with 68 patients who underwent EXP MAN. Outcome measures included morbidity and length of hospital stay. The groups were similar with regard to age (30.6 +/- 12.3 vs. 34.8 +/- 13.5 years), gender (61% vs. 62% males), admission WBC count (17.5 +/- 5.1 x 10(3) vs. 17.0 +/- 4.8 x 10(3) cells/dL), and admission temperature (37.9 +/- 1.2 vs. 37.8 +/- 0.9 degrees F). IMM APP patients had a higher rate of complications than EXP MAN patients at initial hospitalization (58% vs. 15%, P < 0.001) and for all hospitalizations (67% vs. 24%, P < 0.001). The IMM APP group also had a longer initial (14.8 +/- 16.1 vs. 9.0 +/- 4.8 days, P = 0.01) and overall hospital stay (15.3 +/- 16.2 vs. 10.7 +/- 5.4 days, P = 0.04). We conclude that percutaneous drainage and interval appendectomy is preferable to immediate appendectomy for treatment of appendiceal abscess because it leads to a lower complication rate and a shorter hospital stay.",
"title": ""
},
{
"docid": "0c4f02b3b361d60da1aec0f0c100dcf9",
"text": "Architecture Compliance Checking (ACC) is an approach to verify the conformance of implemented program code to high-level models of architectural design. ACC is used to prevent architectural erosion during the development and evolution of a software system. Static ACC, based on static software analysis techniques, focuses on the modular architecture and especially on rules constraining the modular elements. A semantically rich modular architecture (SRMA) is expressive and may contain modules with different semantics, like layers and subsystems, constrained by rules of different types. To check the conformance to an SRMA, ACC-tools should support the module and rule types used by the architect. This paper presents requirements regarding SRMA support and an inventory of common module and rule types, on which basis eight commercial and non-commercial tools were tested. The test results show large differences between the tools, but all could improve their support of SRMA, what might contribute to the adoption of ACC in practice.",
"title": ""
},
{
"docid": "773b5914dce6770b2db707ff4536c7f6",
"text": "This paper presents an automatic drowsy driver monitoring and accident prevention system that is based on monitoring the changes in the eye blink duration. Our proposed method detects visual changes in eye locations using the proposed horizontal symmetry feature of the eyes. Our new method detects eye blinks via a standard webcam in real-time at 110fps for a 320×240 resolution. Experimental results in the JZU [3] eye-blink database showed that the proposed system detects eye blinks with a 94% accuracy with a 1% false positive rate.",
"title": ""
},
{
"docid": "7abe1fd1b0f2a89bf51447eaef7aa989",
"text": "End users increasingly expect ubiquitous connectivity while on the move. With a variety of wireless access technologies available, we expect to always be connected to the technology that best matches our performance goals and price points. Meanwhile, sophisticated onboard units (OBUs) enable geolocation and complex computation in support of handover. In this paper, we present an overview of vertical handover techniques and propose an algorithm empowered by the IEEE 802.21 standard, which considers the particularities of the vehicular networks (VNs), the surrounding context, the application requirements, the user preferences, and the different available wireless networks [i.e., Wireless Fidelity (Wi-Fi), Worldwide Interoperability for Microwave Access (WiMAX), and Universal Mobile Telecommunications System (UMTS)] to improve users' quality of experience (QoE). Our results demonstrate that our approach, under the considered scenario, is able to meet application requirements while ensuring user preferences are also met.",
"title": ""
},
{
"docid": "5bd168673acca10828a03cbfd80e8932",
"text": "Since a biped humanoid inherently suffers from instability and always risks tipping itself over, ensuring high stability and reliability of walk is one of the most important goals. This paper proposes a walk control consisting of a feedforward dynamic pattern and a feedback sensory reflex. The dynamic pattern is a rhythmic and periodic motion, which satisfies the constraints of dynamic stability and ground conditions, and is generated assuming that the models of the humanoid and the environment are known. The sensory reflex is a simple, but rapid motion programmed in respect to sensory information. The sensory reflex we propose in this paper consists of the zero moment point reflex, the landing-phase reflex, and the body-posture reflex. With the dynamic pattern and the sensory reflex, it is possible for the humanoid to walk rhythmically and to adapt itself to the environmental uncertainties. The effectiveness of our proposed method was confirmed by dynamic simulation and walk experiments on an actual 26-degree-of-freedom humanoid.",
"title": ""
},
{
"docid": "29a0e5ddd495b46b73ea71b1983fd73b",
"text": "Data extraction from the web pages is the process of analyzing and retrieving relevant data out of the data sources (usually unstructured or poorly structure) in a specific pattern for further processing, involves addition of metadata and data integration details for further process in the data workflow. This survey describes overview of the different web data extraction and data alignment techniques. Extraction techniques are DeLa, DEPTA, ViPER, and ViNT. Data alignment techniques are Pairwise QRR alignment, Holistic alignment, Nested structure processing. Query Result pages are generated by using Web database based on Users Query. The data from these query result pages should be automatically extracted which is very important for many applications, such as data integration, which are needed to cooperate with multiple web databases. New method is proposed for data extraction t that combines both tag and value similarity. It automatically extracts data from query result pages by first identifying and segmenting the query result records (QRRs) in the query result pages and then aligning the segmented QRRs into a table. In which the data values from the same attribute are put into the same column. Data region identification method identify the noncontiguous QRRs that have the same parents according to their tag similarities. Specifically, we propose new techniques to handle the case when the QRRs are not contiguous, which may be due to presence of auxiliary information, such as a comment, recommendation or advertisement, and for handling any nested structure that may exist in the QRRs.",
"title": ""
},
{
"docid": "1f752034b5307c0118d4156d0b95eab3",
"text": "Importance\nTherapy-related myeloid neoplasms are a potentially life-threatening consequence of treatment for autoimmune disease (AID) and an emerging clinical phenomenon.\n\n\nObjective\nTo query the association of cytotoxic, anti-inflammatory, and immunomodulating agents to treat patients with AID with the risk for developing myeloid neoplasm.\n\n\nDesign, Setting, and Participants\nThis retrospective case-control study and medical record review included 40 011 patients with an International Classification of Diseases, Ninth Revision, coded diagnosis of primary AID who were seen at 2 centers from January 1, 2004, to December 31, 2014; of these, 311 patients had a concomitant coded diagnosis of myelodysplastic syndrome (MDS) or acute myeloid leukemia (AML). Eighty-six cases met strict inclusion criteria. A case-control match was performed at a 2:1 ratio.\n\n\nMain Outcomes and Measures\nOdds ratio (OR) assessment for AID-directed therapies.\n\n\nResults\nAmong the 86 patients who met inclusion criteria (49 men [57%]; 37 women [43%]; mean [SD] age, 72.3 [15.6] years), 55 (64.0%) had MDS, 21 (24.4%) had de novo AML, and 10 (11.6%) had AML and a history of MDS. Rheumatoid arthritis (23 [26.7%]), psoriasis (18 [20.9%]), and systemic lupus erythematosus (12 [14.0%]) were the most common autoimmune profiles. Median time from onset of AID to diagnosis of myeloid neoplasm was 8 (interquartile range, 4-15) years. A total of 57 of 86 cases (66.3%) received a cytotoxic or an immunomodulating agent. In the comparison group of 172 controls (98 men [57.0%]; 74 women [43.0%]; mean [SD] age, 72.7 [13.8] years), 105 (61.0%) received either agent (P = .50). Azathioprine sodium use was observed more frequently in cases (odds ratio [OR], 7.05; 95% CI, 2.35- 21.13; P < .001). Notable but insignificant case cohort use among cytotoxic agents was found for exposure to cyclophosphamide (OR, 3.58; 95% CI, 0.91-14.11) followed by mitoxantrone hydrochloride (OR, 2.73; 95% CI, 0.23-33.0). Methotrexate sodium (OR, 0.60; 95% CI, 0.29-1.22), mercaptopurine (OR, 0.62; 95% CI, 0.15-2.53), and mycophenolate mofetil hydrochloride (OR, 0.66; 95% CI, 0.21-2.03) had favorable ORs that were not statistically significant. No significant association between a specific length of time of exposure to an agent and the drug's category was observed.\n\n\nConclusions and Relevance\nIn a large population with primary AID, azathioprine exposure was associated with a 7-fold risk for myeloid neoplasm. The control and case cohorts had similar systemic exposures by agent category. No association was found for anti-tumor necrosis factor agents. Finally, no timeline was found for the association of drug exposure with the incidence in development of myeloid neoplasm.",
"title": ""
},
{
"docid": "82af21c1e687d7303c06cef4b66f1fb4",
"text": "Strategic planning and talent management in large enterprises composed of knowledge workers requires complete, accurate, and up-to-date representation of the expertise of employees in a form that integrates with business processes. Like other similar organizations operating in dynamic environments, the IBM Corporation strives to maintain such current and correct information, specifically assessments of employees against job roles and skill sets from its expertise taxonomy. In this work, we deploy an analytics-driven solution that infers the expertise of employees through the mining of enterprise and social data that is not specifically generated and collected for expertise inference. We consider job role and specialty prediction and pose them as supervised classification problems. We evaluate a large number of feature sets, predictive models and postprocessing algorithms, and choose a combination for deployment. This expertise analytics system has been deployed for key employee population segments, yielding large reductions in manual effort and the ability to continually and consistently serve up-to-date and accurate data for several business functions. This expertise management system is in the process of being deployed throughout the corporation.",
"title": ""
},
{
"docid": "b1d348e2095bd7054cc11bd84eb8ccdc",
"text": "Epidermolysis bullosa (EB) is a group of inherited, mechanobullous disorders caused by mutations in various structural proteins in the skin. There have been several advances in the classification of EB since it was first introduced in the late 19th century. We now recognize four major types of EB, depending on the location of the target proteins and level of the blisters: EB simplex (epidermolytic), junctional EB (lucidolytic), dystrophic EB (dermolytic), and Kindler syndrome (mixed levels of blistering). This contribution will summarize the most recent classification and discuss the molecular basis, target genes, and proteins involved. We have also included new subtypes, such as autosomal dominant junctional EB and autosomal recessive EB due to mutations in the dystonin (DST) gene, which encodes the epithelial isoform of bullouspemphigoid antigen 1. The main laboratory diagnostic techniques-immunofluorescence mapping, transmission electron microscopy, and mutation analysis-will also be discussed. Finally, the clinical characteristics of the different major EB types and subtypes will be reviewed.",
"title": ""
},
{
"docid": "fe3aa62af7f769d25d51c60444be0907",
"text": "Neurophysiological recording techniques are helping provide marketers and salespeople with an increased understanding of their targeted customers. Such tools are also providing information systems researchers more insight to their end-users. These techniques may also be used introspectively to help researchers learn more about their own techniques. Here we look to help salespeople have an increased understanding of their selling methods by looking through their eyes instead of through the eyes of the customer. A preliminary study is presented using electroencephalography of three sales experts while watching the first moments of a video of a sales pitch to understand mental processing during the approach phase. Follow on work is described and considerations for interpreting data in light of individual differences.",
"title": ""
},
{
"docid": "e6f5c58910c877ade6594e206ac19e02",
"text": "Model-based compression is an effective, facilitating, and expanded model of neural network models with limited computing and low power. However, conventional models of compression techniques utilize crafted features [2,3,12] and explore specialized areas for exploration and design of large spaces in terms of size, speed, and accuracy, which usually have returns Less and time is up. This paper will effectively analyze deep auto compression (ADC) and reinforcement learning strength in an effective sample and space design, and improve the compression quality of the model. The results of compression of the advanced model are obtained without any human effort and in a completely automated way. With a 4fold reduction in FLOP, the accuracy of 2.8% is higher than the manual compression model for VGG-16 in ImageNet.",
"title": ""
},
{
"docid": "7ab5f56b615848ba5d8dc2f149fd8bf2",
"text": "At present, most outdoor video-surveillance, driver-assistance and optical remote sensing systems have been designed to work under good visibility and weather conditions. Poor visibility often occurs in foggy or hazy weather conditions and can strongly influence the accuracy or even the general functionality of such vision systems. Consequently, it is important to import actual weather-condition data to the appropriate processing mode. Recently, significant progress has been made in haze removal from a single image [1,2]. Based on the hazy weather classification, specialized approaches, such as a dehazing process, can be employed to improve recognition. Figure 1 shows a sample processing flow of our dehazing program.",
"title": ""
},
{
"docid": "855b80a4dd22e841c8a929b20eb6e002",
"text": "Accuracy and stability of Kinect-like depth data is limited by its generating principle. In order to serve further applications with high quality depth, the preprocessing on depth data is essential. In this paper, we analyze the characteristics of the Kinect-like depth data by examing its generation principle and propose a spatial-temporal denoising algorithm taking into account its special properties. Both the intra-frame spatial correlation and the inter-frame temporal correlation are exploited to fill the depth hole and suppress the depth noise. Moreover, a divisive normalization approach is proposed to assist the noise filtering process. The 3D rendering results of the processed depth demonstrates that the lost depth is recovered in some hole regions and the noise is suppressed with depth features preserved.",
"title": ""
},
{
"docid": "0b51b727f39a9c8ea6580794c6f1e2bb",
"text": "Many researchers proposed different methodologies for the text skew estimation in binary images/gray scale images. They have been used widely for the skew identification of the printed text. There exist so many ways algorithms for detecting and correcting a slant or skew in a given document or image. Some of them provide better accuracy but are slow in speed, others have angle limitation drawback. So a new technique for skew detection in the paper, will reduce the time and cost. Keywords— Document image processing, Skew detection, Nearest-neighbour approach, Moments, Hough transformation.",
"title": ""
},
{
"docid": "6c406578abde6104439470f9e3187c7e",
"text": "Extended superficial musculoaponeurotic system (SMAS) rhytidectomy has been advocated for improving nasolabial fold prominence. Extended subSMAS dissection requires release of the SMAS typically from the upper lateral border of the zygomaticus major muscle and continued dissection medial to this muscle. This maneuver releases the zygomatic retaining ligaments and achieves more effective mobilization and elevation of the ptotic malar soft tissues, resulting in more dramatic effacement of the nasolabial crease. Despite its presumed advantages, few reports have suggested greater risk of nerve injury with this technique compared with other limited sub-SMAS dissection techniques. Although the caudal extent of the zygomaticus muscle insertion to the modiolus of the mouth has been well delineated, the more cephalad origin has been vaguely defined. We attempted to define anatomic landmarks which could serve to more reliably identify the upper extent of the lateral zygomaticus major muscle border and more safely guide extended sub-SMAS dissections. Bilateral zygomaticus major muscles were identified in 13 cadaver heads with 4.0-power loupe magnification. Bony anatomic landmarks were identified that would predict the location of the lateral border of the zygomaticus major muscle. The upper extent of the lateral border of the zygomaticus major muscle was defined in relation to an oblique line extending from the mental protuberance to the notch defined at the most anterior-inferior aspect of the temporal fossa at the junction of the frontal process and temporal process of the zygomatic bone. The lateral border of the zygomaticus major muscle was observed 4.4 +/- 2.2 mm lateral and parallel to this line. More accurate prediction of the location of the upper extent of the lateral border of the zygomaticus major muscle using the above bony anatomic landmarks may limit nerve injury during SMAS dissections in extended SMAS rhytidectomy.",
"title": ""
}
] |
scidocsrr
|
078e290193c187b5f0b617bf45d0ad0c
|
Heart Disease Prediction System Using Data Mining and Hybrid Intelligent Techniques: A Review
|
[
{
"docid": "793bbf998dd28f0ed1973d13ca67cce6",
"text": "— The heart disease accounts to be the leading cause of death worldwide. It is difficult for medical practitioners to predict the heart attack as it is a complex task that requires experience and knowledge. The health sector today contains hidden information that can be important in making decisions. Data mining algorithms such as J48, Naïve Bayes, REPTREE, CART, and Bayes Net are applied in this research for predicting heart attacks. The research result shows prediction accuracy of 99%. Data mining enable the health sector to predict patterns in the dataset.",
"title": ""
},
{
"docid": "63a58b3b6eb46cdd92b9c241b1670926",
"text": "The Healthcare industry is generally "information rich", but unfortunately not all the data are mined which is required for discovering hidden patterns & effective decision making. Advanced data mining techniques are used to discover knowledge in database and for medical research, particularly in Heart disease prediction. This paper has analysed prediction systems for Heart disease using more number of input attributes. The system uses medical terms such as sex, blood pressure, cholesterol like 13 attributes to predict the likelihood of patient getting a Heart disease. Until now, 13 attributes are used for prediction. This research paper added two more attributes i. e. obesity and smoking. The data mining classification techniques, namely Decision Trees, Naive Bayes, and Neural Networks are analyzed on Heart disease database. The performance of these techniques is compared, based on accuracy. As per our results accuracy of Neural Networks, Decision Trees, and Naive Bayes are 100%, 99. 62%, and 90. 74% respectively. Our analysis shows that out of these three classification models Neural Networks predicts Heart disease with highest accuracy.",
"title": ""
},
{
"docid": "34118709a36ba09a822202753cbff535",
"text": "Our healthcare sector daily collects a huge data including clinical examination, vital parameters, investigation reports, treatment follow-up and drug decisions etc. But very unfortunately it is not analyzed and mined in an appropriate way. The Health care industry collects the huge amounts of health care data which unfortunately are not “mined” to discover hidden information for effective decision making for health care practitioners. Data mining refers to using a variety of techniques to identify suggest of information or decision making knowledge in database and extracting these in a way that they can put to use in areas such as decision support , Clustering ,Classification and Prediction. This paper has developed a Computer-Based Clinical Decision Support System for Prediction of Heart Diseases (CCDSS) using Naïve Bayes data mining algorithm. CCDSS can answer complex “what if” queries which traditional decision support systems cannot. Using medical profiles such as age, sex, spO2,chest pain type, heart rate, blood pressure and blood sugar it can predict the likelihood of patients getting a heart disease. CCDSS is Webbased, user-friendly, scalable, reliable and expandable. It is implemented on the PHPplatform. Keywords—Computer-Based Clinical Decision Support System(CCDSS), Heart disease, Data mining, Naïve Bayes.",
"title": ""
}
] |
[
{
"docid": "242030243133cd57d6cc62be154fd6ec",
"text": "| The inverse kinematics of serial manipulators is a central problem in the automatic control of robot manipula-tors. The main interest has been in inverse kinematics of a six revolute (6R) jointed manipulator with arbitrary geometry. It has been recently shown that the joints of a general 6R manipulator can orient themselves in 16 diierent con-gurations (at most), for a given pose of the end{eeector. However, there are no good practical solutions available, which give a level of performance expected of industrial ma-nipulators. In this paper, we present an algorithm and implementation for eecient inverse kinematics for a general 6R manipulator. When stated mathematically, the problem reduces to solving a system of multivariate equations. We make use of the algebraic properties of the system and the symbolic formulation used for reducing the problem to solving a univariate polynomial. However, the polynomial is expressed as a matrix determinant and its roots are computed by reducing to an eigenvalue problem. The other roots of the multivariate system are obtained by computing eigenvectors and substitution. The algorithm involves symbolic preprocessing, matrix computations and a variety of other numerical techniques. The average running time of the algorithm, for most cases, is 11 milliseconds on an IBM RS/6000 workstation. This approach is applicable to inverse kinematics of all serial manipulators.",
"title": ""
},
{
"docid": "a6def37312896cf470360b2c2282af69",
"text": "The use of herbal medicinal products and supplements has increased during last decades. At present, some herbs are used to enhance muscle strength and body mass. Emergent evidence suggests that the health benefits from plants are attributed to their bioactive compounds such as Polyphenols, Terpenoids, and Alkaloids which have several physiological effects on the human body. At times, manufacturers launch numerous products with banned ingredient inside with inappropriate amounts or fake supplement inducing harmful side effect. Unfortunately up to date, there is no guarantee that herbal supplements are safe for anyone to use and it has not helped to clear the confusion surrounding the herbal use in sport field especially. Hence, the purpose of this review is to provide guidance on the efficacy and side effect of most used plants in sport. We have identified plants according to the following categories: Ginseng, alkaloids, and other purported herbal ergogenics such as Tribulus Terrestris, Cordyceps Sinensis. We found that most herbal supplement effects are likely due to activation of the central nervous system via stimulation of catecholamines. Ginseng was used as an endurance performance enhancer, while alkaloids supplementation resulted in improvements in sprint and cycling intense exercises. Despite it is prohibited, small amount of ephedrine was usually used in combination with caffeine to enhance muscle strength in trained individuals. Some other alkaloids such as green tea extracts have been used to improve body mass and composition in athletes. Other herb (i.e. Rhodiola, Astragalus) help relieve muscle and joint pain, but results about their effects on exercise performance are missing.",
"title": ""
},
{
"docid": "d98186e7dde031b99330be009b600e43",
"text": "This paper contributes a new high quality dataset for person re-identification, named \"Market-1501\". Generally, current datasets: 1) are limited in scale, 2) consist of hand-drawn bboxes, which are unavailable under realistic settings, 3) have only one ground truth and one query image for each identity (close environment). To tackle these problems, the proposed Market-1501 dataset is featured in three aspects. First, it contains over 32,000 annotated bboxes, plus a distractor set of over 500K images, making it the largest person re-id dataset to date. Second, images in Market-1501 dataset are produced using the Deformable Part Model (DPM) as pedestrian detector. Third, our dataset is collected in an open system, where each identity has multiple images under each camera. As a minor contribution, inspired by recent advances in large-scale image search, this paper proposes an unsupervised Bag-of-Words descriptor. We view person re-identification as a special task of image search. In experiment, we show that the proposed descriptor yields competitive accuracy on VIPeR, CUHK03, and Market-1501 datasets, and is scalable on the large-scale 500k dataset.",
"title": ""
},
{
"docid": "f73c88a8a6d0bd1790e8c8a5b73619a6",
"text": "This critical review examines the evidence evaluating the efficacy of non-speech oral motor exercises (NSOMEs) as a treatment approach for children with phonological/articulation disorders. Research studies include one randomized clinical trial design, one single group pre-test post-test design and one single subject design. Overall, the evidence does not support the use of NSOMEs to treat children with phonological/articulation disorders. Future and clinical recommendations are discussed.",
"title": ""
},
{
"docid": "b70671957a5259ae58833ddcf4fe9703",
"text": "In this paper, we present a near infrared (NIR) image based face recognition system. Firstly, we describe a design of NIR image capture device which minimizes influence of environmental lighting on face images. Both face and facial feature localization and face recognition are performed using local features with AdaBoost learning. An evaluation in real-world user scenario shows that the system achieves excellent accuracy, speed and usability",
"title": ""
},
{
"docid": "233c9d97c70a95f71897b6f289c7d8a7",
"text": "The group Steiner tree problem is a generalization of the Steiner tree problem where we are given several subsets (groups) of vertices in a weighted graph, and the goal is to find a minimum-weight connected subgraph containing at least one vertex from each group. The problem was introduced by Reich and Widmayer and linds applications in VLSI design. The group Steiner tree problem generalizes the set covering problem, and is therefore at least as hard. We give a randomized O(log3 n log k)-approximation algorithm for the group Steiner tree problem on an n-node graph, where k is the number of groups. The best previous performance guarantee was (1 + ?)a (Bateman, Helvig, Robins and Zelikovsky). Noting that the group Steiner problem also models the network design problems with location-theoretic constraints studied by Marathe, Bavi and Sundaram, our results also improve their bicriteria approximation results. Similarly, we improve previous results by Slavik on a tour version, called the errand scheduling problem. We use the result of Bartal on probabilistic approximation of finite metric spaces by tree metrics problem to one in a tree metric. To find a solution on a tree, we use a generalization of randomized rounding. Our approximation guarantees improve to O(log’ nlog k) in the case of graphs that exclude small minors by using a better alternative to Bartal’s result on probabilistic approximations of metrics induced by such graphs (Konjevod, Ravi and Salman) this improvement is valid for the group Steiner problem on planar graphs as well as on a set of points in the 2D-Euclidean case. -",
"title": ""
},
{
"docid": "284a52cf750577cc7c7939d0366af774",
"text": "Male circumcision protects against cancer of the penis, the invasive form of which is a devastating disease confined almost exclusively to uncircumcised men. Major etiological factors are phimosis, balanitis, and high-risk types of human papillomavirus (HPV), which are more prevalent in the glans penis and coronal sulcus covered by the foreskin, as well as on the penile shaft, of uncircumcised men. Circumcised men clear HPV infections more quickly. Phimosis (a constricted foreskin opening impeding the passage of urine) is confined to uncircumcised men, in whom balanitis (affecting 10%) is more common than in circumcised men. Each is strongly associated with risk of penile cancer. These findings have led to calls for promotion of male circumcision, especially in infancy, to help reduce the global burden of penile cancer. Even more relevant globally is protection from cervical cancer, which is 10-times more common, being much higher in women with uncircumcised male partners. Male circumcision also provides indirect protection against various other infections in women, along with direct protection for men from a number of genital tract infections, including HIV. Given that adverse consequences of medical male circumcision, especially when performed in infancy, are rare, this simple prophylactic procedure should be promoted.",
"title": ""
},
{
"docid": "ccd356a943f19024478c42b5db191293",
"text": "This paper discusses the relationship between concepts of narrative, patterns of interaction within computer games constituting gameplay gestalts, and the relationship between narrative and the gameplay gestalt. The repetitive patterning involved in gameplay gestalt formation is found to undermine deep narrative immersion. The creation of stronger forms of interactive narrative in games requires the resolution of this confl ict. The paper goes on to describe the Purgatory Engine, a game engine based upon more fundamentally dramatic forms of gameplay and interaction, supporting a new game genre referred to as the fi rst-person actor. The fi rst-person actor does not involve a repetitive gestalt mode of gameplay, but defi nes gameplay in terms of character development and dramatic interaction.",
"title": ""
},
{
"docid": "b84d6210438144ebe20271ceaffc28a3",
"text": "Although precision agriculture has been adopted in few countries; the agriculture industry in India still needs to be modernized with the involvement of technologies for better production, distribution and cost control. In this paper we proposed a multidisciplinary model for smart agriculture based on the key technologies: Internet-of-Things (IoT), Sensors, Cloud-Computing, MobileComputing, Big-Data analysis. Farmers, AgroMarketing agencies and Agro-Vendors need to be registered to the AgroCloud module through MobileApp module. AgroCloud storage is used to store the details of farmers, periodic soil properties of farmlands, agro-vendors and agro-marketing agencies, Agro e-governance schemes and current environmental conditions. Soil and environment properties are sensed and periodically sent to AgroCloud through IoT (Beagle Black Bone). Bigdata analysis on AgroCloud data is done for fertilizer requirements, best crop sequences analysis, total production, and current stock and market requirements. Proposed model is beneficial for increase in agricultural production and for cost control of Agro-products.",
"title": ""
},
{
"docid": "f9c2457b4ba8da2011120e0834a6101d",
"text": "The advent of new touch technologies and the wide spread of smart mobile phones made humans embrace technology more and depend on it extensively in their lives. With new communication technologies and smart phones the world really became a small village. Although these technologies provided many positive features, we cannot neglect the negative influences inherited in these technologies. One of the major negative sides of smart phones is their side effects on human health. This paper will address this issue by exploring the exiting literature related to the negative side of smart phones on human health and behavior by investigating the literature related to three major dimensions: health, addiction and behavior. The third section will describe the research method used. The fourth section will discuss the analysis side followed by a section on the conclusions and future work. Index Terms Mobile phones, smart phone, touch screen, health effects, ergonomics, addiction, behavior.",
"title": ""
},
{
"docid": "ebea79abc60a5d55d0397d21f54cc85e",
"text": "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract useful business intelligence, which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors, improving customer experiences, and increasing business performances. However, extracting business intelligence from location traces is not a trivial task. Conventional data analytic tools are usually not customized for handling large, complex, dynamic, and distributed nature of location traces. To that end, we develop a taxi business intelligence system to explore the massive taxi location traces from different business perspectives with various data mining functions. Since we implement the system using the real-world taxi GPS data, this demonstration will help taxi companies to improve their business performances by understanding the behaviors of both drivers and customers. In addition, several identified technical challenges also motivate data mining people to develop more sophisticate techniques in the future.",
"title": ""
},
{
"docid": "13e61389de352298bf9581bc8a8714cc",
"text": "A bacterial gene (neo) conferring resistance to neomycin-kanamycin antibiotics has been inserted into SV40 hybrid plasmid vectors and introduced into cultured mammalian cells by DNA transfusion. Whereas normal cells are killed by the antibiotic G418, those that acquire and express neo continue to grow in the presence of G418. In the course of the selection, neo DNA becomes associated with high molecular weight cellular DNA and is retained even when cells are grown in the absence of G418 for extended periods. Since neo provides a marker for dominant selections, cell transformation to G418 resistance is an efficient means for cotransformation of nonselected genes.",
"title": ""
},
{
"docid": "9193aad006395bd3bd76cabf44012da5",
"text": "In recent years, there is growing evidence that plant-foods polyphenols, due to their biological properties, may be unique nutraceuticals and supplementary treatments for various aspects of type 2 diabetes mellitus. In this article we have reviewed the potential efficacies of polyphenols, including phenolic acids, flavonoids, stilbenes, lignans and polymeric lignans, on metabolic disorders and complications induced by diabetes. Based on several in vitro, animal models and some human studies, dietary plant polyphenols and polyphenol-rich products modulate carbohydrate and lipid metabolism, attenuate hyperglycemia, dyslipidemia and insulin resistance, improve adipose tissue metabolism, and alleviate oxidative stress and stress-sensitive signaling pathways and inflammatory processes. Polyphenolic compounds can also prevent the development of long-term diabetes complications including cardiovascular disease, neuropathy, nephropathy and retinopathy. Further investigations as human clinical studies are needed to obtain the optimum dose and duration of supplementation with polyphenolic compounds in diabetic patients.",
"title": ""
},
{
"docid": "1127b964ad114909a2aa8d78eb134a78",
"text": "RFID technology is gaining adoption on an increasin g scale for tracking and monitoring purposes. Wide deployments of RFID devices will soon generate an unprecedented volume of data. Emerging applications require the RFID data to be f ilt red and correlated for complex pattern detection and transf ormed to events that provide meaningful, actionable informat ion to end applications. In this work, we design and develop S ASE, a complex event processing system that performs such dat ainformation transformation over real-time streams. We design a complex event language for specifying application l gic for such transformation, devise new query processing techniq ues to efficiently implement the language, and develop a comp rehensive system that collects, cleans, and processes RFID da ta for delivery of relevant, timely information as well as stor ing necessary data for future querying. We demonstrate an initial prototype of SASE through a real-world retail management scenari o.",
"title": ""
},
{
"docid": "f1cfe1cb5ddf46076dae6cd0f69d137f",
"text": "SiC-SIT power semiconductor switching devices has an advantage that its switching time is high speed compared to those of other power semiconductor switching devices. We adopt newly developed SiC-SITs which have the maximum ratings 800V/4A and prepare a breadboard of a conventional single-ended push-pull(SEPP) high frequency inverter. This paper describes the characteristics of SiC-SIT on the basis of the experimental results of the breadboard. Its operational frequencies are varied at from 100 kHz to 250kHz with PWM control technique for output power regulation. Its load is induction fluid heating systems for super-heated-steam production.",
"title": ""
},
{
"docid": "361fc2b80275b786d24bf0e979dc7aec",
"text": "Well-run datacenter application architectures are heavily instrumented to provide detailed traces of messages and remote invocations. Reconstructing user sessions, call graphs, transaction trees, and other structural information from these messages, a process known as sessionization, is the foundation for a variety of diagnostic, profiling, and monitoring tasks essential to the operation of the datacenter.\n We present the design and implementation of a system which processes log streams at gigabits per second and reconstructs user sessions comprising millions of transactions per second in real time with modest compute resources, while dealing with clock skew, message loss, and other real-world phenomena that make such a task challenging. Our system is based on the Timely Dataflow framework for low latency, data-parallel computation, and we demonstrate its utility with a number of use-cases and traces from a large, operational, mission-critical enterprise data center.",
"title": ""
},
{
"docid": "0d497cab1989b04c855352e1c45ff359",
"text": "The ratio between second and fourth finger (2D:4D) is sexually dimorphic; it is lower in men than in women. Studies using broad personality domains yielded correlations of 2D:4D with neuroticism, extraversion or agreeableness, but the obtained results have been inconsistent. We correlated 2D:4D of 184 women and 101 men with their scores in Cattell’s 16 Personality Factor (16PF) Questionnaire. We found women with a higher (more ‘feminine’) right hand 2D:4D to score lower in emotional stability and social boldness and higher in privateness. Mediator analysis showed emotional stability to be probably primarily correlated with 2D:4D and to act as a mediator between 2D:4D and social boldness. Privateness appears to be mediated by an even more complex path. We discuss the usefulness of primary-level personality questionnaires and mediator analyses in the investigation of psycho-morphological associations. Copyright # 2007 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "14142516a29d4606acc352d0dfd70b22",
"text": "In shared autonomy, a user and autonomous system work together to achieve shared goals. To collaborate effectively, the autonomous system must know the user’s goal. As such, most prior works follow a predict-then-act model, first predicting the user’s goal with high confidence, then assisting given that goal. Unfortunately, confidently predicting the user’s goal may not be possible until they have nearly achieved it, causing predict-then-act methods to provide little assistance. However, the system can often provide useful assistance even when confidence for any single goal is low (e.g. move towards multiple goals). In this work, we formalize this insight by modeling shared autonomy as a partially observable Markov decision process (POMDP), providing assistance that minimizes the expected cost-to-go with an unknown goal. As solving this POMDP optimally is intractable, we use hindsight optimization to approximate. We apply our framework to both shared-control teleoperation and human–robot teaming. Compared with predict-then-act methods, our method achieves goals faster, requires less user input, decreases user idling time, and results in fewer user–robot collisions.",
"title": ""
},
{
"docid": "5ec4a87235a98a1ea1c01baedd6a3cc2",
"text": "Chest X-ray (CXR) is one of the most commonly prescribed medical imaging procedures, often with over 2– 10x more scans than other imaging modalities such as MRI, CT scan, and PET scans. These voluminous CXR scans place significant workloads on radiologists and medical practitioners. Organ segmentation is a crucial step to obtain effective computer-aided detection on CXR. In this work, we propose Structure Correcting Adversarial Network (SCAN) to segment lung fields and the heart in CXR images. SCAN incorporates a critic network to impose on the convolutional segmentation network the structural regularities emerging from human physiology. During training, the critic network learns to discriminate between the ground truth organ annotations from the masks synthesized by the segmentation network. Through this adversarial process the critic network learns the higher order structures and guides the segmentation model to achieve realistic segmentation outcomes. Extensive experiments show that our method produces highly accurate and natural segmentation. Using only very limited training data available, our model reaches human-level performance without relying on any existing trained model or dataset. Our method also generalizes well to CXR images from a different patient population and disease profiles, surpassing the current state-of-the-art.",
"title": ""
},
{
"docid": "0eae6fe59e90ff07e8aa831a3a4029f6",
"text": "This paper presents the design and fabrication of a zone plate Fresnel lens. 3D Printing is used for rapid prototyping this low-cost and light-weight lens to operate at 10 GHz. This lens is comprised of four different 3D printed dielectric zones to form phase compensation in a Fresnel lens. The dielectric zones are fabricated with different infill percentage to create tailored dielectric constants. The dielectric lens offers 18 dBi directivity at 10 GHz when illuminated by a waveguide source.",
"title": ""
}
] |
scidocsrr
|
997c7a0a0c6b3e15401e4b6389e86150
|
Trading with optimized uptrend and downtrend pattern templates using a genetic algorithm kernel
|
[
{
"docid": "c35fa79bd405ec0fb6689d395929c055",
"text": "This study examines the potential profit of bull flag technical trading rules using a template matching technique based on pattern recognition for the Nasdaq Composite Index (NASDAQ) and Taiwan Weighted Index (TWI). To minimize measurement error due to data snooping, this study performed a series of experiments to test the effectiveness of the proposed method. The empirical results indicated that all of the technical trading rules correctly predict the direction of changes in the NASDAQ and TWI. This finding may provide investors with important information on asset allocation. Moreover, better bull flag template price fit is associated with higher average return. The empirical results demonstrated that the average return of trading rules conditioned on bull flag significantly better than buying every day for the study period, especially for TWI. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1c31f32f43819b2baad003c344cde1b1",
"text": "One of the major duties of financial analysts is technical analysis. It is necessary to locate the technical patterns in the stock price movement charts to analyze the market behavior. Indeed, there are two main problems: how to define those preferred patterns (technical patterns) for query and how to match the defined pattern templates in different resolutions. As we can see, defining the similarity between time series (or time series subsequences) is of fundamental importance. By identifying the perceptually important points (PIPs) directly from the time domain, time series and templates of different lengths can be compared. Three ways of distance measure, including Euclidean distance (PIP-ED), perpendicular distance (PIP-PD) and vertical distance (PIP-VD), for PIP identification are compared in this paper. After the PIP identification process, both templateand rule-based pattern-matching approaches are introduced. The proposed methods are distinctive in their intuitiveness, making them particularly user friendly to ordinary data analysts like stock market investors. As demonstrated by the experiments, the templateand the rule-based time series matching and subsequence searching approaches provide different directions to achieve the goal of pattern identification. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6d8908ddf475d6571574aa4fd25ec3fe",
"text": "In this case study in knowledge engineering and data mining, we implement a recognizer for two variations of thèbull ¯ag' technical charting heuristic and use this recognizer to discover trading rules on the NYSE Composite Index. Out-of-sample results indicate that these rules are effective. q 2002 Elsevier Science Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "dac5090c367ef05c8863da9c7979a619",
"text": "Full vinyl polysiloxane casts of the vagina were obtained from 23 Afro-American, 39 Caucasian and 15 Hispanic women in lying, sitting and standing positions. A new shape, the pumpkin seed, was found in 40% of Afro-American women, but not in Caucasians or Hispanics. Analyses of cast and introital measurements revealed: (1) posterior cast length is significantly longer, anterior cast length is significantly shorter and cast width is significantly larger in Hispanics than in the other two groups and (2) the Caucasian introitus is significantly greater than that of the Afro-American subject.",
"title": ""
},
{
"docid": "66ad5e67a06504b1062316c3e3bbc5cf",
"text": "We investigate the community structure of physics subfields in the citation network of all Physical Review publications between 1893 and August 2007. We focus on well-cited publications (those receiving more than 100 citations), and apply modularity maximization to uncover major communities that correspond to clearly identifiable subfields of physics. While most of the links between communities connect those with obvious intellectual overlap, there sometimes exist unexpected connections between disparate fields due to the development of a widely applicable theoretical technique or by cross fertilization between theory and experiment. We also examine communities decade by decade and also uncover a small number of significant links between communities that are widely separated in time. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a25bd124c29b9ca41f794e327d822a91",
"text": "SUMO is an open source traffic simulation package including the simulation application itself as well as supporting tools, mainly for network import and demand modeling. SUMO helps to investigate a large variety of research topics, mainly in the context of traffic management and vehicular communications. We describe the current state of the package, its major applications, both by research topic and by example, as well as future developments and extensions. Keywords-microscopic traffic simulation; traffic management; open source; software",
"title": ""
},
{
"docid": "5f6e77c95d92c1b8f571921954f252d6",
"text": "Parallel job scheduling has gained increasing recognition in recent years as a distinct area of study. However , there is concern about the divergence of theory and practice in the eld. We review theoretical research in this area, and recommendations based on recent results. This is contrasted with a proposal for standard interfaces among the components of a scheduling system, that has grown from requirements in the eld.",
"title": ""
},
{
"docid": "d4a060243a2bf27f88e8893946e838b9",
"text": "The phylogenetic relationships of the alpheid shrimp genera Betaeus (Dana, 1852) (15 species) and Betaeopsis (Yaldwyn, 1971) (three species), collectively known as hooded shrimps, are analyzed with morphological, molecular (16S and H3) and combined \"total evidence\" (morphology+DNA) datasets. The tree topology resulting from morphological and combined analyses places Betaeus jucundus as sister to all the remaining species of Betaeus and Betaeopsis, rendering Betaeus paraphyletic. On the other hand, Betaeopsis is recovered as monophyletic. Betaeus australis is positioned as sister to the remaining species of Betaeus s. str. (excluding B. jucundus), which is composed of three well-supported and resolved clades. Mapping of biogeographic traits on the combined tree suggests at least two possible historic scenarios. In the first scenario, the North-East Pacific harboring the highest diversity of hooded shrimps (seven species of Betaeus), acted as the \"center of origin\", where species appeared, matured and eventually migrated toward peripheral regions. In the second scenario, Betaeus+Betaeopsis originated in the southern Indo-West Pacific and subsequently colonized the North-East Pacific, where a major radiation involving dispersal/vicariance events took place. The mapping of life history traits (symbiosis vs. free living and gregariousness vs. single/pair living) in the combined tree suggests (1) that different types of symbioses with dissimilar host organisms (sea urchins, abalones, other decapods, spoon worms) evolved independently more than once in the group (in B. jucundus and in various lineages of Betaeus s. str.), and (2) that gregariousness was ancestral in the Betaeus s. str. -Betaeopsis clade and later shifted toward single/pair living in several lineages.",
"title": ""
},
{
"docid": "5e94e30719ac09e86aaa50d9ab4ad57b",
"text": "Blogs, regularly updated online journals, allow people to quickly and easily create and share online content. Most bloggers write about their everyday lives and generally have a small audience of regular readers. Readers interact with bloggers by contributing comments in response to specific blog posts. Moreover, readers of blogs are often bloggers themselves and acknowledge their favorite blogs by adding them to their blogrolls or linking to them in their posts. This paper presents a study of bloggers’ online and real life relationships in three blog communities: Kuwait Blogs, Dallas/Fort Worth Blogs, and United Arab Emirates Blogs. Through a comparative analysis of the social network structures created by blogrolls and blog comments, we find different characteristics for different kinds of links. Our online survey of the three communities reveals that few of the blogging interactions reflect close offline relationships, and moreover that many online relationships were formed through blogging.",
"title": ""
},
{
"docid": "7eeb2bf2aaca786299ebc8507482e109",
"text": "In this paper we argue that questionanswering (QA) over technical domains is distinctly different from TREC-based QA or Web-based QA and it cannot benefit from data-intensive approaches. Technical questions arise in situations where concrete problems require specific answers and explanations. Finding a justification of the answer in the context of the document is essential if we have to solve a real-world problem. We show that NLP techniques can be used successfully in technical domains for high-precision access to information stored in documents. We present ExtrAns, an answer extraction system over technical domains, its architecture, its use of logical forms for answer extractions and how terminology extraction becomes an important part of the system.",
"title": ""
},
{
"docid": "50fe419f19754991e4356212c4fe2fab",
"text": "In a recent book (Stanovich, 2004), I spent a considerable effort trying to work out the implications of dual process theory for the great rationality debate in cognitive science (see Cohen, 1981; Gigerenzer, 1996; Kahneman and Tversky, 1996; Stanovich, 1999; Stein, 1996). In this chapter, I wish to advance that discussion, first by discussing additions and complications to dual-process theory and then by working through the implications of these ideas for our view of human rationality.",
"title": ""
},
{
"docid": "64c6012d2e97a1059161c295ae3b9cdb",
"text": "One of the most popular user activities on the Web is watching videos. Services like YouTube, Vimeo, and Hulu host and stream millions of videos, providing content that is on par with TV. While some of this content is popular all over the globe, some videos might be only watched in a confined, local region.\n In this work we study the relationship between popularity and locality of online YouTube videos. We investigate whether YouTube videos exhibit geographic locality of interest, with views arising from a confined spatial area rather than from a global one. Our analysis is done on a corpus of more than 20 millions YouTube videos, uploaded over one year from different regions. We find that about 50% of the videos have more than 70% of their views in a single region. By relating locality to viralness we show that social sharing generally widens the geographic reach of a video. If, however, a video cannot carry its social impulse over to other means of discovery, it gets stuck in a more confined geographic region. Finally, we analyze how the geographic properties of a video's views evolve on a daily basis during its lifetime, providing new insights on how the geographic reach of a video changes as its popularity peaks and then fades away.\n Our results demonstrate how, despite the global nature of the Web, online video consumption appears constrained by geographic locality of interest: this has a potential impact on a wide range of systems and applications, spanning from delivery networks to recommendation and discovery engines, providing new directions for future research.",
"title": ""
},
{
"docid": "d48430f65d844c92661d3eb389cdb2f2",
"text": "In organizations that use DevOps practices, software changes can be deployed as fast as 500 times or more per day. Without adequate involvement of the security team, rapidly deployed software changes are more likely to contain vulnerabilities due to lack of adequate reviews. The goal of this paper is to aid software practitioners in integrating security and DevOps by summarizing experiences in utilizing security practices in a DevOps environment. We analyzed a selected set of Internet artifacts and surveyed representatives of nine organizations that are using DevOps to systematically explore experiences in utilizing security practices. We observe that the majority of the software practitioners have expressed the potential of common DevOps activities, such as automated monitoring, to improve the security of a system. Furthermore, organizations that integrate DevOps and security utilize additional security activities, such as security requirements analysis and performing security configurations. Additionally, these teams also have established collaboration between the security team and the development and operations teams.",
"title": ""
},
{
"docid": "bdc9bc09af90bd85f64c79cbca766b61",
"text": "The inhalation route is frequently used to administer drugs for the management of respiratory diseases such as asthma or chronic obstructive pulmonary disease. Compared with other routes of administration, inhalation offers a number of advantages in the treatment of these diseases. For example, via inhalation, a drug is directly delivered to the target organ, conferring high pulmonary drug concentrations and low systemic drug concentrations. Therefore, drug inhalation is typically associated with high pulmonary efficacy and minimal systemic side effects. The lung, as a target, represents an organ with a complex structure and multiple pulmonary-specific pharmacokinetic processes, including (1) drug particle/droplet deposition; (2) pulmonary drug dissolution; (3) mucociliary and macrophage clearance; (4) absorption to lung tissue; (5) pulmonary tissue retention and tissue metabolism; and (6) absorptive drug clearance to the systemic perfusion. In this review, we describe these pharmacokinetic processes and explain how they may be influenced by drug-, formulation- and device-, and patient-related factors. Furthermore, we highlight the complex interplay between these processes and describe, using the examples of inhaled albuterol, fluticasone propionate, budesonide, and olodaterol, how various sequential or parallel pulmonary processes should be considered in order to comprehend the pulmonary fate of inhaled drugs.",
"title": ""
},
{
"docid": "98b6da9a1ab53b94c50a98b25cdf2da4",
"text": "There are many thousands of hereditary diseases in humans, each of which has a specific combination of phenotypic features, but computational analysis of phenotypic data has been hampered by lack of adequate computational data structures. Therefore, we have developed a Human Phenotype Ontology (HPO) with over 8000 terms representing individual phenotypic anomalies and have annotated all clinical entries in Online Mendelian Inheritance in Man with the terms of the HPO. We show that the HPO is able to capture phenotypic similarities between diseases in a useful and highly significant fashion.",
"title": ""
},
{
"docid": "11913ec11f39eb944f5ffde3ac727268",
"text": "Shared-memory multiprocessors are frequently used in a time-sharing style with multiple parallel applications executing at the same time. In such an environment, where the machine load is continuously varying, the question arises of how an application should maximize its performance while being fair to other users of the system. In this paper, we address this issue. We first show that if the number of runnable processes belonging to a parallel application significantly exceeds the effective number of physical processors executing it, its performance can be significantly degraded. We then propose a way of controlling the number of runnable processes associated with an application dynamically, to ensure good performance. The optimal number of runnable processes for each application is determined by a centralized server, and applications dynamically suspend or resume processes in order to match that number. A preliminary implementation of the proposed scheme is now running on the Encore Multimax and we show how it helps improve the performance of several applications. In some cases the improvement is more than a factor of two. We also discuss implications of the proposed scheme for multiprocessor schedulers, and how the scheme should interface with parallel programming languages.",
"title": ""
},
{
"docid": "6a68383137a2b4041a251ae2c12d2710",
"text": "Stochastic natural language generation systems that are trained from labelled datasets are often domainspecific in their annotation and in their mapping from semantic input representations to lexical-syntactic outputs. As a result, learnt models fail to generalize across domains, heavily restricting their usability beyond single applications. In this article, we focus on the problem of domain adaptation for natural language generation. We show how linguistic knowledge from a source domain, for which labelled data is available, can be adapted to a target domain by reusing training data across domains. As a key to this, we propose to employ abstract meaning representations as a common semantic representation across domains. We model natural language generation as a long short-term memory recurrent neural network encoderdecoder, in which one recurrent neural network learns a latent representation of a semantic input, and a second recurrent neural network learns to decode it to a sequence of words. We show that the learnt representations can be transferred across domains and can be leveraged effectively to improve training on new unseen domains. Experiments in three different domains and with six datasets demonstrate that the lexical-syntactic constructions learnt in one domain can be transferred to new domains and achieve up to 75-100% of the performance of in-domain training. This is based on objective metrics such as BLEU and semantic error rate and a subjective human rating study. Training a policy from prior knowledge from a different domain is consistently better than pure in-domain training by up to 10%.",
"title": ""
},
{
"docid": "957863eafec491fae0710dd33c043ba8",
"text": "In this paper, we present an automated behavior analysis system developed to assist the elderly and individuals with disabilities who live alone, by learning and predicting standard behaviors to improve the efficiency of their healthcare. Established behavioral patterns have been recorded using wireless sensor networks composed by several event-based sensors that captured raw measures of the actions of each user. Using these data, behavioral patterns of the residents were extracted using Bayesian statistics. The behavior was statistically estimated based on three probabilistic features we introduce, namely sensor activation likelihood, sensor sequence likelihood, and sensor event duration likelihood. Real data obtained from different home environments were used to verify the proposed method in the individual analysis. The results suggest that the monitoring system can be used to detect anomalous behavior signs which could reflect changes in health status of the user, thus offering an opportunity to intervene if required.",
"title": ""
},
{
"docid": "528812aa635d6b9f0b65cc784fb256e1",
"text": "Pointing tasks are commonly studied in HCI research, for example to evaluate and compare different interaction techniques or devices. A recent line of work has modelled user-specific touch behaviour with machine learning methods to reveal spatial targeting error patterns across the screen. These models can also be applied to improve accuracy of touchscreens and keyboards, and to recognise users and hand postures. However, no implementation of these techniques has been made publicly available yet, hindering broader use in research and practical deployments. Therefore, this paper presents a toolkit which implements such touch models for data analysis (Python), mobile applications (Java/Android), and the web (JavaScript). We demonstrate several applications, including hand posture recognition, on touch targeting data collected in a study with 24 participants. We consider different target types and hand postures, changing behaviour over time, and the influence of hand sizes.",
"title": ""
},
{
"docid": "8ef2ab1c25af8290e7f6492fbcfb4321",
"text": "This chapter discusses the topic of Goal Reasoning and its relation to Trusted Autonomy. Goal Reasoning studies how autonomous agents can extend their reasoning capabilities beyond their plans and actions, to consider their goals. Such capability allows a Goal Reasoning system to more intelligently react to unexpected events or changes in the environment. We present two different models of Goal Reasoning: Goal-Driven Autonomy (GDA) and goal refinement. We then discuss several research topics related to each, and how they relate to the topic of Trusted Autonomy. Finally, we discuss several directions of ongoing work that are particularly interesting in the context of the chapter: using a model of inverse trust as a basis for adaptive autonomy, and studying how Goal Reasoning agents may choose to rebel (i.e., act contrary to a given command). Benjamin Johnson NRC Research Associate at the US Naval Research Laboratory; Washington, DC; USA e-mail: benjamin.johnson.ctr@nrl.navy.mil Michael W. Floyd Knexus Research Corporation; Springfield, VA; USA e-mail: michael.floyd@knexusresearch.com Alexandra Coman NRC Research Associate at the US Naval Research Laboratory; Washington, DC; USA e-mail: alexandra.coman.ctr.ro@nrl.navy.mil Mark A. Wilson Navy Center for Applied Research in AI, US Naval Research Laboratory; Washington, DC; USA e-mail: mark.wilson@nrl.navy.mil David W. Aha Navy Center for Applied Research in AI, US Naval Research Laboratory; Washington, DC; USA e-mail: david.aha@nrl.navy.mil",
"title": ""
},
{
"docid": "fa04415325731a0f1b80a93d2e434c80",
"text": "Human gait is an attractive modality for recognizing people at a distance. In this paper we adopt an appearance-basedapproach to the problem of gait recognition. The width of the outer contour of the binarized silhouette of a walking person is chosen as the basic image feature. Different gait features are extracted from the width vector such as the dowsampled, smoothed width vectors, the velocity profile etc. and sequences of such temporally ordered feature vectors are used for representing a person’s gait. We use the dynamic time-warping (DTW) approach for matching so that non-linear time normalization may be used to deal with the naturally-occuring changes in walking speed. The performance of the proposed method is tested using different gait databases.",
"title": ""
},
{
"docid": "add36ca538a8ae362c0224acfa020700",
"text": "A frustrating aspect of software development is that compiler error messages often fail to locate the actual cause of a syntax error. An errant semicolon or brace can result in many errors reported throughout the file. We seek to find the actual source of these syntax errors by relying on the consistency of software: valid source code is usually repetitive and unsurprising. We exploit this consistency by constructing a simple N-gram language model of lexed source code tokens. We implemented an automatic Java syntax-error locator using the corpus of the project itself and evaluated its performance on mutated source code from several projects. Our tool, trained on the past versions of a project, can effectively augment the syntax error locations produced by the native compiler. Thus we provide a methodology and tool that exploits the naturalness of software source code to detect syntax errors alongside the parser.",
"title": ""
}
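Editorial aside (not part of the dataset record above): the preceding passage describes locating syntax errors with an N-gram model over lexed source tokens. The sketch below is one hedged illustration of that idea, a bigram model with add-one smoothing that flags the most "surprising" token position. The toy corpus, tokenizer-free token lists, and function names are hypothetical simplifications, not the tool described in the passage.

```python
from collections import Counter
import math

def train_bigram_model(token_sequences):
    # Count unigram and bigram frequencies over lexed token streams.
    unigrams, bigrams = Counter(), Counter()
    for seq in token_sequences:
        padded = ["<s>"] + seq
        unigrams.update(padded)
        bigrams.update(zip(padded, padded[1:]))
    return unigrams, bigrams

def surprisal_per_position(seq, unigrams, bigrams, vocab_size):
    # Add-one smoothed negative log-probability of each token given its predecessor.
    padded = ["<s>"] + seq
    scores = []
    for prev, tok in zip(padded, padded[1:]):
        p = (bigrams[(prev, tok)] + 1) / (unigrams[prev] + vocab_size)
        scores.append(-math.log(p))
    return scores

# Hypothetical usage: in practice the token streams would come from a real lexer.
corpus = [["int", "x", "=", "0", ";"], ["x", "=", "x", "+", "1", ";"]]
uni, bi = train_bigram_model(corpus)
buggy = ["int", "x", "=", ";", "0"]   # a plausibly malformed statement
scores = surprisal_per_position(buggy, uni, bi, vocab_size=len(uni))
print("most surprising token index:", max(range(len(scores)), key=scores.__getitem__))
```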
] |
scidocsrr
|
838eda75ac4f44b8a9d1d199050fb409
|
Overview of the 1st Classification of Spanish Election Tweets Task at IberEval 2017
|
[
{
"docid": "f3e5941be4543d5900d56c1a7d93d0ea",
"text": "These working notes summarize the different approaches we have explored in order to classify a corpus of tweets related to the 2015 Spanish General Election (COSET 2017 task from IberEval 2017). Two approaches were tested during the COSET 2017 evaluations: Neural Networks with Sentence Embeddings (based on TensorFlow) and N-gram Language Models (based on SRILM). Our results with these approaches were modest: both ranked above the “Most frequent baseline”, but below the “Bag-of-words + SVM” baseline. A third approach was tried after the COSET 2017 evaluation phase was over: Advanced Linear Models (based on fastText). Results measured over the COSET 2017 Dev and Test show that this approach is well above the “TF-IDF+RF” baseline.",
"title": ""
},
{
"docid": "6081f8b819133d40522a4698d4212dfc",
"text": "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.",
"title": ""
},
{
"docid": "e59d1a3936f880233001eb086032d927",
"text": "In microblogging services such as Twitter, the users may become overwhelmed by the raw data. One solution to this problem is the classification of short text messages. As short texts do not provide sufficient word occurrences, traditional classification methods such as \"Bag-Of-Words\" have limitations. To address this problem, we propose to use a small set of domain-specific features extracted from the author's profile and text. The proposed approach effectively classifies the text to a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages.",
"title": ""
},
{
"docid": "b3a3dfdc32f9751fabdd6fd06fc598ca",
"text": "L-LDA is a new supervised topic model for assigning \"topics\" to a collection of documents (e.g., Twitter profiles). User studies have shown that L-LDA effectively performs a variety of tasks in Twitter that include not only assigning topics to profiles, but also re-ranking feeds, and suggesting new users to follow. Building upon these promising qualitative results, we here run an extensive quantitative evaluation of L-LDA. We test the extent to which, compared to the competitive baseline of Support Vector Machines (SVM), L-LDA is effective at two tasks: 1) assigning the correct topics to profiles; and 2) measuring the similarity of a profile pair. We find that L-LDA generally performs as well as SVM, and it clearly outperforms SVM when training data is limited, making it an ideal classification technique for infrequent topics and for (short) profiles of moderately active users. We have also built a web application that uses L-LDA to classify any given profile and graphically map predominant topics in specific geographic regions.",
"title": ""
}
] |
[
{
"docid": "444f13da468b5a495de96b1d5badcc5d",
"text": "The ability to regulate emotions is an important part of adaptive functioning in society. Advances in cognitive and affective neuroscience and biological psychiatry have facilitated examination of neural systems that may be important for emotion regulation. In this critical review we first develop a neural model of emotion regulation that includes neural systems implicated in different voluntary and automatic emotion regulatory subprocesses. We then use this model as a theoretical framework to examine functional neural abnormalities in these neural systems that may predispose to the development of a major psychiatric disorder characterized by severe emotion dysregulation, bipolar disorder.",
"title": ""
},
{
"docid": "2e0b2bc23117bbe8d41f400761410638",
"text": "Free radicals and other reactive species (RS) are thought to play an important role in many human diseases. Establishing their precise role requires the ability to measure them and the oxidative damage that they cause. This article first reviews what is meant by the terms free radical, RS, antioxidant, oxidative damage and oxidative stress. It then critically examines methods used to trap RS, including spin trapping and aromatic hydroxylation, with a particular emphasis on those methods applicable to human studies. Methods used to measure oxidative damage to DNA, lipids and proteins and methods used to detect RS in cell culture, especially the various fluorescent \"probes\" of RS, are also critically reviewed. The emphasis throughout is on the caution that is needed in applying these methods in view of possible errors and artifacts in interpreting the results.",
"title": ""
},
{
"docid": "6c1138ec8f490f824e34d15c13593007",
"text": "We present a DSP simulation environment that will enable students to perform laboratory exercises using Android mobile devices and tablets. Due to the pervasive nature of the mobile technology, education applications designed for mobile devices have the potential to stimulate student interest in addition to offering convenient access and interaction capabilities. This paper describes a portable signal processing laboratory for the Android platform. This software is intended to be an educational tool for students and instructors in DSP, and signals and systems courses. The development of Android JDSP (A-JDSP) is carried out using the Android SDK, which is a Java-based open source development platform. The proposed application contains basic DSP functions for convolution, sampling, FFT, filtering and frequency domain analysis, with a convenient graphical user interface. A description of the architecture, functions and planned assessments are presented in this paper. Introduction Mobile technologies have grown rapidly in recent years and play a significant role in modern day computing. The pervasiveness of mobile devices opens up new avenues for developing applications in education, entertainment and personal communications. Understanding the effectiveness of smartphones and tablets in classroom instruction have been a subject of considerable research in recent years. The advantages of handheld devices over personal computers in K-12 education have been investigated 1 . The study has found that the easy accessibility and maneuverability of handheld devices lead to an increase in student interest. By incorporating mobile technologies into mathematics and applied mathematics courses, it has been shown that smartphones can broaden the scope and effectiveness of technical education in classrooms 2 . Fig 1: Splash screen of the AJDSP Android application Designing interactive applications to complement traditional teaching methods in STEM education has also been of considerable interest. The role of interactive learning in knowledge dissemination and acquisition has been discussed and it has been found to assist in the development of cognitive skills 3 . It has been showed learning potential is enhanced when education tools that possess a higher degree of interactivity are employed 4 . Software applications that incorporate visual components in learning, in order to simplify the understanding of complex theoretical concepts, have been also been developed 5-9 . These applications are generally characterized by rich user interaction and ease of accessibility. Modern mobile phones and tablets possess abundant memory and powerful processors, in addition to providing highly interactive interfaces. These features enable the design of applications that require intensive calculations to be supported on mobile devices. In particular, Android operating system based smartphones and tablets have large user base and sophisticated hardware configurations. Though several applications catering to elementary school education have been developed for Android devices, not much effort has been undertaken towards building DSP simulation applications 10 . In this paper, we propose a mobile based application that will enable students to perform Digital Signal Processing laboratories on their smartphone devices (Figure 1). In order to enable students to perform DSP labs over the Internet, the authors developed J-DSP, a visual programming environment 11-12 . 
J-DSP was designed as a zero-footprint, standalone Java applet that can run directly on a browser. Several interactive laboratories have been developed and assessed in undergraduate courses. In addition to containing basic signal processing functions such as sampling, convolution, digital filter design and spectral analysis, J-DSP is also supported by several toolboxes. An iOS version of the software has also been developed and presented [13-15]. Here, we describe an Android-based graphical application, A-JDSP, for signal processing simulation. The proposed tool has the potential to enhance DSP education by supporting both educators and students alike to teach and learn digital signal processing. The rest of the paper is organized as follows. We review related work in Section 2 and present the architecture of the proposed application in Section 3. In Section 4 we describe some of the functionalities of the software. We describe planned assessment strategies for the proposed application in Section 5. The concluding remarks and possible directions for extending this work are discussed in Section 6.\nRelated Work\nCommercial packages such as MATLAB [16] and LabVIEW [17] are commonly used in signal processing research and application development. J-DSP, a web-based graphical DSP simulation package, was proposed as a non-commercial alternative for performing laboratories in undergraduate courses [3]. Though J-DSP is a lightweight application, running J-DSP over the web on mobile devices can be data-intensive. Hence, executing simulations directly on the mobile device is a suitable alternative. A mobile application that supports functions pertinent to different areas in electrical engineering, such as circuit theory, control systems and DSP, has been reported [18]. However, it does not contain a comprehensive set of functions to simulate several DSP systems. In addition to this, a mobile interface for the MATLAB package has been released [19]. However, this requires an active version of MATLAB on a remote machine and a high-speed internet connection to access the remote machine from the mobile device. In order to circumvent these problems, i-JDSP, an iOS version of the J-DSP software, was proposed [13-15]. It implements DSP functions and algorithms optimized for mobile devices, thereby removing the need for internet connectivity. Our work builds upon J-DSP [11-12] and the iOS version of J-DSP [13-15], and proposes to build an application for the Android operating system. Presently, to the best of our knowledge, there are no freely available Android applications that focus on signal processing education.\nArchitecture\nThe proposed application is implemented using the Android SDK [22], which is a Java-based development framework. The user interfaces are implemented using XML as it is well suited for Android development. The architecture of the proposed system is illustrated in Figure 2. It has five main components: (i) User Interfaces, (ii) Part Object, (iii) Part Calculator, (iv) Part View, and (v) Parts Controller. The role of each of them is described below in detail. The blocks in A-JDSP can be accessed through a function palette (user interface) and each block is associated with a view through which the function properties can be modified. The user interfaces obtain the user input data and pass them to the Part Object. Furthermore, every block has a separate Calculator function to perform the mathematical and signal processing algorithms.
The Part Calculator uses the data from the input pins of the block, implements the relevant algorithms and updates the output pins.\nFigure 2: Architecture of AJDSP.\nAll the configuration information, such as the pin specifications, the part name and the location of the block, is contained in the Part Object class. In addition, the Part Object can access the data from each of the input pins of the block. When the user adds a particular block in the simulation, an instance of the Part Object class is created and is stored by a list object in the Parts Controller. The Parts Controller is an interface between the Part Object and the Part View. One of the main functions of the Parts Controller is supervising block creation. The process of block creation by the Parts Controller can be described as follows: The block is configured by the user through the user interface and the block data is passed to an instance of the Part Object class. The Part Object then sends the block configuration information through the Parts Controller to the Part View, which finally renders the block. The Part View is the main graphical interface of the application. This displays the blocks and connections on the screen. It contains functionalities for selecting, moving and deleting blocks. Examples of block diagrams in the A-JDSP application for different simulations are illustrated in Figure 3(a), Figure 4(a) and Figure 5(a) respectively.\nFunctionalities\nIn this section, we describe some of the DSP functionalities that have been developed as part of A-JDSP.\nAndroid-based Signal Generator block\nThis generates the various input signals necessary for A-JDSP simulations. In addition to deterministic signals such as square, triangular and sinusoidal waveforms, random signals from Gaussian, Rayleigh and Uniform distributions can be generated. The signal-related parameters such as signal frequency, time shift, mean and variance can be set through the user interface.",
"title": ""
},
{
"docid": "360c0b6f1a31fc9103cd21a7d18a6a59",
"text": "We review the characteristics of the optical excitations of graphene involving interband, intraband, and collective (plasmon) electronic excitations. We then discuss the different mechanisms by which photon energy can be converted to an electrical current in graphene. Finally, we review applications of graphene as transparent conductive screens, as photodetectors and light modulators at different wavelength ranges.",
"title": ""
},
{
"docid": "b8f6411673d866c6464509b6fa7e9498",
"text": "In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.",
"title": ""
},
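Editorial aside (not part of the dataset record above): the preceding passage describes K-means Hashing. The sketch below illustrates only the baseline idea it builds on: quantize vectors with ordinary k-means, give each cell an integer code, and approximate distances by the Hamming distance between codes. The affinity-preserving index assignment that distinguishes the paper's method is deliberately omitted here, and the data shapes, seeds, and code length are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two integer codes.
    return bin(a ^ b).count("1")

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))   # toy database vectors

# Quantize the space with plain k-means; 2**b cells gives b-bit codes.
b = 4
km = KMeans(n_clusters=2 ** b, n_init=10, random_state=0).fit(X)
codes = km.predict(X)             # integer cell index per vector

# Approximate the distance between two vectors by the Hamming distance between
# the binary indices of their cells. Here the indices are assigned arbitrarily;
# the paper's affinity-preserving step would instead learn an assignment so that
# Hamming distance tracks the distance between centroids.
q = rng.normal(size=(1, 16))
q_code = km.predict(q)[0]
approx = np.array([hamming(q_code, c) for c in codes])
print("10 nearest by Hamming approximation:", np.argsort(approx)[:10])
```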
{
"docid": "51b6b50fb9ea3b578a476a4c12cfa83f",
"text": "Deficient cognitive top-down executive control has long been hypothesized to underlie inattention and impulsivity in attention-deficit/hyperactivity disorder (ADHD). However, top-down cognitive dysfunction explains a modest proportion of the ADHD phenotype whereas the salience of emotional dysregulation is being noted increasingly. Together, these two types of dysfunction have the potential to account for more of the phenotypic variance in patients diagnosed with ADHD. We develop this idea and suggest that top-down dysregulation constitutes a gradient extending from mostly non-emotional top-down control processes (i.e., \"cool\" executive functions) to mainly emotional regulatory processes (including \"hot\" executive functions). While ADHD has been classically linked primarily to the former, conditions involving emotional instability such as borderline and antisocial personality disorder are closer to the other. In this model, emotional subtypes of ADHD are located at intermediate levels of this gradient. Neuroanatomically, gradations in \"cool\" processing appear to be related to prefrontal dysfunction involving dorsolateral prefrontal cortex (dlPFC) and caudal anterior cingulate cortex (cACC), while \"hot\" processing entails orbitofrontal cortex and rostral anterior cingulate cortex (rACC). A similar distinction between systems related to non-emotional and emotional processing appears to hold for the basal ganglia (BG) and the neuromodulatory effects of the dopamine system. Overall we suggest that these two systems could be divided according to whether they process non-emotional information related to the exteroceptive environment (associated with \"cool\" regulatory circuits) or emotional information related to the interoceptive environment (associated with \"hot\" regulatory circuits). We propose that this framework can integrate ADHD, emotional traits in ADHD, borderline and antisocial personality disorder into a related cluster of mental conditions.",
"title": ""
},
{
"docid": "ba87cc4660707a2f4f477cc8005cc014",
"text": "Modern applications increasingly rely on continuous monitoring of video, audio, or other sensor data to provide their functionality, particularly in platforms such as the Microsoft Kinect and Google Glass. Continuous sensing by untrusted applications poses significant privacy challenges for both device users and bystanders. Even honest users will struggle to manage application permissions using existing approaches.\n We propose a general, extensible framework for controlling access to sensor data on multi-application continuous sensing platforms. Our approach, world-driven access control, allows real-world objects to explicitly specify access policies. This approach relieves the user's permission management burden while mediating access at the granularity of objects rather than full sensor streams. A trusted policy module on the platform senses policies in the world and modifies applications' \"views\" accordingly. For example, world-driven access control allows the system to automatically stop recording in bathrooms or remove bystanders from video frames,without the user prompted to specify or activate such policies. To convey and authenticate policies, we introduce passports, a new kind of certificate that includes both a policy and optionally the code for recognizing a real-world object.\n We implement a prototype system and use it to study the feasibility of world-driven access control in practice. Our evaluation suggests that world-driven access control can effectively reduce the user's permission management burden in emerging continuous sensing systems. Our investigation also surfaces key challenges for future access control mechanisms for continuous sensing applications.",
"title": ""
},
{
"docid": "b299b939b73e1af0167519c4090dd639",
"text": "Machine learning models hosted in a cloud service are increasingly popular but risk privacy: clients sending prediction requests to the service need to disclose potentially sensitive information. In this paper, we explore the problem of privacy-preserving predictions: after each prediction, the server learns nothing about clients' input and clients learn nothing about the model.\n We present MiniONN, the first approach for transforming an existing neural network to an oblivious neural network supporting privacy-preserving predictions with reasonable efficiency. Unlike prior work, MiniONN requires no change to how models are trained. To this end, we design oblivious protocols for commonly used operations in neural network prediction models. We show that MiniONN outperforms existing work in terms of response latency and message sizes. We demonstrate the wide applicability of MiniONN by transforming several typical neural network models trained from standard datasets.",
"title": ""
},
{
"docid": "d49be8d0aab471b48de56cf533c8333f",
"text": "The Paxos protocol is the foundation for building many fault-tolerant distributed systems and services. This paper posits that there are significant performance benefits to be gained by implementing Paxos logic in network devices. Until recently, the notion of a switch-based implementation of Paxos would be a daydream. However, new flexible hardware is on the horizon that will provide customizable packet processing pipelines needed to implement Paxos. While this new hardware is still not readily available, several vendors and consortia have made the programming languages that target these devices public. This paper describes an implementation of Paxos in one of those languages, P4. Implementing Paxos provides a critical use case for P4, and will help drive the requirements for data plane languages in general. In the long term, we imagine that consensus could someday be offered as a network service, just as point-to-point communication is provided today.",
"title": ""
},
{
"docid": "5289fc231c716e2ce9e051fb0652ce94",
"text": "Noninvasive body contouring has become one of the fastest-growing areas of esthetic medicine. Many patients appear to prefer nonsurgical less-invasive procedures owing to the benefits of fewer side effects and shorter recovery times. Increasingly, 635-nm low-level laser therapy (LLLT) has been used in the treatment of a variety of medical conditions and has been shown to improve wound healing, reduce edema, and relieve acute pain. Within the past decade, LLLT has also emerged as a new modality for noninvasive body contouring. Research has shown that LLLT is effective in reducing overall body circumference measurements of specifically treated regions, including the hips, waist, thighs, and upper arms, with recent studies demonstrating the long-term effectiveness of results. The treatment is painless, and there appears to be no adverse events associated with LLLT. The mechanism of action of LLLT in body contouring is believed to stem from photoactivation of cytochrome c oxidase within hypertrophic adipocytes, which, in turn, affects intracellular secondary cascades, resulting in the formation of transitory pores within the adipocytes' membrane. The secondary cascades involved may include, but are not limited to, activation of cytosolic lipase and nitric oxide. Newly formed pores release intracellular lipids, which are further metabolized. Future studies need to fully outline the cellular and systemic effects of LLLT as well as determine optimal treatment protocols.",
"title": ""
},
{
"docid": "d21308f9ffa990746c6be137964d2e12",
"text": "'Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers', This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "23b85a1f32b3b57919f4ba66d79eb7ef",
"text": "We prove that every C1 diffeomorphism away from homoclinic tangencies is entropy expansive, with locally uniform expansivity constant. Consequently, such diffeomorphisms satisfy Shub’s entropy conjecture: the entropy is bounded from below by the spectral radius in homology. Moreover, they admit principal symbolic extensions, and the topological entropy and metrical entropy vary upper semicontinuously with the map. In contrast, generic diffeomorphisms with persistent tangencies are not entropy expansive.",
"title": ""
},
{
"docid": "eef7fcdcb53070709a231cb132c48004",
"text": "Social networks have known an important development since the appea ranc of web 2.0 platforms. This leads to a growing need for social network mining and social network analysis (SN A) methods and tools in order to provide deeper analysis of the network but also to detect communities in view of various applications. For this reason, a lot of works have focused on graph characterization or clustering and several new SNA tools have be en developed over these last years. The purpose of this article is to compare some of these tools which implement algorithms dedicated to social network analysis.",
"title": ""
},
{
"docid": "a4788b60b0fc16551f03557483a8a532",
"text": "The rapid growth in the population density in urban cities demands tolerable provision of services and infrastructure. To meet the needs of city inhabitants. Thus, increase in the request for embedded devices, such as sensors, actuators, and smartphones, etc., which is providing a great business potential towards the new era of Internet of Things (IoT); in which all the devices are capable of interconnecting and communicating with each other over the Internet. Therefore, the Internet technologies provide a way towards integrating and sharing a common communication medium. Having such knowledge, in this paper, we propose a combined IoT-based system for smart city development and urban planning using Big Data analytics. We proposed a complete system, which consists of various types of sensors deployment including smart home sensors, vehicular networking, weather and water sensors, smart parking sensors, and surveillance objects, etc. A four-tier architecture is proposed which include 1) Bottom Tier-1: which is responsible for IoT sources, data generations, and collections 2) Intermediate Tier-1: That is responsible for all type of communication between sensors, relays, base stations, the internet, etc. 3) Intermediate Tier 2: it is responsible for data management and processing using Hadoop framework, and 4) Top tier: is responsible for application and usage of the data analysis and results generated. The system implementation consists of various steps that start from data generation and collecting, aggregating, filtration, classification, preprocessing, computing and decision making. The proposed system is implemented using Hadoop with Spark, voltDB, Storm or S4 for real time processing of the IoT data to generate results in order to establish the smart city. For urban planning or city future development, the offline historical data is analyzed on Hadoop using MapReduce programming. IoT datasets generated by smart homes, smart parking weather, pollution, and vehicle data sets are used for analysis and evaluation. Such type of system with full functionalities does not exist. Similarly, the results show that the proposed system is more scalable and efficient than the existing systems. Moreover, the system efficiency is measured in term of throughput and processing time.",
"title": ""
},
{
"docid": "a43c69d6abf84ae8b2ee9257b630458c",
"text": "Chest X-Ray are Gray scale images used to diagnose or monitor treatment for conditions of pneumonia, emphysema, lung cancer, line and tube placement and tuberculosis by Physicians. This work is an attempt to extract the Lung Boundary so that it can trace severity of any infection and it is divided into three segments. 1. Content-based image retrieval approach for identifying training images (with masks) most similar to the patient CXR using a partial Radon transform and Bhattacharyya shape similarity measure. 2. Creating the initial patient-specific anatomical model of lung shape using SIFT-flow for deformable registration. 3. Extracting refined lung boundaries using a graph cuts optimization approach with a customized energy function. In earlier approach the accuracy rate was 95.4%. By combining the Integral values, this work tries to increase the accuracy level by 97.5%. This is done by optimizing the graph cut values. This research work will help the Doctors to identify the severity of the infection with the chest X-Ray image itself, instead of going for other expensive diagnosis tests.",
"title": ""
},
{
"docid": "cf9bb9836c68bfd09cf778b0029ed417",
"text": "Items of the Affect Balance Scale, the Life Satisfaction Index-Z and the Philadelphia Geriatric Center Scale together with 22 new items were used in the construction of a happiness scale for the elderly. Items were initially administered to 301 subjects from urban, rural, and institutional settings and correlated with ratings of happiness. A new scale consisting of 24 items was cross-validated on an additional 297 subjects. Test-retest reliability scores were obtained on 56 subjects. Results indicated that the new scale was a better predictor of \"avowed happiness\" in both validation and cross-validation samples than the existing scales used for comparison. Moreover, the new scale's test-rated reliability was within an acceptable range for this type of scale.",
"title": ""
},
{
"docid": "45df032a26dc7a27ed6f68cea5f7c033",
"text": "Computer animation of articulated figures can be tedious, largely due to the amount of data which must be specified at each frame. Animation techniques range from simple interpolation between keyframed figure poses to higher-level algorithmic models of specific movement patterns. The former provides the animator with complete control over the movement, whereas the latter may provide only limited control via some high-level parameters incorporated into the model. Inverse kinematic techniques adopted from the robotics literature have the potential to relieve the animator of detailed specification of every motion parameter within a figure, while retaining complete control over the movement, if desired. This work investigates the use of inverse kinematics and simple geometric constraints as tools for the animator. Previous applications of inverse kinematic algorithms to conlputer animation are reviewed. A pair of alternative algorithms suitable for a direct manipulation interface are presented and qualitatively compared. Application of these algorithms to enforce simple geometric constraints on a figure during interactive manipulation is discussed. An implementation of one of these algorithms within an existing figure animation editor is described, which provides constrained inverse kinematic figure manipulation for the creation of keyframes.",
"title": ""
},
{
"docid": "ea6ce46dc61b1974bce9f1c13cca0ef5",
"text": "While prior work has focused on the process through which IT may be adopted by micro-enterprises, this research takes it one step further by assessing the outcomes from IT adoption in micro-enterprises that have undergone technology, training and trust building interventions. It analyzes these using a systematic evaluation model that ties in the key components of development. The contribution of this paper is in providing insights into how researchers and practitioners can better stimulate micro-enterprise growth and economic development through IT adoption.",
"title": ""
},
{
"docid": "889375e986a22917d55ee82316591ff8",
"text": "We argue that the United States does not have comprehensive national health insurance (NHI) because American political institutions are biased against this type of reform. The original design of a fragmented and federated national political system serving an increasingly large and diverse polity has been further fragmented by a series of political reforms beginning with the Progressive era and culminating with the congressional reforms of the mid-1970s. This institutional structure yields enormous power to intransigent interest groups and thus makes efforts by progressive reformers such as President Clinton (and previous reform-minded presidents before him) to mount a successful NHI campaign impossible. We show how this institutional structure has shaped political strategies and political outcomes related to NHI since Franklin D. Roosevelt. Finally, we argue that this institutional structure contributes to the antigovernment attitudes so often observed among Americans.",
"title": ""
},
{
"docid": "836565eca85463346355c2e16272bec7",
"text": "Trip hazards are a significant contributor to accidents on construction and manufacturing sites. Current safety inspections are labor intensive and limited by human fallibility, making automation of trip hazard detection appealing from both a safety and economic perspective. Trip hazards present an interesting challenge to modern learning techniques because they are defined as much by affordance as by object type, for example, wires on a table are not a trip hazard, but can be if lying on the ground. To address these challenges, we conduct a comprehensive investigation into the performance characteristics of 11 different colors and depth fusion approaches, including four fusion and one nonfusion approach, using color and two types of depth images. Trained and tested on more than 600 labeled trip hazards over four floors and 2000 m2 in an active construction site, this approach was able to differentiate between identical objects in different physical configurations. Outperforming a color-only detector, our multimodal trip detector fuses color and depth information to achieve a 4% absolute improvement in F1-score. These investigative results and the extensive publicly available dataset move us one step closer to assistive or fully automated safety inspection systems on construction sites.",
"title": ""
}
] |
scidocsrr
|
f1babf866f6c16af566d24316de0e63e
|
Learning modular neural network policies for multi-task and multi-robot transfer
|
[
{
"docid": "ec8bd6218fccc82deb23f5d52c10a7fe",
"text": "The options framework provides a method for reinforcement learning agents to build new high-level skills. However, since options are usually learned in the same state space as the problem the agent is currently solving, they cannot be ported to other similar tasks that have different state spaces. We introduce the notion of learning options in agent-space, the portion of the agent’s sensation that is present and retains the same semantics across successive problem instances, rather than in problem-space. Agent-space options can be reused in later tasks that share the same agent-space but are sufficiently distinct to require different problem-spaces. We present experimental results that demonstrate the use of agent-space options in building reusable skills.",
"title": ""
},
{
"docid": "217e76cc7d8a7d680b40d5c658460513",
"text": "The reinforcement learning paradigm is a popular way to addr ess problems that have only limited environmental feedback, rather than correctly labeled exa mples, as is common in other machine learning contexts. While significant progress has been made t o improve learning in a single task, the idea oftransfer learninghas only recently been applied to reinforcement learning ta sks. The core idea of transfer is that experience gained in learning t o perform one task can help improve learning performance in a related, but different, task. In t his article we present a framework that classifies transfer learning methods in terms of their capab ilities and goals, and then use it to survey the existing literature, as well as to suggest future direct ions for transfer learning work.",
"title": ""
},
{
"docid": "5407b47ec95002f98552dcfc87b1306f",
"text": "We present an automatic method for interactive control of physical humanoid robots based on high-level tasks that does not require manual specification of motion trajectories or specially-designed control policies. The method is based on the combination of a model-based policy that is trained off-line in simulation and sends high-level commands to a model-free controller that executes these commands on the physical robot. This low-level controller simultaneously learns and adapts a local model of dynamics on-line and computes optimal controls under the learned model. The high-level policy is trained using a combination of trajectory optimization and neural network learning, while considering physical limitations such as limited sensors and communication delays. The entire system runs in real-time on the robot's computer and uses only on-board sensors. We demonstrate successful policy execution on a range of tasks such as leaning, hand reaching, and robust balancing behaviors atop a tilting base on the physical robot and in simulation.",
"title": ""
},
{
"docid": "9ec7b122117acf691f3bee6105deeb81",
"text": "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.",
"title": ""
}
] |
[
{
"docid": "9cce3a9ed14279acae533befc31735c7",
"text": "Flower pollination algorithm (FPA) is a nature-inspired meta-heuristics to handle a large scale optimization process. This paper reviews the previous studies on the application of FPA, modified FPA and hybrid FPA for solving optimization problems. The effectiveness of FPA for solving the optimization problems are highlighted and discussed. The improvement aspects include local and global search strategies and the quality of the solutions. The measured enhancements in FPA are based on various research domains. The results of review indicate the capability of the enhanced and hybrid FPA for solving optimization problems in variety of applications and outperformed the results of other established optimization techniques.",
"title": ""
},
{
"docid": "4ba866eb1a9c541f87c9e3b7632cc5bf",
"text": "Biologists worry that the rapid rates of warming projected for the planet (1) will doom many species to extinction. Species could face extinction with climate change if climatically suitable habitat disappears or is made inaccessible by geographic barriers or species' inability to disperse (see the figure, panels A to E). Previous studies have provided region- or taxon-specific estimates of biodiversity loss with climate change that range from 0% to 54%, making it difficult to assess the seriousness of this problem. On page 571 of this issue, Urban (2) provides a synthetic and sobering estimate of climate change–induced biodiversity loss by applying a model-averaging approach to 131 of these studies. The result is a projection that up to one-sixth of all species may go extinct if we follow “business as usual” trajectories of carbon emissions.",
"title": ""
},
{
"docid": "910a3be33d479be4ed6e7e44a56bb8fb",
"text": "Support vector machine (SVM) is a supervised machine learning approach that was recognized as a statistical learning apotheosis for the small-sample database. SVM has shown its excellent learning and generalization ability and has been extensively employed in many areas. This paper presents a performance analysis of six types of SVMs for the diagnosis of the classical Wisconsin breast cancer problem from a statistical point of view. The classification performance of standard SVM (St-SVM) is analyzed and compared with those of the other modified classifiers such as proximal support vector machine (PSVM) classifiers, Lagrangian support vector machines (LSVM), finite Newton method for Lagrangian support vector machine (NSVM), Linear programming support vector machines (LPSVM), and smooth support vector machine (SSVM). The experimental results reveal that these SVM classifiers achieve very fast, simple, and efficient breast cancer diagnosis. The training results indicated that LSVM has the lowest accuracy of 95.6107 %, while St-SVM performed better than other methods for all performance indices (accuracy = 97.71 %) and is closely followed by LPSVM (accuracy = 97.3282). However, in the validation phase, the overall accuracies of LPSVM achieved 97.1429 %, which was superior to LSVM (95.4286 %), SSVM (96.5714 %), PSVM (96 %), NSVM (96.5714 %), and St-SVM (94.86 %). Value of ROC and MCC for LPSVM achieved 0.9938 and 0.9369, respectively, which outperformed other classifiers. The results strongly suggest that LPSVM can aid in the diagnosis of breast cancer.",
"title": ""
},
{
"docid": "d6d9cb649294de96ea2bfe18753559df",
"text": "Since health care on foods is drawing people's attention recently, a system that can record everyday meals easily is being awaited. In this paper, we propose an automatic food image recognition system for recording people's eating habits. In the proposed system, we use the Multiple Kernel Learning (MKL) method to integrate several kinds of image features such as color, texture and SIFT adaptively. MKL enables to estimate optimal weights to combine image features for each category. In addition, we implemented a prototype system to recognize food images taken by cellular-phone cameras. In the experiment, we have achieved the 61.34% classification rate for 50 kinds of foods. To the best of our knowledge, this is the first report of a food image classification system which can be applied for practical use.",
"title": ""
},
{
"docid": "51256458513e99bf3750049d542692b8",
"text": "Text-level discourse parsing remains a challenge: most approaches employ features that fail to capture the intentional, semantic, and syntactic aspects that govern discourse coherence. In this paper, we propose a recursive model for discourse parsing that jointly models distributed representations for clauses, sentences, and entire discourses. The learned representations can to some extent learn the semantic and intentional import of words and larger discourse units automatically,. The proposed framework obtains comparable performance regarding standard discoursing parsing evaluations when compared against current state-of-art systems.",
"title": ""
},
{
"docid": "2a0194f2af99910546ece94abc4ee6e9",
"text": "CBCT is a widely applied imaging modality in dentistry. It enables the visualization of high-contrast structures of the oral region (bone, teeth, air cavities) at a high resolution. CBCT is now commonly used for the assessment of bone quality, primarily for pre-operative implant planning. Traditionally, bone quality parameters and classifications were primarily based on bone density, which could be estimated through the use of Hounsfield units derived from multidetector CT (MDCT) data sets. However, there are crucial differences between MDCT and CBCT, which complicates the use of quantitative gray values (GVs) for the latter. From experimental as well as clinical research, it can be seen that great variability of GVs can exist on CBCT images owing to various reasons that are inherently associated with this technique (i.e. the limited field size, relatively high amount of scattered radiation and limitations of currently applied reconstruction algorithms). Although attempts have been made to correct for GV variability, it can be postulated that the quantitative use of GVs in CBCT should be generally avoided at this time. In addition, recent research and clinical findings have shifted the paradigm of bone quality from a density-based analysis to a structural evaluation of the bone. The ever-improving image quality of CBCT allows it to display trabecular bone patterns, indicating that it may be possible to apply structural analysis methods that are commonly used in micro-CT and histology.",
"title": ""
},
{
"docid": "b54a2d0350ceac52ed92565af267b6e2",
"text": "In this paper, we address the problem of classifying image sets for face recognition, where each set contains images belonging to the same subject and typically covering large variations. By modeling each image set as a manifold, we formulate the problem as the computation of the distance between two manifolds, called manifold-manifold distance (MMD). Since an image set can come in three pattern levels, point, subspace, and manifold, we systematically study the distance among the three levels and formulate them in a general multilevel MMD framework. Specifically, we express a manifold by a collection of local linear models, each depicted by a subspace. MMD is then converted to integrate the distances between pairs of subspaces from one of the involved manifolds. We theoretically and experimentally study several configurations of the ingredients of MMD. The proposed method is applied to the task of face recognition with image sets, where identification is achieved by seeking the minimum MMD from the probe to the gallery of image sets. Our experiments demonstrate that, as a general set similarity measure, MMD consistently outperforms other competing nondiscriminative methods and is also promisingly comparable to the state-of-the-art discriminative methods.",
"title": ""
},
{
"docid": "efb9686dbd690109e8e5341043648424",
"text": "Because of the precise temporal resolution of electrophysiological recordings, the event-related potential (ERP) technique has proven particularly valuable for testing theories of perception and attention. Here, I provide a brief tutorial on the ERP technique for consumers of such research and those considering the use of human electrophysiology in their own work. My discussion begins with the basics regarding what brain activity ERPs measure and why they are well suited to reveal critical aspects of perceptual processing, attentional selection, and cognition, which are unobservable with behavioral methods alone. I then review a number of important methodological issues and often-forgotten facts that should be considered when evaluating or planning ERP experiments.",
"title": ""
},
{
"docid": "aa98b79f4c20ad55a979329a6df947b3",
"text": "Parallel processing is an essential requirement for optimum computations in modern equipment. In this paper, a communication strategy for the parallelized Flower Pollination Algorithm is proposed for solving numerical optimization problems. In this proposed method, the population flowers are split into several independent groups based on the original structure of the Flower Pollination Algorithm (FPA), and the proposed communication strategy provides the information flow for the flowers to communicate in different groups. Four benchmark functions are used to test the behavior of convergence, the accuracy, and the speed of the proposed method. According to the experimental result, the proposed communicational strategy increases the accuracy of the FPA on finding the best solution is up to 78% in comparison with original method.",
"title": ""
},
{
"docid": "038f34588540683674f7ec44325b510a",
"text": "We propose a framework for automatic modeling, detection, and tracking of 3D objects with a Kinect. The detection part is mainly based on the recent template-based LINEMOD approach [1] for object detection. We show how to build the templates automatically from 3D models, and how to estimate the 6 degrees-of-freedom pose accurately and in real-time. The pose estimation and the color information allow us to check the detection hypotheses and improves the correct detection rate by 13% with respect to the original LINEMOD. These many improvements make our framework suitable for object manipulation in Robotics applications. Moreover we propose a new dataset made of 15 registered, 1100+ frame video sequences of 15 various objects for the evaluation of future competing methods. Fig. 1. 15 different texture-less 3D objects are simultaneously detected with our approach under different poses on heavy cluttered background with partial occlusion. Each detected object is augmented with its 3D model. We also show the corresponding coordinate systems.",
"title": ""
},
{
"docid": "6b527c906789f6e32cd5c28f684d9cc8",
"text": "This paper addresses an essential application of microkernels; its role in virtualization for embedded systems. Virtualization in embedded systems and microkernel-based virtualization are topics of intensive research today. As embedded systems specifically mobile phones are evolving to do everything that a PC does, employing virtualization in this case is another step to make this vision a reality. Hence, recently, much time and research effort have been employed to validate ways to host virtualization on embedded system processors i.e., the ARM processors. This paper reviews the research work that have had significant impact on the implementation approaches of virtualization in embedded systems and how these approaches additionally provide security features that are beneficial to equipment manufacturers, carrier service providers and end users.",
"title": ""
},
{
"docid": "5ed744299cb2921bcb42f57cf1809f69",
"text": "Credit risk prediction models seek to predict quality factors such as whether an individual will default (bad applicant) on a loan or not (good applicant). This can be treated as a kind of machine learning (ML) problem. Recently, the use of ML algorithms has proven to be of great practical value in solving a variety of risk problems including credit risk prediction. One of the most active areas of recent research in ML has been the use of ensemble (combining) classifiers. Research indicates that ensemble individual classifiers lead to a significant improvement in classification performance by having them vote for the most popular class. This paper explores the predicted behaviour of five classifiers for different types of noise in terms of credit risk prediction accuracy, and how such accuracy could be improved by using classifier ensembles. Benchmarking results on four credit datasets and comparison with the performance of each individual classifier on predictive accuracy at various attribute noise levels are presented. The experimental evaluation shows that the ensemble of classifiers technique has the potential to improve prediction accuracy. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b44df1268804e966734ea404b8c29360",
"text": "A new night-time lane detection system and its accompanying framework are presented in this paper. The accompanying framework consists of an automated ground truth process and systematic storage of captured videos that will be used for training and testing. The proposed Advanced Lane Detector 2.0 (ALD 2.0) is an improvement over the ALD 1.0 or Layered Approach with integration of pixel remapping, outlier removal, and prediction with tracking. Additionally, a novel procedure to generate the ground truth data for lane marker locations is also proposed. The procedure consists of an original process called time slicing, which provides the user with unique visualization of the captured video and enables quick generation of ground truth information. Finally, the setup and implementation of a database hosting lane detection videos and standardized data sets for testing are also described. The ALD 2.0 is evaluated by means of the user-created annotations accompanying the videos. Finally, the planned improvements and remaining work are addressed.",
"title": ""
},
{
"docid": "4dd690ffa1a73674e1b0488b7656b26e",
"text": "In this paper, we propose the deep reinforcement relevance network (DRRN), a novel deep architecture, to design a model for handling an action space characterized using natural language with applications to text-based games. For a particular class of games, a user must choose among a number of actions described by text, with the goal of maximizing long-term reward. In these games, the best action is typically what fits the current situation best (modeled as a state in the DRRN), also described by text. Because of the exponential complexity of natural language with respect to sentence length, there is typically an unbounded set of unique actions. Even with a constrained vocabulary, the action space is very large and sparse, posing challenges for learning. To address this challenge, the DRRN extracts separate high-level embedding vectors from the texts that describe states and actions, respectively, using a general interaction function, such as inner product, bilinear, and DNN interaction, between these embedding vectors to approximate the Qfunction. We evaluate the DRRN on two popular text games, showing superior performance over other deep Q-learning architectures.",
"title": ""
},
{
"docid": "d31c6830ee11fc73b53c7930ad0e638f",
"text": "This paper proposes two rectangular ring planar monopole antennas for wideband and ultra-wideband applications. Simple planar rectangular rings are used to design the planar antennas. These rectangular rings are designed in a way to achieve the wideband operations. The operating frequency band ranges from 1.85 GHz to 4.95 GHz and 3.12 GHz to 14.15 GHz. The gain varies from 1.83 dBi to 2.89 dBi for rectangular ring wideband antenna and 1.89 dBi to 5.2 dBi for rectangular ring ultra-wideband antenna. The design approach and the results are discussed.",
"title": ""
},
{
"docid": "27329c67322a5ed2c4f2a7dd6ceb79a8",
"text": "In the world’s largest-ever deployment of online voting, the iVote Internet voting system was trusted for the return of 280,000 ballots in the 2015 state election in New South Wales, Australia. During the election, we performed an independent security analysis of parts of the live iVote system and uncovered severe vulnerabilities that could be leveraged to manipulate votes, violate ballot privacy, and subvert the verification mechanism. These vulnerabilities do not seem to have been detected by the election authorities before we disclosed them, despite a preelection security review and despite the system having run in a live state election for five days. One vulnerability, the result of including analytics software from an insecure external server, exposed some votes to complete compromise of privacy and integrity. At least one parliamentary seat was decided by a margin much smaller than the number of votes taken while the system was vulnerable. We also found fundamental protocol flaws, including vote verification that was itself susceptible to manipulation. This incident underscores the difficulty of conducting secure elections online and carries lessons for voters, election officials, and the e-voting research community.",
"title": ""
},
{
"docid": "2b8296f8760e826046cd039c58026f83",
"text": "This study provided a descriptive and quantitative comparative analysis of data from an assessment protocol for adolescents referred clinically for gender identity disorder (n = 192; 105 boys, 87 girls) or transvestic fetishism (n = 137, all boys). The protocol included information on demographics, behavior problems, and psychosexual measures. Gender identity disorder and transvestic fetishism youth had high rates of general behavior problems and poor peer relations. On the psychosexual measures, gender identity disorder patients had considerably greater cross-gender behavior and gender dysphoria than did transvestic fetishism youth and other control youth. Male gender identity disorder patients classified as having a nonhomosexual sexual orientation (in relation to birth sex) reported more indicators of transvestic fetishism than did male gender identity disorder patients classified as having a homosexual sexual orientation (in relation to birth sex). The percentage of transvestic fetishism youth and male gender identity disorder patients with a nonhomosexual sexual orientation self-reported similar degrees of behaviors pertaining to transvestic fetishism. Last, male and female gender identity disorder patients with a homosexual sexual orientation had more recalled cross-gender behavior during childhood and more concurrent cross-gender behavior and gender dysphoria than did patients with a nonhomosexual sexual orientation. The authors discuss the clinical utility of their assessment protocol.",
"title": ""
},
{
"docid": "ad45d9a69112010f84ff8d0fae04596d",
"text": "PURPOSE\nWe document the postpubertal outcome of feminizing genitoplasty.\n\n\nMATERIALS AND METHODS\nA total of 14 girls, mean age 13.1 years, with congenital adrenal hyperplasia were assessed under anesthesia by a pediatric urologist, plastic/reconstructive surgeon and gynecologist. Of these patients 13 had previously undergone feminizing genitoplasty in early childhood at 4 different specialist centers in the United Kingdom.\n\n\nRESULTS\nThe outcome of clitoral surgery was unsatisfactory (clitoral atrophy or prominent glans) in 6 girls, including 3 whose genitoplasty had been performed by 3 different specialist pediatric urologists. Additional vaginal surgery was necessary for normal comfortable intercourse in 13 patients. Fibrosis and scarring were most evident in those who had undergone aggressive attempts at vaginal reconstruction in infancy.\n\n\nCONCLUSIONS\nThese disappointing results, even in the hands of specialists, highlight the importance of late followup and challenge the prevailing assumption that total correction can be achieved with a single stage operation in infancy. Although simple exteriorization of a low vagina can reasonably be combined with cosmetic correction of virilized external genitalia in infancy, we now believe that in some cases it may be best to defer definitive reconstruction of the intermediate or high vagina until after puberty. The psychological issues surrounding sexuality in these patients are inadequately researched and poorly understood.",
"title": ""
},
{
"docid": "52c99a0230a309d57a996ffbebf95e22",
"text": "Recent distributed denial-of-service attacks demonstrate the high vulnerability of Internet of Things (IoT) systems and devices. Addressing this challenge will require scalable security solutions optimized for the IoT ecosystem.",
"title": ""
},
{
"docid": "c0ee7bd21a1a261a73f7b831c655ca00",
"text": "NMDA receptors are preeminent neurotransmitter-gated channels in the CNS, which respond to glutamate in a manner that integrates multiple external and internal cues. They belong to the ionotropic glutamate receptor family and fulfil unique and crucial roles in neuronal development and function. These roles depend on characteristic response kinetics, which reflect the operation of the receptors. Here, we review biologically salient features of the NMDA receptor signal and its mechanistic origins. Knowledge of distinctive NMDA receptor biophysical properties, their structural determinants and physiological roles is necessary to understand the physiological and neurotoxic actions of glutamate and to design effective therapeutics.",
"title": ""
}
] |
scidocsrr
|
d11c6cf92de9947a1f3311306eab65a4
|
Optimal and stable gait planning of a quadruped robot for trotting over uneven terrains
|
[
{
"docid": "48e26039d9b2e4ed3cfdbc0d3ba3f1d0",
"text": "This paper presents a trajectory generator and an active compliance control scheme, unified in a framework to synthesize dynamic, feasible and compliant trot-walking locomotion cycles for a stiff-by-nature hydraulically actuated quadruped robot. At the outset, a CoP-based trajectory generator that is constructed using an analytical solution is implemented to obtain feasible and dynamically balanced motion references in a systematic manner. Initial conditions are uniquely determined for symmetrical motion patterns, enforcing that trajectories are seamlessly connected both in position, velocity and acceleration levels, regardless of the given support phase. The active compliance controller, used simultaneously, is responsible for sufficient joint position/force regulation. An admittance block is utilized to compute joint displacements that correspond to joint force errors. In addition to position feedback, these joint displacements are inserted to the position control loop as a secondary feedback term. In doing so, active compliance control is achieved, while the position/force trade-off is modulated via the virtual admittance parameters. Various trot-walking experiments are conducted with the proposed framework using HyQ, a ~ 75kg hydraulically actuated quadruped robot. We present results of repetitive, continuous, and dynamically equilibrated trot-walking locomotion cycles, both on level surface and uneven surface walking experiments.",
"title": ""
},
{
"docid": "41bef2a78d95c413aa519b0b21f7a8e2",
"text": "In this paper, a “virtual slope method” for walking trajectory planning on stairs for biped robots is proposed. In conventional methods for walking on stairs, there are two problems about the zero-moment point (ZMP). One is a ZMP equation problem, and the other is a ZMP definition problem in a double-support phase. First, a ZMP equation on stairs is different from that on flat ground. Therefore, the same trajectory generation as flat ground cannot be implemented. This problem is defined as a “ZMP equation problem.” Second, the ZMP cannot be defined in the double-support phase on stairs because contact points of the feet do not constitute a plane. The ZMP can be defined only on the plane. This problem is defined as a “ZMP definition problem.” These two problems are solved concurrently by the virtual slope method. It is the method that regards the stairs as a virtual slope. In walking trajectory planning on a slope of the constant gradient, the two problems about the ZMP do not exist. Additionally, a trajectory planning procedure based on the virtual slope method is explained. The validity of the proposed method is confirmed by some simulations and experiments.",
"title": ""
}
] |
[
{
"docid": "63c815c9aa92acec6664c0865f1856e1",
"text": "We examined the role of kisspeptin and its receptor, the G-protein-coupled receptor GPR54, in governing the onset of puberty in the mouse. In the adult male and female mouse, kisspeptin (10-100 nM) evoked a remarkably potent, long-lasting depolarization of >90% of gonadotropin-releasing hormone (GnRH)-green fluorescent protein neurons in situ. In contrast, in juvenile [postnatal day 8 (P8) to P19] and prepubertal (P26-P33) male mice, kisspeptin activated only 27 and 44% of GnRH neurons, respectively. This developmental recruitment of GnRH neurons into a kisspeptin-responsive pool was paralleled by an increase in the ability of centrally administered kisspeptin to evoke luteinizing hormone secretion in vivo. To learn more about the mechanisms through which kisspeptin-GPR54 signaling at the GnRH neuron may change over postnatal development, we performed quantitative in situ hybridization for kisspeptin and GPR54 transcripts. Approximately 90% of GnRH neurons were found to express GPR54 mRNA in both juvenile and adult mice, without a detectable difference in the mRNA content between the age groups. In contrast, the expression of KiSS-1 mRNA increased dramatically across the transition from juvenile to adult life in the anteroventral periventricular nucleus (AVPV; p < 0.001). These results demonstrate that kisspeptin exerts a potent depolarizing effect on the excitability of almost all adult GnRH neurons and that the responsiveness of GnRH neurons to kisspeptin increases over postnatal development. Together, these observations suggest that activation of GnRH neurons by kisspeptin at puberty reflects a dual process involving an increase in kisspeptin input from the AVPV and a post-transcriptional change in GPR54 signaling within the GnRH neuron.",
"title": ""
},
{
"docid": "a9a608d7467839acf270a95f86f77a8a",
"text": "In this paper, we study the problem of question answering over knowledge base. We identify that the primary bottleneck in this problem is the difficulty in accurately predicting the relations connecting the subject entity to the object entities. We advocate a new model architecture, APVA, which includes a verification mechanism responsible for checking the correctness of predicted relations. The APVA framework naturally supports a well-principled iterative training procedure, which we call turbo training. We demonstrate via experiments that the APVA-TUBRO approach drastically improves the question answering performance. Title and Abstract in Chinese 面向知识库问答的 APVA-TURBO方法 本文主要围绕目前基于知识库的问答方法中的存在的问题展开研究。经过对现有各类问 答方法的调查与分析,我们发现目前知识库问答的瓶颈主要体现在关系预测上,即如何 准确地预测出问题中的实体与答案实体之间的关联,是需要面临的最大挑战。因此本 文在现有问答框架的基础上,加入了负责对预测关系的可靠性进行评价检验的验证机 制,用以增强关系预测效果,并提出了一种新的知识库问答框架“APVA”。另外,文中 为 APVA 设计了一种具有良好理论基础的迭代训练过程,我们称之为“涡轮式(turbo)训 练”。通过实验证明,APVA-TUBRO方法可以在问答数据集上取得优异效果,大大提升 了目前问答方法的准确性。",
"title": ""
},
{
"docid": "5bee5208fa2676b7a7abf4ef01f392b8",
"text": "Artificial Intelligence (AI) is a general term that implies the use of a computer to model intelligent behavior with minimal human intervention. AI is generally accepted as having started with the invention of robots. The term derives from the Czech word robota, meaning biosynthetic machines used as forced labor. In this field, Leonardo Da Vinci's lasting heritage is today's burgeoning use of robotic-assisted surgery, named after him, for complex urologic and gynecologic procedures. Da Vinci's sketchbooks of robots helped set the stage for this innovation. AI, described as the science and engineering of making intelligent machines, was officially born in 1956. The term is applicable to a broad range of items in medicine such as robotics, medical diagnosis, medical statistics, and human biology-up to and including today's \"omics\". AI in medicine, which is the focus of this review, has two main branches: virtual and physical. The virtual branch includes informatics approaches from deep learning information management to control of health management systems, including electronic health records, and active guidance of physicians in their treatment decisions. The physical branch is best represented by robots used to assist the elderly patient or the attending surgeon. Also embodied in this branch are targeted nanorobots, a unique new drug delivery system. The societal and ethical complexities of these applications require further reflection, proof of their medical utility, economic value, and development of interdisciplinary strategies for their wider application.",
"title": ""
},
{
"docid": "3b5d119416d602a31d5975bacd7acc8e",
"text": "We present a parametric family of regression models for interval-censored event-time (survival) data that accomodates both fixed (e.g. baseline) and time-dependent covariates. The model employs a three-parameter family of survival distributions that includes the Weibull, negative binomial, and log-logistic distributions as special cases, and can be applied to data with left, right, interval, or non-censored event times. Standard methods, such as Newton-Raphson, can be employed to estimate the model and the resulting estimates have an asymptotically normal distribution about the true values with a covariance matrix that is consistently estimated by the information function. The deviance function is described to assess model fit and a robust sandwich estimate of the covariance may also be employed to provide asymptotically robust inferences when the model assumptions do not apply. Spline functions may also be employed to allow for non-linear covariates. The model is applied to data from a long-term study of type 1 diabetes to describe the effects of longitudinal measures of glycemia (HbA1c) over time (the time-dependent covariate) on the risk of progression of diabetic retinopathy (eye disease), an interval-censored event-time outcome.",
"title": ""
},
{
"docid": "b438df9eaffe249b62c1c01e5eefaa8a",
"text": "This paper presents a straightforward way to fully calibrate low-cost IMU sensor in field. In this approach, a plastic cube that is made by 3D printer is introduced as the body carrier of IMU. The accelerometer is firstly calibrated by using famous multi-position method and then followed with the correction of misalignment between sensor frame and body frame. A normal swivel chair is reformed as an easy turntable to perform smooth rotation. The gyroscope is calibrated by using angle domain method with the angular displacement reference that is estimated by using laser scan matching process. Experiment is executed and the comparison between raw readings and calibrated measurements is conducted, which verifies the validation of the proposed method.",
"title": ""
},
{
"docid": "23ae026d482a0d4805cac3bb0762aed0",
"text": "Time series motifs are pairs of individual time series, or subsequences of a longer time series, which are very similar to each other. As with their discrete analogues in computational biology, this similarity hints at structure which has been conserved for some reason and may therefore be of interest. Since the formalism of time series motifs in 2002, dozens of researchers have used them for diverse applications in many different domains. Because the obvious algorithm for computing motifs is quadratic in the number of items, more than a dozen approximate algorithms to discover motifs have been proposed in the literature. In this work, for the first time, we show a tractable exact algorithm to find time series motifs. As we shall show through extensive experiments, our algorithm is up to three orders of magnitude faster than brute-force search in large datasets. We further show that our algorithm is fast enough to be used as a subroutine in higher level data mining algorithms for anytime classification, near-duplicate detection and summarization, and we consider detailed case studies in domains as diverse as electroencephalograph interpretation and entomological telemetry data mining.",
"title": ""
},
{
"docid": "2b1e2b90d7fcff0f3b159908d58c0cae",
"text": "Existing blind image quality assessment (BIQA) methods are mostly opinion-aware. They learn regression models from training images with associated human subjective scores to predict the perceptual quality of test images. Such opinion-aware methods, however, require a large amount of training samples with associated human subjective scores and of a variety of distortion types. The BIQA models learned by opinion-aware methods often have weak generalization capability, hereby limiting their usability in practice. By comparison, opinion-unaware methods do not need human subjective scores for training, and thus have greater potential for good generalization capability. Unfortunately, thus far no opinion-unaware BIQA method has shown consistently better quality prediction accuracy than the opinion-aware methods. Here, we aim to develop an opinion-unaware BIQA method that can compete with, and perhaps outperform, the existing opinion-aware methods. By integrating the features of natural image statistics derived from multiple cues, we learn a multivariate Gaussian model of image patches from a collection of pristine natural images. Using the learned multivariate Gaussian model, a Bhattacharyya-like distance is used to measure the quality of each image patch, and then an overall quality score is obtained by average pooling. The proposed BIQA method does not need any distorted sample images nor subjective quality scores for training, yet extensive experiments demonstrate its superior quality-prediction performance to the state-of-the-art opinion-aware BIQA methods. The MATLAB source code of our algorithm is publicly available at www.comp.polyu.edu.hk/~cslzhang/IQA/ILNIQE/ILNIQE.htm.",
"title": ""
},
{
"docid": "462a832560542036295153549214e614",
"text": "In this paper, an on-line training PIDNN controller using an improved DEPSO algorithm for trajectory tracking of the ball and plate system is proposed. Since the ball and plate system is a typical under-actuated system with inherent nonlinearity and coupling between its parameters, the accurate mathematical model is difficult to be derived, so that a lot of nonlinear control and intelligent control methods are used for the ball and plate system control. The control method using a PID neural network is one of the intelligent methods. In this paper, an improved particle swarm optimization method based on differential evolution algorithm (DEPSO) is used to train the weighting factors of multilayered forward neural network. This PIDNN control method based on DEPSO algorithm can overcome the shortcoming of the BP algorithm which is easy to get into local minimum. At the same time, the simulation results of tracking control for ball and plate system show that the proposed PIDNN controller has simple structure, nice static and dynamic characteristics.",
"title": ""
},
{
"docid": "7cf14ea5044b95df4f618c4e2506f397",
"text": "0.18μm BCD technology with the best-in-class nLDMOS is presented. The drift of nLDMOS is optimized to ensure lowest Rsp by using multi-implants and appropriate thermal recipe. The optimized 24V nLDMOS has BV<inf>DSS</inf>=36V and Rsp=14.5 mΩ-mm<sup>2</sup>. Electrical SOA and long-term hot electron (HE) SOA are also evaluated. The maximum operating voltage less than 10% degradation of on-resistance is 24.4V.",
"title": ""
},
{
"docid": "8d3a5a9327ab93fef50712e931d0e06b",
"text": "Cite this article Romager JA, Hughes K, Trimble JE. Personality traits as predictors of leadership style preferences: Investigating the relationship between social dominance orientation and attitudes towards authentic leaders. Soc Behav Res Pract Open J. 2017; 3(1): 1-9. doi: 10.17140/SBRPOJ-3-110 Personality Traits as Predictors of Leadership Style Preferences: Investigating the Relationship Between Social Dominance Orientation and Attitudes Towards Authentic Leaders Original Research",
"title": ""
},
{
"docid": "2526915745dda9026836347292f79d12",
"text": "I show that a functional representation of self-similarity (as the one occurring in fractals) is provided by squeezed coherent states. In this way, the dissipative model of brain is shown to account for the self-similarity in brain background activity suggested by power-law distributions of power spectral densities of electrocorticograms. I also briefly discuss the action-perception cycle in the dissipative model with reference to intentionality in terms of trajectories in the memory state space.",
"title": ""
},
{
"docid": "3ea9d312027505fb338a1119ff01d951",
"text": "Many experiments provide evidence that practicing retrieval benefits retention relative to conditions of no retrieval practice. Nearly all prior research has employed retrieval practice requiring overt responses, but a few experiments have shown that covert retrieval also produces retention advantages relative to control conditions. However, direct comparisons between overt and covert retrieval are scarce: Does covert retrieval-thinking of but not producing responses-on a first test produce the same benefit as overt retrieval on a criterial test given later? We report 4 experiments that address this issue by comparing retention on a second test following overt or covert retrieval on a first test. In Experiment 1 we used a procedure designed to ensure that subjects would retrieve on covert as well as overt test trials and found equivalent testing effects in the 2 cases. In Experiment 2 we replicated these effects using a procedure that more closely mirrored natural retrieval processes. In Experiment 3 we showed that overt and covert retrieval produced equivalent testing effects after a 2-day delay. Finally, in Experiment 4 we showed that covert retrieval benefits retention more than restudying. We conclude that covert retrieval practice is as effective as overt retrieval practice, a conclusion that contravenes hypotheses in the literature proposing that overt responding is better. This outcome has an important educational implication: Students can learn as much from covert self-testing as they would from overt responding.",
"title": ""
},
{
"docid": "36b1972e3a1f8c8f192b80c8f49ef406",
"text": "Twitter, with its rising popularity as a micro-blogging website, has inevitably attracted the attention of spammers. Spammers use myriad of techniques to evade security mechanisms and post spam messages, which are either unwelcome advertisements for the victim or lure victims in to clicking malicious URLs embedded in spam tweets. In this paper, we propose several novel features capable of distinguishing spam accounts from legitimate accounts. The features analyze the behavioral and content entropy, bait-techniques, and profile vectors characterizing spammers, which are then fed into supervised learning algorithms to generate models for our tool, CATS. Using our system on two real-world Twitter data sets, we observe a 96% detection rate with about 0.8% false positive rate beating state of the art detection approach. Our analysis reveals detection of more than 90% of spammers with less than five tweets and about half of the spammers detected with only a single tweet. Our feature computation has low latency and resource requirement making fast detection feasible. Additionally, we cluster the unknown spammers to identify and understand the prevalent spam campaigns on Twitter.",
"title": ""
},
{
"docid": "f8e20046f9ad2e4ef63339f7c611e815",
"text": "We propose and evaluate a probabilistic framework for estimating a Twitter user's city-level location based purely on the content of the user's tweets, even in the absence of any other geospatial cues. By augmenting the massive human-powered sensing capabilities of Twitter and related microblogging services with content-derived location information, this framework can overcome the sparsity of geo-enabled features in these services and enable new location-based personalized information services, the targeting of regional advertisements, and so on. Three of the key features of the proposed approach are: (i) its reliance purely on tweet content, meaning no need for user IP information, private login information, or external knowledge bases; (ii) a classification component for automatically identifying words in tweets with a strong local geo-scope; and (iii) a lattice-based neighborhood smoothing model for refining a user's location estimate. The system estimates k possible locations for each user in descending order of confidence. On average we find that the location estimates converge quickly (needing just 100s of tweets), placing 51% of Twitter users within 100 miles of their actual location.",
"title": ""
},
{
"docid": "7550ec8917588a6adb629e3d1beabd76",
"text": "This paper describes the algorithm for deriving the total column ozone from spectral radiances and irradiances measured by the Ozone Monitoring Instrument (OMI) on the Earth Observing System Aura satellite. The algorithm is based on the differential optical absorption spectroscopy technique. The main characteristics of the algorithm as well as an error analysis are described. The algorithm has been successfully applied to the first available OMI data. First comparisons with ground-based instruments are very encouraging and clearly show the potential of the method.",
"title": ""
},
{
"docid": "b46a967ad85c5b64c0f14f703d385b24",
"text": "Bitcoin has shown great utility around the world with the drastic increase in its value and global consensus method of proof-of-work (POW). Over the years after the revolution in the digital transaction space, we are looking at major scalability issue with old POW consensus method and bitcoin peak limit of processing only 7 transactions per second. With more companies trying to adopt blockchain to modify their existing systems, blockchain working on old consensus methods and with scalability issues can't deliver the optimal solution. Specifically, with new trends like smart contracts and DAPPs, much better performance is needed to support any actual business applications. Such requirements are pushing the new platforms away from old methods of consensus and adoption of off-chain solutions. In this paper, we discuss various scalability issues with the Bitcoin and Ethereum blockchain and recent proposals like the lighting protocol, sharding, super quadratic sharding, DPoS to solve these issues. We also draw the comparison between these proposals on their ability to overcome scalability limits and highlighting major problems in these approaches. In the end, we propose our solution to suffice the scalability issue and conclude with the fact that with better scalability, blockchain has the potential to outrageously support varied domains of the industry.",
"title": ""
},
{
"docid": "3d02737fa76e85619716a9dc7136248a",
"text": "Recent captioning models are limited in their ability to scale and describe concepts unseen in paired image-text corpora. We propose the Novel Object Captioner (NOC), a deep visual semantic captioning model that can describe a large number of object categories not present in existing image-caption datasets. Our model takes advantage of external sources – labeled images from object recognition datasets, and semantic knowledge extracted from unannotated text. We propose minimizing a joint objective which can learn from these diverse data sources and leverage distributional semantic embeddings, enabling the model to generalize and describe novel objects outside of image-caption datasets. We demonstrate that our model exploits semantic information to generate captions for hundreds of object categories in the ImageNet object recognition dataset that are not observed in MSCOCO image-caption training data, as well as many categories that are observed very rarely. Both automatic evaluations and human judgements show that our model considerably outperforms prior work in being able to describe many more categories of objects.",
"title": ""
},
{
"docid": "ac2d4f4e6c73c5ab1734bfeae3a7c30a",
"text": "While neural, encoder-decoder models have had significant empirical success in text generation, there remain several unaddressed problems with this style of generation. Encoderdecoder models are largely (a) uninterpretable, and (b) difficult to control in terms of their phrasing or content. This work proposes a neural generation system using a hidden semimarkov model (HSMM) decoder, which learns latent, discrete templates jointly with learning to generate. We show that this model learns useful templates, and that these templates make generation both more interpretable and controllable. Furthermore, we show that this approach scales to real data sets and achieves strong performance nearing that of encoderdecoder text generation models.",
"title": ""
}
] |
scidocsrr
|
f76b344b0c6bb08c656c098fb6f42633
|
Semantic Stixels: Depth is not enough
|
[
{
"docid": "d7562b32dc75c3b599980006ce924251",
"text": "This work concentrates on vision processing for ADAS and intelligent vehicle applications. We propose a color extension to the disparity-based Stixel World method, so that the road can be robustly distinguished from obstacles with respect to erroneous disparity measurements. Our extension learns color appearance models for road and obstacle classes in an online and self-supervised fashion. The algorithm is tightly integrated within the core of the optimization process of the original Stixel World, allowing for strong fusion of the disparity and color signals. We perform an extensive evaluation, including different self-supervised learning strategies and different color models. Our newly recorded, publicly available data set is intentionally focused on challenging traffic scenes with many low-texture regions, causing numerous disparity artifacts. In this evaluation, we increase the F-score of the drivable distance from 0.86 to 0.97, compared to a tuned version of the state-of-the-art baseline method. This clearly shows that our color extension increases the robustness of the Stixel World, by reducing the number of falsely detected obstacles while not deteriorating the detection of true obstacles.",
"title": ""
},
{
"docid": "a77eddf9436652d68093946fbe1d2ed0",
"text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.",
"title": ""
}
] |
[
{
"docid": "afa3fa35061b54c1ca662f0885b2e4be",
"text": "This paper discusses an analytical study that quantifies the expected earthquake-induced losses in typical office steel frame buildings designed with perimeter special moment frames in highly seismic regions. It is shown that for seismic events associated with low probabilities of occurrence, losses due to demolition and collapse may be significantly overestimated when the expected loss computations are based on analytical models that ignore the composite beam effects and the interior gravity framing system of a steel frame building. For frequently occurring seismic events building losses are dominated by non-structural content repairs. In this case, the choice of the analytical model representation of the steel frame building becomes less important. Losses due to demolition and collapse in steel frame buildings with special moment frames designed with strong-column/weak-beam ratio larger than 2.0 are reduced by a factor of two compared with those in the same frames designed with a strong-column/weak-beam ratio larger than 1.0 as recommended in ANSI/AISC-341-10. The expected annual losses (EALs) of steel frame buildings with SMFs vary from 0.38% to 0.74% over the building life expectancy. The EALs are dominated by repairs of accelerationsensitive non-structural content followed by repairs of drift-sensitive non-structural components. It is found that the effect of strong-column/weak-beam ratio on EALs is negligible. This is not the case when the present value of life-cycle costs is selected as a loss-metric. It is advisable to employ a combination of loss-metrics to assess the earthquake-induced losses in steel frame buildings with special moment frames depending on the seismic performance level of interest. Copyright c © 2017 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "8442bf64a1c89bbddb6ffb8001b1381e",
"text": "In this paper we present a scalable hardware architecture to implement large-scale convolutional neural networks and state-of-the-art multi-layered artificial vision systems. This system is fully digital and is a modular vision engine with the goal of performing real-time detection, recognition and segmentation of mega-pixel images. We present a performance comparison between a software, FPGA and ASIC implementation that shows a speed up in custom hardware implementations.",
"title": ""
},
{
"docid": "ce24b783f2157fdb4365b60aa2e6163a",
"text": "Geosciences is a field of great societal relevance that requires solutions to several urgent problems facing our humanity and the planet. As geosciences enters the era of big data, machine learning (ML)— that has been widely successful in commercial domains—offers immense potential to contribute to problems in geosciences. However, problems in geosciences have several unique challenges that are seldom found in traditional applications, requiring novel problem formulations and methodologies in machine learning. This article introduces researchers in the machine learning (ML) community to these challenges offered by geoscience problems and the opportunities that exist for advancing both machine learning and geosciences. We first highlight typical sources of geoscience data and describe their properties that make it challenging to use traditional machine learning techniques. We then describe some of the common categories of geoscience problems where machine learning can play a role, and discuss some of the existing efforts and promising directions for methodological development in machine learning. We conclude by discussing some of the emerging research themes in machine learning that are applicable across all problems in the geosciences, and the importance of a deep collaboration between machine learning and geosciences for synergistic advancements in both disciplines.",
"title": ""
},
{
"docid": "7fc3dfcc8fa43c36938f41877a65bed7",
"text": "We propose a real-time RGB-based pipeline for object detection and 6D pose estimation. Our novel 3D orientation estimation is based on a variant of the Denoising Autoencoder that is trained on simulated views of a 3D model using Domain Randomization. This so-called Augmented Autoencoder has several advantages over existing methods: It does not require real, pose-annotated training data, generalizes to various test sensors and inherently handles object and view symmetries. Instead of learning an explicit mapping from input images to object poses, it provides an implicit representation of object orientations defined by samples in a latent space. Experiments on the T-LESS and LineMOD datasets show that our method outperforms similar modelbased approaches and competes with state-of-the art approaches that require real pose-annotated images. 1",
"title": ""
},
{
"docid": "b41c05577d59271495ce60c104469854",
"text": "A method for human head pose estimation in multicamera environments is proposed. The method computes the textured visual hull of the subject and unfolds the texture of the head on a hypothetical sphere around it, whose parameterization is iteratively rotated so that the face eventually occurs on its equator. This gives rise to a spherical image, in which face detection is simplified, because exactly one frontal face is guaranteed to appear in it. In this image, the face center yields two components of pose (yaw, pitch), while the third (roll) is retrieved from the orientation of the major symmetry axis of the face. Face detection applied on the original images reduces the required iterations and anchors tracking drift. The method is demonstrated and evaluated in several data sets, including ones with known ground truth. Experimental results show that the proposed method is accurate and robust to distant imaging, despite the low-resolution appearance of subjects.",
"title": ""
},
{
"docid": "1819af3b3d96c182b7ea8a0e89ba5bbe",
"text": "The fingerprint is one of the oldest and most widely used biometric modality for person identification. Existing automatic fingerprint matching systems perform well when the same sensor is used for both enrollment and verification (regular matching). However, their performance significantly deteriorates when different sensors are used (cross-matching, fingerprint sensor interoperability problem). We propose an automatic fingerprint verification method to solve this problem. It was observed that the discriminative characteristics among fingerprints captured with sensors of different technology and interaction types are ridge orientations, minutiae, and local multi-scale ridge structures around minutiae. To encode this information, we propose two minutiae-based descriptors: histograms of gradients obtained using a bank of Gabor filters and binary gradient pattern descriptors, which encode multi-scale local ridge patterns around minutiae. In addition, an orientation descriptor is proposed, which compensates for the spurious and missing minutiae problem. The scores from the three descriptors are fused using a weighted sum rule, which scales each score according to its verification performance. Extensive experiments were conducted using two public domain benchmark databases (FingerPass and Multi-Sensor Optical and Latent Fingerprint) to show the effectiveness of the proposed system. The results showed that the proposed system significantly outperforms the state-of-the-art methods based on minutia cylinder-code (MCC), MCC with scale, VeriFinger—a commercial SDK, and a thin-plate spline model.",
"title": ""
},
{
"docid": "400d7dd2d6575edc3a5f34667a8eb426",
"text": "The Internet has facilitated the emergence of new strategies and business models in several industries. In the UK, significant changes are happening in supermarket retailing with the introduction of online shopping, especially in terms of channel development and coordination, business scope redefinition, the development of fulfilment centre model and core processes, new ways of customer value creation, and online partnerships. In fact the role of online supermarket itself has undergone some significant changes in the last few years. Based on recent empirical evidence gathered in the UK, this paper will illustrate current developments in the strategies and business models of online supermarket retailing. The main evidence has been collected through an online survey of 6 online supermarkets and in-depth case studies of two leading players. Some of the tendencies are comparable to what happened in retail banking with the introduction of Internet banking, but other tendencies are unique to the supermarket retailing industry. This is a rapidly evolving area and further studies are clearly needed.",
"title": ""
},
{
"docid": "87fe73a5bc0b80fd0af1d0e65d1039c1",
"text": "Reactive programming improves the design of reactive applications by relocating the logic for managing dependencies between dependent values away from the application logic to the language implementation. Many distributed applications are reactive. Yet, existing change propagation algorithms are not suitable in a distributed setting.\n We propose Distributed REScala, a reactive language with a change propagation algorithm that works without centralized knowledge about the topology of the dependency structure among reactive values and avoids unnecessary propagation of changes, while retaining safety guarantees (glitch freedom). Distributed REScala enables distributed reactive programming, bringing the benefits of reactive programming to distributed applications. We demonstrate the enabled design improvements by a case study. We also empirically evaluate the performance of our algorithm in comparison to other algorithms in a simulated distributed setting.",
"title": ""
},
{
"docid": "7b5f6f0e3c1af5cc4047b8cec373de24",
"text": "Recognizing lexical entailment (RLE) always plays an important role in inference of natural language, i.e., identifying whether one word entails another, for example, fox entails animal. In the literature, automatically recognizing lexical entailment for word pairs deeply relies on words’ contextual representations. However, as a “prototype” vector, a single representation cannot reveal multifaceted aspects of the words due to their homonymy and polysemy. In this paper, we propose a supervised Context-Enriched Neural Network (CENN) method for recognizing lexical entailment. To be specific, we first utilize multiple embedding vectors from different contexts to represent the input word pairs. Then, through different combination methods and attention mechanism, we integrate different embedding vectors and optimize their weights to predict whether there are entailment relations in word pairs. Moreover, our proposed framework is flexible and open to handle different word contexts and entailment perspectives in the text corpus. Extensive experiments on five datasets show that our approach significantly improves the performance of automatic RLE in comparison with several state-of-the-art methods.",
"title": ""
},
{
"docid": "bd3e5a403cc42952932a7efbd0d57719",
"text": "The acoustic echo cancellation system is very important in the communication applications that are used these days; in view of this importance we have implemented this system practically by using DSP TMS320C6713 Starter Kit (DSK). The acoustic echo cancellation system was implemented based on 8 subbands techniques using Least Mean Square (LMS) algorithm and Normalized Least Mean Square (NLMS) algorithm. The system was evaluated by measuring the performance according to Echo Return Loss Enhancement (ERLE) factor and Mean Square Error (MSE) factor. Keywords—Acoustic echo canceller; Least Mean Square (LMS); Normalized Least Mean Square (NLMS); TMS320C6713; 8 subbands adaptive filter",
"title": ""
},
{
"docid": "53821da1274fd420fe0f7eeba024b95d",
"text": "An empirical study was performed to train naive subjects in the use of a prototype Boolean logic-based information retrieval system on a bibliographic database. Subjects were undergraduates with little or no prior computing experience. Subjects trained with a conceptual model of the system performed better than subjects trained with procedural instructions, but only on complex, problem-solving tasks. Performance was equal on simple tasks. Differences in patterns of interaction with the system (based on a stochastic process model) showed parallel results. Most subjects were able to articulate some description of the system's operation, but few articulated a model similar to the card catalog analogy provided in training. Eleven of 43 subjects were unable to achieve minimal competency in system use. The failure rate was equal between training conditions and genders; the only differences found between those passing and failing the benchmark test were academic major and in frequency of library use.",
"title": ""
},
{
"docid": "ee473a0bb8b96249e61ad5e3925c11c2",
"text": "Simple, short, and compact hashtags cover a wide range of information on social networks. Although many works in the field of natural language processing (NLP) have demonstrated the importance of hashtag recommendation, hashtag recommendation for images has barely been studied. In this paper, we introduce the HARRISON dataset, a benchmark on hashtag recommendation for real world images in social networks. The HARRISON dataset is a realistic dataset, composed of 57,383 photos from Instagram and an average of 4.5 associated hashtags for each photo. To evaluate our dataset, we design a baseline framework consisting of visual feature extractor based on convolutional neural network (CNN) and multi-label classifier based on neural network. Based on this framework, two single feature-based models, object-based and scene-based model, and an integrated model of them are evaluated on the HARRISON dataset. Our dataset shows that hashtag recommendation task requires a wide and contextual understanding of the situation conveyed in the image. As far as we know, this work is the first vision-only attempt at hashtag recommendation for real world images in social networks. We expect this benchmark to accelerate the advancement of hashtag recommendation.",
"title": ""
},
{
"docid": "38102dfe63b707499c2f01e2e46b4031",
"text": "Recent advances in convolutional neural networks have considered model complexity and hardware efficiency to enable deployment onto embedded systems and mobile devices. For example, it is now well-known that the arithmetic operations of deep networks can be encoded down to 8-bit fixed-point without significant deterioration in performance. However, further reduction in precision down to as low as 3-bit fixed-point results in significant losses in performance. In this paper we propose a new data representation that enables state-of-the-art networks to be encoded to 3 bits with negligible loss in classification performance. To perform this, we take advantage of the fact that the weights and activations in a trained network naturally have non-uniform distributions. Using non-uniform, base-2 logarithmic representation to encode weights, communicate activations, and perform dot-products enables networks to 1) achieve higher classification accuracies than fixed-point at the same resolution and 2) eliminate bulky digital multipliers. Finally, we propose an end-to-end training procedure that uses log representation at 5-bits, which achieves higher final test accuracy than linear at 5-bits.",
"title": ""
},
{
"docid": "7d7c596d334153f11098d9562753a1ee",
"text": "The design of systems for intelligent control of urban traffic is important in providing a safe environment for pedestrians and motorists. Artificial neural networks (ANNs) (learning systems) and expert systems (knowledge-based systems) have been extensively explored as approaches for decision making. While the ANNs compute decisions by learning from successfully solved examples, the expert systems rely on a knowledge base developed by human reasoning for decision making. It is possible to integrate the learning abilities of an ANN and the knowledge-based decision-making ability of the expert system. This paper presents a real-time intelligent decision making system, IDUTC, for urban traffic control applications. The system integrates a backpropagation-based ANN that can learn and adapt to the dynamically changing environment and a fuzzy expert system for decision making. The performance of the proposed intelligent decision-making system is evaluated by mapping the the adaptable traffic light control problem. The application is implemented using the ANN approach, the FES approach, and the proposed integrated system approach. The results of extensive simulations using the three approaches indicate that the integrated system provides better performance and leads to a more efficient implementation than the other two approaches.",
"title": ""
},
{
"docid": "979a3ca422e92147b25ca1b8e8ff9e5a",
"text": "Open Information Extraction (Open IE) is a promising approach for unrestricted Information Discovery (ID). While Open IE is a highly scalable approach, allowing unsupervised relation extraction from open domains, it currently has some limitations. First, it lacks the expressiveness needed to properly represent and extract complex assertions that are abundant in text. Second, it does not consolidate the extracted propositions, which causes simple queries above Open IE assertions to return insufficient or redundant information. To address these limitations, we propose in this position paper a novel representation for ID – Propositional Knowledge Graphs (PKG). PKGs extend the Open IE paradigm by representing semantic inter-proposition relations in a traversable graph. We outline an approach for constructing PKGs from single and multiple texts, and highlight a variety of high-level applications that may leverage PKGs as their underlying information discovery and representation framework.",
"title": ""
},
{
"docid": "0c1f01d9861783498c44c7c3d0acd57e",
"text": "We understand a sociotechnical system as a multistakeholder cyber-physical system. We introduce governance as the administration of such a system by the stakeholders themselves. In this regard, governance is a peer-to-peer notion and contrasts with traditional management, which is a top-down hierarchical notion. Traditionally, there is no computational support for governance and it is achieved through out-of-band interactions among system administrators. Not surprisingly, traditional approaches simply do not scale up to large sociotechnical systems.\n We develop an approach for governance based on a computational representation of norms in organizations. Our approach is motivated by the Ocean Observatory Initiative, a thirty-year $400 million project, which supports a variety of resources dealing with monitoring and studying the world's oceans. These resources include autonomous underwater vehicles, ocean gliders, buoys, and other instrumentation as well as more traditional computational resources. Our approach has the benefit of directly reflecting stakeholder needs and assuring stakeholders of the correctness of the resulting governance decisions while yielding adaptive resource allocation in the face of changes in both stakeholder needs and physical circumstances.",
"title": ""
},
{
"docid": "431581766931936e22acdae57fb192be",
"text": "Social network analysis (SNA), in essence, is not a formal theory in social science, but rather an approach for investigating social structures, which is why SNA is often referred to as structural analysis [1]. The most important difference between social network analysis and the traditional or classic social research approach is that the contexts of the social actor, or the relationships between actors are the first considerations of the former, while the latter focuses on individual properties. A social network is a group of collaborating, and/or competing individuals or entities that are related to each other. It may be presented as a graph, or a multi-graph; each participant in the collaboration or competition is called an actor and depicted as a node in the graph theory. Valued relations between actors are depicted as links, or ties, either directed or undirected, between the corresponding nodes. Actors can be persons, organizations, or groups – any set of related entities. As such, SNA may be used on different levels, ranging from individuals, web pages, families, small groups, to large organizations, parties, and even to nations. According to the well known SNA researcher Lin Freeman [2], network analysis is based on the intuitive notion that these patterns are important features of the lives of the individuals or social entities who display them; Network analysts believe that how an individual lives, or social entity depends in large part on how that they are tied into the larger web of social connections/structures. Many believe, moreover, that the success or failure of societies and organizations often depends on the patterning of their internal structure. With a history of more than 70 years, SNA as an interdisciplinary technique developed under many influences, which come from different fields such as sociology, mathematics and computer science, are becoming increasingly important across many disciplines, including sociology, economics, communication science, and psychology around the world. In the current chapter of this book, the author discusses",
"title": ""
},
{
"docid": "835fd7a4410590a3d848222eb3159aeb",
"text": "Modularity in organizations can facilitate the creation and development of dynamic capabilities. Paradoxically, however, modular management can also stifle the strategic potential of such capabilities by conflicting with the horizontal integration of units. We address these issues through an examination of how modular management of information technology (IT), project teams and front-line personnel in concert with knowledge management (KM) interventions influence the creation and development of dynamic capabilities at a large Asia-based call center. Our findings suggest that a full capitalization of the efficiencies created by modularity may be closely linked to the strategic sense making abilities of senior managers to assess the long-term business value of the dominant designs available in the market. Drawing on our analysis we build a modular management-KM-dynamic capabilities model, which highlights the evolution of three different levels of dynamic capabilities and also suggests an inherent complementarity between modular and integrated approaches. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "36f37bdf7da56a57f29d026dca77e494",
"text": "Fifth generation (5G) systems are expected to introduce a revolution in the ICT domain with innovative networking features, such as device-to-device (D2D) communications. Accordingly, in-proximity devices directly communicate with each other, thus avoiding routing the data across the network infrastructure. This innovative technology is deemed to be also of high relevance to support effective heterogeneous objects interconnection within future IoT ecosystems. However, several open challenges shall be solved to achieve a seamless and reliable deployment of proximity-based communications. In this paper, we give a contribution to trust and security enhancements for opportunistic hop-by-hop forwarding schemes that rely on cellular D2D communications. To tackle the presence of malicious nodes in the network, reliability and reputation notions are introduced to model the level of trust among involved devices. To this aim, social-awareness of devices is accounted for, to better support D2D-based multihop content uploading. Our simulative results in small-scale IoT environments, demonstrate that data loss due to malicious nodes can be drastically reduced and gains in uploading time be reached with the proposed solution.",
"title": ""
},
{
"docid": "615a24719fe4300ea8971e86014ed8fe",
"text": "This paper presents a new code for the analysis of gamma spectra generated by an equipment for continuous measurement of gamma radioactivity in aerosols with paper filter. It is called pGamma and has been developed by the Nuclear Engineering Research Group at the Technical University of Catalonia - Barcelona Tech and by Raditel Serveis i Subministraments Tecnològics, Ltd. The code has been developed to identify the gamma emitters and to determine their activity concentration. It generates alarms depending on the activity of the emitters and elaborates reports. Therefore it includes a library with NORM and artificial emitters of interest. The code is being adapted to the monitors of the Environmental Radiological Surveillance Network of the local Catalan Government in Spain (Generalitat de Catalunya) and is used at three stations of the Network.",
"title": ""
}
] |
scidocsrr
|
4c247298eda287f71f6f88803b0f3beb
|
Ultra-Wideband Crossover Using Microstrip-to-Coplanar Waveguide Transitions
|
[
{
"docid": "75961ecd0eadf854ad9f7d0d76f7e9c8",
"text": "This paper presents the design of a microstrip-CPW transition where the CPW line propagates close to slotline mode. This design allows the solution to be determined entirely though analytical techniques. In addition, a planar via-less microwave crossover using this technique is proposed. The experimental results at 5 GHz show that the crossover has a minimum isolation of 32 dB. It also has low in-band insertion loss and return loss of 1.2 dB and 18 dB respectively over more than 44 % of bandwidth.",
"title": ""
},
{
"docid": "0f9cdcf2d00ebf8dd5a5ed4dec253aee",
"text": "The letter describes double microstrip-slot transitions for use in planar ± 90° phase shifters. The described devices exhibit broadband performance and offer compatibility with ordinary microstrip circuits. Full-wave EM simulation results show a phase shift of ± 90° ± 7° over the frequency band of 3.1-12.0 GHz when compared with a suitably chosen section of microstripline. The observed differential phase shift is accompanied by return losses of not less than 14 dB and insertion losses between 0.7 to 1.5 dB in the band 3.1-11.0 GHz. The simulated performance is confirmed by experimental results of ± 90° ± 8° phase shift, return loss not less than 14 dB and insertion loss between 0.5 and 1.8 dB in the frequency band of 3.1-11.0 GHz.",
"title": ""
}
] |
[
{
"docid": "e6f34af64d18dba5de82d107828cd979",
"text": "Multiple computational cameras can be assembled from a common set of imaging components.",
"title": ""
},
{
"docid": "4dda701b0bf796f044abf136af7b0a9c",
"text": "Legacy substation automation protocols and architectures typically provided basic functionality for power system automation and were designed to accommodate the technical limitations of the networking technology available for implementation. There has recently been a vast improvement in networking technology that has changed dramatically what is now feasible for power system automation in the substation. Technologies such as switched Ethernet, TCP/IP, high-speed wide area networks, and high-performance low-cost computers are providing capabilities that could barely be imagined when most legacy substation automation protocols were designed. In order to take advantage of modern technology to deliver additional new benefits to users of substation automation, the International Electrotechnical Commission (IEC) has developed and released a new global standard for substation automation: IEC 61850. The paper provides a basic technical overview of IEC 61850 and discusses the benefits of each major aspect of the standard. The concept of a virtual model comprising both physical and logical device models that includes a set of standardized communications services are described along with explanations of how these standardized models, object naming conventions, and communication services bring significant benefits to the substation automation user. New services to support self-describing devices and object-orient peer-to-peer data exchange are explained with an emphasis on how these services can be applied to reduce costs for substation automation. The substation configuration language (SCL) of IEC 61850 is presented with information on how the standardization of substation configuration will impact the future of substation automation. The paper concludes with a brief introduction to the UCA International Users Group as a forum where users and suppliers cooperate in improving substation automation with testing, education, and demonstrations of IEC 61850 and other IEC standards technology",
"title": ""
},
{
"docid": "a4d89f698e3049adc70bcd51b26878cc",
"text": "The design and measured results of a 2 times 2 microstrip line fed U-slot rectangular antenna array are presented. The U-slot patches and the feeding network are placed on the same layer, resulting in a very simple structure. The advantage of the microstrip line fed U-slot patch is that it is easy to form the array. An impedance bandwidth (VSWR < 2) of 18% ranging from 5.65 GHz to 6.78 GHz is achieved. The radiation performance including radiation pattern, cross polarization, and gain is also satisfactory within this bandwidth. The measured peak gain of the array is 11.5 dBi. The agreement between simulated results and the measurement ones is good. The 2 times 2 array may be used as a module to form larger array.",
"title": ""
},
{
"docid": "a3dbc3b7a06a2f506874da4ded926351",
"text": "The problem of graph classification has attracted great interest in the last decade. Current research on graph classification assumes the existence of large amounts of labeled training graphs. However, in many applications, the labels of graph data are very expensive or difficult to obtain, while there are often copious amounts of unlabeled graph data available. In this paper, we study the problem of semi-supervised feature selection for graph classification and propose a novel solution, called gSSC, to efficiently search for optimal subgraph features with labeled and unlabeled graphs. Different from existing feature selection methods in vector spaces which assume the feature set is given, we perform semi-supervised feature selection for graph data in a progressive way together with the subgraph feature mining process. We derive a feature evaluation criterion, named gSemi, to estimate the usefulness of subgraph features based upon both labeled and unlabeled graphs. Then we propose a branch-and-bound algorithm to efficiently search for optimal subgraph features by judiciously pruning the subgraph search space. Empirical studies on several real-world tasks demonstrate that our semi-supervised feature selection approach can effectively boost graph classification performances with semi-supervised feature selection and is very efficient by pruning the subgraph search space using both labeled and unlabeled graphs.",
"title": ""
},
{
"docid": "0f5c1d2503a2845e409d325b085bf600",
"text": "We present Accel, a novel semantic video segmentation system that achieves high accuracy at low inference cost by combining the predictions of two network branches: (1) a reference branch that extracts high-detail features on a reference keyframe, and warps these features forward using frame-to-frame optical flow estimates, and (2) an update branch that computes features of adjustable quality on the current frame, performing a temporal update at each video frame. The modularity of the update branch, where feature subnetworks of varying layer depth can be inserted (e.g. ResNet-18 to ResNet-101), enables operation over a new, state-of-the-art accuracy-throughput trade-off spectrum. Over this curve, Accel models achieve both higher accuracy and faster inference times than the closest comparable single-frame segmentation networks. In general, Accel significantly outperforms previous work on efficient semantic video segmentation, correcting warping-related error that compounds on datasets with complex dynamics. Accel is end-to-end trainable and highly modular: the reference network, the optical flow network, and the update network can each be selected independently, depending on application requirements, and then jointly fine-tuned. The result is a robust, general system for fast, high-accuracy semantic segmentation on video.",
"title": ""
},
{
"docid": "5171afa49c3990e88bd5aa877966e8c2",
"text": "There is a growing interest among scientists and the lay public alike in using the South American psychedelic brew, ayahuasca, to treat psychiatric disorders like depression and anxiety. Such a practice is controversial due to a style of reasoning within conventional psychiatry that sees psychedelic-induced modified states of consciousness as pathological. This article analyzes the academic literature on ayahuasca’s psychological effects to determine how this style of reasoning is shaping formal scientific discourse on ayahuasca’s therapeutic potential as a treatment for depression and anxiety. Findings from these publications suggest that different kinds of experiments are differentially affected by this style of reasoning but can nonetheless indicate some potential therapeutic utility of the ayahuasca-induced modified state of consciousness. The article concludes by suggesting ways in which conventional psychiatry’s dominant style of reasoning about psychedelic modified states of consciousness could be reconsidered. k e yword s : ayahuasca, psychedelic, hallucinogen, psychiatry, depression",
"title": ""
},
{
"docid": "048f237ad6cb844a79c63d7f6f3d6aa9",
"text": "Superpixel segmentation has emerged as an important research problem in the areas of image processing and computer vision. In this paper, we propose a framework, namely Iterative Spanning Forest (ISF), in which improved sets of connected superpixels (supervoxels in 3D) can be generated by a sequence of Image Foresting Transforms. In this framework, one can choose the most suitable combination of ISF components for a given application - i.e., i) a seed sampling strategy, ii) a connectivity function, iii) an adjacency relation, and iv) a seed pixel recomputation procedure. The superpixels in ISF structurally correspond to spanning trees rooted at those seeds. We present five ISF-based methods to illustrate different choices for those components. These methods are compared with a number of state-of-the-art approaches with respect to effectiveness and efficiency. Experiments are carried out on several datasets containing 2D and 3D objects with distinct texture and shape properties, including a high-level application, named sky image segmentation. The theoretical properties of ISF are demonstrated in the supplementary material and the results show ISF-based methods rank consistently among the best for all datasets.",
"title": ""
},
{
"docid": "19937d689287ba81d2d01efd9ce8f2e4",
"text": "We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better than more shallow ones. Learning is surprisingly rapid. NORB is completely trained within five epochs. Test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.",
"title": ""
},
{
"docid": "a35387165cc7ca200b8eaa4b829086c8",
"text": "This paper presents a new density-based clustering algorithm, ST-DBSCAN, which is based on DBSCAN. We propose three marginal extensions to DBSCAN related with the identification of (i) core objects, (ii) noise objects, and (iii) adjacent clusters. In contrast to the existing density-based clustering algorithms, our algorithm has the ability of discovering clusters according to non-spatial, spatial and temporal values of the objects. In this paper, we also present a spatial–temporal data warehouse system designed for storing and clustering a wide range of spatial–temporal data. We show an implementation of our algorithm by using this data warehouse and present the data mining results. 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "67808f54305bc2bb2b3dd666f8b4ef42",
"text": "Sensing devices are becoming the source of a large portion of the Web data. To facilitate the integration of sensed data with data from other sources, both sensor stream sources and data are being enriched with semantic descriptions, creating Linked Stream Data. Despite its enormous potential, little has been done to explore Linked Stream Data. One of the main characteristics of such data is its “live” nature, which prohibits existing Linked Data technologies to be applied directly. Moreover, there is currently a lack of tools to facilitate publishing Linked Stream Data and making it available to other applications. To address these issues we have developed the Linked Stream Middleware (LSM), a platform that brings together the live real world sensed data and the Semantic Web. A LSM deployment is available at http://lsm.deri.ie/. It provides many functionalities such as: i) wrappers for real time data collection and publishing; ii) a web interface for data annotation and visualisation; and iii) a SPARQL endpoint for querying unified Linked Stream Data and Linked Data. In this paper we describe the system architecture behind LSM, provide details how Linked Stream Data is generated, and demonstrate the benefits of the platform by showcasing its interface.",
"title": ""
},
{
"docid": "1c90adf8ec68ff52e777b2041f8bf4c4",
"text": "In many situations we have some measurement of confidence on “positiveness” for a binary label. The “positiveness” is a continuous value whose range is a bounded interval. It quantifies the affiliation of each training data to the positive class. We propose a novel learning algorithm called expectation loss SVM (eSVM) that is devoted to the problems where only the “positiveness” instead of a binary label of each training sample is available. Our e-SVM algorithm can also be readily extended to learn segment classifiers under weak supervision where the exact positiveness value of each training example is unobserved. In experiments, we show that the e-SVM algorithm can effectively address the segment proposal classification task under both strong supervision (e.g. the pixel-level annotations are available) and the weak supervision (e.g. only bounding-box annotations are available), and outperforms the alternative approaches. Besides, we further validate this method on two major tasks of computer vision: semantic segmentation and object detection. Our method achieves the state-of-the-art object detection performance on PASCAL VOC 2007 dataset.",
"title": ""
},
{
"docid": "5b15a833cb6b4d9dd56dea59edb02cf8",
"text": "BACKGROUND\nQuantification of the biomechanical properties of each individual medial patellar ligament will facilitate an understanding of injury patterns and enhance anatomic reconstruction techniques by improving the selection of grafts possessing appropriate biomechanical properties for each ligament.\n\n\nPURPOSE\nTo determine the ultimate failure load, stiffness, and mechanism of failure of the medial patellofemoral ligament (MPFL), medial patellotibial ligament (MPTL), and medial patellomeniscal ligament (MPML) to assist with selection of graft tissue for anatomic reconstructions.\n\n\nSTUDY DESIGN\nDescriptive laboratory study.\n\n\nMETHODS\nTwenty-two nonpaired, fresh-frozen cadaveric knees were dissected free of all soft tissue structures except for the MPFL, MPTL, and MPML. Two specimens were ultimately excluded because their medial structure fibers were lacerated during dissection. The patella was obliquely cut to test the MPFL and the MPTL-MPML complex separately. To ensure that the common patellar insertion of the MPTL and MPML was not compromised during testing, only one each of the MPML and MPTL were tested per specimen (n = 10 each). Specimens were secured in a dynamic tensile testing machine, and the ultimate load, stiffness, and mechanism of failure of each ligament (MPFL = 20, MPML = 10, and MPTL = 10) were recorded.\n\n\nRESULTS\nThe mean ± SD ultimate load of the MPFL (178 ± 46 N) was not significantly greater than that of the MPTL (147 ± 80 N; P = .706) but was significantly greater than that of the MPML (105 ± 62 N; P = .001). The mean ultimate load of the MPTL was not significantly different from that of the MPML ( P = .210). Of the 20 MPFLs tested, 16 failed by midsubstance rupture and 4 by bony avulsion on the femur. Of the 10 MPTLs tested, 9 failed by midsubstance rupture and 1 by bony avulsion on the patella. Finally, of the 10 MPMLs tested, all 10 failed by midsubstance rupture. No significant difference was found in mean stiffness between the MPFL (23 ± 6 N/mm2) and the MPTL (31 ± 21 N/mm2; P = .169), but a significant difference was found between the MPFL and the MPML (14 ± 8 N/mm2; P = .003) and between the MPTL and MPML ( P = .028).\n\n\nCONCLUSION\nThe MPFL and MPTL had comparable ultimate loads and stiffness, while the MPML had lower failure loads and stiffness. Midsubstance failure was the most common type of failure; therefore, reconstruction grafts should meet or exceed the values reported herein.\n\n\nCLINICAL RELEVANCE\nFor an anatomic medial-sided knee reconstruction, the individual biomechanical contributions of the medial patellar ligamentous structures (MPFL, MPTL, and MPML) need to be characterized to facilitate an optimal reconstruction design.",
"title": ""
},
{
"docid": "4125dba64f9d693a8b89854ee712eca5",
"text": "Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences. While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled. We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame. By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. To train our network, we use 1,132 240-fps video clips, containing 300K individual video frames. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.",
"title": ""
},
{
"docid": "4653c085c5b91107b5eb637e45364943",
"text": "Legged locomotion excels when terrains become too rough for wheeled systems or open-loop walking pattern generators to succeed, i.e., when accurate foot placement is of primary importance in successfully reaching the task goal. In this paper we address the scenario where the rough terrain is traversed with a static walking gait, and where for every foot placement of a leg, the location of the foot placement was selected irregularly by a planning algorithm. Our goal is to adjust a smooth walking pattern generator with the selection of every foot placement such that the COG of the robot follows a stable trajectory characterized by a stability margin relative to the current support triangle. We propose a novel parameterization of the COG trajectory based on the current position, velocity, and acceleration of the four legs of the robot. This COG trajectory has guaranteed continuous velocity and acceleration profiles, which leads to continuous velocity and acceleration profiles of the leg movement, which is ideally suited for advanced model-based controllers. Pitch, yaw, and ground clearance of the robot are easily adjusted automatically under any terrain situation. We evaluate our gait generation technique on the Little-Dog quadruped robot when traversing complex rocky and sloped terrains.",
"title": ""
},
{
"docid": "b1284e354e6b35e0da08dfb5aa2e3e15",
"text": "Recent research on e-learning shows that blended learning is more effective than faceto-face learning. However, a clear empirical response has not been given to the cause of such improvement. Using a data set of 9044 students at two Catalan universities and a quasi-experimental approach, two possible hypotheses identified in previous research are studied. The results show that the principal cause of the improvement is not, in itself, the increase in time spent online for educational purposes. Rather, increasing the time devoted to studying online is only useful when it takes place as some form of interactive learning. The educational implications of these results are discussed. Introduction Internet use in higher education has grown exponentially in recent years (Allen & Seaman, 2010; OECD, 2010; Smith Jaggars & Bailey, 2010). However, with regard to improving students’ achievement, there is no agreement in the literature on what the benefits of Internet use are. Empirical research on the effectiveness of e-learning has focused mainly on comparing the results of fully computer-mediated teaching with those of face-to-face teaching (comparison of means). In such research, the results are often inconclusive and occasionally contradictory. Indeed, more recent meta-analyses generally tend to show that there are no significant differences in the level of learning that students achieve (Bernard et al, 2004; Means, Toyama, Murphy, Bakia & Jones, 2009). No significant differences between the effectiveness of online and face-to-face learning have been detected in recent research either. However, the incorporation of the Internet into face-to-face education has indeed been shown to have beneficial effects on academic achievement (Means et al, 2009; Tamim, Bernard, Borokhovski, Abrami & Schmid, 2011). The incorporation of technology into education is not a homogenous intervention (Ross, Morrison & Lowther, 2010). Today, the Internet can be used to accomplish various types of learning (Means et al, 2009): British Journal of Educational Technology Vol 45 No 1 2014 149–159 doi:10.1111/bjet.12007 © 2013 British Educational Research Association 1. Expository, where content is transmitted unidirectionally via the technology. 2. Active, where students use the technology individually to explore information and solve problems. 3. Interactive, where the technology mediates human interaction and knowledge emerges from such interaction. In addition, the different learning modalities are often linked to different learning conditions. In this respect, Means et al (2009) point out that Internet use in education cannot be compared with face-to-face education because Internet use was usually associated with different curricula, although nowadays the curricula are often the same. Furthermore, Internet use may lead to longer study times and greater interaction. Therefore, the higher academic achievement of students on blended learning courses compared to those on fully face-to-face courses (for a detailed review of research on this issue, see Tamim et al, 2011) cannot be explained as something intrinsic to the incorporation of the Internet into education, but rather as a consequence of the different instructional conditions that may be associated with the technology. 
However, while the conclusions of such studies (Means et al, 2009; Ross et al, 2010) highlight the need for further research to elucidate what the conditions are that make blended learning improve achievement, the data analysed in the studies reviewed do not allow this question to be answered. Aim In this research paper, we shall go a stage further and answer the following questions in the context of higher education: • With an equivalent curriculum, is increasing the time spent studying online always effective in terms of improving academic achievement? Practitioner Notes What is already known about this topic • The incorporation of technology into education is not a homogenous intervention. • Blended learning is often more effective than face-to-face learning. What this paper adds • This paper explores what causes blended learning to be more effective than face-to-face learning. • The principal cause of the improvement is not, in itself, the increase in the time spent studying that using the Internet for educational purposes may entail. • Increasing the time spent studying online is only useful when it takes place as some form of interactive learning. Implications for practice and/or policy • To maximise the time spent studying online, it is beneficial to interact with other actors in the learning process. • Incorporating the Internet as an interactive learning catalyst is an effective strategy to get the maximum benefit from the investment made in that technology. • Online interaction is especially relevant to the study of academic achievement inequality that the incorporation of the Internet can introduce into higher education systems. 150 British Journal of Educational Technology Vol 45 No 1 2014 © 2013 British Educational Research Association • With an equivalent curriculum, does the greater capacity of online instruction to incorporate interaction have a beneficial effect on improving academic achievement? For our analytical objectives, we divided the three types of learning that can be accomplished online into two: interactive learning and individual learning, the latter of which encompasses the categories of expository learning and active learning. On the basis of this division, we shall compare the effects of making an intensive use of the Internet to accomplish interactive learning with the effects of making an intensive use of the Internet for other, more individual types of learning activities. Taking account of the fact that the indicators measuring both treatments are comparable with respect to the weekly number of hours that students spend on each one, if it is shown that Internet use for interactive learning is more effective than Internet use for individual learning, then it will be possible to support the hypothesis that the principal element that makes the incorporation of e-learning effective is its capacity to increase the potential for interaction in the learning process. Data and methods Data We used an online questionnaire to obtain the necessary data for the research. In 2006, the questionnaire was sent out to all students attending two Catalan universities of different types. A generalist university (University of Barcelona) and a technical university (Universitat Politécnica de Catalunya. BarcelonaTech). These data were complemented with information from the Government of Catalonia’s administrative registers of academic achievement. 
The differences between the two universities studied allowed us to access a variety of students that made the results less dependent on the type of degree courses offered and on the curricula and teaching–learning strategies of each university. The information gathering method (online questionnaire sent via institutional email services) allowed those students that had dropped out of their studies (the number of students with zero credits passed was 3.37% from the sample and 7.6% from all students of the three selected universities) and that were not Internet users to be excluded, since a lack of response to the questionnaire served as a filter to eliminate those students for whom the effect of the Internet on achievement would be zero, either because they did not use the Internet or because their achievement was equal to zero. In total, information was available on 8046 students. The characteristics of the students were similar to the characteristics of all students of the three universities studied as a whole, other than the variables for academic achievement and Internet usage (time). Given the high number of individuals in the study, the results obtained have greater external validity than those obtained in the majority of experimental studies on the effect of Internet use in education. Treatment measurement and output In order to meet the proposed objectives, we divided the types of learning that can be accomplished online into two: interactive learning and individual learning, the latter of which encompasses the categories of expository learning and active learning. The items used to measure the degree of Internet use for learning are dichotomous variables, which include the students’ answers to a set of questions on whether they had used the Internet for different academic purposes. Five of such uses were defined as individual learning (searching for information, looking up teaching plans and bibliographical references, looking up course materials, work with bookmarks and subscription to mailing lists on the study area) and four as interactive learning (communicating with lecturers, communicating with fellow students, online The Internet in face-to-face higher education 151 © 2013 British Educational Research Association discussions on the study area and cooperative work). Table 1 shows the percentage of students who claim to use the Internet for each of the proposed purposes. In order to elaborate indices of use for each type of learning (individual and interactive), the number of uses from those described previously was totalled. The result of this operation is an ordinal variable reflecting the number of different uses that each student makes of the Internet in each of the proposed types of learning (individual and interactive). So the maximum is five uses for individual learning and four uses for interactive learning. In the analyses performed, students with intensive Internet use for individual learning were considered to be those making four or five of the uses defined as such (36.73% higher), and students with intensive Internet use for interaction to be those making three or four of the uses in that category (39.01% higher). In order to measure academic achievement in the 2005–2006 academic year, the offici",
"title": ""
},
{
"docid": "40c2110eaefe79a096099aa5db7426fe",
"text": "One-hop broadcasting is the predominate form of network traffic in VANETs. Exchanging status information by broadcasting among the vehicles enhances vehicular active safety. Since there is no MAC layer broadcasting recovery for 802.11 based VANETs, efforts should be made towards more robust and effective transmission of such safety-related information. In this paper, a channel adaptive broadcasting method is proposed. It relies solely on channel condition information available at each vehicle by employing standard supported sequence number mechanisms. The proposed method is fully compatible with 802.11 and introduces no communication overhead. Simulation studies show that it outperforms standard broadcasting in term of reception rate and channel utilization.",
"title": ""
},
{
"docid": "a1a8dc4d3c1c0d2d76e0f1cd0cb039d2",
"text": "73 generalized vertex median of a weighted graph, \" Operations Res., pp. 955-961, July 1967. and 1973, respectively. He spent two and a half years at Bell Laboratories , Murray Hill, NJ, developing telemetrized automatic surveillance and control systems. He is now Manager at Data Communications Systems, Vienna, VA, where he has major responsibilities in research and development of network analysis and design capabilities, and has applied these capabilities in the direction of projects ranging from feasability analysis and design of front end processors for the Navy to development of network architectures for the FAA. NY, responsible for contributing to the ongoing research in the areas of large network design, topological optimization for terminal access, the concentrator location problem, and flow and congestion control strategies for packet switching networks. At present, Absfruct-An algorithm is defined for establishing routing tables in the individual nodes of a data network. The routing fable at a node i specifies, for each other node j , what fraction of the traffic destined far node j should leave node i on each of the links emanating from node i. The algorithm is applied independently at each node and successively updates the routing table at that node based on information communicated between adjacent nodes about the marginal delay to each destination. For stationary input traffic statistics, the average delay per message through the network converges, with successive updates of the routing tables, to the minimum average delay over all routing assignments. The algorithm has the additional property that the traffic to each destination is guaranteed to be loop free at each iteration of the algorithm. In addition, a new global convergence theorem for non-continuous iteration algorithms is developed. INTRODUCTION T HE problem of routing assignments has been one of the most intensively studied areas in the field of data networks in recent years. These routing problems can be roughly classified as static routing, quasi-static routing, and dynamic routing. Static routing can be typified by the following type of problem. One wishes to establish a new data network and makes various assumptions about the node locations, the link locations, and the capacities of the links. Given the traffic between each source and destination, one can calculate the traffic on each link as a function of the routing of the traffic. If one approximates the queueing delays on each link as a function of the link traffic, one can …",
"title": ""
},
{
"docid": "82dcbecb4c1c6bb61ac9b029fc2f9871",
"text": "A complete list of the titles in this series appears at the end of this volume. No part of this publication may he reproduced, stored in a retrieval system, or transmitted in any form or by any means Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic format. For information about Wiley products, visit our web site at www.wiley.com.",
"title": ""
},
{
"docid": "ce020748bd9bc7529036aa41dcd59a92",
"text": "In this paper a new isolated SEPIC converter which is a proper choice for PV applications, is introduced and analyzed. The proposed converter has the advantage of high voltage gain while the switch voltage stress is same as a regular SEPIC converter. The converter operating modes are discussed and design considerations are presented. Also simulation results are illustrated which justifies the theoretical analysis. Finally the proposed converter is improved using active clamp technique.",
"title": ""
}
] |
scidocsrr
|
88786a5f653471b956befce547fc090e
|
Robotic calligraphy — Learning how to write single strokes of Chinese and Japanese characters
|
[
{
"docid": "1f37b0d252de40c55eee0109c168983b",
"text": "The algorithm may be programmed without multiplication or division instructions and is eficient with respect to speed of execution and memory utilization. This paper describes an algorithm for computer control of a type of digital plotter that is now in common use with digital computers .' The plotter under consideration is capable of executing, in response to an appropriate pulse, any one of the eight linear movements shown in Figure 1. Thus, the plotter can move linearly from a point on a mesh to any adjacent point on the mesh. A typical mesh size is 1/100th of an inch. The data to be plotted are expressed in an (x , y) rectangular coordinate system which has been scaled with respect to the mesh; i.e., the data points lie on mesh points and consequently have integral coordinates. It is assumed that the data include a sufficient number of appropriately selected points to produce a satisfactory representation of the curve by connecting the points with line segments, as illustrated in Figure 2. In Figure 3, the line segment connecting",
"title": ""
}
] |
[
{
"docid": "538f1b131a9803db07ab20f202ecc96e",
"text": "In this paper, we propose a direction-of-arrival (DOA) estimation method by combining multiple signal classification (MUSIC) of two decomposed linear arrays for the corresponding coprime array signal processing. The title “DECOM” means that, first, the nonlinear coprime array needs to be DECOMposed into two linear arrays, and second, Doa Estimation is obtained by COmbining the MUSIC results of the linear arrays, where the existence and uniqueness of the solution are proved. To reduce the computational complexity of DECOM, we design a two-phase adaptive spectrum search scheme, which includes a coarse spectrum search phase and then a fine spectrum search phase. Extensive simulations have been conducted and the results show that the DECOM can achieve accurate DOA estimation under different SNR conditions.",
"title": ""
},
{
"docid": "659736f536f23c030f6c9cd86df88d1d",
"text": "Studies of human addicts and behavioural studies in rodent models of addiction indicate that key behavioural abnormalities associated with addiction are extremely long lived. So, chronic drug exposure causes stable changes in the brain at the molecular and cellular levels that underlie these behavioural abnormalities. There has been considerable progress in identifying the mechanisms that contribute to long-lived neural and behavioural plasticity related to addiction, including drug-induced changes in gene transcription, in RNA and protein processing, and in synaptic structure. Although the specific changes identified so far are not sufficiently long lasting to account for the nearly permanent changes in behaviour associated with addiction, recent work has pointed to the types of mechanism that could be involved.",
"title": ""
},
{
"docid": "06e74a431b45aec75fb21066065e1353",
"text": "Despite the prevalence of sleep complaints among psychiatric patients, few questionnaires have been specifically designed to measure sleep quality in clinical populations. The Pittsburgh Sleep Quality Index (PSQI) is a self-rated questionnaire which assesses sleep quality and disturbances over a 1-month time interval. Nineteen individual items generate seven \"component\" scores: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medication, and daytime dysfunction. The sum of scores for these seven components yields one global score. Clinical and clinimetric properties of the PSQI were assessed over an 18-month period with \"good\" sleepers (healthy subjects, n = 52) and \"poor\" sleepers (depressed patients, n = 54; sleep-disorder patients, n = 62). Acceptable measures of internal homogeneity, consistency (test-retest reliability), and validity were obtained. A global PSQI score greater than 5 yielded a diagnostic sensitivity of 89.6% and specificity of 86.5% (kappa = 0.75, p less than 0.001) in distinguishing good and poor sleepers. The clinimetric and clinical properties of the PSQI suggest its utility both in psychiatric clinical practice and research activities.",
"title": ""
},
{
"docid": "ab32c8e5a5f8f7054d7a820514b1a84b",
"text": "Descriptions and reviews for products abound on the web and characterise the corresponding products through their aspects. Extracting these aspects is essential to better understand these descriptions, e.g., for comparing or recommending products. Current pattern-based aspect extraction approaches focus on flat patterns extracting flat sets of adjective-noun pairs. Aspects also have crucial importance on sentiment classification in which sentiments are matched with aspect-level expressions. A preliminary step in both aspect extraction and aspect based sentiment analysis is to detect aspect terms and opinion targets. In this paper, we propose a sequential learning approach to extract aspect terms and opinion targets from opinionated documents. For the first time, we use semi-markov conditional random fields for this task and we incorporate word embeddings as features into the learning process. We get comparative results on the benchmark datasets for the subtask of aspect term extraction in SemEval-2014 Task 4 and the subtask of opinion target extraction in SemEval-2015 Task 12. Our results show that word embeddings improve the detection accuracy for aspect terms and opinion targets.",
"title": ""
},
{
"docid": "f76088febc06463f01e98561d89d06cd",
"text": "We present a novel stereo-to-multiview video conversion method for glasses-free multiview displays. Different from previous stereo-to-multiview approaches, our mapping algorithm utilizes the limited depth range of autostereoscopic displays optimally and strives to preserve the scene’s artistic composition and perceived depth even under strong depth compression. We first present an investigation of how perceived image quality relates to spatial frequency and disparity. The outcome of this study is utilized in a two-step mapping algorithm, where we (i) compress the scene depth using a non-linear global function to the depth range of an autostereoscopic display, and (ii) enhance the depth gradients of salient objects to restore the perceived depth and salient scene structure. Finally, an adapted image domain warping algorithm is proposed to generate the multiview output, which enables overall disparity range extension.",
"title": ""
},
{
"docid": "48931b870057884b8b1c679781e2adc9",
"text": "Recommender systems have been researched extensively by the Technology Enhanced Learning (TEL) community during the last decade. By identifying suitable resources from a potentially overwhelming variety of choices, such systems offer a promising approach to facilitate both learning and teaching tasks. As learning is taking place in extremely diverse and rich environments, the incorporation of contextual information about the user in the recommendation process has attracted major interest. Such contextualization is researched as a paradigm for building intelligent systems that can better predict and anticipate the needs of users, and act more efficiently in response to their behavior. In this paper, we try to assess the degree to which current work in TEL recommender systems has achieved this, as well as outline areas in which further work is needed. First, we present a context framework that identifies relevant context dimensions for TEL applications. Then, we present an analysis of existing TEL recommender systems along these dimensions. Finally, based on our survey results, we outline topics on which further research is needed.",
"title": ""
},
{
"docid": "ba5d0acb79bcd3fd1ffdb85ed345badc",
"text": "Although the Transformer translation model (Vaswani et al., 2017) has achieved state-ofthe-art performance in a variety of translation tasks, how to use document-level context to deal with discourse phenomena problematic for Transformer still remains a challenge. In this work, we extend the Transformer model with a new context encoder to represent document-level context, which is then incorporated into the original encoder and decoder. As large-scale document-level parallel corpora are usually not available, we introduce a two-step training method to take full advantage of abundant sentence-level parallel corpora and limited document-level parallel corpora. Experiments on the NIST ChineseEnglish datasets and the IWSLT FrenchEnglish datasets show that our approach improves over Transformer significantly. 1",
"title": ""
},
{
"docid": "7c5f1b12f540c8320587ead7ed863ee5",
"text": "This paper studies the non-fragile mixed H∞ and passive synchronization problem for Markov jump neural networks. The randomly occurring controller gain fluctuation phenomenon is investigated for non-fragile strategy. Moreover, the mixed time-varying delays composed of discrete and distributed delays are considered. By employing stochastic stability theory, synchronization criteria are developed for the Markov jump neural networks. On the basis of the derived criteria, the non-fragile synchronization controller is designed. Finally, an illustrative example is presented to demonstrate the validity of the control approach.",
"title": ""
},
{
"docid": "03ab3aeee4eb4505956a0c516cab26dd",
"text": "The present study investigated the effect of 21 days of horizontal bed rest on cutaneous cold and warm sensitivity, and on behavioural temperature regulation. Healthy male subjects (N = 10) were accommodated in a hospital ward for the duration of the study and were under 24-h medical care. All activities (eating, drinking, hygiene, etc.) were conducted in the horizontal position. On the 1st and 22nd day of bed rest, cutaneous temperature sensitivity was tested by applying cold and warm stimuli of different magnitudes to the volar region of the forearm via a Peltier element thermode. Behavioural thermoregulation was assessed by having the subjects regulate the temperature of the water within a water-perfused suit (T wps) they were wearing. A control unit established a sinusoidal change in T wps, such that it varied from 27 to 42°C. The subjects could alter the direction of the change of T wps, when they perceived it as thermally uncomfortable. The magnitude of the oscillations towards the end of the trial was assumed to represent the upper and lower boundaries of the thermal comfort zone. The cutaneous threshold for detecting cold stimulus decreased (P < 0.05) from 1.6 (1.0)°C on day 1 to 1.0 (0.3)°C on day 22. No effect was observed on the ability to detect warm stimuli or on the regulated T wps. We conclude that although cold sensitivity increased after bed rest, it was not of sufficient magnitude to cause any alteration in behavioural thermoregulatory responses.",
"title": ""
},
{
"docid": "db5ff75a7966ec6c1503764d7e510108",
"text": "Qualitative content analysis as described in published literature shows conflicting opinions and unsolved issues regarding meaning and use of concepts, procedures and interpretation. This paper provides an overview of important concepts (manifest and latent content, unit of analysis, meaning unit, condensation, abstraction, content area, code, category and theme) related to qualitative content analysis; illustrates the use of concepts related to the research procedure; and proposes measures to achieve trustworthiness (credibility, dependability and transferability) throughout the steps of the research procedure. Interpretation in qualitative content analysis is discussed in light of Watzlawick et al.'s [Pragmatics of Human Communication. A Study of Interactional Patterns, Pathologies and Paradoxes. W.W. Norton & Company, New York, London] theory of communication.",
"title": ""
},
{
"docid": "58f1ba92eb199f4d105bf262b30dbbc5",
"text": "Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, and so on). One of such approaches is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned resorting to weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category. Thus, discovering and modeling these patterns are critical to improve the recognition performance in this representation. Since the emergence of large data sets, such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from the data. In this paper, we address many limitations of the original SM approach and related works. We propose discriminative patch representations using neural networks and further propose a hybrid architecture in which the semantic manifold is built on top of multiscale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. To combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the optimization problem, we analyze global and local approaches, where a top–down hierarchical algorithm has the best performance. Experimental results show that exploiting different types of contextual relations jointly consistently improves the recognition accuracy.",
"title": ""
},
{
"docid": "c526e32c9c8b62877cb86bc5b097e2cf",
"text": "This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a penbased direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-andDrop allows a user to pick up an object on a display and drop it on another display as if he/she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desk-top, and wall-sized pen computers. We also considered the importance of physical artifacts in designing user interfaces in a future computing environment.",
"title": ""
},
{
"docid": "c2c994664e3aecff1ccb8d8feaf860e9",
"text": "Hazard zones associated with LNG handling activities have been a major point of contention in recent terminal development applications. Debate has reflected primarily worst case scenarios and discussion of these. This paper presents results from a maximum credible event approach. A comparison of results from several models either run by the authors or reported in the literature is presented. While larger scale experimental trials will be necessary to reduce the uncertainty, in the interim a set of base cases are suggested covering both existing trials and credible and worst case events is proposed. This can assist users to assess the degree of conservatism present in quoted modeling approaches and model selections.",
"title": ""
},
{
"docid": "efde92d1e86ff0b5f91b006521935621",
"text": "Sizing equations for electrical machinery are developed from basic principles. The technique provides new insights into: 1. The effect of stator inner and outer diameters. 2. The amount of copper and steel used. 3. A maximizing function. 4. Equivalent slot dimensions in terms of diameters and flux density distribution. 5. Pole number effects. While the treatment is analytical, the scope is broad and intended to assist in the design of electrical machinery. Examples are given showing how the machine's internal geometry can assume extreme proportions through changes in basic variables.",
"title": ""
},
{
"docid": "f945b645e492e2b5c6c2d2d4ea6c57ae",
"text": "PURPOSE\nThe aim of this review was to look at relevant data and research on the evolution of ventral hernia repair.\n\n\nMETHODS\nResources including books, research, guidelines, and online articles were reviewed to provide a concise history of and data on the evolution of ventral hernia repair.\n\n\nRESULTS\nThe evolution of ventral hernia repair has a very long history, from the recognition of ventral hernias to its current management, with significant contributions from different authors. Advances in surgery have led to more cases of ventral hernia formation, and this has required the development of new techniques and new materials for ventral hernia management. The biocompatibility of prosthetic materials has been important in mesh development. The functional anatomy and physiology of the abdominal wall has become important in ventral hernia management. New techniques in abdominal wall closure may prevent or reduce the incidence of ventral hernia in the future.\n\n\nCONCLUSION\nThe management of ventral hernia is continuously evolving as it responds to new demands and new technology in surgery.",
"title": ""
},
{
"docid": "5508603a802abb9ab0203412b396b7bc",
"text": "We present an optimal algorithm for informative path planning (IPP), using a branch and bound method inspired by feature selection algorithms. The algorithm uses the monotonicity of the objective function to give an objective function-dependent speedup versus brute force search. We present results which suggest that when maximizing variance reduction in a Gaussian process model, the speedup is significant.",
"title": ""
},
{
"docid": "5744e87741b6154b333e0f24bb17f0ea",
"text": "We describe two new related resources that facilitate modelling of general knowledge reasoning in 4th grade science exams. The first is a collection of curated facts in the form of tables, and the second is a large set of crowd-sourced multiple-choice questions covering the facts in the tables. Through the setup of the crowd-sourced annotation task we obtain implicit alignment information between questions and tables. We envisage that the resources will be useful not only to researchers working on question answering, but also to people investigating a diverse range of other applications such as information extraction, question parsing, answer type identification, and lexical semantic modelling.",
"title": ""
},
{
"docid": "0e5a11ef4daeb969702e40ea0c50d7f3",
"text": "OBJECTIVES\nThe aim of this study was to assess the long-term safety and efficacy of the CYPHER (Cordis, Johnson and Johnson, Bridgewater, New Jersey) sirolimus-eluting coronary stent (SES) in percutaneous coronary intervention (PCI) for ST-segment elevation myocardial infarction (STEMI).\n\n\nBACKGROUND\nConcern over the safety of drug-eluting stents implanted during PCI for STEMI remains, and long-term follow-up from randomized trials are necessary. TYPHOON (Trial to assess the use of the cYPHer sirolimus-eluting stent in acute myocardial infarction treated with ballOON angioplasty) randomized 712 patients with STEMI treated by primary PCI to receive either SES (n = 355) or bare-metal stents (BMS) (n = 357). The primary end point, target vessel failure at 1 year, was significantly lower in the SES group than in the BMS group (7.3% vs. 14.3%, p = 0.004) with no increase in adverse events.\n\n\nMETHODS\nA 4-year follow-up was performed. Complete data were available in 501 patients (70%), and the survival status is known in 580 patients (81%).\n\n\nRESULTS\nFreedom from target lesion revascularization (TLR) at 4 years was significantly better in the SES group (92.4% vs. 85.1%; p = 0.002); there were no significant differences in freedom from cardiac death (97.6% and 95.9%; p = 0.37) or freedom from repeat myocardial infarction (94.8% and 95.6%; p = 0.85) between the SES and BMS groups. No difference in definite/probable stent thrombosis was noted at 4 years (SES: 4.4%, BMS: 4.8%, p = 0.83). In the 580 patients with known survival status at 4 years, the all-cause death rate was 5.8% in the SES and 7.0% in the BMS group (p = 0.61).\n\n\nCONCLUSIONS\nIn the 70% of patients with complete follow-up at 4 years, SES demonstrated sustained efficacy to reduce TLR with no difference in death, repeat myocardial infarction or stent thrombosis. (The Study to Assess AMI Treated With Balloon Angioplasty [TYPHOON]; NCT00232830).",
"title": ""
},
{
"docid": "44c66a2654fdc7ab72dabaa8e31f0e99",
"text": "The availability of new generation multispectral sensors of the Landsat 8 and Sentinel-2 satellite platforms offers unprecedented opportunities for long-term high-frequency monitoring applications. The present letter aims at highlighting some potentials and challenges deriving from the spectral and spatial characteristics of the two instruments. Some comparisons between corresponding bands and band combinations were performed on the basis of different datasets: the first consists of a set of simulated images derived from a hyperspectral Hyperion image, the other five consist instead of pairs of real images (Landsat 8 and Sentinel-2A) acquired on the same date, over five areas. Results point out that in most cases the two sensors can be well combined; however, some issues arise regarding near-infrared bands when Sentinel-2 data are combined with both Landsat 8 and older Landsat images.",
"title": ""
},
{
"docid": "b229aa8b39b3df3fec941ce4791a2fe9",
"text": "Translating information between text and image is a fundamental problem in artificial intelligence that connects natural language processing and computer vision. In the past few years, performance in image caption generation has seen significant improvement through the adoption of recurrent neural networks (RNN). Meanwhile, text-to-image generation begun to generate plausible images using datasets of specific categories like birds and flowers. We've even seen image generation from multi-category datasets such as the Microsoft Common Objects in Context (MSCOCO) through the use of generative adversarial networks (GANs). Synthesizing objects with a complex shape, however, is still challenging. For example, animals and humans have many degrees of freedom, which means that they can take on many complex shapes. We propose a new training method called Image-Text-Image (I2T2I) which integrates text-to-image and image-to-text (image captioning) synthesis to improve the performance of text-to-image synthesis. We demonstrate that I2T2I can generate better multi-categories images using MSCOCO than the state-of-the-art. We also demonstrate that I2T2I can achieve transfer learning by using a pre-trained image captioning module to generate human images on the MPII Human Pose dataset (MHP) without using sentence annotation.",
"title": ""
}
] |
scidocsrr
|
6fcd207466540ad8cccf033618cdb330
|
Automated Road Lane Detection for Intelligent Vehicles
|
[
{
"docid": "bbdd4ffd6797d00c3547626959118b92",
"text": "A vision system was designed to detect multiple lanes on structured highway using an “estimate and detect” scheme. It detected the lane in which the vehicle was driving (the central lane) and estimated the possible position of two adjacent lanes. Then the detection was made based on these estimations. The vehicle was first recognized if it was driving on a straight road or in a curve using its GPS position and the OpenStreetMap digital map. The two cases were processed differently. For straight road, the central lane was detected in the original image using Hough transformation and a simplified perspective transformation was designed to make estimations. In the case of curve path, a complete perspective transformation was performed and the central lane was detected by scanning at each row in the top view image. The system was able to detected lane marks that were not distinct or even obstructed by other vehicles.",
"title": ""
}
] |
[
{
"docid": "54f95cef02818cb4eb86339ee12a8b07",
"text": "The problem of discontinuities in broadband multisection coupled-stripline 3-dB directional couplers, phase shifters, high-pass tapered-line 3-dB directional couplers, and magic-T's, regarding the connections of coupled and terminating signal lines, is comprehensively investigated in this paper for the first time. The equivalent circuit of these discontinuities proposed in Part I has been used for accurate modeling of the broadband multisection and ultra-broadband high-pass coupled-stripline circuits. It has been shown that parasitic reactances, which result from the connections of signal and coupled lines, severely deteriorate the return losses and the isolation of such circuits and also-in case of tapered-line directional couplers-the coupling responses. Moreover, it has been proven theoretically and experimentally that these discontinuity effects can be substantially reduced by introducing compensating shunt capacitances in a number of cross sections of coupled and signal lines. Results of measurements carried out for various designed and manufactured coupled-line circuits have been very promising and have proven the efficiency of the proposed broadband compensation technique. The theoretical and measured data are given for the following coupled-stripline circuits: a decade-bandwidth asymmetric three-section 3-dB directional coupler, a decade-bandwidth three-section phase-shifter compensator, and a high-pass asymmetric tapered-line 3-dB coupler",
"title": ""
},
{
"docid": "2bf9e347e163d97c023007f4cc88ab02",
"text": "State-of-the-art graph kernels do not scale to large graphs with hundreds of nodes and thousands of edges. In this article we propose to compare graphs by counting graphlets, i.e., subgraphs with k nodes where k ∈ {3, 4, 5}. Exhaustive enumeration of all graphlets being prohibitively expensive, we introduce two theoretically grounded speedup schemes, one based on sampling and the second one specifically designed for bounded degree graphs. In our experimental evaluation, our novel kernels allow us to efficiently compare large graphs that cannot be tackled by existing graph kernels.",
"title": ""
},
{
"docid": "5a2c6049e23473a5845b17da4101ab41",
"text": "This paper discusses the design of a battery-less wirelessly-powered UWB system-on-a-chip (SoC) tag for area-constrained localization applications. An antenna-rectifier co-design methodology optimizes sensitivity and increases range under tag area constraints. A low-voltage (0.8-V) UWB TX enables high rectifier sensitivity by reducing required rectifier output voltage. The 2.4GHz rectifier, power-management unit and 8GHz UWB TX are integrated in 65nm CMOS and the rectifier demonstrates state-of-the-art -30.7dBm sensitivity for 1V output with only 1.3cm2 antenna area, representing a 2.3× improvement in sensitivity over previously published work, at 2.6× higher frequency with 9× smaller antenna area. Measurements in an office corridor demonstrate 20m range with 36dBm TX EIRP. The 0.8-V 8GHz UWB TX consumes 64pJ/pulse at 28MHz pulse repetition rate and achieves 2.4GHz -10dB bandwidth. Wireless measurements demonstrate sub-10cm range resolution at range > 10m.",
"title": ""
},
{
"docid": "5bd3626d03619cf300efb70ce0664513",
"text": "A front-illuminated global-shutter CMOS image sensor has been developed with super 35-mm optical format. We have developed a chip-on-chip integration process to realize a front-illuminated image sensor stacked with 2 diced logic chips through 38K micro bump interconnections. The global-shutter pixel achieves a parasitic light sensitivity of −99.6dB. The stacked device allows highly parallel column ADCs and high-speed output interfaces to attain a frame rate of 480 fps with 8.3M-pixel resolution.",
"title": ""
},
{
"docid": "00c5760f14752e8f455a3c48704b0f9c",
"text": "Secure and efficient lightweight user authentication protocol for mobile cloud computing becomes a paramount concern due to the data sharing using Internet among the end users and mobile devices. Mutual authentication of a mobile user and cloud service provider is necessary for accessing of any cloud services. However, resource constraint nature of mobile devices makes this task more challenging. In this paper, we propose a new secure and lightweight mobile user authentication scheme for mobile cloud computing, based on cryptographic hash, bitwise XOR, and fuzzy extractor functions. Through informal security analysis and rigorous formal security analysis using random oracle model, it has been demonstrated that the proposed scheme is secure against possible well-known passive and active attacks and also provides user anonymity. Moreover, we provide formal security verification through ProVerif 1.93 simulation for the proposed scheme. Also, we have done authentication proof of our proposed scheme using the Burrows-Abadi-Needham logic. Since the proposed scheme does not exploit any resource constrained cryptosystem, it has the lowest computation cost in compare to existing related schemes. Furthermore, the proposed scheme does not involve registration center in the authentication process, for which it is having lowest communication cost compared with existing related schemes.",
"title": ""
},
{
"docid": "a67df1737ca4e5cb41fe09ccb57c0e88",
"text": "Generation of electricity from solar energy has gained worldwide acceptance due to its abundant availability and eco-friendly nature. Even though the power generated from solar looks to be attractive; its availability is subjected to variation owing to many factors such as change in irradiation, temperature, shadow etc. Hence, extraction of maximum power from solar PV using Maximum Power Point Tracking (MPPT) method was the subject of study in the recent past. Among many methods proposed, Hill Climbing and Incremental Conductance MPPT methods were popular in reaching Maximum Power under constant irradiation. However, these methods show large steady state oscillations around MPP and poor dynamic performance when subjected to change in environmental conditions. On the other hand, bioinspired algorithms showed excellent characteristics when dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations. Hence, in this paper an attempt is made by applying modifications to Particle Swarm Optimization technique, with emphasis on initial value selection, for Maximum Power Point Tracking. The key features of this method include ability to track the global peak power accurately under change in environmental condition with almost zero steady state oscillations, faster dynamic response and easy implementation. Systematic evaluation has been carried out for different partial shading conditions and finally the results obtained are compared with existing methods. In addition, simulations results are validated via built-in hardware prototype. © 2015 Published by Elsevier B.V. 37 38 39 40 41 42 43 44 45 46 47 48 . Introduction Ever growing energy demand by mankind and the limited availbility of resources remain as a major challenge to the power sector ndustry. The need for renewable energy resources has been augented in large scale and aroused due to its huge availability nd pollution free operation. Among the various renewable energy esources, solar energy has gained worldwide recognition because f its minimal maintenance, zero noise and reliability. Because of he aforementioned advantages; solar energy have been widely sed for various applications, but not limited to, such as megawatt cale power plants, water pumping, solar home systems, commuPlease cite this article in press as: R. Venugopalan, et al., Modified Parti Tracking for uniform and under partial shading condition, Appl. Soft C ication satellites, space vehicles and reverse osmosis plants [1]. owever, power generation using solar energy still remain uncerain, despite of all the efforts, due to various factors such as poor ∗ Corresponding author at: SELECT, VIT University, Vellore, Tamilnadu 632014, ndia. Tel.: +91 9600117935; fax: +91 9490113830. E-mail address: sudhakar.babu2013@vit.ac.in (T. Sudhakarbabu). ttp://dx.doi.org/10.1016/j.asoc.2015.05.029 568-4946/© 2015 Published by Elsevier B.V. 49 50 51 52 conversion efficiency, high installation cost and reduced power output under varying environmental conditions. Further, the characteristics of solar PV are non-linear in nature imposing constraints on solar power generation. Therefore, to maximize the power output from solar PV and to enhance the operating efficiency of the solar photovoltaic system, Maximum Power Point Tracking (MPPT) algorithms are essential [2]. 
Various MPPT algorithms [3–5] have been investigated and reported in the literature and the most popular ones are Fractional Open Circuit Voltage [6–8], Fractional Short Circuit Current [9–11], Perturb and Observe (P&O) [12–17], Incremental Conductance (Inc. Cond.) [18–22], and Hill Climbing (HC) algorithm [23–26]. In fractional open circuit voltage, and fractional short circuit current method; its performance depends on an approximate linear correlation between Vmpp, Voc and Impp, Isc values. However, the above relation is not practically valid; hence, exact value of Maximum cle Swarm Optimization technique based Maximum Power Point omput. J. (2015), http://dx.doi.org/10.1016/j.asoc.2015.05.029 Power Point (MPP) cannot be assured. Perturb and Observe (P&O) method works with the voltage perturbation based on present and previous operating power values. Regardless of its simple structure, its efficiency principally depends on the tradeoff between the 53 54 55 56 ARTICLE IN G Model ASOC 2982 1–12 2 R. Venugopalan et al. / Applied Soft C Nomenclature IPV Current source Rs Series resistance Rp Parallel resistance VD diode voltage ID diode current I0 leakage current Vmpp voltage at maximum power point Voc open circuit voltage Impp current at maximum power point Isc short circuit current Vmpn nominal maximum power point voltage at 1000 W/m2 Npp number of parallel PV modules Nss number of series PV modules w weight factor c1 acceleration factor c2 acceleration factor pbest personal best position gbest global best position Vt thermal voltage K Boltzmann constant T temperature q electron charge Ns number of cells in series Vocn nominal open circuit voltage at 1000W/m2 G irradiation Gn nominal Irradiation Kv voltage temperature coefficient dT difference in temperature RLmin minimum value of load at output RLmax maximum value of load at output Rin internal resistance of the PV module RPVmin minimum reflective impedance of PV array RPVmax maximum reflective impedance of PV array R equivalent output load resistance t M o w t b A c M h n ( e a i p p w a u t H o i 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 o b converter efficiency racking speed and the steady state oscillations in the region of PP [15]. Incremental Conductance (Inc. Cond.) algorithm works n the principle of comparing ratios of Incremental Conductance ith instantaneous conductance and it has the similar disadvanage as that of P&O method [20,21]. HC method works alike P&O ut it is based on the perturbation of duty cycle of power converter. ll these traditional methods have the following disadvantages in ommon; reduced efficiency and steady state oscillations around PP. Realizing the above stated drawbacks; various researchers ave worked on applying certain Artificial Intelligence (AI) techiques like Neural Network (NN) [27,28] and Fuzzy Logic Control FLC) [29,30]. However, these techniques require periodic training, normous volume of data for training, computational complexity nd large memory capacity. Application of aforementioned MPPT methods for centralzed/string PV system is limited as they fail to track the global eak power under partial shading conditions. 
In addition, multile peaks occur in P-V curve under partial shading condition in hich the unique peak point i.e., global power peak should be ttained. However, when conventional MPPT techniques are used nder such conditions, they usually get trapped in any one of Please cite this article in press as: R. Venugopalan, et al., Modified Part Tracking for uniform and under partial shading condition, Appl. Soft C he local power peaks; drastically lowering the search efficiency. ence, to improve MPP tracking efficiency of conventional methds under PS conditions certain modifications have been proposed n Ref. [31]. Some used two stage approach to track the MPP [32]. PRESS omputing xxx (2015) xxx–xxx In the first stage, a wide search is performed which ensures that the operating point is moved closer to the global peak which is further fine-tuned in the second stage to reach the global peak value. Even though tracking efficiency has improved the method still fails to find the global maximum under all conditions. Another interesting approach is improving the Fibonacci search method for global MPP tracking [33]. Alike two stage method, this one also suffers from the same drawback that it does not guarantee accurate MPP tracking under all shaded conditions [34]. Yet another unique formulation combining DIRECT search method with P&O was put forward for global MPP tracking in Ref. [35]. Even though it is rendered effective, it is very complex and increases the computational burden. In the recent past, bio-inspired algorithms like GA, PSO and ACO have drawn considerable researcher’s attention for MPPT application; since they ensure sufficient class of accuracy while dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations [32,36–38]. Further, these methods offer various advantages such as computational simplicity, easy implementation and faster response. Among those methods, PSO method is largely discussed and widely used for solar MPPT due to the fact that it has simple structure, system independency, high adaptability and lesser number of tuning parameters. Further in PSO method, particles are allowed to move in random directions and the best values are evolved based on pbest and gbest values. This exploration process is very suitable for MPPT application. To improve the search efficiency of the conventional PSO method authors have proposed modifications to the existing algorithm. In Ref. [39], the authors have put forward an additional perception capability for the particles in search space so that best solutions are evolved with higher accuracy than PSO. However, details on implementation under partial shading condition are not discussed. Further, this method is only applicable when the entire module receive uniform insolation cannot be considered. Traditional PSO method is modified in Ref. [40] by introducing equations for velocity update and inertia. Even though the method showed better performance, use of extra coefficients in the conventional PSO search limits its advantage and increases the computational burden of the algorithm. Another approach",
"title": ""
},
{
"docid": "06d146f0f44775e05161a90a95f4eca9",
"text": "The authors discuss various filling agents currently available that can be used to augment the lips, correct perioral rhytides, and enhance overall lip appearance. Fillers are compared and information provided about choosing the appropriate agent based on the needs of each patient to achieve the much coveted \"pouty\" look while avoiding hypercorrection. The authors posit that the goal for the upper lip is to create a form that harmonizes with the patient's unique features, taking into account age and ethnicity; the goal for the lower lip is to create bulk, greater prominence, and projection of the vermillion.",
"title": ""
},
{
"docid": "7c82a4aa866d57dd6f592d848f727cff",
"text": "A novel printed diversity monopole antenna is presented for WiFi/WiMAX applications. The antenna comprises two crescent shaped radiators placed symmetrically with respect to a defected ground plane and a neutralization lines is connected between them to achieve good impedance matching and low mutual coupling. Theoretical and experimental characteristics are illustrated for this antenna, which achieves an impedance bandwidth of 54.5% (over 2.4-4.2 GHz), with a reflection coefficient <;-10 dB and mutual coupling <;-17 dB. An acceptable agreement is obtained for the computed and measured gain, radiation patterns, envelope correlation coefficient, and channel capacity loss. These characteristics demonstrate that the proposed antenna is an attractive candidate for multiple-input multiple-output portable or mobile devices.",
"title": ""
},
{
"docid": "0837e8d6e372b83ddb743a52f3a763fd",
"text": "This paper presents a new three dimensional kinematic and dynamic model for variable length continuum arm robotic structures using a novel shape function-based approach. The model incorporates geometrically constrained structure of the arm to derive its deformation shape function. It is able to simulate spatial bending, pure elongation, and incorporates a new stiffness control feature. The model is validated through numerical simulations, based on a prototype continuum arm, that yields physically accurate results.",
"title": ""
},
{
"docid": "18b32aa0ffd8a3a7b84f9768d57b5cde",
"text": "In this paper we propose a recognition system of medical concepts from free text clinical reports. Our approach tries to recognize also concepts which are named with local terminology, with medical writing scripts, short words, abbreviations and even spelling mistakes. We consider a clinical terminology ontology (Snomed-CT), as a dictionary of concepts. In a first step we obtain an embedding model using word2vec methodology from a big corpus database of clinical reports. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located in close proximity to one another in the space, and so the geometrical similarity can be considered a measure of semantic relation. We have considered 615513 emergency clinical reports from the Hospital \"Rafael Méndez\" in Lorca, Murcia. In these reports there are a lot of local language of the emergency domain, medical writing scripts, short words, abbreviations and even spelling mistakes. With the model obtained we represent the words and sentences as vectors, and by applying cosine similarity we identify which concepts of the ontology are named in the text. Finally, we represent the clinical reports (EHR) like a bag of concepts, and use this representation to search similar documents. The paper illustrates 1) how we build the word2vec model from the free text clinical reports, 2) How we extend the embedding from words to sentences, and 3) how we use the cosine similarity to identify concepts. The experimentation, and expert human validation, shows that: a) the concepts named in the text with the ontology terminology are well recognized, and b) others concepts that are not named with the ontology terminology are also recognized, obtaining a high precision and recall measures.",
"title": ""
},
{
"docid": "9cb832657be4d4d80682c1a49249a319",
"text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.08.023 ⇑ Corresponding author. Tel.: +47 73593602; fax: + E-mail address: Marielle.Christiansen@iot.ntnu.no This paper considers a maritime inventory routing problem faced by a major cement producer. A heterogeneous fleet of bulk ships transport multiple non-mixable cement products from producing factories to regional silo stations along the coast of Norway. Inventory constraints are present both at the factories and the silos, and there are upper and lower limits for all inventories. The ship fleet capacity is limited, and in peak periods the demand for cement products at the silos exceeds the fleet capacity. In addition, constraints regarding the capacity of the ships’ cargo holds, the depth of the ports and the fact that different cement products cannot be mixed must be taken into consideration. A construction heuristic embedded in a genetic algorithmic framework is developed. The approach adopted is used to solve real instances of the problem within reasonable solution time and with good quality solutions. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "461ec14463eb20962ef168de781ac2a2",
"text": "Local descriptors based on the image noise residual have proven extremely effective for a number of forensic applications, like forgery detection and localization. Nonetheless, motivated by promising results in computer vision, the focus of the research community is now shifting on deep learning. In this paper we show that a class of residual-based descriptors can be actually regarded as a simple constrained convolutional neural network (CNN). Then, by relaxing the constraints, and fine-tuning the net on a relatively small training set, we obtain a significant performance improvement with respect to the conventional detector.",
"title": ""
},
{
"docid": "e8318c6ef6d710b9da6ed4dff50066ec",
"text": "Convolution is one of the most important operators used in image processing. With the constant need to increase the performance in high-end applications and the rise and popularity of parallel architectures, such as GPUs and the ones implemented in FPGAs, comes the necessity to compare these architectures in order to determine which of them performs better and in what scenario. In this article, convolution was implemented in each of the aforementioned architectures with the following languages: CUDA for GPUs and Verilog for FPGAs. In addition, the same algorithms were also implemented in MATLAB, using predefined operations and in C using a regular x86 quad-core processor. Comparative performance measures, considering the execution time and the clock ratio, were taken and commented in the paper. Overall, it was possible to achieve a CUDA speedup of roughly 200× in comparison to C, 70× in comparison to Matlab and 20× in comparison to FPGA.",
"title": ""
},
{
"docid": "83863c6bb0da320b63eede2b5e783e83",
"text": "BACKGROUND\nUnsafe behavior is closely related to occupational accidents. Work pressure is one the main factors affecting employees' behavior. The aim of the present study was to provide a path analysis model for explaining how work pressure affects safety behavior.\n\n\nMETHODS\nUsing a self-administered questionnaire, six variables supposed to affect safety employees' behavior were measured. The path analysis model was constructed based on several hypotheses. The goodness of fit of the model was assessed using both absolute and comparative fit indices.\n\n\nRESULTS\nWork pressure was determined not to influence safety behavior directly. However, it negatively influenced other variables. Group attitude and personal attitude toward safety were the main factors mediating the effect of work pressure on safety behavior. Among the variables investigated in the present study, group attitude, personal attitude and work pressure had the strongest effects on safety behavior.\n\n\nCONCLUSION\nManagers should consider that in order to improve employees' safety behavior, work pressure should be reduced to a reasonable level, and concurrently a supportive environment, which ensures a positive group attitude toward safety, should be provided. Replication of the study is recommended.",
"title": ""
},
{
"docid": "e2630765e2fa4b203a4250cb5b69b9e9",
"text": "Thirteen years have passed since Karl Sims published his work onevolving virtual creatures. Since then,several novel approaches toneural network evolution and genetic algorithms have been proposed.The aim of our work is to apply recent results in these areas tothe virtual creatures proposed by Karl Sims, leading to creaturescapable of solving more complex tasks. This paper presents oursuccess in reaching the first milestone -a new and completeimplementation of the original virtual creatures. All morphologicaland control properties of the original creatures were implemented.Laws of physics are simulated using ODE library. Distributedcomputation is used for CPU-intensive tasks, such as fitnessevaluation.Experiments have shown that our system is capable ofevolving both morphology and control of the creatures resulting ina variety of non-trivial swimming and walking strategies.",
"title": ""
},
{
"docid": "a8d9293972cda5f1961e46130a435a1a",
"text": "The vehicle parking system is designed to prevent usual problems associated with the parking. This system is designed to solve the problem of locating empty parking slot, congestion and indiscriminate parking. The system has been designed using VHDL and successfully implemented on CPLD (family-MAX V,Device-5M1270ZT144C5, Board-Krypton v1.2). A complete design and layout was drawn out and suitably implemented. The different modules were separately tested and then combined together as a working model of intelligent vehicle parking system. The results discussed in this paper shows that very less hardware is utilized (21%) on CPLD board, thus proving the system to be cost effective.",
"title": ""
},
{
"docid": "5b97d597534e65bf5d00f89d8df97767",
"text": "Research into online gaming has steadily increased over the last decade, although relatively little research has examined the relationship between online gaming addiction and personality factors. This study examined the relationship between a number of personality traits (sensation seeking, self-control, aggression, neuroticism, state anxiety, and trait anxiety) and online gaming addiction. Data were collected over a 1-month period using an opportunity sample of 123 university students at an East Midlands university in the United Kingdom. Gamers completed all the online questionnaires. Results of a multiple linear regression indicated that five traits (neuroticism, sensation seeking, trait anxiety, state anxiety, and aggression) displayed significant associations with online gaming addiction. The study suggests that certain personality traits may be important in the acquisition, development, and maintenance of online gaming addiction, although further research is needed to replicate the findings of the present study.",
"title": ""
},
{
"docid": "1cf7ae85e797c3cb2d869afbc491a783",
"text": "Early diagnostic is one of the most important steps in cancer therapy which helps to design and choose a better therapeutic approach. The finding of biomarkers in various levels including genomics, transcriptomics, and proteomics levels could provide better treatment for various cancers such as chronic lymphocytic leukemia (CLL). The CLL is the one of main lymphoid malignancies which is specified by aggregation of mature B lymphocytes. Among different biomarkers (e.g., CD38, chromosomes abnormalities, ZAP-70, TP53, and microRNA [miRNA]), miRNAs have appeared as new diagnostic and therapeutic biomarkers in patients with the CLL disease. Multiple lines of evidence indicated that deregulation of miRNAs could be associated with pathological events which are present in the CLL. These molecules have an effect on a variety of targets such as Bcl2, c-fos, c-Myc, TP53, TCL1, and STAT3 which play critical roles in the CLL pathogenesis. It has been shown that expression of miRNAs could lead to the activation of B cells and B cell antigen receptor (BCR). Moreover, exosomes containing miRNAs are one of the other molecules which could contribute to BCR stimulation and progression of CLL cells. Hence, miRNAs and exosomes released from CLL cells could be used as potential diagnostic and therapeutic biomarkers for CLL. This critical review focuses on a very important aspect of CLL based on biomarker discovery covers the pros and cons of using miRNAs as important diagnostics and therapeutics biomarkers for this deadly disease.",
"title": ""
},
{
"docid": "a68ccab91995603b3dbb54e014e79091",
"text": "Qualitative models arising in artificial intelligence domain often concern real systems that are difficult to represent with traditional means. However, some promise for dealing with such systems is offered by research in simulation methodology. Such research produces models that combine both continuous and discrete-event formalisms. Nevertheless, the aims and approaches of the AI and the simulation communities remain rather mutually ill understood. Consequently, there is a need to bridge theory and methodology in order to have a uniform language when either analyzing or reasoning about physical systems. This article introduces a methodology and formalism for developing multiple, cooperative models of physical systems of the type studied in qualitative physics. The formalism combines discrete-event and continuous models and offers an approach to building intelligent machines capable of physical modeling and reasoning.",
"title": ""
},
{
"docid": "578130d8ef9d18041c84ed226af8c84a",
"text": "Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others.\n In this paper we propose fairness measures for ranked outputs. We develop a data generation procedure that allows us to systematically control the degree of unfairness in the output, and study the behavior of our measures on these datasets. We then apply our proposed measures to several real datasets, and detect cases of bias. Finally, we show preliminary results of incorporating our ranked fairness measures into an optimization framework, and show potential for improving fairness of ranked outputs while maintaining accuracy.\n The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/FairRank.",
"title": ""
}
] |
scidocsrr
|
0b8515c4c99945c79fc7b3f5e1956bd2
|
Inkjet printing of silver nanowire networks.
|
[
{
"docid": "15b05bdc1310d038110b545686082c98",
"text": "The class of materials combining high electrical or thermal conductivity, optical transparency and flexibility is crucial for the development of many future electronic and optoelectronic devices. Silver nanowire networks show very promising results and represent a viable alternative to the commonly used, scarce and brittle indium tin oxide. The science and technology research of such networks are reviewed to provide a better understanding of the physical and chemical properties of this nanowire-based material while opening attractive new applications.",
"title": ""
}
] |
[
{
"docid": "2536596ecba0498e7dbcb097695171b0",
"text": "How can we effectively encode evolving information over dynamic graphs into low-dimensional representations? In this paper, we propose DyRep – an inductive deep representation learning framework that learns a set of functions to efficiently produce low-dimensional node embeddings that evolves over time. The learned embeddings drive the dynamics of two key processes namely, communication and association between nodes in dynamic graphs. These processes exhibit complex nonlinear dynamics that evolve at different time scales and subsequently contribute to the update of node embeddings. We employ a time-scale dependent multivariate point process model to capture these dynamics. We devise an efficient unsupervised learning procedure and demonstrate that our approach significantly outperforms representative baselines on two real-world datasets for the problem of dynamic link prediction and event time prediction.",
"title": ""
},
{
"docid": "c366303728d2a8ee47fe4cbfe67dec24",
"text": "Terrestrial Gamma-ray Flashes (TGFs), discovered in 1994 by the Compton Gamma-Ray Observatory, are high-energy photon bursts originating in the Earth’s atmosphere in association with thunderstorms. In this paper, we demonstrate theoretically that, while TGFs pass through the atmosphere, the large quantities of energetic electrons knocked out by collisions between photons and air molecules generate excited species of neutral and ionized molecules, leading to a significant amount of optical emissions. These emissions represent a novel type of transient luminous events in the vicinity of the cloud tops. We show that this predicted phenomenon illuminates a region with a size notably larger than the TGF source and has detectable levels of brightness. Since the spectroscopic, morphological, and temporal features of this luminous event are closely related with TGFs, corresponding measurements would provide a novel perspective for investigation of TGFs, as well as lightning discharges that produce them.",
"title": ""
},
{
"docid": "62ddf025b9ac7556cea899254c271dfd",
"text": "The purpose of this project is to increase the security that customer use the ATM machine. Once user's bank card is lost and the password is stolen, the criminal will draw all cash in the shortest time, which will bring enormous financial losses to customer, so to rectify this problem we are implementing this project. The chip of LPC2148 is used for the core of microprocessor in ARM7, furthermore, an improved enhancement algorithm of fingerprint image increase the security that customer use the ATM machine.",
"title": ""
},
{
"docid": "c432ab159f9e71323ebb5b38f48702c0",
"text": "We consider the problem of grounding the meaning of words in the physical world and focus on the visual modality which we represent by visual attributes. We create a new large-scale taxonomy of visual attributes covering more than 500 concepts and their corresponding 688K images. We use this dataset to train attribute classifiers and integrate their predictions with text-based distributional models of word meaning. We show that these bimodal models give a better fit to human word association data compared to amodal models and word representations based on handcrafted norming data.",
"title": ""
},
{
"docid": "ae05afb899ac3a5bda26b20bde5af7ec",
"text": "A compact microstrip rat-race hybrid with a 50% bandwidth employing space-filling curves is reported in this letter. The footprint of the proposed design occupies 31% of the area of the conventional similar design. Across the frequency bandwidth, the maximum amplitude unbalance is 0.5 dB, the phase variation is plusmn5deg , the isolation is better than 25 dB and the return loss is greater than 10 dB. Moreover, the circuit is planar, easy to design, and consists of only one layer without requiring plated thru holes, slots or bonding wires.",
"title": ""
},
{
"docid": "9358b1401213fba02fed56be6cfea353",
"text": "Nowadays, many current real financial applications have nonlinear and uncertain behaviors which change across the time. Therefore, the need to solve highly nonlinear, time variant problems has been growing rapidly. These problems along with other problems of traditional models caused growing interest in artificial intelligent techniques. In this paper, comparative research review of three famous artificial intelligence techniques, i.e., artificial neural networks, expert systems and hybrid intelligence systems, in financial market has been done. A financial market also has been categorized on three domains: credit evaluation, portfolio management and financial prediction and planning. For each technique, most famous and especially recent researches have been discussed in comparative aspect. Results show that accuracy of these artificial intelligent methods is superior to that of traditional statistical methods in dealing with financial problems, especially regarding nonlinear patterns. However, this outperformance is not absolute.",
"title": ""
},
{
"docid": "c98e8abd72ba30e0d2cb2b7d146a3d13",
"text": "Process mining techniques help organizations discover and analyze business processes based on raw event data. The recently released \"Process Mining Manifesto\" presents guiding principles and challenges for process mining. Here, the authors summarize the manifesto's main points and argue that analysts should take into account the context in which events occur when analyzing processes.",
"title": ""
},
{
"docid": "c64cc935b0a898f66d8fd34bbbbb6832",
"text": "Zinc oxide (ZnO) appears as a promising preservative for pharmaceutical or cosmetic formulations. The other ingredients of the formulations may have specific interactions with ZnO that alter its antimicrobial properties. The influence of common formulation excipients on the antimicrobial efficacy of ZnO has been investigated in simple model systems and in typical topical products containing a complex formulation. A wide variety of formulation excipients have been investigated for their interactions with ZnO: antioxidants, chelating agents, electrolytes, titanium dioxide pigment. The antimicrobial activity of ZnO against Escherichia coli was partially inhibited by NaCl and MgSO4 salts. A synergistic influence of uncoated titanium dioxide has been observed. The interference effects of antioxidants and chelating agents were quite specific. The interactions of these substances with ZnO particles and with the soluble species released by ZnO were discussed so as to reach scientific guidelines for the choice of the ingredients. The preservative efficacy of ZnO was assessed by challenge testing in three different formulations: an oil-in-water emulsion; a water-in-oil emulsion and a dry powder. The addition of ZnO in complex formulations significantly improved the microbiological quality of the products, in spite of the presence of other ingredients that modulate the antimicrobial activity.",
"title": ""
},
{
"docid": "a827f7ceabd844453dcf81cf7f87c7db",
"text": "Steganography means hiding the secret message within an ordinary message and extraction of it as its destination. In the texture synthesis process here re-samples smaller texture image which gives a new texture image with a similar local appearance. In the existing system, work is done for the texture synthesis process but the embedding capacity of those systems is very low. In the project introduced the method SURTDS (steganography using reversible texture synthesis) for enhancing the embedding capacity of the system by using the difference expansion method with texture synthesis. Initially, this system evaluates the binary value of the secret image and converts this value into a decimal value. The process of embedding is performed by using the difference expansion techniques. Difference expansion computes the average and difference in a patch and embedded the value one by one. This system improves the embedding capacity of the stego image. The experimental result has verified that this system improves the embedding capacity of the SURTDS is better than the existing system.",
"title": ""
},
{
"docid": "b68a716a1ef3e7970b94ad7cda366b8b",
"text": "The underlying mechanisms and neuroanatomical correlates of theory of mind (ToM), the ability to make inferences on others' mental states, remain largely unknown. While numerous studies have implicated the ventromedial (VM) frontal lobes in ToM, recent findings have questioned the role of the prefrontal cortex. We designed two novel tasks that examined the hypothesis that affective ToM processing is distinct from that related to cognitive ToM and depends in part on separate anatomical substrates. The performance of patients with localized lesions in the VM was compared to responses of patients with dorsolateral lesions, mixed prefrontal lesions, and posterior lesions and with healthy control subjects. While controls made fewer errors on affective as compared to cognitive ToM conditions in both tasks, patients with VM damage showed a different trend. Furthermore, while affective ToM was mostly impaired by VM damage, cognitive ToM was mostly impaired by extensive prefrontal damage, suggesting that cognitive and affective mentalizing abilities are partly dissociable. By introducing the concept of 'affective ToM' to the study of social cognition, these results offer new insights into the mediating role of the VM in the affective facets of social behavior that may underlie the behavioral disturbances observed in these patients.",
"title": ""
},
{
"docid": "0d0fd1c837b5e45b83ee590017716021",
"text": "General intelligence and personality traits from the Five-Factor model were studied as predictors of academic achievement in a large sample of Estonian schoolchildren from elementary to secondary school. A total of 3618 students (1746 boys and 1872 girls) from all over Estonia attending Grades 2, 3, 4, 6, 8, 10, and 12 participated in this study. Intelligence, as measured by the Raven’s Standard Progressive Matrices, was found to be the best predictor of students’ grade point average (GPA) in all grades. Among personality traits (measured by self-reports on the Estonian Big Five Questionnaire for Children in Grades 2 to 4 and by the NEO Five Factor Inventory in Grades 6 to 12), Openness, Agreeableness, and Conscientiousness correlated positively and Neuroticism correlated negatively with GPA in almost every grade. When all measured variables were entered together into a regression model, intelligence was still the strongest predictor of GPA, being followed by Agreeableness in Grades 2 to 4 and Conscientiousness in Grades 6 to 12. Interactions between predictor variables and age accounted for only a small percentage of variance in GPA, suggesting that academic achievement relies basically on the same mechanisms through the school years. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6fcaea5228ea964854ab92cca69859d7",
"text": "The well-characterized cellular and structural components of the kidney show distinct regional compositions and distribution of lipids. In order to more fully analyze the renal lipidome we developed a matrix-assisted laser desorption/ionization mass spectrometry approach for imaging that may be used to pinpoint sites of changes from normal in pathological conditions. This was accomplished by implanting sagittal cryostat rat kidney sections with a stable, quantifiable and reproducible uniform layer of silver using a magnetron sputtering source to form silver nanoparticles. Thirty-eight lipid species including seven ceramides, eight diacylglycerols, 22 triacylglycerols, and cholesterol were detected and imaged in positive ion mode. Thirty-six lipid species consisting of seven sphingomyelins, 10 phosphatidylethanolamines, one phosphatidylglycerol, seven phosphatidylinositols, and 11 sulfatides were imaged in negative ion mode for a total of seventy-four high-resolution lipidome maps of the normal kidney. Thus, our approach is a powerful tool not only for studying structural changes in animal models of disease, but also for diagnosing and tracking stages of disease in human kidney tissue biopsies.",
"title": ""
},
{
"docid": "058bcdfd935b5906381d7c5b31a8b744",
"text": "BACKGROUND\nValproate was initially introduced as an antiepileptic agent in 1967, but has been used over the years to treat a variety of psychiatric disorders. Its use in the treatment of patients exhibiting aggressive and violent behaviors has been reported in the literature as far back as 1988. However, these reports are uncontrolled, which is in marked contrast to the actual wide and established use of valproate for the treatment of aggressive behaviors. The aim of this report is to critically review the available data on valproate's use in nonbipolar patients with aggressive and violent behaviors.\n\n\nDATA SOURCES\nThe MEDLINE and PsycLIT databases were searched for all reports published from 1987-1998 containing the keywords valproate, the names of all commercial preparations, aggression, and violence.\n\n\nSTUDY FINDINGS\nSeventeen reports with a total of 164 patients were located. Ten of these were case reports with a total of 31 patients. Three were retrospective chart reviews with 83 patients, and 3 were open-label prospective studies with a total of 34 patients. No double-blind, placebo-controlled study could be found. An overall response rate of 77.1% was calculated when response was defined as a 50% reduction of target behavior. Most frequent diagnoses recorded were dementia, organic brain syndromes, and mental retardation. The antiaggressive response usually occurred in conjunction with other psychotropic medication. The dose and plasma valproate level required for response appeared to be the same as in the treatment of seizure disorders.\n\n\nDISCUSSION\nWhile valproate's general antiaggressive effect is promising, in the absence of controlled data, conclusions are limited at this time. Specific recommendations for study design are given to obtain interpretable data for this indication.",
"title": ""
},
{
"docid": "fabcb243bff004279cfb5d522a7bed4b",
"text": "Vein pattern is the network of blood vessels beneath person’s skin. Vein patterns are sufficiently different across individuals, and they are stable unaffected by ageing and no significant changed in adults by observing. It is believed that the patterns of blood vein are unique to every individual, even among twins. Finger vein authentication technology has several important features that set it apart from other forms of biometrics as a highly secure and convenient means of personal authentication. This paper presents a finger-vein image matching method based on minutiae extraction and curve analysis. This proposed system is implemented in MATLAB. Experimental results show that the proposed method performs well in improving finger-vein matching accuracy.",
"title": ""
},
{
"docid": "f09274077cf821715c57b2d173c9ed8d",
"text": "OBJECTIVE\nBruxism, the parafunctional habit of nocturnal grinding of the teeth and clenching, is associated with the onset of joint degeneration. Especially prolonged clenching is suggested to cause functional overloading in the temporomandibular joint (TMJ). In this study, the distributions of stresses in the cartilaginous TMJ disc and articular cartilage, were analysed during prolonged clenching. The purpose of this study was to examine if joint degradation due to prolonged clenching can be attributed to changes in stress concentration in the cartilaginous tissues.\n\n\nDESIGN\nFinite element model was developed on the basis of magnetic resonance images from a healthy volunteer. Condylar movements recorded during prolonged clenching were used as the loading condition for stress analysis.\n\n\nRESULTS\nAt the onset of clenching (time=0s), the highest von Mises stresses were located in the middle and posterior areas (6.18MPa) of the inferior disc surface facing the condylar cartilage. The largest magnitude of the minimum principal stress (-6.72MPa) was found in the condylar cartilage. The stress concentrations were relieved towards the superior disc surface facing the temporal cartilage. On the surfaces of the temporal cartilage, relatively lower stresses were found. After 5-min clenching, both stress values induced in the TMJ components were reduced to 50-80% of the stress values at the onset of clenching, although the concomitant strains increased slightly during this period.\n\n\nCONCLUSIONS\nIt is suggested that both the condylar and temporal cartilage layers along with the TMJ disc, play an important role in stress distribution and transmission during prolonged clenching due to tissue expansion. Furthermore, our study suggests that a development of stress concentrations in the TMJ during prolonged clenching and risk factors for the initiation of TMJ degeneration could not be confirmed.",
"title": ""
},
{
"docid": "57e16fe9f238c79d1ffd746aa4b84cfc",
"text": "We evaluate transfer representation-learning for anomaly detection using convolutional neural networks by: (i) transfer learning from pretrained networks, and (ii) transfer learning from an auxiliary task by defining sub-categories of the normal class. We empirically show that both approaches offer viable representations for the task of anomaly detection, without explicitly imposing a prior on the data.",
"title": ""
},
{
"docid": "8d20b2a4d205684f6353fe710f989fde",
"text": "Financial institutions manage numerous portfolios whose risk must be managed continuously, and the large amounts of data that has to be processed renders this a considerable effort. As such, a system that autonomously detects anomalies in the risk measures of financial portfolios, would be of great value. To this end, the two econometric models ARMA-GARCH and EWMA, and the two machine learning based algorithms LSTM and HTM, were evaluated for the task of performing unsupervised anomaly detection on the streaming time series of portfolio risk measures. Three datasets of returns and Value-at-Risk series were synthesized and one dataset of real-world Value-at-Risk series had labels handcrafted for the experiments in this thesis. The results revealed that the LSTM has great potential in this domain, due to an ability to adapt to different types of time series and for being effective at finding a wide range of anomalies. However, the EWMA had the benefit of being faster and more interpretable, but lacked the ability to capture anomalous trends. The ARMA-GARCH was found to have difficulties in finding a good fit to the time series of risk measures, resulting in poor performance, and the HTM was outperformed by the other algorithms in every regard, due to an inability to learn the autoregressive behaviour of the time series.",
"title": ""
},
{
"docid": "6a4a3b2e82bd922f277832fddec887fb",
"text": "The paper describes tests of hypotheses from economic history and theory concerning the significance of financial development as a possible determinant of economic growth. The empirical analysis is based on a large panel data set covering 93 countries from 1970–90. It goes beyond existing studies by drawing on a new proxy for financial development that refers to the input of real resources into the financial system. Moreover, interaction effects between financial development and catching-up as well as education are considered. It is shown that, according to our data, during the 1970s and 1980s, finance may well have been a determinant of growth. Finally, to clarify whether socio-economic characteristics possibly modify the general structure of the finance-growth nexus, the countries in our sample are classified according to their degree of corporatism. The resulting ranking is used to split the sample into a more and a less corporatist subgroup. It is shown that the generally positive partial correlation's between (lagged) proxies for financial development as well as its interaction terms and growth are significantly higher in the more corporatist subgroup of countries. It is therefore crucial to study the embeddedness of economic institutions like the financial sector into its broader social, cultural and historical surroundings.",
"title": ""
},
{
"docid": "165195f20110158a26bc62b74821dc46",
"text": "Prior studies on knowledge contribution started with the motivating role of social capital to predict knowledge contribution but did not specifically examine how they can be built in the first place. Our research addresses this gap by highlighting the role technology plays in supporting the development of social capital and eventual knowledge sharing intention. Herein, we propose four technology-based social capital builders – identity profiling, sub-community building, feedback mechanism, and regulatory practice – and theorize that individuals’ use of these IT artifacts determine the formation of social capital, which in turn, motivate knowledge contribution in online communities. Data collected from 253 online community users provide support for the proposed structural model. The results show that use of IT artifacts facilitates the formation of social capital (network ties, shared language, identification, trust in online community, and norms of cooperation) and their effects on knowledge contribution operate indirectly through social capital.",
"title": ""
},
{
"docid": "c13386ba4dc503715dfa81d8d08988fe",
"text": "In this paper the patient flow and perioperative processes involved in day of surgery admissions are considered for a hospital that is undergoing a staged redesign of its operating room. In particular, the day of surgery admission area where patients are prepared for surgery is being relocated and some additional functions for the new unit are being considered. The goal of the simulation study is to map the patient flows and functions of the current area into the newly designed space, to measure potential changes in productivity, and to determine opportunities for future improvements.",
"title": ""
}
] |
scidocsrr
|
2c900f9a50c5d0734e0b36dd9b94a16d
|
Three Levels Load Balancing on Cloudsim
|
[
{
"docid": "8a7cf92704d06baee24cb6f2a551094d",
"text": "Network bandwidth and hardware technology are developing rapidly, resulting in the vigorous development of the Internet. A new concept, cloud computing, uses low-power hosts to achieve high reliability. The cloud computing, an Internet-based development in which dynamically scalable and often virtualized resources are provided as a service over the Internet has become a significant issue. The cloud computing refers to a class of systems and applications that employ distributed resources to perform a function in a decentralized manner. Cloud computing is to utilize the computing resources (service nodes) on the network to facilitate the execution of complicated tasks that require large-scale computation. Thus, the selecting nodes for executing a task in the cloud computing must be considered, and to exploit the effectiveness of the resources, they have to be properly selected according to the properties of the task. However, in this study, a two-phase scheduling algorithm under a three-level cloud computing network is advanced. The proposed scheduling algorithm combines OLB (Opportunistic Load Balancing) and LBMM (Load Balance Min-Min) scheduling algorithms that can utilize more better executing efficiency and maintain the load balancing of system.",
"title": ""
}
] |
[
{
"docid": "c2392b947816f271f4b7a71ff343bceb",
"text": "The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson´s zero-order correlation coefficients (r) between sit-and-reach tests and hamstrings and/or lumbar extensibility criterion measures. Then, from the included studies, the Hunter- Schmidt´s psychometric meta-analysis approach was conducted to estimate population criterion- related validity of sit-and-reach tests. Firstly, the corrected correlation mean (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, the three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlations values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67), but they had a low mean for estimating lumbar extensibility (rp = 0. 16-0.35). Generally, females, adults and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited such as in a school setting or in large scale studies, scientists and practitioners could use the sit-and-reach tests as a useful alternative for hamstring extensibility estimation, but not for estimating lumbar extensibility. Key PointsOverall sit-and-reach tests have a moderate mean criterion-related validity for estimating hamstring extensibility, but they have a low mean validity for estimating lumbar extensibility.Among all the sit-and-reach test protocols, the Classic sit-and-reach test seems to be the best option to estimate hamstring extensibility.End scores (e.g., the Classic sit-and-reach test) are a better indicator of hamstring extensibility than the modifications that incorporate fingers-to-box distance (e.g., the Modified sit-and-reach test).When angular tests such as straight leg raise or knee extension tests cannot be used, sit-and-reach tests seem to be a useful field test alternative to estimate hamstring extensibility, but not to estimate lumbar extensibility.",
"title": ""
},
{
"docid": "7aa8fa3d64f6a03f121acd8dd0899e7e",
"text": "Dyslexia is more than just difficulty with translating letters into sounds. Many dyslexics have problems with clearly seeing letters and their order. These difficulties may be caused by abnormal development of their visual \"magnocellular\" (M) nerve cells; these mediate the ability to rapidly identify letters and their order because they control visual guidance of attention and of eye fixations. Evidence for M cell impairment has been demonstrated at all levels of the visual system: in the retina, in the lateral geniculate nucleus, in the primary visual cortex and throughout the dorsal visuomotor \"where\" pathway forward from the visual cortex to the posterior parietal and prefrontal cortices. This abnormality destabilises visual perception; hence, its severity in individuals correlates with their reading deficit. Treatments that facilitate M function, such as viewing text through yellow or blue filters, can greatly increase reading progress in children with visual reading problems. M weakness may be caused by genetic vulnerability, which can disturb orderly migration of cortical neurones during development or possibly reduce uptake of omega-3 fatty acids, which are usually obtained from fish oils in the diet. For example, M cell membranes require replenishment of the omega-3 docosahexaenoic acid to maintain their rapid responses. Hence, supplementing some dyslexics' diets with DHA can greatly improve their M function and their reading.",
"title": ""
},
{
"docid": "143a4fcc0f2949e797e6f51899e811e2",
"text": "A long-standing problem at the interface of artificial intelligence and applied mathematics is to devise an algorithm capable of achieving human level or even superhuman proficiency in transforming observed data into predictive mathematical models of the physical world. In the current era of abundance of data and advanced machine learning capabilities, the natural question arises: How can we automatically uncover the underlying laws of physics from high-dimensional data generated from experiments? In this work, we put forth a deep learning approach for discovering nonlinear partial differential equations from scattered and potentially noisy observations in space and time. Specifically, we approximate the unknown solution as well as the nonlinear dynamics by two deep neural networks. The first network acts as a prior on the unknown solution and essentially enables us to avoid numerical differentiations which are inherently ill-conditioned and unstable. The second network represents the nonlinear dynamics and helps us distill the mechanisms that govern the evolution of a given spatiotemporal data-set. We test the effectiveness of our approach for several benchmark problems spanning a number of scientific domains and demonstrate how the proposed framework can help us accurately learn the underlying dynamics and forecast future states of the system. In particular, we study the Burgers’, Kortewegde Vries (KdV), Kuramoto-Sivashinsky, nonlinear Schrödinger, and NavierStokes equations.",
"title": ""
},
{
"docid": "2013e66a3f96ab6c65daa1a0f8244ec9",
"text": "Recent years have seen a dramatic growth of semantic web on the data level, but unfortunately not on the schema level, which contains mostly concept hierarchies. The shortage of schemas makes the semantic web data difficult to be used in many semantic web applications, so schemas learning from semantic web data becomes an increasingly pressing issue. In this paper we propose a novel schemas learning approach -BelNet, which combines description logics (DLs) with Bayesian networks. In this way BelNet is capable to understand and capture the semantics of the data on the one hand, and to handle incompleteness during the learning procedure on the other hand. The main contributions of this work are: (i)we introduce the architecture of BelNet, and corresponding lypropose the ontology learning techniques in it, (ii) we compare the experimental results of our approach with the state-of-the-art ontology learning approaches, and provide discussions from different aspects.",
"title": ""
},
{
"docid": "54581984ce217217d59b7118721e2f60",
"text": "Exposure to antibiotics induces the expression of mutagenic bacterial stress-response pathways, but the evolutionary benefits of these responses remain unclear. One possibility is that stress-response pathways provide a short-term advantage by protecting bacteria against the toxic effects of antibiotics. Second, it is possible that stress-induced mutagenesis provides a long-term advantage by accelerating the evolution of resistance. Here, we directly measure the contribution of the Pseudomonas aeruginosa SOS pathway to bacterial fitness and evolvability in the presence of sublethal doses of ciprofloxacin. Using short-term competition experiments, we demonstrate that the SOS pathway increases competitive fitness in the presence of ciprofloxacin. Continued exposure to ciprofloxacin results in the rapid evolution of increased fitness and antibiotic resistance, but we find no evidence that SOS-induced mutagenesis accelerates the rate of adaptation to ciprofloxacin during a 200 generation selection experiment. Intriguingly, we find that the expression of the SOS pathway decreases during adaptation to ciprofloxacin, and this helps to explain why this pathway does not increase long-term evolvability. Furthermore, we argue that the SOS pathway fails to accelerate adaptation to ciprofloxacin because the modest increase in the mutation rate associated with SOS mutagenesis is offset by a decrease in the effective strength of selection for increased resistance at a population level. Our findings suggest that the primary evolutionary benefit of the SOS response is to increase bacterial competitive ability, and that stress-induced mutagenesis is an unwanted side effect, and not a selected attribute, of this pathway.",
"title": ""
},
{
"docid": "5f1f7847600207d1216384f8507be63b",
"text": "This paper introduces U-relations, a succinct and purely relational representation system for uncertain databases. U-relations support attribute-level uncertainty using vertical partitioning. If we consider positive relational algebra extended by an operation for computing possible answers, a query on the logical level can be translated into, and evaluated as, a single relational algebra query on the U-relational representation. The translation scheme essentially preserves the size of the query in terms of number of operations and, in particular, number of joins. Standard techniques employed in off-the-shelf relational database management systems are effective for optimizing and processing queries on U-relations. In our experiments we show that query evaluation on U-relations scales to large amounts of data with high degrees of uncertainty.",
"title": ""
},
{
"docid": "8680ee8f949e02529d6914fcea6f7a5b",
"text": "Natural language inference (NLI) is one of the most important tasks in NLP. In this study, we propose a novel method using word dictionaries, which are pairs of a word and its definition, as external knowledge. Our neural definition embedding mechanism encodes input sentences with the definitions of each word of the sentences on the fly. It can encode definitions of words considering the context of the input sentences by using an attention mechanism. We evaluated our method using WordNet as a dictionary and confirmed that it performed better than baseline models when using the full or a subset of 100d GloVe as word embeddings.",
"title": ""
},
{
"docid": "ffbebb5d8f4d269353f95596c156ba5c",
"text": "Decision trees and random forests are common classifiers with widespread use. In this paper, we develop two protocols for privately evaluating decision trees and random forests. We operate in the standard two-party setting where the server holds a model (either a tree or a forest), and the client holds an input (a feature vector). At the conclusion of the protocol, the client learns only the model’s output on its input and a few generic parameters concerning the model; the server learns nothing. The first protocol we develop provides security against semi-honest adversaries. Next, we show an extension of the semi-honest protocol that obtains one-sided security against malicious adversaries. We implement both protocols and show that both variants are able to process trees with several hundred decision nodes in just a few seconds and a modest amount of bandwidth. Compared to previous semi-honest protocols for private decision tree evaluation, we demonstrate tenfold improvements in computation and bandwidth.",
"title": ""
},
{
"docid": "5bef975924d427c3ae186d92a93d4f74",
"text": "The Voronoi diagram of a set of sites partitions space into regions, one per site; the region for a site s consists of all points closer to s than to any other site. The dual of the Voronoi diagram, the Delaunay triangulation, is the unique triangulation such that the circumsphere of every simplex contains no sites in its interior. Voronoi diagrams and Delaunay triangulations have been rediscovered or applied in many areas of mathematics and the natural sciences; they are central topics in computational geometry, with hundreds of papers discussing algorithms and extensions. Section 27.1 discusses the definition and basic properties in the usual case of point sites in R with the Euclidean metric, while Section 27.2 gives basic algorithms. Some of the many extensions obtained by varying metric, sites, environment, and constraints are discussed in Section 27.3. Section 27.4 finishes with some interesting and nonobvious structural properties of Voronoi diagrams and Delaunay triangulations.",
"title": ""
},
{
"docid": "0b2cff582a4b7d81b42e5bab2bd7a237",
"text": "The increasing popularity of real-world recommender systems produces data continuously and rapidly, and it becomes more realistic to study recommender systems under streaming scenarios. Data streams present distinct properties such as temporally ordered, continuous and high-velocity, which poses tremendous challenges to traditional recommender systems. In this paper, we investigate the problem of recommendation with stream inputs. In particular, we provide a principled framework termed sRec, which provides explicit continuous-time random process models of the creation of users and topics, and of the evolution of their interests. A variational Bayesian approach called recursive meanfield approximation is proposed, which permits computationally efficient instantaneous on-line inference. Experimental results on several real-world datasets demonstrate the advantages of our sRec over other state-of-the-arts.",
"title": ""
},
{
"docid": "6936b03672c64798ca4be118809cc325",
"text": "We present a deep learning framework for accurate visual correspondences and demonstrate its effectiveness for both geometric and semantic matching, spanning across rigid motions to intra-class shape or appearance variations. In contrast to previous CNN-based approaches that optimize a surrogate patch similarity objective, we use deep metric learning to directly learn a feature space that preserves either geometric or semantic similarity. Our fully convolutional architecture, along with a novel correspondence contrastive loss allows faster training by effective reuse of computations, accurate gradient computation through the use of thousands of examples per image pair and faster testing with O(n) feedforward passes for n keypoints, instead of O(n) for typical patch similarity methods. We propose a convolutional spatial transformer to mimic patch normalization in traditional features like SIFT, which is shown to dramatically boost accuracy for semantic correspondences across intra-class shape variations. Extensive experiments on KITTI, PASCAL and CUB-2011 datasets demonstrate the significant advantages of our features over prior works that use either hand-constructed or learned features.",
"title": ""
},
{
"docid": "bf9910e87c2294e307f142e0be4ed4f6",
"text": "The rapidly developing cloud computing and virtualization techniques provide mobile devices with battery energy saving opportunities by allowing them to offload computation and execute applications remotely. A mobile device should judiciously decide whether to offload computation and which portion of application should be offloaded to the cloud. In this paper, we consider a mobile cloud computing (MCC) interaction system consisting of multiple mobile devices and the cloud computing facilities. We provide a nested two stage game formulation for the MCC interaction system. In the first stage, each mobile device determines the portion of its service requests for remote processing in the cloud. In the second stage, the cloud computing facilities allocate a portion of its total resources for service request processing depending on the request arrival rate from all the mobile devices. The objective of each mobile device is to minimize its power consumption as well as the service request response time. The objective of the cloud computing controller is to maximize its own profit. Based on the backward induction principle, we derive the optimal or near-optimal strategy for all the mobile devices as well as the cloud computing controller in the nested two stage game using convex optimization technique. Experimental results demonstrate the effectiveness of the proposed nested two stage game-based optimization framework on the MCC interaction system. The mobile devices can achieve simultaneous reduction in average power consumption and average service request response time, by 21.8% and 31.9%, respectively, compared with baseline methods.",
"title": ""
},
{
"docid": "49dd1fd4640a160ba41fed048b2c804b",
"text": "This paper proposes a novel method to predict increases in YouTube viewcount driven from the Twitter social network. Specifically, we aim to predict two types of viewcount increases: a sudden increase in viewcount (named as Jump), and the viewcount shortly after the upload of a new video (named as Early). Experiments on hundreds of thousands of videos and millions of tweets show that Twitter-derived features alone can predict whether a video will be in the top 5% for Early popularity with 0.7 Precision@100. Furthermore, our results reveal that while individual influence is indeed important for predicting how Twitter drives YouTube views, it is a diversity of interest from the most active to the least active Twitter users mentioning a video (measured by the variation in their total activity) that is most informative for both Jump and Early prediction. In summary, by going beyond features that quantify individual influence and additionally leveraging collective features of activity variation, we are able to obtain an effective cross-network predictor of Twitter-driven YouTube views.",
"title": ""
},
{
"docid": "fa6ec1ea2a509c837cd65774a78d5d2e",
"text": "Critically ill patients frequently experience poor sleep, characterized by frequent disruptions, loss of circadian rhythms, and a paucity of time spent in restorative sleep stages. Factors that are associated with sleep disruption in the intensive care unit (ICU) include patient-ventilator dysynchrony, medications, patient care interactions, and environmental noise and light. As the field of critical care increasingly focuses on patients' physical and psychological outcomes following critical illness, understanding the potential contribution of ICU-related sleep disruption on patient recovery is an important area of investigation. This review article summarizes the literature regarding sleep architecture and measurement in the critically ill, causes of ICU sleep fragmentation, and potential implications of ICU-related sleep disruption on patients' recovery from critical illness. With this background information, strategies to optimize sleep in the ICU are also discussed.",
"title": ""
},
{
"docid": "db83ca64b54bbd54b4097df425c48017",
"text": "This paper introduces the application of high-resolution angle estimation algorithms for a 77GHz automotive long range radar sensor. Highresolution direction of arrival (DOA) estimation is important for future safety systems. Using FMCW principle, major challenges discussed in this paper are small number of snapshots, correlation of the signals, and antenna mismatches. Simulation results allow analysis of these effects and help designing the sensor. Road traffic measurements show superior DOA resolution and the feasibility of high-resolution angle estimation.",
"title": ""
},
{
"docid": "7c1ce170b4258e46f98c24209f0f6def",
"text": "It has been widely accepted that iris biometric systems are not subject to a template aging effect. Baker et al. [1] recently presented the first published evidence of a template aging effect, using images acquired from 2004 through 2008 with an LG 2200 iris imaging system, representing a total of 13 subjects (26 irises). We report on a template aging study involving two different iris recognition algorithms, a larger number of subjects (43), a more modern imaging system (LG 4000), and over a shorter time-lapse (2 years). We also investigate the degree to which the template aging effect may be related to pupil dilation and/or contact lenses. We find evidence of a template aging effect, resulting in an increase in match hamming distance and false reject rate.",
"title": ""
},
{
"docid": "db50001ee0a3ee4da8982541591447d1",
"text": "This paper introduces a tool to automatically generate meta-data from game sprite sheets. MuSSE is a tool developed to extract XML data from sprite sheet images with non-uniform - multi-sized - sprites. MuSSE (Multi-sized Sprite Sheet meta-data Exporter) is based on a Blob detection algorithm that incorporates a connected-component labeling system. Hence, blobs of arbitrary size can be extracted by adjusting component connectivity parameters. This image detection algorithm defines boundary blobs for each individual sprite in a sprite sheet. Every specific blob defines a sprite characteristic within the sheet: position, name and size, which allows for subsequent data specification for each blob/image. Several examples on real images illustrate the performance of the proposed algorithm and working tool.",
"title": ""
},
{
"docid": "5a805b6f9e821b7505bccc7b70fdd557",
"text": "There are many factors that influence the translators while translating a text. Amongst these factors is the notion of ideology transmission through the translated texts. This paper is located within the framework of Descriptive Translation Studies (DTS) and Critical Discourse Analysis (CDA). It investigates the notion of ideology with particular use of critical discourse analysis. The purpose is to highlight the relationship between language and ideology in translated texts. It also aims at discovering whether the translator’s socio-cultural and ideology constraints influence the production of his/her translations. As a mixed research method study, the corpus consists of two different Arabic translated versions of the English book “Media Control” by Noam Chomsky. The micro-level contains the qualitative stage where detailed description and comparison -contrastive and comparativeanalysis will be provided. The micro-level analysis should include the lexical items along with the grammatical items (passive verses. active, nominalisation vs. de-nominalisation, moralisation and omission vs. addition). In order to have more reliable and objective data, computed frequencies of the ideological significance occurrences along with percentage and Chi-square formula were conducted through out the data analysis stage which then form the quantitative part of the current study. The main objective of the mentioned data analysis methodologies is to find out the dissimilarity between the proportions of the information obtained from the target texts (TTs) and their equivalent at the source text (ST). The findings indicts that there are significant differences amongst the two TTs in relation to International Journal of Linguistics ISSN 1948-5425 2014, Vol. 6, No. 3 www.macrothink.org/ijl 119 the word choices including the lexical items and the other syntactic structure compared by the ST. These significant differences indicate some ideological transmission through translation process of the two TTs. Therefore, and to some extent, it can be stated that the differences were also influenced by the translators’ socio-cultural and ideological constraints.",
"title": ""
},
{
"docid": "eff8993770389a798eeca4996c69474a",
"text": "Swarm intelligence is a research field that models the collective intelligence in swarms of insects or animals. Many algorithms that simulates these models have been proposed in order to solve a wide range of problems. The Artificial Bee Colony algorithm is one of the most recent swarm intelligence based algorithms which simulates the foraging behaviour of honey bee colonies. In this work, modified versions of the Artificial Bee Colony algorithm are introduced and applied for efficiently solving real-parameter optimization problems. 2010 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
b896f163455c487b745448b823917cc5
|
Information centric services in Smart Cities
|
[
{
"docid": "b85a6286ca2fb14a9255c9d70c677de3",
"text": "0140-3664/$ see front matter 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.comcom.2013.01.009 q The research leading to these results has been conducted in the SAIL project and received funding from the European Community’s Seventh Framework Program (FP7/2007-2013) under Grant Agreement No. 257448. ⇑ Corresponding author. Tel.: +49 5251 60 5385; fax: +49 5251 60 5377. E-mail addresses: cdannewitz@upb.de (C. Dannewitz), Dirk.Kutscher@neclab.eu (D. Kutscher), Borje.Ohlman@ericsson.com (B. Ohlman), stephen.farrell@cs.tcd.ie (S. Farrell), bengta@sics.se (B. Ahlgren), hkarl@upb.de (H. Karl). 1 <http://www.cisco.com/web/solutions/sp/vni/vni_mobile_forecast_highlights/ index.html>. Christian Dannewitz , Dirk Kutscher b,⇑, Börje Ohlman , Stephen Farrell , Bengt Ahlgren , Holger Karl a",
"title": ""
}
] |
[
{
"docid": "3c07ea072adb8f63b3cba36e39974d87",
"text": "We describe a general methodology for the design of large-sc ale recursive neural network architectures (DAG-RNNs) which comprises three fundamental steps: (1) representation of a given domain using suitable directed acyclic graphs (DAGs) to connect vi sible and hidden node variables; (2) parameterization of the relationship between each variabl e nd its parent variables by feedforward neural networks; and (3) application of weight-sharing wit hin appropriate subsets of DAG connections to capture stationarity and control model complexity . Here we use these principles to derive severalspecificclasses of DAG-RNN architectures based on lattices, trees, and other structured graphs. These architectures can process a wide range of data structures with variable sizes and dimensions. While the overall resulting models remain prob abilistic, the internal deterministic dynamics allows efficient propagation of information, as well as training by gradient descent, in order to tackle large-scale problems. These methods are used here to derive state-of-the-art predictors for protein structural features such as secondary structur e (1D) and both fineand coarse-grained contact maps (2D). Extensions, relationships to graphical models, and implications for the design of neural architectures are briefly discussed. The protein p rediction servers are available over the Web at:www.igb.uci.edu/tools.htm.",
"title": ""
},
{
"docid": "be0d51871cad4912dcfa05f1edfec3f5",
"text": "Peripheral information is information that is not central to a person's current task, but provides the person the opportunity to learn more, to do a better job, or to keep track of less important tasks. Though peripheral information displays are ubiquitous, they have been rarely studied. For computer users, a common peripheral display is a scrolling text display that provides announcements, sports scores, stock prices, or other news. In this paper, we investigate how to design peripheral displays so that they provide the most information while having the least impact on the user's performance on the main task. We report a series of experiments on scrolling displays aimed at examining tradeoffs between distraction of scrolling motion and memorability of information displayed. Overall, we found that continuously scrolling displays are more distracting than displays that start and stop, but information in both is remembered equally well. These results are summarized in a set of design recommendations.",
"title": ""
},
{
"docid": "fd36ca11c37101b566245b6ee29cb7df",
"text": "Hand, foot and mouth disease (HFMD) is considered a common disease among children. However, HFMD recent outbreaks in Sarawak had caused many death particularly children below the age of ten. In this study we are building a simple deterministic model based on the SIR (Susceptible-Infected-Recovered) model to predict the number of infected and the duration of an outbreak when it occurs. Our findings show that the disease spread quite rapidly and the parameter that may be able to control that would be the number of susceptible persons. We hope the model will allow public health personnel to plan intervention in an effective manner in order to reduce the effect of the disease in the coming outbreak.",
"title": ""
},
{
"docid": "2cb9c59c22271a7f51324ae79a64eeb4",
"text": "Despite a decrease in the use of currency due to the recent growth in the use of electronic financial transactions, real money transactions remain very important in the global market. While performing transactions with real money, touching and counting notes by hand, is still a common practice in daily life, various types of automated machines, such as ATMs and banknote counters, are essential for large-scale and safe transactions. This paper presents studies that have been conducted in four major areas of research (banknote recognition, counterfeit banknote detection, serial number recognition, and fitness classification) in the accurate banknote recognition field by various sensors in such automated machines, and describes the advantages and drawbacks of the methods presented in those studies. While to a limited extent some surveys have been presented in previous studies in the areas of banknote recognition or counterfeit banknote recognition, this paper is the first of its kind to review all four areas. Techniques used in each of the four areas recognize banknote information (denomination, serial number, authenticity, and physical condition) based on image or sensor data, and are actually applied to banknote processing machines across the world. This study also describes the technological challenges faced by such banknote recognition techniques and presents future directions of research to overcome them.",
"title": ""
},
{
"docid": "8d80bfe0015c6b867c5ad8311e45d3fa",
"text": "OBJECTIVES\nIt has been argued that mixed methods research can be useful in nursing and health science because of the complexity of the phenomena studied. However, the integration of qualitative and quantitative approaches continues to be one of much debate and there is a need for a rigorous framework for designing and interpreting mixed methods research. This paper explores the analytical approaches (i.e. parallel, concurrent or sequential) used in mixed methods studies within healthcare and exemplifies the use of triangulation as a methodological metaphor for drawing inferences from qualitative and quantitative findings originating from such analyses.\n\n\nDESIGN\nThis review of the literature used systematic principles in searching CINAHL, Medline and PsycINFO for healthcare research studies which employed a mixed methods approach and were published in the English language between January 1999 and September 2009.\n\n\nRESULTS\nIn total, 168 studies were included in the results. Most studies originated in the United States of America (USA), the United Kingdom (UK) and Canada. The analytic approach most widely used was parallel data analysis. A number of studies used sequential data analysis; far fewer studies employed concurrent data analysis. Very few of these studies clearly articulated the purpose for using a mixed methods design. The use of the methodological metaphor of triangulation on convergent, complementary, and divergent results from mixed methods studies is exemplified and an example of developing theory from such data is provided.\n\n\nCONCLUSION\nA trend for conducting parallel data analysis on quantitative and qualitative data in mixed methods healthcare research has been identified in the studies included in this review. Using triangulation as a methodological metaphor can facilitate the integration of qualitative and quantitative findings, help researchers to clarify their theoretical propositions and the basis of their results. This can offer a better understanding of the links between theory and empirical findings, challenge theoretical assumptions and develop new theory.",
"title": ""
},
{
"docid": "7b62576998aec77a574ebf617f4f31c2",
"text": "This paper addresses the fault detection and isolation (FDI) problem for robotic assembly of electrical connectors in the framework of set-membership. Both the fault-free and faulty cases of assembly are modeled by different switched linear models with known switching sequences, bounded parameters, and external disturbances. The locations of switching points of each model are assumed to be inside some areas but the accurate positions are not clear. Given current input/output data, the feasible parameter set of fault-free switched linear model is obtained by sequentially calculating an optimal ellipsoid. If the pair of data is not consistent with any possible submodel, a fault is then detected. The isolation of fault is realized by checking the consistency between the data sequence and each possible fault model one by one. The robustness of the proposed FDI algorithms is proved. The effectiveness of these algorithms is verified by the robotic assembly experiments of mating electrical connectors.Note to Practitioners—In modern robotic assembly tasks, the industrial robots often need to manipulate tiny objects with complex structure. Electrical connectors are a typical kind of these objects and widely used in many industrial fields. To avoid damaging the fragile connectors and accelerate the assembly process, it is required to promptly detect and isolate the certain assembly fault in real time so that the robot can immediately implement an error recovery procedure according to the identified fault. The proposed set-membership-based fault detection and isolation (FDI) methodology satisfies both the timing and fault-isolation requirements for this kind of robotic assembly task. In terms of the set-membership theory, no false alarm will occur if there are sufficient training data for the proposed method. In addition, it turns out that the proposed method can signal an alarm faster than conventional residual-based FDI method from plentiful experiments. Although only the robotic assembly of electrical connectors is investigated, our FDI method can also be applied in the assembly task of other small and complex parts. This is especially useful for increasing the productivity and promoting the automation level of electronic industries.",
"title": ""
},
{
"docid": "ae23145d649c6df81a34babdfc142b31",
"text": "Multi-head attention is appealing for the ability to jointly attend to information from different representation subspaces at different positions. In this work, we introduce a disagreement regularization to explicitly encourage the diversity among multiple attention heads. Specifically, we propose three types of disagreement regularization, which respectively encourage the subspace, the attended positions, and the output representation associated with each attention head to be different from other heads. Experimental results on widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness and universality of the proposed approach.",
"title": ""
},
{
"docid": "30a1c10fd5e1fce850f1969c797a0c38",
"text": "Data quality (DQ) assessment can be significantly enhanced with the use of the right DQ assessment methods, which provide automated solutions to assess DQ. The range of DQ assessment methods is very broad: from data profiling and semantic profiling to data matching and data validation. This paper gives an overview of current methods for DQ assessment and classifies the DQ assessment methods into an existing taxonomy of DQ problems. Specific examples of the placement of each DQ method in the taxonomy are provided and illustrate why the method is relevant to the particular taxonomy position. The gaps in the taxonomy, where no current DQ methods exist, show where new methods are required and can guide future research and DQ tool development.",
"title": ""
},
{
"docid": "dcf24a58fe16912556de7d9f5395dba9",
"text": "This review provides detailed insight on the effects of magnetic fields on germination, growth, development, and yield of plants focusing on ex vitro growth and development and discussing the possible physiological and biochemical responses. The MFs considered in this review range from the nanoTesla (nT) to geomagnetic levels, up to very strong MFs greater than 15 Tesla (T) and also super-weak MFs (near 0 T). The theoretical bases of the action of MFs on plant growth, which are complex, are not discussed here and thus far, there is limited mathematical background about the action of MFs on plant growth. MFs can positively influence the morphogenesis of several plants which allows them to be used in practical situations. MFs have thus far been shown to modify seed germination and affect seedling growth and development in a wide range of plants, including field, fodder, and industrial crops; cereals and pseudo-cereals; grasses; herbs and medicinal plants; horticultural crops (vegetables, fruits, ornamentals); trees; and model crops. This is important since MFs may constitute a non-residual and non-toxic stimulus. In addition to presenting and summarizing the effects of MFs on plant growth and development, we also provide possible physiological and biochemical explanations for these responses including stress-related responses of plants, explanations based on dia-, para-, and ferromagnetism, oriented movements of substances, and cellular and molecular changes.",
"title": ""
},
{
"docid": "1ac124cd7f8f4c92693ee959b5b39425",
"text": "The intestinal microbiota plays a fundamental role in maintaining immune homeostasis. In controlled clinical trials probiotic bacteria have demonstrated a benefit in treating gastrointestinal diseases, including infectious diarrhea in children, recurrent Clostridium difficile-induced infection, and some inflammatory bowel diseases. This evidence has led to the proof of principle that probiotic bacteria can be used as a therapeutic strategy to ameliorate human diseases. The precise mechanisms influencing the crosstalk between the microbe and the host remain unclear but there is growing evidence to suggest that the functioning of the immune system at both a systemic and a mucosal level can be modulated by bacteria in the gut. Recent compelling evidence has demonstrated that manipulating the microbiota can influence the host. Several new mechanisms by which probiotics exert their beneficial effects have been identified and it is now clear that significant differences exist between different probiotic bacterial species and strains; organisms need to be selected in a more rational manner to treat disease. Mechanisms contributing to altered immune function in vivo induced by probiotic bacteria may include modulation of the microbiota itself, improved barrier function with consequent reduction in immune exposure to microbiota, and direct effects of bacteria on different epithelial and immune cell types. These effects are discussed with an emphasis on those organisms that have been used to treat human inflammatory bowel diseases in controlled clinical trials.",
"title": ""
},
{
"docid": "67989a9fe9d56e27eb42ca867a919a7d",
"text": "Data remanence is the residual physical representation of data that has been erased or overwritten. In non-volatile programmable devices, such as UV EPROM, EEPROM or Flash, bits are stored as charge in the floating gate of a transistor. After each erase operation, some of this charge remains. Security protection in microcontrollers and smartcards with EEPROM/Flash memories is based on the assumption that information from the memory disappears completely after erasing. While microcontroller manufacturers successfully hardened already their designs against a range of attacks, they still have a common problem with data remanence in floating-gate transistors. Even after an erase operation, the transistor does not return fully to its initial state, thereby allowing the attacker to distinguish between previously programmed and not programmed transistors, and thus restore information from erased memory. The research in this direction is summarised here and it is shown how much information can be extracted from some microcontrollers after their memory has been ‘erased’.",
"title": ""
},
{
"docid": "10c969deedfb18c99a36ee956acb867b",
"text": "Paragonimiasis is an important re-emerging parasitosis in Japan. Although the lungs and pleural cavity are the principal sites affected with the parasite, ectopic infection can occur in unexpected sites such as skin and brain. This case report describes a patient with active hepatic capsulitis due to Paragonimus westermani infection. The patient was successfully treated with praziquantel at the dose of 75 mg/kg/day for 3 days.",
"title": ""
},
{
"docid": "93776cd8940c44360886d672bb5bad59",
"text": "Prior research indicates that victims of intimate partner violence (IPV) are most likely to disclose their victimization experiences to an informal support (e.g., friend, family), and that IPV disclosures are often met with both positive (e.g., empathic support) and negative (e.g., victim blame) reactions. However, research on social reactions to disclosure largely has neglected the perspectives of disclosure recipients. Guided by the attribution framework, the current study extends prior research by assessing factors (i.e., situation-specific, individual, relational, attributional, and emotional response) related to positive and negative reactions from the perspective of disclosure recipients ( N = 743 college students). Linear regression analyses indicated that positive social reactions were related to the victim being a woman, greater frequency of IPV victimization by the victim, greater frequency of IPV victimization by the disclosure recipient, less accepting attitudes toward IPV, a closer relationship with the victim, a less close relationship with the perpetrator, lower perceptions of victim responsibility, more empathy for the victim, and more emotional distress experienced by the disclosure recipient during the disclosure. Negative social reactions were associated with more accepting attitudes toward IPV, greater frequency of IPV victimization by the disclosure recipient, a less close relationship with the victim, higher perceptions of victim responsibility, and more emotional distress experienced by the disclosure recipient. Results suggest that programs to improve responses to victim disclosure should focus on decreasing IPV-supportive attitudes, increasing empathy, and assisting disclosure recipients in managing difficult emotional responses effectively.",
"title": ""
},
{
"docid": "caf866341ad9f74b1ac1dc8572f6e95c",
"text": "One important but often overlooked aspect of human contexts of ubiquitous computing environment is human’s emotional status. And, there are no realistic and robust humancentric contents services so far, because there are few considers about combining context awareness computing with wearable computing for improving suitability of contents to each user’s needs. In this paper, we discuss combining context awareness computing with wearable computing to develop more effective personalized services. And we propose new algorithms to develop efficiently personalized emotion based content service system.",
"title": ""
},
{
"docid": "2dd8b7004f45ae374a72e2c7d40b0892",
"text": "In this letter, a multifeed tightly coupled patch array antenna capable of broadband operation is analyzed and designed. First, an antenna array composed of infinite elements with each element excited by a feed is proposed. To produce specific polarized radiation efficiently, a new patch element is proposed, and its characteristics are studied based on a 2-port network model. Full-wave simulation results show that the infinite antenna array exhibits both a high efficiency and desirable radiation pattern in a wide frequency band (10 dB bandwidth) from 1.91 to 5.35 GHz (94.8%). Second, to validate its outstanding performance, a realistic finite 4 × 4 antenna prototype is designed, fabricated, and measured in our laboratory. The experimental results agree well with simulated ones, where the frequency bandwidth (VSWR < 2) is from 2.5 to 3.8 GHz (41.3%). The inherent compact size, light weight, broad bandwidth, and good radiation characteristics make this array antenna a promising candidate for future communication and advanced sensing systems.",
"title": ""
},
{
"docid": "7c21bd628a501b269a76c52d0e066ac4",
"text": "Incorporating knowledge graph into recommender systems has attracted increasing attention in recent years. By exploring the interlinks within a knowledge graph, the connectivity between users and items can be discovered as paths, which provide rich and complementary information to user-item interactions. Such connectivity not only reveals the semantics of entities and relations, but also helps to comprehend a user’s interest. However, existing efforts have not fully explored this connectivity to infer user preferences, especially in terms of modeling the sequential dependencies within and holistic semantics of a path. In this paper, we contribute a new model named Knowledgeaware Path Recurrent Network (KPRN) to exploit knowledge graph for recommendation. KPRN can generate path representations by composing the semantics of both entities and relations. By leveraging the sequential dependencies within a path, we allow effective reasoning on paths to infer the underlying rationale of a user-item interaction. Furthermore, we design a new weighted pooling operation to discriminate the strengths of different paths in connecting a user with an item, endowing our model with a certain level of explainability. We conduct extensive experiments on two datasets about movie and music, demonstrating significant improvements over state-of-the-art solutions Collaborative Knowledge Base Embedding and Neural Factorization Machine. Introduction Prior efforts have shown the importance of incorporating auxiliary data into recommender systems, such as user profiles (Wang et al. 2018c) and item attributes (Bayer et al. 2017). Recently, knowledge graphs (KGs) have attracted increasing attention (Zhang et al. 2016; Shu et al. 2018; Wang et al. 2018a), due to its comprehensive auxiliary data: background knowledge of items and their relations amongst them. It usually organizes the facts of items in the form of triplets like (Ed Sheeran, IsSingerOf, Shape of You), which can be seamlessly integrated with user-item interactions (Chaudhari, Azaria, and Mitchell 2016; Cao et al. 2017). More important, by exploring the interlinks within ∗The first three authors have equal contribution. †Dingxian Wang is the corresponding author. Copyright c © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Knowledge Graph Shape of You Ed Sheeran SungBy WrittenBy ÷",
"title": ""
},
{
"docid": "dff6c531a57d890aaae44c04ff5d3037",
"text": "OBJECTIVE\nWe highlight some of the key discoveries and developments in the area of team performance over the past 50 years, especially as reflected in the pages of Human Factors.\n\n\nBACKGROUND\nTeams increasingly have become a way of life in many organizations, and research has kept up with the pace.\n\n\nMETHOD\nWe have characterized progress in the field in terms of eight discoveries and five challenges.\n\n\nRESULTS\nDiscoveries pertain to the importance of shared cognition, the measurement of shared cognition, advances in team training, the use of synthetic task environments for research, factors influencing team effectiveness, models of team effectiveness, a multidisciplinary perspective, and training and technological interventions designed to improve team effectiveness. Challenges that are faced in the coming decades include an increased emphasis on team cognition; reconfigurable, adaptive teams; multicultural influences; and the need for naturalistic study and better measurement.\n\n\nCONCLUSION\nWork in human factors has contributed significantly to the science and practice of teams, teamwork, and team performance. Future work must keep pace with the increasing use of teams in organizations.\n\n\nAPPLICATION\nThe science of teams contributes to team effectiveness in the same way that the science of individual performance contributes to individual effectiveness.",
"title": ""
},
{
"docid": "72b1a4204d49e588c793f3ec5f91c18d",
"text": "Humans convey their intentions through the usage of both verbal and nonverbal behaviors during face-to-face communication. Speaker intentions often vary dynamically depending on different nonverbal contexts, such as vocal patterns and facial expressions. As a result, when modeling human language, it is essential to not only consider the literal meaning of the words but also the nonverbal contexts in which these words appear. To better model human language, we first model expressive nonverbal representations by analyzing the fine-grained visual and acoustic patterns that occur during word segments. In addition, we seek to capture the dynamic nature of nonverbal intents by shifting word representations based on the accompanying nonverbal behaviors. To this end, we propose the Recurrent Attended Variation Embedding Network (RAVEN) that models the fine-grained structure of nonverbal subword sequences and dynamically shifts word representations based on nonverbal cues. Our proposed model achieves competitive performance on two publicly available datasets for multimodal sentiment analysis and emotion recognition. We also visualize the shifted word representations in different nonverbal contexts and summarize common patterns regarding multimodal variations of word representations.",
"title": ""
},
{
"docid": "5f6f0bd98fa03e4434fabe18642a48bc",
"text": "Previous research suggests that women's genital arousal is an automatic response to sexual stimuli, whereas men's genital arousal is dependent upon stimulus features specific to their sexual interests. In this study, we tested the hypothesis that a nonhuman sexual stimulus would elicit a genital response in women but not in men. Eighteen heterosexual women and 18 heterosexual men viewed seven sexual film stimuli, six human films and one nonhuman primate film, while measurements of genital and subjective sexual arousal were recorded. Women showed small increases in genital arousal to the nonhuman stimulus and large increases in genital arousal to both human male and female stimuli. Men did not show any genital arousal to the nonhuman stimulus and demonstrated a category-specific pattern of arousal to the human stimuli that corresponded to their stated sexual orientation. These results suggest that stimulus features necessary to evoke genital arousal are much less specific in women than in men.",
"title": ""
},
{
"docid": "7de911386f69397afe76e427e7ae3997",
"text": "Photonic crystal slabs are a versatile and important platform for molding the flow of light. In this thesis, we consider ways to control the emission of light from photonic crystal slab structures, specifically focusing on directional, asymmetric emission, and on emitting light with interesting topological features. First, we develop a general coupled-mode theory formalism to derive bounds on the asymmetric decay rates to top and bottom of a photonic crystal slab, for a resonance with arbitrary in-plane wavevector. We then employ this formalism to inversionsymmetric structures, and show through numerical simulations that asymmetries of top-down decay rates exceeding 104 can be achieved by tuning the resonance frequency to coincide with the perfectly transmitting Fabry-Perot frequency. The emission direction can also be rapidly switched from top to bottom by tuning the wavevector or frequency. We then consider the generation of Mobius strips of light polarization, i.e. vector beams with half-integer polarization winding, from photonic crystal slabs. We show that a quadratic degeneracy formed by symmetry considerations can be split into a pair of Dirac points, which can be further split into four exceptional points. Through calculations of an analytical two-band model and numerical simulations of two-dimensional photonic crystals and photonic crystal slabs, we demonstrate the existence of isofrequency contours encircling two exceptional points, and show the half-integer polarization winding along these isofrequency contours. We further propose a realistic photonic crystal slab structure and experimental setup to verify the existence of such Mobius strips of light polarization. Thesis Supervisor: Marin Solja-id Title: Professor of Physics and MacArthur Fellow",
"title": ""
}
] |
scidocsrr
|
c12dbbd2b850b360083fbe2dbeb35922
|
Convolutional Neural Networks for Fashion Classification and Object Detection
|
[
{
"docid": "26884c49c5ada3fc80dbc2f2d1e5660b",
"text": "We introduce a complete pipeline for recognizing and classifying people’s clothing in natural scenes. This has several interesting applications, including e-commerce, event and activity recognition, online advertising, etc. The stages of the pipeline combine a number of state-of-the-art building blocks such as upper body detectors, various feature channels and visual attributes. The core of our method consists of a multi-class learner based on a Random Forest that uses strong discriminative learners as decision nodes. To make the pipeline as automatic as possible we also integrate automatically crawled training data from the web in the learning process. Typically, multi-class learning benefits from more labeled data. Because the crawled data may be noisy and contain images unrelated to our task, we extend Random Forests to be capable of transfer learning from different domains. For evaluation, we define 15 clothing classes and introduce a benchmark data set for the clothing classification task consisting of over 80, 000 images, which we make publicly available. We report experimental results, where our classifier outperforms an SVM baseline with 41.38 % vs 35.07 % average accuracy on challenging benchmark data.",
"title": ""
}
] |
[
{
"docid": "24ed6aa28099dcdd17dd775450d98355",
"text": "Convolutional Neural Networks can be effectively used only when data are endowed with an intrinsic concept of neighbourhood in the input space, as is the case of pixels in images. We introduce here Ph-CNN, a novel deep learning architecture for the classification of metagenomics data based on the Convolutional Neural Networks, with the patristic distance defined on the phylogenetic tree being used as the proximity measure. The patristic distance between variables is used together with a sparsified version of MultiDimensional Scaling to embed the phylogenetic tree in a Euclidean space. Ph-CNN is tested with a domain adaptation approach on synthetic data and on a metagenomics collection of gut microbiota of 38 healthy subjects and 222 Inflammatory Bowel Disease patients, divided in 6 subclasses. Classification performance is promising when compared to classical algorithms like Support Vector Machines and Random Forest and a baseline fully connected neural network, e.g. the Multi-Layer Perceptron. Ph-CNN represents a novel deep learning approach for the classification of metagenomics data. Operatively, the algorithm has been implemented as a custom Keras layer taking care of passing to the following convolutional layer not only the data but also the ranked list of neighbourhood of each sample, thus mimicking the case of image data, transparently to the user.",
"title": ""
},
{
"docid": "7d3f0c22674ac3febe309c2440ad3d90",
"text": "MAC address randomization is a common privacy protection measure deployed in major operating systems today. It is used to prevent user-tracking with probe requests that are transmitted during IEEE 802.11 network scans. We present an attack to defeat MAC address randomization through observation of the timings of the network scans with an off-the-shelf Wi-Fi interface. This attack relies on a signature based on inter-frame arrival times of probe requests, which is used to group together frames coming from the same device although they use distinct MAC addresses. We propose several distance metrics based on timing and use them together with an incremental learning algorithm in order to group frames. We show that these signatures are consistent over time and can be used as a pseudo-identifier to track devices. Our framework is able to correctly group frames using different MAC addresses but belonging to the same device in up to 75% of the cases. These results show that the timing of 802.11 probe frames can be abused to track individual devices and that address randomization alone is not always enough to protect users against tracking.",
"title": ""
},
{
"docid": "2c28d01814e0732e59d493f0ea2eafcb",
"text": "Victor Frankenstein sought to create an intelligent being imbued with the r ules of civilized human conduct, who could further learn how to behave and possibly even evolve through successive g nerations into a more perfect form. Modern human composers similarly strive to create intell igent algorithmic music composition systems that can follow prespecified rules, learn appropriate patte rns from a collection of melodies, or evolve to produce output more perfectly matched to some aesthetic criteria . H re we review recent efforts aimed at each of these three types of algorithmic composition. We focus pa rticularly on evolutionary methods, and indicate how monstrous many of the results have been. We present a ne w method that uses coevolution to create linked artificial music critics and music composers , and describe how this method can attach the separate parts of rules, learning, and evolution together in to one coherent body. “Invention, it must be humbly admitted, does not consist in creating out of void, but ou t of chaos; the materials must, in the first place, be afforded...” --Mary Shelley, Frankenstein (1831/1993, p. 299)",
"title": ""
},
{
"docid": "33c453cec25a77e1bde4ecb353fc678b",
"text": "This article introduces the functional model of self-disclosure on social network sites by integrating a functional theory of self-disclosure and research on audience representations as situational cues for activating interpersonal goals. According to this model, people pursue strategic goals and disclose differently depending on social media affordances, and self-disclosure goals mediate between media affordances and disclosure intimacy. The results of the empirical study examining self-disclosure motivations and characteristics in Facebook status updates, wall posts, and private messaging lend support to this model and provide insights into the motivational drivers of self-disclosure on SNSs, helping to reconcile traditional views on self-disclosure and self-disclosing behaviors in new media contexts.",
"title": ""
},
{
"docid": "a194a66033d5450b75562535ae7bfe83",
"text": "In this paper we present a novel large scale SLAM system that combines dense stereo vision with inertial tracking. The system divides space into a grid and efficiently allocates GPU memory only when there is surface information within a grid cell. A rolling grid approach allows the system to work for large scale outdoor SLAM. A dense visual inertial dense tracking pipeline incrementally localizes stereo cameras against the scene. The proposed system is tested with both a simulated data set and several real-life data in different lighting (illumination changes), motion (slow and fast), and weather (snow, sunny) conditions. Compared to structured light-RGBD systems the proposed system works indoors and outdoors and over large scales beyond single rooms or desktop scenes. Crucially, the system is able to leverage inertial measurements for robust tracking when visual measurements do not suffice. Results demonstrate effective operation with simulated and real data, and both indoors and outdoors under varying lighting conditions.",
"title": ""
},
{
"docid": "14a45e3e7aadee56b7d2e28c692aba9f",
"text": "Radiation therapy as a mode of cancer treatment is well-established. Telecobalt and telecaesium units were used extensively during the early days. Now, medical linacs offer more options for treatment delivery. However, such systems are prohibitively expensive and beyond the reach of majority of the worlds population living in developing and under-developed countries. In India, there is shortage of cancer treatment facilities, mainly due to the high cost of imported machines. Realizing the need of technology for affordable radiation therapy machines, Bhabha Atomic Research Centre (BARC), the premier nuclear research institute of Government of India, started working towards a sophisticated telecobalt machine. The Bhabhatron is the outcome of the concerted efforts of BARC and Panacea Medical Technologies Pvt. Ltd., India. It is not only less expensive, but also has a number of advanced features. It incorporates many safety and automation features hitherto unavailable in the most advanced telecobalt machine presently available. This paper describes various features available in Bhabhatron-II. The authors hope that this machine has the potential to make safe and affordable radiation therapy accessible to the common people in India as well as many other countries.",
"title": ""
},
{
"docid": "04b62ed72ddf8f97b9cb8b4e59a279c1",
"text": "This paper aims to explore some of the manifold and changing links that official Pakistani state discourses forged between women and work from the 1940s to the late 2000s. The focus of the analysis is on discursive spaces that have been created for women engaged in non-domestic work. Starting from an interpretation of the existing academic literature, this paper argues that Pakistani women’s non-domestic work has been conceptualised in three major ways: as a contribution to national development, as a danger to the nation, and as non-existent. The paper concludes that although some conceptualisations of work have been more powerful than others and, at specific historical junctures, have become part of concrete state policies, alternative conceptualisations have always existed alongside them. Disclosing the state’s implication in the discursive construction of working women’s identities might contribute to the destabilisation of hegemonic concepts of gendered divisions of labour in Pakistan. DOI: https://doi.org/10.1016/j.wsif.2013.05.007 Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-78605 Accepted Version Originally published at: Grünenfelder, Julia (2013). Discourses of gender identities and gender roles in Pakistan: Women and non-domestic work in political representations. Women’s Studies International Forum, 40:68-77. DOI: https://doi.org/10.1016/j.wsif.2013.05.007",
"title": ""
},
{
"docid": "7dcba854d1f138ab157a1b24176c2245",
"text": "Essential oils distilled from members of the genus Lavandula have been used both cosmetically and therapeutically for centuries with the most commonly used species being L. angustifolia, L. latifolia, L. stoechas and L. x intermedia. Although there is considerable anecdotal information about the biological activity of these oils much of this has not been substantiated by scientific or clinical evidence. Among the claims made for lavender oil are that is it antibacterial, antifungal, carminative (smooth muscle relaxing), sedative, antidepressive and effective for burns and insect bites. In this review we detail the current state of knowledge about the effect of lavender oils on psychological and physiological parameters and its use as an antimicrobial agent. Although the data are still inconclusive and often controversial, there does seem to be both scientific and clinical data that support the traditional uses of lavender. However, methodological and oil identification problems have severely hampered the evaluation of the therapeutic significance of much of the research on Lavandula spp. These issues need to be resolved before we have a true picture of the biological activities of lavender essential oil.",
"title": ""
},
{
"docid": "7402c22fe591a49db4ab237e5ff552d9",
"text": "The paper presents the design and implementation of an intelligent advertising system using Raspberry Pi B+ v1.2 development board, RFID and display panels integrated over a network. This system can be used in multi retail stores, malls and shopping complexes. The proposed system works on performing data mining and trend analysis over sales, inventory data which is collected in real time by using POS system implemented using RFID tags attached to products. The display panels(billboards) will be placed at strategic locations. The proposed outcome of the project aims to provide better offers and deals to customers without targeting them individually, rather using a segmented approach.",
"title": ""
},
{
"docid": "4467f4fc7e9f1199ca6b57f7818ca42c",
"text": "Banking in several developing countries has transcended from a traditional brick-and mortar model of customers queuing for services in the banks to modern day banking where banks can be reached at any point for their services. This can be attributed to the tremendous growth in mobile penetration in many countries across the globe including Jordan. The current exploratory study is an attempt to identify the underlying factors that affects mobile banking adoption in Jordan. Data for this study have been collected using a questionnaire containing 22 questions. Out of 450 questionnaires that have been distributed, 301 are returned (66.0%). In the survey, factors that may affect Jordanian mobile phone users' to adopt mobile banking services were examined. The research findings suggested that all the six factors; self efficacy, trailability, compatibility, complexity, risk and relative advantage were statistically significant in influencing mobile banking adoption.",
"title": ""
},
{
"docid": "af013d9eb2365034f587e407b6824540",
"text": "Marching Cubes is the most frequently used method to reconstruct isosurface from a point cloud. However, the point clouds are getting denser and denser, thus the efficiency of Marching cubes method has become an obstacle. This paper presents a novel GPU-based parallel surface reconstruction algorithm. The algorithm firstly creates a GPU-based uniform grid structure to manage point cloud. Then directed distances from vertices of cubes to the point cloud are computed in a newly put forwarded parallel way. Finally, after the generation of triangles, a space indexing scheme is adopted to reconstruct the connectivity of the resulted surface. The results show that our algorithm can run more than 10 times faster compared to the CPU-based implementations.",
"title": ""
},
{
"docid": "15d2651aa06ac8276a8cc48d3399a504",
"text": "Recently, the NLP community has shown a renewed interest in lexical semantics in the extent of automatic recognition of semantic relationships between pairs of words in text. Lexical semantics has become increasingly important in many natural language applications, this approach to semantics is concerned with psychological facts associated with meaning of words and how these words can be connected in semantic relations to build ontologies that provide a shared vocabulary to model a specified domain. And represent a structural framework for organizing information across fields of Artificial Intelligence (AI), Semantic Web, systems engineering and information architecture. But current systems mainly concentrate on classification of semantic relations rather than to give solutions for how these relations can be created [14]. At the same time, systems that do provide methods for creating the relations tend to ignore the context in which the conceptual relationships occur. Furthermore, methods that address semantic (non-taxonomic) relations are yet to come up with widely accepted ways of enhancing the process of classifying and extracting semantic relations. In this research we will focus on the learning of semantic relations patterns between word meanings by taking into consideration the surrounding context in the general domain. We will first generate semantic patterns in domain independent environment depending on previous specific semantic information, and a set of input examples. Our case of study will be causation relations. Then these patterns will classify causation in general domain texts taking into consideration the context of the relations, and then the classified relations will be used to learn new causation semantic patterns.",
"title": ""
},
{
"docid": "00ec1bd8c0a3d4a5b56e83bd7c7edd51",
"text": "The fresh water polyp Hydra belongs to the phylum Cnidaria, which diverged from the metazoan lineage before the appearance of bilaterians. In order to understand the evolution of apoptosis in metazoans, we have begun to elucidate the molecular cell death machinery in this model organism. Based on ESTs and the whole Hydra genome assembly, we have identified 15 caspases. We show that one is activated during apoptosis, four have characteristics of initiator caspases with N-terminal DED, CARD or DD domain and two undergo autoprocessing in vitro. In addition, we describe seven Bcl-2-like and two Bak-like proteins. For most of the Bcl-2 family proteins, we have observed mitochondrial localization. When expressed in mammalian cells, HyBak-like 1 and 2 strongly induced apoptosis. Six of the Bcl-2 family members inhibited apoptosis induced by camptothecin in mammalian cells with HyBcl-2-like 4 showing an especially strong protective effect. This protein also interacted with HyBak-like 1 in a yeast two-hybrid assay. Mutation of the conserved leucine in its BH3 domain abolished both the interaction with HyBak-like 1 and the anti-apoptotic effect. Moreover, we describe novel Hydra BH-3-only proteins. One of these interacted with Bcl-2-like 4 and induced apoptosis in mammalian cells. Our data indicate that the evolution of a complex network for cell death regulation arose at the earliest and simplest level of multicellular organization, where it exhibited a substantially higher level of complexity than in the protostome model organisms Caenorhabditis and Drosophila.",
"title": ""
},
{
"docid": "f59078ead2dc4df7a1c141f435a16415",
"text": "PURPOSE OF REVIEW\nInfants are traditionally introduced to solid foods using spoon-feeding of specially prepared infant foods.\n\n\nRECENT FINDINGS\nHowever, over the last 10-15 years, an alternative approach termed 'baby-led weaning' has grown in popularity. This approach involves allowing infants to self-feed family foods, encouraging the infant to set the pace and intake of the meal. Proponents of the approach believe it promotes healthy eating behaviour and weight gain trajectories, and evidence is starting to build surrounding the method. This review brings together all empirical evidence to date examining behaviours associated with the approach, its outcomes and confounding factors.\n\n\nSUMMARY\nOverall, although there is limited evidence suggesting that a baby-led approach may encourage positive outcomes, limitations of the data leave these conclusions weak. Further research is needed, particularly to explore pathways to impact and understand the approach in different contexts and populations.",
"title": ""
},
{
"docid": "fe59d96ddb5a777f154da5cf813c556c",
"text": "For a set $P$ of $n$ points in the plane and an integer $k \\leq n$, consider the problem of finding the smallest circle enclosing at least $k$ points of $P$. We present a randomized algorithm that computes in $O( n k )$ expected time such a circle, improving over previously known algorithms. Further, we present a linear time $\\delta$-approximation algorithm that outputs a circle that contains at least $k$ points of $P$ and has radius less than $(1+\\delta)r_{opt}(P,k)$, where $r_{opt}(P,k)$ is the radius of the minimum circle containing at least $k$ points of $P$. The expected running time of this approximation algorithm is $O(n + n \\cdot\\min((1/k\\delta^3) \\log^2 (1/\\delta), k))$.",
"title": ""
},
{
"docid": "5bb9ca3c14dd84f1533789c3fe4bbd10",
"text": "The field of spondyloarthritis (SpA) has experienced major progress in the last decade, especially with regard to new treatments, earlier diagnosis, imaging technology and a better definition of outcome parameters for clinical trials. In the present work, the Assessment in SpondyloArthritis international Society (ASAS) provides a comprehensive handbook on the most relevant aspects for the assessments of spondyloarthritis, covering classification criteria, MRI and x rays for sacroiliac joints and the spine, a complete set of all measurements relevant for clinical trials and international recommendations for the management of SpA. The handbook focuses at this time on axial SpA, with ankylosing spondylitis (AS) being the prototype disease, for which recent progress has been faster than in peripheral SpA. The target audience includes rheumatologists, trial methodologists and any doctor and/or medical student interested in SpA. The focus of this handbook is on practicality, with many examples of MRI and x ray images, which will help to standardise not only patient care but also the design of clinical studies.",
"title": ""
},
{
"docid": "6d396f65f8cb4b7dcd4b502e7b167aca",
"text": "We study cost-sensitive learning of decision trees that incorporate both test costs and misclassification costs. In particular, we first propose a lazy decision tree learning that minimizes the total cost of tests and misclassifications. Then assuming test examples may contain unknown attributes whose values can be obtained at a cost (the test cost), we design several novel test strategies which attempt to minimize the total cost of tests and misclassifications for each test example. We empirically evaluate our treebuilding and various test strategies, and show that they are very effective. Our results can be readily applied to real-world diagnosis tasks, such as medical diagnosis where doctors must try to determine what tests (e.g., blood tests) should be ordered for a patient to minimize the total cost of tests and misclassifications (misdiagnosis). A case study on heart disease is given throughout the paper.",
"title": ""
},
{
"docid": "a01965406575363328f4dae4241a05b7",
"text": "IT governance is one of these concepts that suddenly emerged and became an important issue in the information technology area. Some organisations started with the implementation of IT governance in order to achieve a better alignment between business and IT. This paper interprets important existing theories, models and practices in the IT governance domain and derives research questions from it. Next, multiple research strategies are triangulated in order to understand how organisations are implementing IT governance in practice and to analyse the relationship between these implementations and business/IT alignment. Major finding is that organisations with more mature IT governance practices likely obtain a higher degree of business/IT alignment maturity.",
"title": ""
},
{
"docid": "b8032e13156e0168e2c5850cdf452e5b",
"text": "We observe that end-to-end memory networks (MN) trained for task-oriented dialogue, such as for recommending restaurants to a user, suffer from an out-ofvocabulary (OOV) problem – the entities returned by the Knowledge Base (KB) may not be seen by the network at training time, making it impossible for it to use them in dialogue. We propose a Hierarchical Pointer Memory Network (HyP-MN), in which the next word may be generated from the decode vocabulary or copied from a hierarchical memory maintaining KB results and previous utterances. Evaluating over the dialog bAbI tasks, we find that HyP-MN drastically outperforms MN obtaining 12% overall accuracy gains. Further analysis reveals that MN fails completely in recommending any relevant restaurant, whereas HyP-MN recommends the best next restaurant 80% of the time.",
"title": ""
},
{
"docid": "13cb793ca9cdf926da86bb6fc630800a",
"text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.",
"title": ""
}
] |
scidocsrr
|
f5c24fefacc11d2a60233faec5514fd9
|
MuseUs: Case study of a pervasive cultural heritage serious game
|
[
{
"docid": "a1f2d91de4ba7899c03bfbe7a7a8f422",
"text": "Pervasive gaming is a genre of gaming systematically blurring and breaking the traditional boundaries of game. The limits of the magic circle are explored in spatial, temporal and social dimensions. These ways of expanding the game are not new, since many intentional and unintentional examples of similar expansions can be found from earlier games, but the recently emerged fashion of pervasive gaming is differentiated with the use of these expansions in new, efficient ways to produce new kinds of gameplay experiences. These new game genres include alternate reality games, reality games, trans-reality games and crossmedia games.",
"title": ""
},
{
"docid": "a8c62ba0314f8ae102f0ca720102417e",
"text": "Designers of mobile, social systems must carefully think about how to help their users manage spatial, semantic, and social modes of navigation. Here, we describe our deployment of MobiTags, a system to help museum visitors interact with a collection of \"open storage\" exhibits, those where the museum provides little curatorial information. MobiTags integrates social tagging, art information, and a map to support navigation and collaborative curation of these open storage collections. We studied 23 people's use of MobiTags in a local museum, combining interview data with device use logs and tracking of people's movements to understand how MobiTags affected their navigation and experience in the museum. Despite a lack of social cues, people feel a strong sense of social presence--and social pressure--through seeing others' tags. The tight coupling of tags, item information, and map features also supported a rich set of practices around these modes of navigation.",
"title": ""
}
] |
[
{
"docid": "5cd9031a58457c0cb5fb2d49f1da40f6",
"text": "Induction heating (IH) technology is nowadays the heating technology of choice in many industrial, domestic, and medical applications due to its advantages regarding efficiency, fast heating, safety, cleanness, and accurate control. Advances in key technologies, i.e., power electronics, control techniques, and magnetic component design, have allowed the development of highly reliable and cost-effective systems, making this technology readily available and ubiquitous. This paper reviews IH technology summarizing the main milestones in its development and analyzing the current state of art of IH systems in industrial, domestic, and medical applications, paying special attention to the key enabling technologies involved. Finally, an overview of future research trends and challenges is given, highlighting the promising future of IH technology.",
"title": ""
},
{
"docid": "129f1acf479075f06394dd0b9e9eb5a0",
"text": "The authors present a theory of sexism formulated as ambivalence toward women and validate a corresponding measure, the Ambivalent Sexism Inventory (ASI). The ASI taps 2 positively correlated components of sexism that nevertheless represent opposite evaluative orientations toward women: sexist antipathy or Hostile Sexism (HS) and a subjectively positive ( for sexist men ) orientation toward women, Benevolent Sexism (BS). HS and BS are hypothesized to encompass 3 sources of male ambivalence: Paternalism, Gender Differentiation, and Heterosexuality. Six ASI studies on 2,250 respondents established convergent, discriminant, and predictive validity. Overall ASI scores predict ambivalent attitudes toward women, the HS scale correlates with negative attitudes toward and stereotypes about women, and the BS scale (for nonstudent men only) correlates with positive attitudes toward and stereotypes about women. A copy of the ASI is provided, with scoring instructions, as a tool for further explorations of sexist ambivalence.",
"title": ""
},
{
"docid": "e85a0f0edaf18c1f5cd5b6fdbbd464b0",
"text": "This paper focuses on the challenging problem of 3D pose estimation of a diverse spectrum of articulated objects from single depth images. A novel structured prediction approach is considered, where 3D poses are represented as skeletal models that naturally operate on manifolds. Given an input depth image, the problem of predicting the most proper articulation of underlying skeletal model is thus formulated as sequentially searching for the optimal skeletal configuration. This is subsequently addressed by convolutional neural nets trained end-to-end to render sequential prediction of the joint locations as regressing a set of tangent vectors of the underlying manifolds. Our approach is examined on various articulated objects including human hand, mouse, and fish benchmark datasets. Empirically it is shown to deliver highly competitive performance with respect to the state-of-the-arts, while operating in real-time (over 30 FPS).",
"title": ""
},
{
"docid": "6d15f9766e35b2c78ce5402ed44cdf57",
"text": "Models that acquire semantic representations from both linguistic and perceptual input are of interest to researchers in NLP because of the obvious parallels with human language learning. Performance advantages of the multi-modal approach over language-only models have been clearly established when models are required to learn concrete noun concepts. However, such concepts are comparatively rare in everyday language. In this work, we present a new means of extending the scope of multi-modal models to more commonly-occurring abstract lexical concepts via an approach that learns multimodal embeddings. Our architecture outperforms previous approaches in combining input from distinct modalities, and propagates perceptual information on concrete concepts to abstract concepts more effectively than alternatives. We discuss the implications of our results both for optimizing the performance of multi-modal models and for theories of abstract conceptual representation.",
"title": ""
},
{
"docid": "c636b8c942728fd7883f74b12eba5ac9",
"text": "In this paper we propose a novel approach to detect and reconstruct transparent objects. This approach makes use of the fact that many transparent objects, especially the ones consisting of usual glass, absorb light in certain wavelengths [1]. Given a controlled illumination, this absorption is measurable in the intensity response by comparison to the background. We show the usage of a standard infrared emitter and the intensity sensor of a time of flight (ToF) camera to reconstruct the structure given we have a second view point. The structure can not be measured by the usual 3D measurements of the ToF camera. We take advantage of this fact by deriving this internal sensory contradiction from two ToF images and reconstruct an approximated surface of the original transparent object. Therefor we are using a perspectively invariant matching in the intensity channels from the first to the second view of initially acquired candidates. For each matched pixel in the first view a 3D movement can be predicted given their original 3D measurement and the known distance to the second camera position. If their line of sight did not pass a transparent object or suffered any other major defect, this prediction will highly correspond to the actual measured 3D points of the second view. Otherwise, if a detectable error occurs, we approximate a more exact point to point matching and reconstruct the original shape by triangulating the points in the stereo setup. We tested our approach using a mobile platform with one Swissranger SR4k. As this platform is mobile, we were able to create a stereo setup by moving it. Our results show a detection of transparent objects on tables while simultaneously identifying opaque objects that also existed in the test setup. The viability of our results is demonstrated by a successful automated manipulation of the respective transparent object.",
"title": ""
},
{
"docid": "a083a09e0b156781d1a782e2b6951c9d",
"text": "If a person with carious lesions needs or requests crowns or inlays, these dental fillings have to be manufactured for each tooth and each person individually. We survey computer vision techniques which can be used to automate this process. We introduce three particular applications which are concerned with the reconstruction of surface information. The first one aims at building up a database of normalized depth images of posterior teeth and at extracting characteristic features from these images. In the second application, a given occlusal surface of a posterior tooth with a prepared cavity is digitally reconstructed using an intact model tooth from a given database. The calculated surface data can then be used for automatic milling of a dental prosthesis, e.g. from a preshaped ceramic block. In the third application a hand-made provisoric wax inlay or crown can be digitally scanned by a laser sensor and copied three dimensionally into a different material such as ceramic. The results are converted to a format required by the computer-integrated manufacturing (CIM) system for automatic milling.",
"title": ""
},
{
"docid": "cebcd53ef867abb158445842cd0f4daf",
"text": "Let [ be a random variable over a finite set with an arbitrary probability distribution. In this paper we make improvements to a fast method of generating sample values for ( in constant time.",
"title": ""
},
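The record above describes constant-time sampling of a random variable over a finite set with an arbitrary distribution. The classic technique in this family is the alias method; the sketch below follows Vose's construction, with O(n) table building and O(1) draws. The function names and the use of Python's random module are illustrative choices, not taken from the paper.

```python
import random

def build_alias_table(probs):
    """Vose's alias method: O(n) preprocessing for O(1) sampling
    from an arbitrary finite probability distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] += scaled[s] - 1.0              # column l donates mass to fill column s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                        # numerical leftovers are exactly 1
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias):
    """Draw one index in constant time: one uniform column, one biased coin."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]
```

Each draw touches a single table column regardless of the number of outcomes, which is what makes the sampling constant-time after the linear-time setup.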
{
"docid": "948b157586c75674e75bd50b96162861",
"text": "We propose a database design methodology for NoSQL systems. The approach is based on NoAM (NoSQL Abstract Model), a novel abstrac t d ta model for NoSQL databases, which exploits the commonalities of various N SQL systems and is used to specify a system-independent representatio n of the application data. This intermediate representation can be then implemented in target NoSQL databases, taking into account their specific features. Ov rall, the methodology aims at supporting scalability, performance, and consisten cy, as needed by next-generation web applications.",
"title": ""
},
{
"docid": "5dad207fe80469fe2b80d1f1e967575e",
"text": "As the geolocation capabilities of smartphones continue to improve, developers have continued to create more innovative applications that rely on this location information for their primary function. This can be seen with Niantic’s release of Pokémon GO, which is a massively multiplayer online role playing and augmented reality game. This game became immensely popular within just a few days of its release. However, it also had the propensity to be a distraction to drivers resulting in numerous accidents, and was used to as a tool by armed robbers to lure unsuspecting users into secluded areas. This facilitates a need for forensic investigators to be able to analyze the data within the application in order to determine if it may have been involved in these incidents. Because this application is new, limited research has been conducted regarding the artifacts that can be recovered from the application. In this paper, we aim to fill the gaps within the current research by assessing what forensically relevant information may be recovered from the application, and understanding the circumstances behind the creation of this information. Our research focuses primarily on the artifacts generated by the Upsight analytics platform, those contained within the bundles directory, and the Pokémon Go Plus accessory. Moreover, we present our new application specific analysis tool that is capable of extracting forensic artifacts from a backup of the Android application, and presenting them to an investigator in an easily readable format. This analysis tool exceeds the capabilities of UFED Physical Analyzer in processing Pokémon GO application data.",
"title": ""
},
{
"docid": "b3b4e93b48914aa5844beae27a8af2b2",
"text": "http://ebmh.bmj.com/content/13/2/35.full.html Updated information and services can be found at: These include: References http://ebmh.bmj.com/content/13/2/35.full.html#ref-list-1 This article cites 30 articles, 9 of which can be accessed free at: service Email alerting box at the top right corner of the online article. Receive free email alerts when new articles cite this article. Sign up in the Notes",
"title": ""
},
{
"docid": "8e878e5083d922d97f8d573c54cbb707",
"text": "Deep neural networks have become the stateof-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LMarchitecture is an effective structure that can be used on any ResNet-like networks. In particular, we demonstrate that LM-ResNet and LMResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress the original networkSchool of Mathematical Sciences, Peking University, Beijing, China MGH/BWH Center for Clinical Data Science, Masschusetts General Hospital, Harvard Medical School Center for Data Science in Health and Medicine, Peking University Laboratory for Biomedical Image Analysis, Beijing Institute of Big Data Research Beijing International Center for Mathematical Research, Peking University Center for Data Science, Peking University. Correspondence to: Bin Dong <dongbin@math.pku.edu.cn>, Quanzheng Li <Li.Quanzheng@mgh.harvard.edu>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s). s while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.",
"title": ""
},
{
"docid": "5edaa2ed52f29eeb9576ebdaeb819997",
"text": "Alzheimer's disease (AD) is the most common neurodegenerative disorder characterized by cognitive and intellectual deficits and behavior disturbance. The electroencephalogram (EEG) has been used as a tool for diagnosing AD for several decades. The hallmark of EEG abnormalities in AD patients is a shift of the power spectrum to lower frequencies and a decrease in coherence of fast rhythms. These abnormalities are thought to be associated with functional disconnections among cortical areas resulting from death of cortical neurons, axonal pathology, cholinergic deficits, etc. This article reviews main findings of EEG abnormalities in AD patients obtained from conventional spectral analysis and nonlinear dynamical methods. In particular, nonlinear alterations in the EEG of AD patients, i.e. a decreased complexity of EEG patterns and reduced information transmission among cortical areas, and their clinical implications are discussed. For future studies, improvement of the accuracy of differential diagnosis and early detection of AD based on multimodal approaches, longitudinal studies on nonlinear dynamics of the EEG, drug effects on the EEG dynamics, and linear and nonlinear functional connectivity among cortical regions in AD are proposed to be investigated. EEG abnormalities of AD patients are characterized by slowed mean frequency, less complex activity, and reduced coherences among cortical regions. These abnormalities suggest that the EEG has utility as a valuable tool for differential and early diagnosis of AD.",
"title": ""
},
{
"docid": "feee488a72016554ebf982762d51426e",
"text": "Optical imaging sensors, such as television or infrared cameras, collect information about targets or target regions. It is thus necessary to control the sensor's line-of-sight (LOS) to achieve accurate pointing. Maintaining sensor orientation toward a target is particularly challenging when the imaging sensor is carried on a mobile vehicle or when the target is highly dynamic. Controlling an optical sensor LOS with an inertially stabilized platform (ISP) can meet these challenges.A target tracker is a process, typically involving image processing techniques, for detecting targets in optical imagery. This article describes the use and design of ISPs and target trackers for imaging optical sensors.",
"title": ""
},
{
"docid": "52688089da3419fdaf16964e140e3701",
"text": "OBJECTIVES\nThe aim of this study was to elucidate the factors associated with the occurrence of mixed episodes, characterized by the presence of concomitant symptoms of both affective poles, during the course of illness in bipolar I disorder patients treated with an antidepressant, as well as the role of antidepressants in the course and outcome of the disorder.\n\n\nMETHOD\nWe enrolled a sample of 144 patients followed for up to 20 years in the referral Barcelona Bipolar Disorder Program and compared subjects who had experienced at least one mixed episode during the follow-up (n=60) with subjects who had never experienced a mixed episode (n=84) regarding clinical variables.\n\n\nRESULTS\nNearly 40% of bipolar I disorder patients treated with antidepressants experienced at least one mixed episode during the course of their illness; no gender differences were found between two groups. Several differences regarding clinical variables were found between the two groups, but after performing logistic regression analysis, only suicide attempts (p<0.001), the use of serotonin norepinephrine reuptake inhibitors (p=0.041), switch rates (p=0.010), and years spent ill (p=0.022) were significantly associated with the occurrence of at least one mixed episode during follow-up.\n\n\nCONCLUSIONS\nThe occurrence of mixed episodes is associated with a tendency to chronicity, with a poorer outcome, a higher number of depressive episodes, and greater use of antidepressants, especially serotonin norepinephrine reuptake inhibitors.",
"title": ""
},
{
"docid": "14d5c8ed0b48d5625287fecaf5f72691",
"text": "In this paper we attempt to demonstrate the strengths of Hierarchical Hidden Markov Models (HHMMs) in the representation and modelling of musical structures. We show how relatively simple HHMMs, containing a minimum of expert knowledge, use their advantage of having multiple layers to perform well on tasks where flat Hidden Markov Models (HMMs) struggle. The examples in this paper show a HHMM’s performance at extracting higherlevel musical properties through the construction of simple pitch sequences, correctly representing the data set on which it was trained.",
"title": ""
},
{
"docid": "e62e09ce3f4f135b12df4d643df02de6",
"text": "Septic arthritis/tenosynovitis in the horse can have life-threatening consequences. The purpose of this cross-sectional retrospective study was to describe ultrasound characteristics of septic arthritis/tenosynovitis in a group of horses. Diagnosis of septic arthritis/tenosynovitis was based on historical and clinical findings as well as the results of the synovial fluid analysis and/or positive synovial culture. Ultrasonographic findings recorded were degree of joint/sheath effusion, degree of synovial membrane thickening, echogenicity of the synovial fluid, and presence of hyperechogenic spots and fibrinous loculations. Ultrasonographic findings were tested for dependence on the cause of sepsis, time between admission and beginning of clinical signs, and the white blood cell counts in the synovial fluid. Thirty-eight horses with confirmed septic arthritis/tenosynovitis of 43 joints/sheaths were included. Degree of effusion was marked in 81.4% of cases, mild in 16.3%, and absent in 2.3%. Synovial thickening was mild in 30.9% of cases and moderate/severe in 69.1%. Synovial fluid was anechogenic in 45.2% of cases and echogenic in 54.8%. Hyperechogenic spots were identified in 32.5% of structures and fibrinous loculations in 64.3%. Relationships between the degree of synovial effusion, degree of the synovial thickening, presence of fibrinous loculations, and the time between admission and beginning of clinical signs were identified, as well as between the presence of fibrinous loculations and the cause of sepsis (P ≤ 0.05). Findings indicated that ultrasonographic findings of septic arthritis/tenosynovitis may vary in horses, and may be influenced by time between admission and beginning of clinical signs.",
"title": ""
},
{
"docid": "a98887592358e43394469037a4632c3a",
"text": "The construct of school engagement has attracted growing interest as a way to ameliorate the decline in academic achievement and increase in dropout rates. The current study tested the fit of a second-order multidimensional factor model of school engagement, using large-scale representative data on 1103 students in middle school. In order to make valid model comparisons by group, we evaluated the extent to which the measurement structure of this model was invariant by gender and by race/ethnicity (European-American vs. African-American students). Finally, we examined differences in latent factor means by these same groups. From our confirmatory factor analyses, we concluded that school engagement was a multidimensional construct, with evidence to support the hypothesized second-order engagement factor structure with behavioral, emotional, and cognitive dimensions. In this sample, boys and girls did not substantially differ, nor did European-American and African-American students, in terms of the underlying constructs of engagement and the composition of these constructs. Finally, there were substantial differences in behavioral and emotional engagement by gender and by racial/ethnic groups in terms of second-order factor mean differences.",
"title": ""
},
{
"docid": "7c6d2ede54f0445e852b8f9da95fca32",
"text": "In this paper we apply Conformal Prediction (CP) to the k -Nearest Neighbours Regression (k -NNR) algorithm and propose ways of extending the typical nonconformity measure used for regression so far. Unlike traditional regression methods which produce point predictions, Conformal Predictors output predictive regions that satisfy a given confidence level. The regions produced by any Conformal Predictor are automatically valid, however their tightness and therefore usefulness depends on the nonconformity measure used by each CP. In effect a nonconformity measure evaluates how strange a given example is compared to a set of other examples based on some traditional machine learning algorithm. We define six novel nonconformity measures based on the k -Nearest Neighbours Regression algorithm and develop the corresponding CPs following both the original (transductive) and the inductive CP approaches. A comparison of the predictive regions produced by our measures with those of the typical regression measure suggests that a major improvement in terms of predictive region tightness is achieved by the new measures.",
"title": ""
},
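To make the setup in the record above concrete, the sketch below implements the inductive (split) variant of conformal prediction around k-NN regression with the plain absolute-residual nonconformity measure, i.e. the baseline that the proposed measures extend. It assumes NumPy and scikit-learn are available; the function name and the explicit training/calibration split are illustrative choices, not the paper's own code.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def knn_icp_interval(X_train, y_train, X_cal, y_cal, x_new, k=5, confidence=0.95):
    """Inductive conformal predictor built on k-NN regression with the
    |y - y_hat| nonconformity measure; returns a predictive interval."""
    model = KNeighborsRegressor(n_neighbors=k).fit(X_train, y_train)
    # Nonconformity scores on the held-out calibration set
    scores = np.sort(np.abs(y_cal - model.predict(X_cal)))
    # Conformal quantile: the ceil((n + 1) * confidence)-th smallest score
    idx = min(int(np.ceil((len(scores) + 1) * confidence)) - 1, len(scores) - 1)
    half_width = scores[idx]
    y_hat = model.predict(np.atleast_2d(x_new))[0]
    return y_hat - half_width, y_hat + half_width
```

The interval is valid for any choice of k because validity comes from the calibration quantile alone; the point of richer nonconformity measures (for example, residuals normalised by local difficulty) is to make such intervals tighter without losing that guarantee.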
{
"docid": "5b29ed448e3685e6c7f057b0fa8135e9",
"text": "Crop growth and productivity are determined by a large number of weather, soil and management variables, which vary significantly across space. Remote Sensing (RS) data, acquired repetitively over agricultural land help in identification and mapping of crops and also in assessing crop vigour. As RS data and techniques have improved, the initial efforts that directly related RS-derived vegetation indices (VI) to crop yield have been replaced by approaches that involve retrieved biophysical quantities from RS data. Thus, crop simulation models (CSM) that have been successful in field-scale applications are being adapted in a GIS framework to model and monitor crop growth with remote sensing inputs making assessments sensitive to seasonal weather factors, local variability and crop management signals. The RS data can provide information of crop environment, crop distribution, leaf area index (LAI), and crop phenology. This information is integrated in CSM, in a number of ways such as use as direct forcing variable, use for re-calibrating specific parameters, or use simulation-observation differences in a variable to correct yield prediction. A number of case studies that demonstrated such use of RS data and demonstrated applications of CSM-RS linkage are presented.",
"title": ""
},
{
"docid": "f05fa9201158a546c109c501902be5fc",
"text": "A wide band power divider using modified Wilkinson design is presented. It is designed to operate at 5G wireless communication band of China. This power divider using wide band Wilkinson design method can be used as the feeding network of antenna array. The presented design can operate over a wide bandwidth covering 36.5GHz-44GHz which covers the 5G wireless communication band of China. The simulated and verified results show good agreement that the input return loss (S11) and the output return losses (S22, S33, S44, S55) are all under -12dB from 37GHz-42GHz. The insertion losses (including division loss) are within 7.2dB and the isolation between any two output ports are all under -10dB.",
"title": ""
}
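As a point of reference for the divider in the record above, the conventional equal-split Wilkinson design values follow from the system impedance alone; the short sketch below computes them for a 50 Ω system. These are the textbook starting values for a single 2-way section, not the dimensions of the paper's modified wide-band design.

```python
import math

# Conventional 2-way equal-split Wilkinson divider (textbook starting point;
# a modified wide-band design will deviate from these values).
z0 = 50.0                      # system impedance in ohms
z_branch = math.sqrt(2) * z0   # quarter-wave branch-line impedance, ~70.7 ohms
r_isolation = 2 * z0           # isolation resistor between the output ports, 100 ohms
print(f"branch line: {z_branch:.1f} ohm, isolation resistor: {r_isolation:.0f} ohm")
```

A 1-to-4 feeding network of the kind implied by the S22-S55 ports is usually built by cascading such 2-way sections, which is why the single-section values are a useful sanity check.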
] |
scidocsrr
|
38d2b61cf03b84ee81e408944d567e4e
|
Ontology Based Expert-System for Suspicious Transactions Detection
|
[
{
"docid": "d7aeb8de7bf484cbaf8e23fcf675d002",
"text": "One method for detecting fraud is to check for suspicious changes in user behavior. This paper proposes a novel method, built upon ontology and ontology instance similarity. Ontology is now widely used to enable knowledge sharing and reuse, so some personality ontologies can be easily used to present user behavior. By measure the similarity of ontology instances, we can determine whether an account is defrauded. This method lows the data model cost and make the system very adaptive to different applications.",
"title": ""
}
] |
[
{
"docid": "ee6612fa13482f7e3bbc7241b9e22297",
"text": "The MOND limit is shown to follow from a requirement of space-time scale invariance of the equations of motion for nonrelativistic, purely gravitational systems; i.e., invariance of the equations of motion under (t, r) → (λt, λr) in the limit a 0 → ∞. It is suggested that this should replace the definition of the MOND limit based on the asymptotic behavior of a Newtonian-MOND interpolating function. In this way, the salient, deep-MOND results–asymptotically flat rotation curves, the mass-rotational-speed relation (baryonic Tully-Fisher relation), the Faber-Jackson relation, etc.–follow from a symmetry principle. For example, asymptotic flatness of rotation curves reflects the fact that radii change under scaling, while velocities do not. I then comment on the interpretation of the deep-MOND limit as one of \" zero mass \" : Rest masses, whose presence obstructs scaling symmetry, become negligible compared to the \" phantom \" , dynamical masses–those that some would attribute to dark matter. Unlike the former masses, the latter transform in a way that is consistent with the symmetry. Finally, I discuss the putative MOND-cosmology connection, in particular the possibility that MOND-especially the deep-MOND limit– is related to the asymptotic de Sitter geometry of our universe. I point out, in this connection, the possible relevance of a (classical) de Sitter-conformal-field-theory (dS/CFT) correspondence.",
"title": ""
},
{
"docid": "e09d45316d48894bcfb3c5657cd19118",
"text": "In recent years, multiple-line acquisition (MLA) has been introduced to increase frame rate in cardiac ultrasound medical imaging. However, this method induces blocklike artifacts in the image. One approach suggested, synthetic transmit beamforming (STB), involves overlapping transmit beams which are then interpolated to remove the MLA blocking artifacts. Independently, the application of minimum variance (MV) beamforming has been suggested in the context of MLA. We demonstrate here that each approach is only a partial solution and that combining them provides a better result than applying either approach separately. This is demonstrated by using both simulated and real phantom data, as well as cardiac data. We also show that the STB-compensated MV beamfomer outperforms single-line acquisition (SLA) delay- and-sum in terms of lateral resolution.",
"title": ""
},
{
"docid": "bfd23678afff2ac4cd4650cf46195590",
"text": "The Islamic State of Iraq and ash-Sham (ISIS) continues to use social media as an essential element of its campaign to motivate support. On Twitter, ISIS' unique ability to leverage unaffiliated sympathizers that simply retweet propaganda has been identified as a primary mechanism in their success in motivating both recruitment and \"lone wolf\" attacks. The present work explores a large community of Twitter users whose activity supports ISIS propaganda diffusion in varying degrees. Within this ISIS supporting community, we observe a diverse range of actor types, including fighters, propagandists, recruiters, religious scholars, and unaffiliated sympathizers. The interaction between these users offers unique insight into the people and narratives critical to ISIS' sustainment. In their entirety, we refer to this diverse set of users as an online extremist community or OEC. We present Iterative Vertex Clustering and Classification (IVCC), a scalable analytic approach for OEC detection in annotated heterogeneous networks, and provide an illustrative case study of an online community of over 22,000 Twitter users whose online behavior directly advocates support for ISIS or contibutes to the group's propaganda dissemination through retweets.",
"title": ""
},
{
"docid": "39cc52cd5ba588e9d4799c3b68620f18",
"text": "Using data from a popular online social network site, this paper explores the relationship between profile structure (namely, which fields are completed) and number of friends, giving designers insight into the importance of the profile and how it works to encourage connections and articulated relationships between users. We describe a theoretical framework that draws on aspects of signaling theory, common ground theory, and transaction costs theory to generate an understanding of why certain profile fields may be more predictive of friendship articulation on the site. Using a dataset consisting of 30,773 Facebook profiles, we determine which profile elements are most likely to predict friendship links and discuss the theoretical and design implications of our findings.",
"title": ""
},
{
"docid": "aa3c4e267122b636eae557513900dd85",
"text": "At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way to quantify whether the student has mastered a skill. A large amount of work has been done on building student models that can predict student performance on the next question. In this paper, we leverage this prior work with a new whento-stop policy that is compatible with any such predictive student model. Our results suggest that, when employed as part of our new predictive similarity policy, student models with similar predictive accuracies can suggest that substantially different amounts of practice are necessary. This suggests that predictive accuracy may not be a sufficient metric by itself when choosing which student model to use in intelligent tutoring systems.",
"title": ""
},
{
"docid": "e0580a51b7991f86559a7a3aa8b26204",
"text": "A new ultra-wideband monocycle pulse generator with good performance is designed and demonstrated. The pulse generator circuits employ SRD(step recovery diode), Schottky diode, and simple RC coupling and decoupling circuit, and are completely fabricated on the planar microstrip structure, which have the characteristic of low cost and small size. Through SRD modeling, the accuracy of the simulation is improved, which save the design period greatly. The generated monocycle pulse has the peak-to-peak amplitude 1.3V, pulse width 370ps and pulse repetition rate of 10MHz, whose waveform features are symmetric well and low ringing level. Good agreement between the measured and calculated results is achieved.",
"title": ""
},
{
"docid": "b103e091df051f4958317b3b7806fa71",
"text": "We present a static, precise, and scalable technique for finding CVEs (Common Vulnerabilities and Exposures) in stripped firmware images. Our technique is able to efficiently find vulnerabilities in real-world firmware with high accuracy. Given a vulnerable procedure in an executable binary and a firmware image containing multiple stripped binaries, our goal is to detect possible occurrences of the vulnerable procedure in the firmware image. Due to the variety of architectures and unique tool chains used by vendors, as well as the highly customized nature of firmware, identifying procedures in stripped firmware is extremely challenging. Vulnerability detection requires not only pairwise similarity between procedures but also information about the relationships between procedures in the surrounding executable. This observation serves as the foundation for a novel technique that establishes a partial correspondence between procedures in the two binaries. We implemented our technique in a tool called FirmUp and performed an extensive evaluation over 40 million procedures, over 4 different prevalent architectures, crawled from public vendor firmware images. We discovered 373 vulnerabilities affecting publicly available firmware, 147 of them in the latest available firmware version for the device. A thorough comparison of FirmUp to previous methods shows that it accurately and effectively finds vulnerabilities in firmware, while outperforming the detection rate of the state of the art by 45% on average.",
"title": ""
},
{
"docid": "dc53e2bf9576fd3fb7670b0860eae754",
"text": "In the field of ADAS and self-driving car, lane and drivable road detection play an essential role in reliably accomplishing other tasks, such as objects detection. For monocular vision based semantic segmentation of lane and road, we propose a dilated feature pyramid network (FPN) with feature aggregation, called DFFA, where feature aggregation is employed to combine multi-level features enhanced with dilated convolution operations and FPN under the framework of ResNet. Experimental results validate effectiveness and efficiency of the proposed deep learning model for semantic segmentation of lane and drivable road. Our DFFA achieves the best performance both on Lane Estimation Evaluation and Behavior Evaluation tasks in KITTI-ROAD and take the second place on UU ROAD task.",
"title": ""
},
{
"docid": "4960f2d2215dbc8cf746b4f1a22f6756",
"text": "Current deep learning approaches have been very successful using convolutional neural networks trained on large graphical-processing-unit-based computers. Three limitations of this approach are that (1) they are based on a simple layered network topology, i.e., highly connected layers, without intra-layer connections; (2) the networks are manually configured to achieve optimal results, and (3) the implementation of the network model is expensive in both cost and power. In this article, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. We use the MNIST dataset for our experiment, due to input size limitations of current quantum computers. Our results show the feasibility of using the three architectures in tandem to address the above deep learning limitations. We show that a quantum computer can find high quality values of intra-layer connection weights in a tractable time as the complexity of the network increases, a high performance computer can find optimal layer-based topologies, and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware.",
"title": ""
},
{
"docid": "7403408ad427f9613110a4f40c693893",
"text": "Recommending news items is traditionally done by term-based algorithms like TF-IDF. This paper concentrates on the benefits of recommending news items using a domain ontology instead of using a term-based approach. For this purpose, we propose Athena, which is an extension to the existing Hermes framework. Athena employs a user profile to store terms or concepts found in news items browsed by the user. Based on this information, the framework uses a traditional method based on TF-IDF, and several ontology-based methods to recommend new articles to the user. The paper concludes with the evaluation of the different methods, which shows that the new ontology-based method that we propose in this paper performs better (w.r.t. accuracy, precision, and recall) than the traditional method and, with the exception of one measure (recall), also better than the other considered ontology-based approaches.",
"title": ""
},
{
"docid": "3ddcf5f0e4697a0d43eff2cca77a1ab7",
"text": "Lymph nodes are assessed routinely in clinical practice and their size is followed throughout radiation or chemotherapy to monitor the effectiveness of cancer treatment. This paper presents a robust learning-based method for automatic detection and segmentation of solid lymph nodes from CT data, with the following contributions. First, it presents a learning based approach to solid lymph node detection that relies on marginal space learning to achieve great speedup with virtually no loss in accuracy. Second, it presents a computationally efficient segmentation method for solid lymph nodes (LN). Third, it introduces two new sets of features that are effective for LN detection, one that self-aligns to high gradients and another set obtained from the segmentation result. The method is evaluated for axillary LN detection on 131 volumes containing 371 LN, yielding a 83.0% detection rate with 1.0 false positive per volume. It is further evaluated for pelvic and abdominal LN detection on 54 volumes containing 569 LN, yielding a 80.0% detection rate with 3.2 false positives per volume. The running time is 5-20 s per volume for axillary areas and 15-40 s for pelvic. An added benefit of the method is the capability to detect and segment conglomerated lymph nodes.",
"title": ""
},
{
"docid": "118738ca4b870e164c7be53e882a9ab4",
"text": "IA. Cause and Effect . . . . . . . . . . . . . . 465 1.2. Prerequisites of Selforganization . . . . . . . 467 1.2.3. Evolut ion Must S ta r t f rom R andom Even ts 467 1.2.2. Ins t ruc t ion Requires In format ion . . . . 467 1.2.3. In format ion Originates or Gains Value by S e l e c t i o n . . . . . . . . . . . . . . . 469 1.2.4. Selection Occurs wi th Special Substances under Special Conditions . . . . . . . . 470",
"title": ""
},
{
"docid": "8c0d3cfffb719f757f19bbb33412d8c6",
"text": "In this paper, we present a parallel Image-to-Mesh Conversion (I2M) algorithm with quality and fidelity guarantees achieved by dynamic point insertions and removals. Starting directly from an image, it is able to recover the isosurface and mesh the volume with tetrahedra of good shape. Our tightly-coupled shared-memory parallel speculative execution paradigm employs carefully designed contention managers, load balancing, synchronization and optimizations schemes which boost the parallel efficiency with little overhead: our single-threaded performance is faster than CGAL, the state of the art sequential mesh generation software we are aware of. The effectiveness of our method is shown on Blacklight, the Pittsburgh Supercomputing Center's cache-coherent NUMA machine, via a series of case studies justifying our choices. We observe a more than 82% strong scaling efficiency for up to 64 cores, and a more than 95% weak scaling efficiency for up to 144 cores, reaching a rate of 14.7 Million Elements per second. To the best of our knowledge, this is the fastest and most scalable 3D Delaunay refinement algorithm.",
"title": ""
},
{
"docid": "864c2987092ca266b97ed11faec42aa3",
"text": "BACKGROUND\nAnxiety is the most common emotional response in women during delivery, which can be accompanied with adverse effects on fetus and mother.\n\n\nOBJECTIVES\nThis study was conducted to compare the effects of aromatherapy with rose oil and warm foot bath on anxiety in the active phase of labor in nulliparous women in Tehran, Iran.\n\n\nPATIENTS AND METHODS\nThis clinical trial study was performed after obtaining informed written consent on 120 primigravida women randomly assigned into three groups. The experimental group 1 received a 10-minute inhalation and footbath with oil rose. The experimental group 2 received a 10-minute warm water footbath. Both interventions were applied at the onset of active and transitional phases. Control group, received routine care in labor. Anxiety was assessed using visual analogous scale (VASA) at onset of active and transitional phases before and after the intervention. Statistical comparison was performed using SPSS software version 16 and P < 0.05 was considered significant.\n\n\nRESULTS\nAnxiety scores in the intervention groups in active phase after intervention were significantly lower than the control group (P < 0.001). Anxiety scores before and after intervention in intervention groups in transitional phase was significantly lower than the control group (P < 0.001).\n\n\nCONCLUSIONS\nUsing aromatherapy and footbath reduces anxiety in active phase in nulliparous women.",
"title": ""
},
{
"docid": "58d7e76a4b960e33fc7b541d04825dc9",
"text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work focuses also in IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.",
"title": ""
},
{
"docid": "e07a731a2c4fa39be27a13b5b5679593",
"text": "Ocean acidification is rapidly changing the carbonate system of the world oceans. Past mass extinction events have been linked to ocean acidification, and the current rate of change in seawater chemistry is unprecedented. Evidence suggests that these changes will have significant consequences for marine taxa, particularly those that build skeletons, shells, and tests of biogenic calcium carbonate. Potential changes in species distributions and abundances could propagate through multiple trophic levels of marine food webs, though research into the long-term ecosystem impacts of ocean acidification is in its infancy. This review attempts to provide a general synthesis of known and/or hypothesized biological and ecosystem responses to increasing ocean acidification. Marine taxa covered in this review include tropical reef-building corals, cold-water corals, crustose coralline algae, Halimeda, benthic mollusks, echinoderms, coccolithophores, foraminifera, pteropods, seagrasses, jellyfishes, and fishes. The risk of irreversible ecosystem changes due to ocean acidification should enlighten the ongoing CO(2) emissions debate and make it clear that the human dependence on fossil fuels must end quickly. Political will and significant large-scale investment in clean-energy technologies are essential if we are to avoid the most damaging effects of human-induced climate change, including ocean acidification.",
"title": ""
},
{
"docid": "4500c668414d0cb1ff18bb8ec00f1d8f",
"text": "Governments around the world are increasingly utilising online platforms and social media to engage with, and ascertain the opinions of, their citizens. Whilst policy makers could potentially benefit from such enormous feedback from society, they first face the challenge of making sense out of the large volumes of data produced. In this article, we show how the analysis of argumentative and dialogical structures allows for the principled identification of those issues that are central, controversial, or popular in an online corpus of debates. Although areas such as controversy mining work towards identifying issues that are a source of disagreement, by looking at the deeper argumentative structure, we show that a much richer understanding can be obtained. We provide results from using a pipeline of argument-mining techniques on the debate corpus, showing that the accuracy obtained is sufficient to automatically identify those issues that are key to the discussion, attracting proportionately more support than others, and those that are divisive, attracting proportionately more conflicting viewpoints.",
"title": ""
},
{
"docid": "75233d6d94fec1f43fa02e8043470d4d",
"text": "Out-of-autoclave (OoA) prepreg materials and methods have gained acceptance over the past decade because of the ability to produce autoclave-quality components under vacuum-bag-only (VBO) cure. To achieve low porosity and tight dimensional tolerances, VBO prepregs rely on specific microstructural features and processing techniques. Furthermore, successful cure is contingent upon appropriate material property and process parameter selection. In this article, we review the existing literature on VBO prepreg processing to summarize and synthesize knowledge on these issues. First, the context, development, and defining properties of VBO prepregs are presented. The key processing phenomena and the influence on quality are subsequently described. Finally, cost and environmental performance are considered. Throughout, we highlight key considerations for VBO prepreg processing and identify areas where further study is required.",
"title": ""
},
{
"docid": "03c13e81803517d2be66e8bc25b7012c",
"text": "Extractors and taggers turn unstructured text into entity-relation(ER) graphs where nodes are entities (email, paper, person,conference, company) and edges are relations (wrote, cited,works-for). Typed proximity search of the form <B>type=personNEAR company~\"IBM\", paper~\"XML\"</B> is an increasingly usefulsearch paradigm in ER graphs. Proximity search implementations either perform a Pagerank-like computation at query time, which is slow, or precompute, store and combine per-word Pageranks, which can be very expensive in terms of preprocessing time and space. We present HubRank, a new system for fast, dynamic, space-efficient proximity searches in ER graphs. During preprocessing, HubRank computesand indexes certain \"sketchy\" random walk fingerprints for a small fraction of nodes, carefully chosen using query log statistics. At query time, a small \"active\" subgraph is identified, bordered bynodes with indexed fingerprints. These fingerprints are adaptively loaded to various resolutions to form approximate personalized Pagerank vectors (PPVs). PPVs at remaining active nodes are now computed iteratively. We report on experiments with CiteSeer's ER graph and millions of real Cite Seer queries. Some representative numbers follow. On our testbed, HubRank preprocesses and indexes 52 times faster than whole-vocabulary PPV computation. A text index occupies 56 MB. Whole-vocabulary PPVs would consume 102GB. If PPVs are truncated to 56 MB, precision compared to true Pagerank drops to 0.55; incontrast, HubRank has precision 0.91 at 63MB. HubRank's average querytime is 200-300 milliseconds; query-time Pagerank computation takes 11 seconds on average.",
"title": ""
}
] |
scidocsrr
|
a3c26b6b89eeeddb00d5a6a89d59faab
|
Deep Texture and Structure Aware Filtering Network for Image Smoothing
|
[
{
"docid": "60eec67cd3b60258a6b3179c33279a22",
"text": "We present a new efficient edge-preserving filter-“tree filter”-to achieve strong image smoothing. The proposed filter can smooth out high-contrast details while preserving major edges, which is not achievable for bilateral-filter-like techniques. Tree filter is a weighted-average filter, whose kernel is derived by viewing pixel affinity in a probabilistic framework simultaneously considering pixel spatial distance, color/intensity difference, as well as connectedness. Pixel connectedness is acquired by treating pixels as nodes in a minimum spanning tree (MST) extracted from the image. The fact that an MST makes all image pixels connected through the tree endues the filter with the power to smooth out high-contrast, fine-scale details while preserving major image structures, since pixels in small isolated region will be closely connected to surrounding majority pixels through the tree, while pixels inside large homogeneous region will be automatically dragged away from pixels outside the region. The tree filter can be separated into two other filters, both of which turn out to have fast algorithms. We also propose an efficient linear time MST extraction algorithm to further improve the whole filtering speed. The algorithms give tree filter a great advantage in low computational complexity (linear to number of image pixels) and fast speed: it can process a 1-megapixel 8-bit image at ~ 0.25 s on an Intel 3.4 GHz Core i7 CPU (including the construction of MST). The proposed tree filter is demonstrated on a variety of applications.",
"title": ""
},
{
"docid": "87aedf5f9fe7a397ed1a2b6303bdd9b1",
"text": "We propose a principled convolutional neural pyramid (CNP) framework for general low-level vision and image processing tasks. It is based on the essential finding that many applications require large receptive fields for structure understanding. But corresponding neural networks for regression either stack many layers or apply large kernels to achieve it, which is computationally very costly. Our pyramid structure can greatly enlarge the field while not sacrificing computation efficiency. Extra benefit includes adaptive network depth and progressive upsampling for quasirealtime testing on VGA-size input. Our method profits a broad set of applications, such as depth/RGB image restoration, completion, noise/artifact removal, edge refinement, image filtering, image enhancement and colorization.",
"title": ""
}
] |
[
{
"docid": "9415182c28d6c20768cfba247eb63bac",
"text": "The aim of this paper is to perform the main part of the restructuring processes with Business Process Reengineering (BPR) methodology. The first step was to choose the processes for analysis. Two business processes, which occur in most of the manufacturing companies, have been selected. Afterwards, current state of these processes was examined. The conclusions were used to propose own changes in accordance with assumptions of the BPR. This was possible through modelling and simulation of selected processes with iGrafx modeling software.",
"title": ""
},
{
"docid": "6bbf27088fb5185009c5555f8aceeb04",
"text": "BACKGROUND\nGood prosthetic suspension system secures the residual limb inside the prosthetic socket and enables easy donning and doffing. This study aimed to introduce, evaluate and compare a newly designed prosthetic suspension system (HOLO) with the current suspension systems (suction, pin/lock and magnetic systems).\n\n\nMETHODS\nAll the suspension systems were tested (tensile testing machine) in terms of the degree of the shear strength and the patient's comfort. Nine transtibial amputees participated in this study. The patients were asked to use four different suspension systems. Afterwards, each participant completed a questionnaire for each system to evaluate their comfort. Furthermore, the systems were compared in terms of the cost.\n\n\nRESULTS\nThe maximum tensile load that the new system could bear was 490 N (SD, 5.5) before the system failed. Pin/lock, magnetic and suction suspension systems could tolerate loads of 580 N (SD, 8.5), 350.9 (SD, 7) and 310 N (SD, 8.4), respectively. Our subjects were satisfied with the new hook and loop system, particularly in terms of easy donning and doffing. Furthermore, the new system is considerably cheaper (35 times) than the current locking systems in the market.\n\n\nCONCLUSIONS\nThe new suspension system could successfully retain the prosthesis on the residual limb as a good alternative for lower limb amputees. In addition, the new system addresses some problems of the existing systems and is more cost effective than its counterparts.",
"title": ""
},
{
"docid": "605b95e3c0448b5ce9755ce6289894d7",
"text": "Website success hinges on how credible the consumers consider the information on the website. Unless consumers believe the website's information is credible, they are not likely to be willing to act on the advice and will not develop loyalty to the website. This paper reports on how individual differences and initial website impressions affect perceptions of information credibility of an unfamiliar advice website. Results confirm that several individual difference variables and initial impression variables (perceived reputation, perceived website quality, and willingness to explore the website) play an important role in developing information credibility of an unfamiliar website, with first impressions and individual differences playing equivalent roles. The study also confirms the import of information credibility by demonstrating it positively influences perceived usefulness, perceived site risk, willingness to act on website advice, and perceived consumer loyalty toward the website.",
"title": ""
},
{
"docid": "682254fdd4f79a1c04ce5ded334c4d99",
"text": "Measuring voice quality for telephony is not a new problem. However, packet-switched, best-effort networks such as the Internet present significant new challenges for the delivery of real-time voice traffic. Unlike the circuit-switched public switched telephone network (PSTN), Internet protocol (IP) networks guarantee neither sufficient bandwidth for the voice traffic nor a constant, acceptable delay. Dropped packets and varying delays introduce distortions not found in traditional telephony. In addition, if a low bitrate codec is used in voice over IP (VoIP) to achieve a high compression ratio, the original waveform can be significantly distorted. These new potential sources of signal distortion present significant challenges for objectively measuring speech quality. Measurement techniques designed for the PSTN may not perform well in VoIP environments. Our objective is to find a speech quality metric that accurately predicts subjective human perception under the conditions present in VoIP systems. To do this, we compared three types of measures: perceptually weighted distortion measures such as enhanced modified Bark spectral distance (EMBSD) and measuring normalizing blocks (MNB), word-error rates of continuous speech recognizers, and the ITU E-model. We tested the performance of these measures under conditions typical of a VoIP system. We found that the E-model had the highest correlation with mean opinion scores (MOS). The E-model is well-suited for online monitoring because it does not require the original (undistorted) signal to compute its quality metric and because it is computationally simple.",
"title": ""
},
{
"docid": "6b5e9fa6f81e311dcd5e8154b64a111c",
"text": "Silicon Carbide (SiC) devices and modules have been developed with high blocking voltages for Medium Voltage power electronics applications. Silicon devices do not exhibit higher blocking voltage capability due to its relatively low band gap energy compared to SiC counterparts. For the first time, 12kV SiC IGBTs have been fabricated. These devices exhibit excellent switching and static characteristics. A Three-level Neutral Point Clamped Voltage Source Converter (3L-NPC VSC) has been simulated with newly developed SiC IGBTs. This 3L-NPC Converter is used as a 7.2kV grid interface for the solid state transformer and STATCOM operation. Also a comparative study is carried out with 3L-NPC VSC simulated with 10kV SiC MOSFET and 6.5kV Silicon IGBT device data.",
"title": ""
},
{
"docid": "e507c60b8eb437cbd6ca9692f1bf8727",
"text": "We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions in both estimating accuracies, and combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.",
"title": ""
},
{
"docid": "6af336fb0d0381b8fcb5f361b702de11",
"text": "We highlight an important frontier in algorithmic fairness: disparity in the quality of natural language processing algorithms when applied to language from authors of dierent social groups. For example, current systems sometimes analyze the language of females and minorities more poorly than they do of whites and males. We conduct an empirical analysis of racial disparity in language identication for tweets wrien in African-American English, and discuss implications of disparity in NLP.",
"title": ""
},
{
"docid": "39cad8dd6ad23ad9d4f98f3905ac29c2",
"text": "Estimating the disparity and normal direction of one pixel simultaneously, instead of only disparity, also known as 3D label methods, can achieve much higher subpixel accuracy in the stereo matching problem. However, it is extremely difficult to assign an appropriate 3D label to each pixel from the continuous label space $\\mathbb {R}^{3}$ while maintaining global consistency because of the infinite parameter space. In this paper, we propose a novel algorithm called PatchMatch-based superpixel cut to assign 3D labels of an image more accurately. In order to achieve robust and precise stereo matching between local windows, we develop a bilayer matching cost, where a bottom–up scheme is exploited to design the two layers. The bottom layer is employed to measure the similarity between small square patches locally by exploiting a pretrained convolutional neural network, and then, the top layer is developed to assemble the local matching costs in large irregular windows induced by the tangent planes of object surfaces. To optimize the spatial smoothness of local assignments, we propose a novel strategy to update 3D labels. In the procedure of optimization, both segmentation information and random refinement of PatchMatch are exploited to update candidate 3D label set for each pixel with high probability of achieving lower loss. Since pairwise energy of general candidate label sets violates the submodular property of graph cut, we propose a novel multilayer superpixel structure to group candidate label sets into candidate assignments, which thereby can be efficiently fused by $\\alpha $ -expansion graph cut. Extensive experiments demonstrate that our method can achieve higher subpixel accuracy in different data sets, and currently ranks first on the new challenging Middlebury 3.0 benchmark among all the existing methods.",
"title": ""
},
{
"docid": "420719690b6249322927153daedba87b",
"text": "• In-domain: 91% F1 on the dev set, 5 we reduced the learning rate from 10−4 to 10−5. We then stopped the training when F1 was not improved after 20 epochs. We did the same for ment-norm except that the learning rate was changed at 91.5% F1. Note that all the hyper-parameters except K and the turning point for early stopping were set to the values used by Ganea and Hofmann (2017). Systematic tuning is expensive though may have further ncreased the result of our models.",
"title": ""
},
{
"docid": "2159c89f9f0ef91f8ee99f34027eeed9",
"text": "Mobile Edge Computing (MEC) provides an efficient solution for IoT as it brings the cloud services close to the IoT device. This works well for IoT devices with limited mobility. IoT devices that are mobile by nature introduce a set of challenges to the MEC model. Challenges include security and efficiency aspects. Achieving mutual authentication of IoT device with the cloud edge provider is essential to protect from many security threats. Also, the efficiency of data transmission when connecting to a new cloud edge provider requires efficient data mobility among MEC providers or MEC centers. This research paper proposes a new framework that offers a secure and efficient MEC for IoT applications with mobile devices.",
"title": ""
},
{
"docid": "53d41fb8e188add204ba96669715b49a",
"text": "A nationwide survey was conducted to investigate the prevalence of video game addiction and problematic video game use and their association with physical and mental health. An initial sample comprising 2,500 individuals was randomly selected from the Norwegian National Registry. A total of 816 (34.0 percent) individuals completed and returned the questionnaire. The majority (56.3 percent) of respondents used video games on a regular basis. The prevalence of video game addiction was estimated to be 0.6 percent, with problematic use of video games reported by 4.1 percent of the sample. Gender (male) and age group (young) were strong predictors for problematic use of video games. A higher proportion of high frequency compared with low frequency players preferred massively multiplayer online role-playing games, although the majority of high frequency players preferred other game types. Problematic use of video games was associated with lower scores on life satisfaction and with elevated levels of anxiety and depression. Video game use was not associated with reported amount of physical exercise.",
"title": ""
},
{
"docid": "ceda2e7fb5881c6b2080f09c226d99ba",
"text": "Fraud detection has become an important issue to be explored. Fraud detection involves identifying fraud as quickly as possible once it has been perpetrated. Fraud is often a dynamic and challenging problem in Credit card lending business. Credit card fraud can be broadly classified into behavioral and application fraud, with behavioral fraud being the more prominent of the two. Supervised Modeling/Segmentation techniques are commonly used in fraud",
"title": ""
},
{
"docid": "2dc084d063ec1610917e09921e145c24",
"text": "This article describes an assistant interface to design and produce pop-up cards. A pop-up card is a piece of folded paper from which a three-dimensional structure pops up when opened. The authors propose an interface to assist the user in the design and production of a pop-up card. During the design process, the system examines whether the parts protrude from the card or whether the parts collide with one another when the card is closed. The user can concentrate on the design activity because the error occurrence and the error resolution are continuously fed to the user in real time. The authors demonstrate the features of their system by creating two pop-up card examples and perform an informal preliminary user study, showing that automatic protrusion and collision detection are effective in the design process. DOI: 10.4018/jcicg.2010070104 International Journal of Creative Interfaces and Computer Graphics, 1(2), 40-50, July-December 2010 41 Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. start over from the beginning. This process requires a lot of time, energy, and paper. Design and simulation in a computer help both nonprofessionals and professionals to design a pop-up card, eliminate the boring repetition, and save time. Glassner (1998, 2002) proposed methods for designing a pop-up card on a computer. He introduced several simple pop-up mechanisms and described how to use these mechanisms, how to simulate the position of vertices as an intersecting point of three spheres, how to check whether the structure sticks out beyond the cover or if a collision occurs during opening, and how to generate templates. His work is quite useful in designing simple pop-up cards. In this article, we build on Glassner’s pioneering work and introduce several innovative aspects. We add two new mechanisms based on the V-fold: the box and the cube. We present a detailed description of the interface for design, which Glassner did not describe in any detail. In addition, our system provides real-time error detection feedback during editing operations by examining whether parts protrude from the card when closed or whether they collide with one another during opening and closing. Finally, we report on an informal preliminary user study of our system involving four inexperienced users.",
"title": ""
},
{
"docid": "7eff2743d36414e3f008be72598bfd8e",
"text": "BACKGROUND\nPsychiatry has been consistently shown to be a profession characterised by 'high-burnout'; however, no nationwide surveys on this topic have been conducted in Japan.\n\n\nAIMS\nThe objective of this study was to estimate the prevalence of burnout and to ascertain the relationship between work environment satisfaction, work-life balance satisfaction and burnout among psychiatrists working in medical schools in Japan.\n\n\nMETHOD\nWe mailed anonymous questionnaires to all 80 psychiatry departments in medical schools throughout Japan. Work-life satisfaction, work-environment satisfaction and social support assessments, as well as the Maslach Burnout Inventory (MBI), were used.\n\n\nRESULTS\nSixty psychiatric departments (75.0%) responded, and 704 psychiatrists provided answers to the assessments and MBI. Half of the respondents (n = 311, 46.0%) experienced difficulty with their work-life balance. Based on the responses to the MBI, 21.0% of the respondents had a high level of emotional exhaustion, 12.0% had a high level of depersonalisation, and 72.0% had a low level of personal accomplishment. Receiving little support, experiencing difficulty with work-life balance, and having less work-environment satisfaction were significantly associated with higher emotional exhaustion. A higher number of nights worked per month was significantly associated with higher depersonalisation.\n\n\nCONCLUSIONS\nA low level of personal accomplishment was quite prevalent among Japanese psychiatrists compared with the results of previous studies. Poor work-life balance was related to burnout, and social support was noted to mitigate the impact of burnout.",
"title": ""
},
{
"docid": "8db6d52ee2778d24c6561b9158806e84",
"text": "Surface fuctionalization plays a crucial role in developing efficient nanoparticulate drug-delivery systems by improving their therapeutic efficacy and minimizing adverse effects. Here we propose a simple layer-by-layer self-assembly technique capable of constructing mesoporous silica nanoparticles (MSNs) into a pH-responsive drug delivery system with enhanced efficacy and biocompatibility. In this system, biocompatible polyelectrolyte multilayers of alginate/chitosan were assembled on MSN's surface to achieve pH-responsive nanocarriers. The functionalized MSNs exhibited improved blood compatibility over the bare MSNs in terms of low hemolytic and cytotoxic activity against human red blood cells. As a proof-of-concept, the anticancer drug doxorubicin (DOX) was loaded into nanocarriers to evaluate their use for the pH-responsive drug release both in vitro and in vivo. The DOX release from nanocarriers was pH dependent, and the release rate was much faster at lower pH than that of at higher pH. The in vitro evaluation on HeLa cells showed that the DOX-loaded nanocarriers provided a sustained intracellular DOX release and a prolonged DOX accumulation in the nucleus, thus resulting in a prolonged therapeutic efficacy. In addition, the pharmacokinetic and biodistribution studies in healthy rats showed that DOX-loaded nanocarriers had longer systemic circulation time and slower plasma elimination rate than free DOX. The histological results also revealed that the nanocarriers had good tissue compatibility. Thus, the biocompatible multilayers functionalized MSNs hold the substantial potential to be further developed as effective and safe drug-delivery carriers.",
"title": ""
},
{
"docid": "574aca6aa63dd17949fcce6a231cf2d3",
"text": "This paper presents an algorithm for segmenting the hair region in uncontrolled, real life conditions images. Our method is based on a simple statistical hair shape model representing the upper hair part. We detect this region by minimizing an energy which uses active shape and active contour. The upper hair region then allows us to learn the hair appearance parameters (color and texture) for the image considered. Finally, those parameters drive a pixel-wise segmentation technique that yields the desired (complete) hair region. We demonstrate the applicability of our method on several real images.",
"title": ""
},
{
"docid": "aba0d28e9f1a138e569aa2525781e84d",
"text": "A compact coplanar waveguide (CPW) monopole antenna is presented, comprising a fractal radiating patch in which a folded T-shaped element (FTSE) is embedded. The impedance match of the antenna is determined by the number of fractal unit cells, and the FTSE provides the necessary band-notch functionality. The filtering property can be tuned finely by controlling of length of FTSE. Inclusion of a pair of rectangular notches in the ground plane is shown to extend the antenna's impedance bandwidth for ultrawideband (UWB) performance. The antenna's parameters were investigated to fully understand their affect on the antenna. Salient parameters obtained from this analysis enabled the optimization of the antenna's overall characteristics. Experimental and simulation results demonstrate that the antenna exhibits the desired VSWR level and radiation patterns across the entire UWB frequency range. The measured results showed the antenna operates over a frequency band between 2.94–11.17 GHz with fractional bandwidth of 117% for <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm VSWR} \\leq 2$</tex></formula>, except at the notch band between 3.3–4.2 GHz. The antenna has dimensions of 14<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times\\,$</tex> </formula>18<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times \\,$</tex> </formula>1 mm<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{3}$</tex> </formula>.",
"title": ""
},
{
"docid": "1cbdf72cbb83763040abedb74748f6cd",
"text": "Cyber attack is one of the most rapidly growing threats to the world of cutting edge information technology. As new tools and techniques are emerging everyday to make information accessible over the Internet, so is their vulnerabilities. Cyber defense is inevitable in order to ensure reliable and secure communication and transmission of information. Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) are the major technologies dominating in the area of cyber defense. Tremendous efforts have already been put in intrusion detection research for decades but intrusion prevention research is still in its infancy. This paper provides a comprehensive review of the current research in both Intrusion Detection Systems and recently emerged Intrusion Prevention Systems. Limitations of current research works in both fields are also discussed in conclusion.",
"title": ""
},
{
"docid": "ce8de212a3ef98f8e8bd391e731108af",
"text": "Direct democracy is often proposed as a possible solution to the 21st-century problems of democracy. However, this suggestion clashes with the size and complexity of 21st-century societies, entailing an excessive cognitive burden on voters, who would have to submit informed opinions on an excessive number of issues. In this paper I argue for the development of “voting avatars”, autonomous agents debating and voting on behalf of each citizen. Theoretical research from artificial intelligence, and in particular multiagent systems and computational social choice, proposes 21st-century techniques for this purpose, from the compact representation of a voter’s preferences and values, to the development of voting procedures for autonomous agents use only.",
"title": ""
},
{
"docid": "5d4797cffc06cbde079bf4019dc196db",
"text": "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and/or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)—a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8% and 74.0% in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
201b38d0f037ad089a00a4d6fc98abe8
|
"Knowing value'' logic as a normal modal logic
|
[
{
"docid": "e1ced56a089d36438b0e6a20936df1c1",
"text": "All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher. To my mother Maxine, who gave me a love of learning; to Susan, who is as happy and amazed as I am that The Book is finally completed; to Josh, Tim, and Teddy, who are impressed that their father is an Author; and to my late father George, who would have been proud. To Gale, for putting up with this over the years; to David and Sara, for sometimes letting Daddy do his work; and to my mother Eva, to whom I can finally say \" It's done! \". To Pam, who listened for years to my promises that the book is 90% done; to Aaron, who, I hope, will read this book; to my parents, Zipporah and Pinkhas, who taught me to think; and to my grandparents, who perished in the Holocaust.. iaè knk aè kn yi-m ` , e ` xe ehiad \" Behold and see , if there be any sorrow like unto my sorrow. \" M. Y. V .",
"title": ""
}
] |
[
{
"docid": "0f96bdaca2e1e0faaa785c59d24e9d5a",
"text": "Recent studies indicate that Traditional Chinese medicine (TCM) can play an important role in the whole course of cancer treatment such as recovery stages of post-operative, radiotherapy or chemotherapy stages instead of only terminal stage of cancer. In this review, we have summarized current evidence for using TCM as adjuvant cancer treatment in different stages of cancer lesions. Some TCMs (e.g., TJ-41, Liu-jun-zi-tang, PHY906, Coumarin, and Aescine) are capable of improving the post-operative symptoms such as fatigue, pain, appetite, diarrhea, nausea, vomiting, and lymphedema. Some TCMs (e.g., Ginseng, Huang-Qi, BanZhiLian, TJ-48, Huachansu injection, Shenqi fuzheng injection, and Kanglaite injection) in combination with chemo- or radio-therapy are capable of enhancing the efficacy of and diminishing the side effects and complications caused by chemo- and radiotherapy. Taken together, they have great advantages in terms of suppressing tumor progression, relieving surgery complications, increasing the sensitivity of chemo- and radio- therapeutics, improving an organism's immune system function, and lessening the damage caused by surgery, chemo- or radio-therapeutics. They have significant effects on relieving breast cancer-related lymphedema, reducing cancer-related fatigue and pain, improving radiation pneumonitis and gastrointestinal side effects, protecting liver function, and even ameliorating bone marrow suppression. This review of those medicines should contribute to an understanding of Chinese herbal medicines as an adjunctive therapy in the whole course of cancer treatment instead of only terminal stage of cancer, by providing useful information for development of more effective anti-cancer drugs and making more patients \"survival with cancer\" for a long time.",
"title": ""
},
{
"docid": "4c5de57e2b646ed53c98576731bdb02e",
"text": "FrameNet is a lexico-semantic dataset that embodies the theory of frame semantics. Like other semantic databases, FrameNet is incomplete. We augment it via the paraphrase database, PPDB, and gain a threefold increase in coverage at 65% precision.",
"title": ""
},
{
"docid": "12f2fe4f71f399dd3d40f67bc94b5607",
"text": "This paper presents a novel 3D shape retrieval method, which uses Bag-of-Features and an efficient multi-view shape matching scheme. In our approach, a properly normalized object is first described by a set of depth-buffer views captured on the surrounding vertices of a given unit geodesic sphere. We then represent each view as a word histogram generated by the vector quantization of the view’s salient local features. The dissimilarity between two 3D models is measured by the minimum distance of their all (24) possible matching pairs. This paper also investigates several critical issues including the influence of the number of views, codebook, training data, and distance function. Experiments on four commonly-used benchmarks demonstrate that: 1) Our approach obtains superior performance in searching for rigid models. 2) The local feature and global feature based methods are somehow complementary. Moreover, a linear combination of them significantly outperforms the state-of-the-art in terms of retrieval accuracy.",
"title": ""
},
{
"docid": "d5f77e99ebda2f1419b8dbb56d93c41f",
"text": "We developed a four-arm four-crawler advanced disaster response robot called OCTOPUS. Disaster response robots are expected to be capable of both mobility, e.g., entering narrow spaces over very rough unstable ground, and workability, e.g., conducting complex debris-demolition work. However, conventional disaster response robots are specialized in either mobility or workability. Moreover, strategies to independently enhance the capability of crawlers for mobility and arms for workability will increase the robot size and weight. To balance environmental applicability with the mobility and workability, OCTOPUS is equipped with a mutual complementary strategy between its arms and crawlers. The four arms conduct complex tasks while ensuring stabilization when climbing steps. The four crawlers translate rough terrain while avoiding toppling over when conducting demolition work. OCTOPUS is hydraulic driven and teleoperated by two operators. To evaluate the performance of OCTOPUS, we conducted preliminary experiments involving climbing high steps and removing attached objects by using the four arms. The results showed that OCTOPUS completed the two tasks by adequately coordinating its four arms and four crawlers and improvement in operability needs.",
"title": ""
},
{
"docid": "07e9b961a1196665538d89b60a30a7d1",
"text": "The problem of anomaly detection in time series has received a lot of attention in the past two decades. However, existing techniques cannot locate where the anomalies are within anomalous time series, or they require users to provide the length of potential anomalies. To address these limitations, we propose a self-learning online anomaly detection algorithm that automatically identifies anomalous time series, as well as the exact locations where the anomalies occur in the detected time series. In addition, for multivariate time series, it is difficult to detect anomalies due to the following challenges. First, anomalies may occur in only a subset of dimensions (variables). Second, the locations and lengths of anomalous subsequences may be different in different dimensions. Third, some anomalies may look normal in each individual dimension but different with combinations of dimensions. To mitigate these problems, we introduce a multivariate anomaly detection algorithm which detects anomalies and identifies the dimensions and locations of the anomalous subsequences. We evaluate our approaches on several real-world datasets, including two CPU manufacturing data from Intel. We demonstrate that our approach can successfully detect the correct anomalies without requiring any prior knowledge about the data.",
"title": ""
},
{
"docid": "0801ef431c6e4dab6158029262a3bf82",
"text": "A hallmark of human intelligence is the ability to ask rich, creative, and revealing questions. Here we introduce a cognitive model capable of constructing humanlike questions. Our approach treats questions as formal programs that, when executed on the state of the world, output an answer. The model specifies a probability distribution over a complex, compositional space of programs, favoring concise programs that help the agent learn in the current context. We evaluate our approach by modeling the types of open-ended questions generated by humans who were attempting to learn about an ambiguous situation in a game. We find that our model predicts what questions people will ask, and can creatively produce novel questions that were not present in the training set. In addition, we compare a number of model variants, finding that both question informativeness and complexity are important for producing human-like questions.",
"title": ""
},
{
"docid": "cbc1724bf52d033f372fb7e59de2d670",
"text": "The AI2 Reasoning Challenge (ARC), a new benchmark dataset for question answering (QA) has been recently released. ARC only contains natural science questions authored for human exams, which are hard to answer and require advanced logic reasoning. On the ARC Challenge Set, existing state-of-the-art QA systems fail to significantly outperform random baseline, reflecting the difficult nature of this task. In this paper, we propose a novel framework for answering science exam questions, which mimics human solving process in an open-book exam. To address the reasoning challenge, we construct contextual knowledge graphs respectively for the question itself and supporting sentences. Our model learns to reason with neural embeddings of both knowledge graphs. Experiments on the ARC Challenge Set show that our model outperforms the previous state-of-the-art QA systems.",
"title": ""
},
{
"docid": "2a3809fef25989552f7c3f3ad7ade3f0",
"text": "Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalizability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover three main study designs: cohort, case-control and cross-sectional studies. We convened a two-day workshop, in September 2004, with methodologists, researchers and journal editors to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE Statement) that relate to the title, abstract, introduction, methods, results and discussion sections of articles. Eighteen items are common to all three study designs and four are specific for cohort, case-control, or cross-sectional studies. A detailed Explanation and Elaboration document is published separately and is freely available on the web sites of PLoS Medicine, Annals of Internal Medicine and Epidemiology. We hope that the STROBE Statement will contribute to improving the quality of reporting of observational studies.",
"title": ""
},
{
"docid": "7380ec7e277c036d7fdbe7c5ea58e6be",
"text": "Business process engines and workflow engines (but also web applications and emails) provide information about human tasks to people. Although many of these systems support some kind of human task management, no extensive analysis of involved components has been undertaken. This paper discusses some of these systems exemplarily and defines a first human task reference model to stimulate debates on ways how to manage human tasks crossing system and organization boundaries.",
"title": ""
},
{
"docid": "b15c689ff3dd7b2e7e2149e73b5451ac",
"text": "The Web provides a fertile ground for word-of-mouth communication and more and more consumers write about and share product-related experiences online. Given the experiential nature of tourism, such first-hand knowledge communicated by other travelers is especially useful for travel decision making. However, very little is known about what motivates consumers to write online travel reviews. A Web-based survey using an online consumer panel was conducted to investigate consumers’ motivations to write online travel reviews. Measurement scales to gauge the motivations to contribute online travel reviews were developed and tested. The results indicate that online travel review writers are mostly motivated by helping a travel service provider, concerns for other consumers, and needs for enjoyment/positive self-enhancement. Venting negative feelings through postings is clearly not seen as an important motive. Motivational differences were found for gender and income level. Implications of the findings for online travel communities and tourism marketers are discussed.",
"title": ""
},
{
"docid": "77b78ec70f390289424cade3850fc098",
"text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.",
"title": ""
},
{
"docid": "888095e97b3e18f1394330a3e9e7469e",
"text": "Attempts to secure the enterprise network, even when using strong AAA (authentication, authorization and accounting) schemes, meet the user box spoofing and security middle boxes (firewalls and other filtering tools) bypassing problems. Seeking to strengthen the network security level, the names (users, addresses) and user machines must be bound tightly to the unambiguously defined network appliances and their ports. Using traditional network' architecture these solutions are difficult to realize. The SDN framework allows solving the aforementioned problems more precise and securely. One of the possible ways to gradually implement the controllability of traffic flow in standard hierarchical networks by applying OpenFlow driven SDN architecture and commodity access switches, is described in this paper. The performance impact of the solution is also assessed.",
"title": ""
},
{
"docid": "43cd3b5ac6e2e2f240f4feb44be65b99",
"text": "Executive Overview Toyota’s Production System (TPS) is based on “lean” principles including a focus on the customer, continual improvement and quality through waste reduction, and tightly integrated upstream and downstream processes as part of a lean value chain. Most manufacturing companies have adopted some type of “lean initiative,” and the lean movement recently has gone beyond the shop floor to white-collar offices and is even spreading to service industries. Unfortunately, most of these efforts represent limited, piecemeal approaches—quick fixes to reduce lead time and costs and to increase quality—that almost never create a true learning culture. We outline and illustrate the management principles of TPS that can be applied beyond manufacturing to any technical or service process. It is a true systems approach that effectively integrates people, processes, and technology—one that must be adopted as a continual, comprehensive, and coordinated effort for change and learning across the organization.",
"title": ""
},
{
"docid": "621ae81c61bbeb4804045b3a038980d2",
"text": "A multi-functional in-memory inference processor integrated circuit (IC) in a 65-nm CMOS process is presented. The prototype employs a deep in-memory architecture (DIMA), which enhances both energy efficiency and throughput over conventional digital architectures via simultaneous access of multiple rows of a standard 6T bitcell array (BCA) per precharge, and embedding column pitch-matched low-swing analog processing at the BCA periphery. In doing so, DIMA exploits the synergy between the dataflow of machine learning (ML) algorithms and the SRAM architecture to reduce the dominant energy cost due to data movement. The prototype IC incorporates a 16-kB SRAM array and supports four commonly used ML algorithms—the support vector machine, template matching, <inline-formula> <tex-math notation=\"LaTeX\">$k$ </tex-math></inline-formula>-nearest neighbor, and the matched filter. Silicon measured results demonstrate simultaneous gains (dot product mode) in energy efficiency of 10<inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> and in throughput of 5.3<inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> leading to a 53<inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> reduction in the energy-delay product with negligible (<inline-formula> <tex-math notation=\"LaTeX\">$\\le $ </tex-math></inline-formula>1%) degradation in the decision-making accuracy, compared with the conventional 8-b fixed-point single-function digital implementations.",
"title": ""
},
{
"docid": "33e5718ddad39600605530078d3d152e",
"text": "This work presents the modeling and control of a tilt-rotor UAV with tail controlled surfaces for path tracking with improved forward flight performance. A nonlinear dynamic model is obtained through Euler-Lagrange formulation and linearized around a reference trajectory in order to obtain a linear parameter-varying model. The forward velocity is treated as an uncertain parameter, and the linearized system is represented as a set of polytopes with nonempty intersection regarding the forward velocity. Feedback gains are computed for each of the vertices of the polytopes using a discrete mixed control approach with pole placement constraints strategy. The resultant feedback gain, which is able to control the system inside a given polytope, is obtained using an adaptive law through an optimal convex combination of the vertices' gains. Finally, an adaptive mixing scheme is used to smoothly schedule the feedback gains between the polytopes.",
"title": ""
},
{
"docid": "8c35fd3040e4db2d09e3d6dc0e9ae130",
"text": "Internet of Things is referred to a combination of physical devices having sensors and connection capabilities enabling them to interact with each other (machine to machine) and can be controlled remotely via cloud engine. Success of an IoT device depends on the ability of systems and devices to securely sample, collect, and analyze data, and then transmit over link, protocol, or media selections based on stated requirements, all without human intervention. Among the requirements of the IoT, connectivity is paramount. It's hard to imagine that a single communication technology can address all the use cases possible in home, industry and smart cities. Along with the existing low power technologies like Zigbee, Bluetooth and 6LoWPAN, 802.11 WiFi standards are also making its way into the market with its own advantages in high range and better speed. Along with IEEE, WiFi Alliance has a new standard for the proximity applications. Neighbor Awareness Network (NAN) popularly known as WiFi Aware is that standard which enables low power discovery over WiFi and can light up many proximity based used cases. In this paper we discuss how NAN can influence the emerging IoT market as a connectivity solution for proximity assessment and contextual notifications with its benefits in some of the scenarios. When we consider WiFi the infrastructure already exists in terms of access points all around in public and smart phones or tablets come with WiFi as a default feature hence enabling NAN can be easy and if we can pair them with IoT, many innovative use cases can evolve.",
"title": ""
},
{
"docid": "2f245ca6c15b5b7ac97191baa6a55aff",
"text": "How objects are assigned to components in a distributed system can have a significant impact on performance and resource usage. Social Hash is a framework for producing, serving, and maintaining assignments of objects to components so as to optimize the operations of large social networks, such as Facebook’s Social Graph. The framework uses a two-level scheme to decouple compute-intensive optimization from relatively low-overhead dynamic adaptation. The optimization at the first level occurs on a slow timescale, and in our applications is based on graph partitioning in order to leverage the structure of the social network. The dynamic adaptation at the second level takes place frequently to adapt to changes in access patterns and infrastructure, with the goal of balancing component loads. We demonstrate the effectiveness of Social Hash with two real applications. The first assigns HTTP requests to individual compute clusters with the goal of minimizing the (memory-based) cache miss rate; Social Hash decreased the cache miss rate of production workloads by 25%. The second application assigns data records to storage subsystems with the goal of minimizing the number of storage subsystems that need to be accessed on multiget fetch requests; Social Hash cut the average response time in half on production workloads for one of the storage systems at Facebook.",
"title": ""
},
{
"docid": "87d885bf255c43bff0efdee8f89f0e2b",
"text": "Enabling individuals who are living with reduced mobility of the hand to utilize portable exoskeletons at home has the potential to deliver rehabilitation therapies with a greater intensity and relevance to activities of daily living. Various hand exoskeleton designs have been explored in the past, however, devices have remained nonportable and cumbersome for the intended users. Here we investigate a remote actuation system for wearable hand exoskeletons, which moves weight from the weakened limb to the shoulders, reducing the burden on the user and improving portability. A push-pull Bowden cable was used to transmit actuator forces from a backpack to the hand with strict attention paid to total system weight, size, and the needs of the target population. We present the design and integration of this system into a previously presented hand exoskeleton, as well as its characterization. Integration of remote actuation reduced the exoskeleton weight by 56% to 113g without adverse effects to functionality. Total actuation system weight was kept to 754g. The loss of positional accuracy inherent with Bowden cable transmissions was compensated for through closed loop positional control of the transmission output. The achieved weight reduction makes hand exoskeletons more suitable to the intended user, which will permit the study of their effectiveness in providing long duration, high intensity, and targeted rehabilitation as well as functional assistance.",
"title": ""
},
{
"docid": "b7addd3896cab23e2044294db54ddfa9",
"text": "Resumen. Debido a la necesidad de proporcionar aplicaciones Web con interfaces de usuario cada vez más usables y con mayor funcionalidad, en los últimos años han sugido un nuevo tipo de aplicaciones denominadas RIAs (Rich Internet Applications) que ofrecen interfaces tan interactivos como las aplicaciones de escritorio. Sin embargo, este tipo de aplicaciones son complejas y su desarrollo requiere de un gran esfuerzo de diseño e implementación. Con el objetivo de reducir trabajo y reducir los errores de desarrollo se ha definido una aproximación dirigida por modelos denominada OOH4RIA. Este artículo presenta la herramienta que da soporte a la aproximación denominada OOH4RIA Tool permitiendo representar los modelos y transformaciones para acelerar la obtención de una RIA implementada con el framework GWT.",
"title": ""
},
{
"docid": "ed87fafb6f8e9d68b5bd44c201f1d54b",
"text": "According to the position paper from the European Academy for Allergy and Clinical Immunology (EAACI) “food allergy” summarizes immune-mediated, non-toxic adverse reactions to foods (Figure 1)(Bruijnzeel-Koomen et al., 1995). The most common form of food allergy is mediated by immunoglobulin (Ig)E antibodies and reflects an immediatetype (\"Type 1 hypersensitivity\") reaction, i.e. acute onset of symptoms after ingestion or inhalation of foods. IgE-mediated food allergy is further classified into primary (class 1) and secondary (class 2) food allergy. This distinction is based on clinical appearance, the predominantly affected group of patients (children or adults), disease-eliciting food allergens and the underlying immune mechanisms. Primary (class 1) or “true” food allergy starts in early life and often represents the first manifestation of the atopic syndrome. The most common foods involved are cow ́s milk, hen ́s egg, legumes (peanuts and soybean), fish, shellfish and wheat. Of note, allergens contained in these foods do not only elicit allergic reactions in the gastrointestinal tract but often cause or influence urticaria, atopic dermatitis as well as bronchial obstruction. With a few exceptions (peanut and fish) most children outgrow class 1 food allergy within the first 3 to 6 years of life. Secondary (class 2) food allergy describes allergic reactions to foods in mainly adolescent and adult individuals with established respiratory allergy, for example to pollen of birch, mugwort or ragweed. This form of food allergy is believed to be a consequence of immunological cross-reactivity between respiratory allergens and structurally related proteins in the respective foods. In principle, the recognition of homologous proteins in foods by IgE-antibodies specific for respiratory allergens can induce clinical symptoms. Foods inducing allergic reactions in the different groups of patients vary according to the manifested respiratory allergy. Different syndromes have been defined, such as the birchfruit-hazelnut-vegetable syndrome, the mugwort-celery-spice syndrome or the latex-shrimp syndrome.",
"title": ""
}
] |
scidocsrr
|
023ceb5686383baea82ffef8098e52b0
|
Building Privacy-Preserving Cryptographic Credentials from Federated Online Identities
|
[
{
"docid": "f3ec87229acd0ec98c044ad42fd9fec1",
"text": "Increasingly, Internet users trade privacy for service. Facebook, Google, and others mine personal information to target advertising. This paper presents a preliminary and partial answer to the general question \"Can users retain their privacy while still benefiting from these web services?\". We propose NOYB, a novel approach that provides privacy while preserving some of the functionality provided by online services. We apply our approach to the Facebook online social networking website. Through a proof-of-concept implementation we demonstrate that NOYB is practical and incrementally deployable, requires no changes to or cooperation from an existing online service, and indeed can be non-trivial for the online service to detect.",
"title": ""
},
{
"docid": "4696a275ad4534f23d5bc884bc29fc2a",
"text": "With the advancement in user-centric and URI-based identity systems over the past two years, it has become clear that a single specification will not be the solution to all problems. Rather, like the other layers of the Internet, developing small, interoperable specifications that are independently implementable and useful will ultimately lead to market adoption of these technologies. This is the intent of the OpenID framework. OpenID Authentication 1.0 began as a lightweight HTTP-based URL authentication protocol. OpenID Authentication 2.0 it is now turning into an open community-driven platform that allows and encourages innovation. It supports both URLs and XRIs as user identifiers, uses Yadis XRDS documents for identity service discovery, adds stronger security, and supports both public and private identifiers. With continuing convergence under this broad umbrella, the OpenID framework is emerging as a viable solution for Internet-scale user-centric identity infrastructure.",
"title": ""
}
] |
[
{
"docid": "0f3cad05c9c267f11c4cebd634a12c59",
"text": "The recent, exponential rise in adoption of the most disparate Internet of Things (IoT) devices and technologies has reached also Agriculture and Food (Agri-Food) supply chains, drumming up substantial research and innovation interest towards developing reliable, auditable and transparent traceability systems. Current IoT-based traceability and provenance systems for Agri-Food supply chains are built on top of centralized infrastructures and this leaves room for unsolved issues and major concerns, including data integrity, tampering and single points of failure. Blockchains, the distributed ledger technology underpinning cryptocurrencies such as Bitcoin, represent a new and innovative technological approach to realizing decentralized trustless systems. Indeed, the inherent properties of this digital technology provide fault-tolerance, immutability, transparency and full traceability of the stored transaction records, as well as coherent digital representations of physical assets and autonomous transaction executions. This paper presents AgriBlockIoT, a fully decentralized, blockchain-based traceability solution for Agri-Food supply chain management, able to seamless integrate IoT devices producing and consuming digital data along the chain. To effectively assess AgriBlockIoT, first, we defined a classical use-case within the given vertical domain, namely from-farm-to-fork. Then, we developed and deployed such use-case, achieving traceability using two different blockchain implementations, namely Ethereum and Hyperledger Sawtooth. Finally, we evaluated and compared the performance of both the deployments, in terms of latency, CPU, and network usage, also highlighting their main pros and cons.",
"title": ""
},
{
"docid": "b7aea71af6c926344286fbfa214c4718",
"text": "Semantic segmentation is a task that covers most of the perception needs of intelligent vehicles in an unified way. ConvNets excel at this task, as they can be trained end-to-end to accurately classify multiple object categories in an image at the pixel level. However, current approaches normally involve complex architectures that are expensive in terms of computational resources and are not feasible for ITS applications. In this paper, we propose a deep architecture that is able to run in real-time while providing accurate semantic segmentation. The core of our ConvNet is a novel layer that uses residual connections and factorized convolutions in order to remain highly efficient while still retaining remarkable performance. Our network is able to run at 83 FPS in a single Titan X, and at more than 7 FPS in a Jetson TX1 (embedded GPU). A comprehensive set of experiments demonstrates that our system, trained from scratch on the challenging Cityscapes dataset, achieves a classification performance that is among the state of the art, while being orders of magnitude faster to compute than other architectures that achieve top precision. This makes our model an ideal approach for scene understanding in intelligent vehicles applications.",
"title": ""
},
{
"docid": "c8f47f5c737fa3e8ff14b8a5af575e2f",
"text": "Any information system for precision agriculture is significantly dependent on its data model, i.e. how data is stored, managed and accessed. Most of these information systems deal with data in the proprietary structure that fits the most to the purposes of each organisation and/or individual. The data is commonly in a proprietary structure even if there is a standardized exchangeable format used by an information system. The different groups of stakeholders involved in the agricultural activities have to manage many different and heterogeneous sources of information that need to be combined in order to make economically and environmentally sound decisions, which include (among others) the definition of policies (subsidies, standardisation and regulation, national strategies for rural development, climate change), development of sustainable agriculture, field records and profit analysis, crop management, pest detection, etc. If we would like to integrate data from several sources, we need to establish a unified data model that is capable to include the diversity of the underlying data models. This paper deals with the issues related to the development of the open data model for the precision agriculture information systems. Any information system gains added value when it is based on the standards used within the particular domain. As such, the proposed open data model for precision agriculture addresses the international standardization approaches, European legislation as well as the needs of farmers. The open data model has been designed to be compliant with the requirements originating from the European directive 2007/2/EC (INSPIRE), with the ISO standards like ISO 19156:2011 – Geographic Information – Observations and measurements, principles used within the Land Parcel Identification Systems (LPIS) etc. At the same time, the open data model is customizable and scalable. Its core is defined in a way enabling the import of LPISas well as INSPIREbased data. Moreover, it supports the management of data originating from sensor measurements. From the thematic point of view, the open data model aims primarily at the content related to the precision agriculture. It means, that it incorporates information on crop species, soils properties, nutrient balance, applied fertilizers, pesticides including forms of their applications. On the other hand, information on atmospheric and meteorological conditions, transport networks for the fleet management and/or sensor measurements are supported as well. The open data model for precision agriculture is being used within the European project Farm-Oriented Open Data in Europe (FOODIE). As such, the open data model for precision agriculture may also be understood as one of the founding stones for the Foodie platform hub. The Foodie platform hub aims at enabling in an easy manner the (re)use of open data in the agricultural domain in order to create new applications that provide added value to different stakeholder groups.",
"title": ""
},
{
"docid": "c514cb2acdf18fc4d64dc0df52d09d51",
"text": "Android introduced the dynamic code loading (DCL) mechanism to allow for code reuse, to achieve extensibility, to enable updating functionalities, or to boost application start-up performance. In spite of its wide adoption by developers, previous research has shown that the secure implementation of DCL-based functionality is challenging, often leading to remote code injection vulnerabilities. Unfortunately, previous attempts to address this problem by both the academic and Android developers communities are affected by either practicality or completeness issues, and, in some cases, are affected by severe vulnerabilities.\n In this paper, we propose, design, implement, and test Grab 'n Run, a novel code verification protocol and a series of supporting libraries, APIs, and tools, that address the problem by abstracting away from the developer many of the challenging implementation details. Grab 'n Run is designed to be practical: Among its tools, it provides a drop-in library, which requires no modifications to the Android framework or the underlying Dalvik/ART runtime, is very similar to the native API, and most code can be automatically rewritten to use it. Grab 'n Run also contains an application-rewriting tool, which allows to easily port legacy or third-party applications to use the secure APIs developed in this work.\n We evaluate the Grab 'n Run library with a user study, obtaining very encouraging results in vulnerability reduction, ease of use, and speed of development. We also show that the performance overhead introduced by our library is negligible. For the benefit of the security of the Android ecosystem, we released Grab 'n Run as open source.",
"title": ""
},
{
"docid": "7abdd1fc5f2a8c5b7b19a6a30eadad0a",
"text": "This Paper investigate action recognition by using Extreme Gradient Boosting (XGBoost). XGBoost is a supervised classification technique using an ensemble of decision trees. In this study, we also compare the performance of Xboost using another machine learning techniques Support Vector Machine (SVM) and Naive Bayes (NB). The experimental study on the human action dataset shows that XGBoost better as compared to SVM and NB in classification accuracy. Although takes more computational time the XGBoost performs good classification on action recognition.",
"title": ""
},
{
"docid": "98218545bf3474b46857d828e1b86004",
"text": "Blockchain-based smart contracts are considered a promising technology for handling financial agreements securely. In order to realize this vision, we need a formal language to unambiguously describe contract clauses. We introduce Findel – a purely declarative financial domain-specific language (DSL) well suited for implementation in blockchain networks. We implement an Ethereum smart contract that acts as a marketplace for Findel contracts and measure the cost of its operation. We analyze challenges in modeling financial agreements in decentralized networks and outline directions for future work.",
"title": ""
},
{
"docid": "ad5669ec1aea8df1b1c99707228a427d",
"text": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-theart models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7% (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5 absolute improvement), outperforming human performance by 2.0.",
"title": ""
},
{
"docid": "71b0dbd905c2a9f4111dfc097bfa6c67",
"text": "In this paper, the authors undertake a study of cyber warfare reviewing theories, law, policies, actual incidents and the dilemma of anonymity. Starting with the United Kingdom perspective on cyber warfare, the authors then consider United States' views including the perspective of its military on the law of war and its general inapplicability to cyber conflict. Consideration is then given to the work of the United Nations' group of cyber security specialists and diplomats who as of July 2010 have agreed upon a set of recommendations to the United Nations Secretary General for negotiations on an international computer security treaty. An examination of the use of a nation's cybercrime law to prosecute violations that occur over the Internet indicates the inherent limits caused by the jurisdictional limits of domestic law to address cross-border cybercrime scenarios. Actual incidents from Estonia (2007), Georgia (2008), Republic of Korea (2009), Japan (2010), ongoing attacks on the United States as well as other incidents and reports on ongoing attacks are considered as well. Despite the increasing sophistication of such cyber attacks, it is evident that these attacks were met with a limited use of law and policy to combat them that can be only be characterised as a response posture defined by restraint. Recommendations are then examined for overcoming the attribution problem. The paper then considers when do cyber attacks rise to the level of an act of war by reference to the work of scholars such as Schmitt and Wingfield. Further evaluation of the special impact that non-state actors may have and some theories on how to deal with the problem of asymmetric players are considered. Discussion and possible solutions are offered. A conclusion is offered drawing some guidance from the writings of the Chinese philosopher Sun Tzu. Finally, an appendix providing a technical overview of the problem of attribution and the dilemma of anonymity in cyberspace is provided. 1. The United Kingdom Perspective \"If I went and bombed a power station in France, that would be an act of war. If I went on to the net and took out a power station, is that an act of war? One",
"title": ""
},
{
"docid": "c74a62bb92cb24faf0906c69644c7a53",
"text": "For many years psychoanalytic and psychodynamic therapies have been considered to lack a credible evidence-base and have consistently failed to appear in lists of ‘empirically supported treatments’. This study systematically reviews the research evaluating the efficacy and effectiveness of psychodynamic psychotherapy for children and young people. The researchers identified 34 separate studies that met criteria for inclusion, including nine randomised controlled trials. While many of the studies reported are limited by sample size and lack of control groups, the review indicates that there is increasing evidence to suggest the effectiveness of psychoanalytic psychotherapy for children and adolescents. The article aims to provide as complete a picture as possible of the existing evidence base, thereby enabling more refined questions to be asked regarding the nature of the current evidence and gaps requiring further exploration.",
"title": ""
},
{
"docid": "757eadf19fee04c91e51ac8e6d3c6de1",
"text": "OBJECTIVES\nInfantile hemangiomas often are inapparent at birth and have a period of rapid growth during early infancy followed by gradual involution. More precise information on growth could help predict short-term outcomes and make decisions about when referral or intervention, if needed, should be initiated. The objective of this study was to describe growth characteristics of infantile hemangioma and compare growth with infantile hemangioma referral patterns.\n\n\nMETHODS\nA prospective cohort study involving 7 tertiary care pediatric dermatology practices was conducted. Growth data were available for a subset of 526 infantile hemangiomas in 433 patients from a cohort study of 1096 children. Inclusion criteria were age younger than 18 months at time of enrollment and presence of at least 1 infantile hemangioma. Growth stage and rate were compared with clinical characteristics and timing of referrals.\n\n\nRESULTS\nEighty percent of hemangioma size was reached during the early proliferative stage at a mean age of 3 months. Differences in growth between hemangioma subtypes included that deep hemangiomas tend to grow later and longer than superficial hemangiomas and that segmental hemangiomas tended to exhibit more continued growth after 3 months of age. The mean age of first visit was 5 months. Factors that predicted need for follow-up included ongoing proliferation, larger size, deep component, and segmental and indeterminate morphologic subtypes.\n\n\nCONCLUSIONS\nMost infantile hemangioma growth occurs before 5 months, yet 5 months was also the mean age at first visit to a specialist. Recognition of growth characteristics and factors that predict the need for follow-up could help aid in clinical decision-making. The first few weeks to months of life are a critical time in hemangioma growth. Infants with hemangiomas need close observation during this period, and those who need specialty care should be referred and seen as early as possible within this critical growth period.",
"title": ""
},
{
"docid": "7178d9b4fdb62fe7689e25377d143912",
"text": "A variety of applications in the use of optical and optoelectronic devices as well as their integrated circuits are increasingly penetrating into our daily routine. One of the most demanding fields is the sensing engineering. Meanwhile, the demand for more mechanical flexibility of systems and lower manufacturing budgets is continuously growing. Since the optical transparency is desired for coupling and transmission of optical signals, cost-effective transparent polymeric films are the promising candidates as carrier substrates. In this work, we aim to establish short-distance planar optical interconnects consisting of light sources/detectors and waveguides on the flexible transparent polymeric films for optical sensing functions. To achieve a miniaturized structure and ensure the flexibility of interconnects, bare chips of optoelectronic light sources/detectors are employed. Here, packaging of these chips carries the burden of all responsibilities in terms of ensuring the mechanical strength, electrical connection, thermal stability as well as the optical performance. It leads to the concept of chip-on-flex (CoF) packaging of optoelectronic devices. We present CoF packaging of a bare edge-emitting laser diode using the previously developed novel optodic bonding. While operating CoF packages, the optical performance of an active diode is strongly impaired by inefficient heat dissipation due to the extremely low thermal conductivity of employed polymeric films. Addressing this challenge, different concepts of thermomanagement are implemented. We elaborate the characterization results that evaluate the performance of CoF packages in terms of mechanical, electrical, thermo-optical, and opto-electronic properties. Two prototypes of planar optical interconnects are presented, with multi- and single-mode polymeric waveguide.",
"title": ""
},
{
"docid": "62cd9572ad22bb352f486a1e4988ff19",
"text": "We present \"appearance-from-motion\", a novel method for recovering the spatially varying isotropic surface reflectance from a video of a rotating subject, with known geometry, under unknown natural illumination. We formulate the appearance recovery as an iterative process that alternates between estimating surface reflectance and estimating incident lighting. We characterize the surface reflectance by a data-driven microfacet model, and recover the microfacet normal distribution for each surface point separately from temporal changes in the observed radiance. To regularize the recovery of the incident lighting, we rely on the observation that natural lighting is sparse in the gradient domain. Furthermore, we exploit the sparsity of strong edges in the incident lighting to improve the robustness of the surface reflectance estimation. We demonstrate robust recovery of spatially varying isotropic reflectance from captured video as well as an internet video sequence for a wide variety of materials and natural lighting conditions.",
"title": ""
},
{
"docid": "3eee111e4521528031019f83786efab7",
"text": "Social media platforms such as Twitter and Facebook enable the creation of virtual customer environments (VCEs) where online communities of interest form around specific firms, brands, or products. While these platforms can be used as another means to deliver familiar e-commerce applications, when firms fail to fully engage their customers, they also fail to fully exploit the capabilities of social media platforms. To gain business value, organizations need to incorporate community building as part of the implementation of social media.",
"title": ""
},
{
"docid": "d4ee96388ca88c0a5d2a364f826dea91",
"text": "Cloud computing, as an emerging computing paradigm, enables users to remotely store their data into a cloud so as to enjoy scalable services on-demand. Especially for small and medium-sized enterprises with limited budgets, they can achieve cost savings and productivity enhancements by using cloud-based services to manage projects, to make collaborations, and the like. However, allowing cloud service providers (CSPs), which are not in the same trusted domains as enterprise users, to take care of confidential data, may raise potential security and privacy issues. To keep the sensitive user data confidential against untrusted CSPs, a natural way is to apply cryptographic approaches, by disclosing decryption keys only to authorized users. However, when enterprise users outsource confidential data for sharing on cloud servers, the adopted encryption system should not only support fine-grained access control, but also provide high performance, full delegation, and scalability, so as to best serve the needs of accessing data anytime and anywhere, delegating within enterprises, and achieving a dynamic set of users. In this paper, we propose a scheme to help enterprises to efficiently share confidential data on cloud servers. We achieve this goal by first combining the hierarchical identity-based encryption (HIBE) system and the ciphertext-policy attribute-based encryption (CP-ABE) system, and then making a performance-expressivity tradeoff, finally applying proxy re-encryption and lazy re-encryption to our scheme.",
"title": ""
},
{
"docid": "17ae550374220164f05c3421b6ff7cd1",
"text": "Abstract meaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclicmeaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclic graphs. AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We introduce a neural parser which treats alignments as latent variables within a joint probabilistic model of concepts, relations and alignments. As exact inference requires marginalizing over alignments and is infeasible, we use the variational autoencoding framework and a continuous relaxation of the discrete alignments. We show that joint modeling is preferable to using a pipeline of align and parse. The parser achieves the best reported results on the standard benchmark (73.6% on LDC2016E25).",
"title": ""
},
{
"docid": "87788e55769a7a840aaf41d9c3c5aec6",
"text": "Cyber-attack detection is used to identify cyber-attacks while they are acting on a computer and network system to compromise the security (e.g., availability, integrity, and confidentiality) of the system. This paper presents a cyber-attack detection technique through anomaly-detection, and discusses the robustness of the modeling technique employed. In this technique, a Markov-chain model represents a profile of computer-event transitions in a normal/usual operating condition of a computer and network system (a norm profile). The Markov-chain model of the norm profile is generated from historic data of the system's normal activities. The observed activities of the system are analyzed to infer the probability that the Markov-chain model of the norm profile supports the observed activities. The lower probability the observed activities receive from the Markov-chain model of the norm profile, the more likely the observed activities are anomalies resulting from cyber-attacks, and vice versa. This paper presents the learning and inference algorithms of this anomaly-detection technique based on the Markov-chain model of a norm profile, and examines its performance using the audit data of UNIX-based host machines with the Solaris operating system. The robustness of the Markov-chain model for cyber-attack detection is presented through discussions & applications. To apply the Markov-chain technique and other stochastic process techniques to model the sequential ordering of events, the quality of activity-data plays an important role in the performance of intrusion detection. The Markov-chain technique is not robust to noise in the data (the mixture level of normal activities and intrusive activities). The Markov-chain technique produces desirable performance only at a low noise level. This study also shows that the performance of the Markov-chain techniques is not always robust to the window size: as the window size increases, the amount of noise in the window also generally increases. Overall, this study provides some support for the idea that the Markov-chain technique might not be as robust as the other intrusion-detection methods such as the chi-square distance test technique , although it can produce better performance than the chi-square distance test technique when the noise level of the data is low, such as the Mill & Pascal data in this study.",
"title": ""
},
{
"docid": "8a42e1de2611b83aec33afefdb524cc6",
"text": "There is a growing body of evidence that students’ mindsets play a key role in their math and science achievement. Students who believe that intelligence or mathscience ability is simply a fixed trait (a fixed mindset) are at a significant disadvantage compared to students who believe that their abilities can be developed (a growth mindset). Moreover, research is showing that these mindsets can play an important role in the relative underachievement of women and minorities in math and science. Below I will present research showing that (a) mindsets can predict math/science achievement over time, (b) mindsets can contribute to math/science achievement discrepancies for women and minorities, (c) interventions that change mindsets can boost achievement and reduce achievement discrepancies, and (d) educators play a key role in shaping students’ mindsets.",
"title": ""
},
{
"docid": "b162b2efcd66a9a254e3b8473a5d62f6",
"text": "The rhetoric of both the Brexit and Trump campaigns was grounded in conceptions of the past as the basis for political claims in the present. Both established the past as constituted by nations that were represented as 'white' into which racialized others had insinuated themselves and gained disproportionate advantage. Hence, the resonant claim that was broadcast primarily to white audiences in each place 'to take our country back'. The politics of both campaigns was also echoed in those social scientific analyses that sought to focus on the 'legitimate' claims of the 'left behind' or those who had come to see themselves as 'strangers in their own land'. The skewing of white majority political action as the action of a more narrowly defined white working class served to legitimize analyses that might otherwise have been regarded as racist. In effect, I argue that a pervasive 'methodological whiteness' has distorted social scientific accounts of both Brexit and Trump's election victory and that this needs to be taken account of in our discussion of both phenomena.",
"title": ""
},
{
"docid": "1227c2d65281c35f56b2df073ba37aac",
"text": "This paper develops an adaptive partial differential equation (PDE) observer for battery state-of-charge (SOC) and state-of-health (SOH) estimation. Real-time state and parameter information enables operation near physical limits without compromising durability, thereby unlocking the full potential of battery energy storage. SOC/SOH estimation is technically challenging because battery dynamics are governed by electrochemical principles, mathematically modeled by PDEs. We cast this problem as a simultaneous state (SOC) and parameter (SOH) estimation design for a linear PDE with a nonlinear output mapping. Several new theoretical ideas are developed, integrated together, and tested. These include a backstepping PDE state estimator, a Pad e-based parameter identifier, nonlinear parameter sensitivity analysis, and adaptive inversion of nonlinear output functions. The key novelty of this design is a combined SOC/SOH battery estimation algorithm that identifies physical system variables, from measurements of voltage and current only. [DOI: 10.1115/1.4024801]",
"title": ""
},
{
"docid": "ea71723e88685f5c8c29296d3d8e3197",
"text": "In this paper, we present a complete system for automatic face replacement in images. Our system uses a large library of face images created automatically by downloading images from the internet, extracting faces using face detection software, and aligning each extracted face to a common coordinate system. This library is constructed off-line, once, and can be efficiently accessed during face replacement. Our replacement algorithm has three main stages. First, given an input image, we detect all faces that are present, align them to the coordinate system used by our face library, and select candidate face images from our face library that are similar to the input face in appearance and pose. Second, we adjust the pose, lighting, and color of the candidate face images to match the appearance of those in the input image, and seamlessly blend in the results. Third, we rank the blended candidate replacements by computing a match distance over the overlap region. Our approach requires no 3D model, is fully automatic, and generates highly plausible results across a wide range of skin tones, lighting conditions, and viewpoints. We show how our approach can be used for a variety of applications including face de-identification and the creation of appealing group photographs from a set of images. We conclude with a user study that validates the high quality of our replacement results, and a discussion on the current limitations of our system.",
"title": ""
}
] |
scidocsrr
|
5e602aa0835d5d0583e91e2246c14c90
|
SLSA: A Sentiment Lexicon for Standard Arabic
|
[
{
"docid": "d2eefcb0a03f769c5265a66be89c5ca3",
"text": "The computational treatment of subjectivity and sentiment in natural language is usually significantly improved by applying features exploiting lexical resources where entries are tagged with semantic orientation (e.g., positive, negative values). In spite of the fair amount of work on Arabic sentiment analysis over the past few years, e.g., (Abbasi et al., 2008; Abdul-Mageed et al., 2014; Abdul-Mageed et al., 2012; Abdul-Mageed and Diab, 2012a; Abdul-Mageed and Diab, 2012b; Abdul-Mageed et al., 2011a; Abdul-Mageed and Diab, 2011), the language remains under-resourced as to these polarity repositories compared to the English language. In this paper, we report efforts to build and present SANA, a large-scale, multi-genre, multi-dialect multi-lingual lexicon for the subjectivity and sentiment analysis of the Arabic language and dialects.",
"title": ""
}
] |
[
{
"docid": "b225f151e146edd0107c05c8fbc31686",
"text": "In this paper, we present a novel sequence generator based on a Markov chain model. Specifically, we formulate the problem of generating a sequence of vectors with given average input probability p, average transition density d, and spatial correlation s as a transition matrix computation problem, in which the matrix elements are subject to constraints derived from the specified statistics. We also give a practical heuristic that computes such a matrix and generates a sequence of l n-bit vectors in O(nl + n2) time. Derived from a strongly mixing Markov chain, our generator yields binary vector sequences with accurate statistics, high uniformity, and high randomness. Experimental results show that our sequence generator can cover more than 99% of the parameter space. Sequences of 2,000 48-bit vectors are generated in less than 0.05 seconds, with average deviations of the signal statistics p, d, and s equal to 1.6%, 1.8%, and 2.8%, respectively.Our generator enables the detailed study of power macromodeling. Using our tool and the ISCAS-85 benchmark circuits, we have assessed the sensitivity of power dissipation to the three input statistics p, d, and s. Our investigation reveals that power is most sensitive to transition density, while only occasionally exhibiting high sensitivity to signal probability and spatial correlation. Our experiments also show that input signal imbalance can cause estimation errors as high as 100% in extreme cases, although errors are usually within 25%.",
"title": ""
},
{
"docid": "0902f4b4841ad0f40956449e2b665312",
"text": "Chronic pain is a widespread problem affecting more than one and a half billion people worldwide. Of those, 116 million people suffering from chronic pain reside in the United States of America and four million of those with chronic pain suffer from neuropathic pain. Neuropathic pain is a very complex and hard to manage pain requiring several approaches to medication and treatment. In this paper, the use of essential oils of Mentha x piperita (Peppermint), Pelargonium x asperum (Geranium Rose), Piper nigrum (Black Pepper) and Rosmarinus officinalis ct. cineole (Rosemary ct. cineole) to increase circulation and decrease pain in patients with peripheral neuropathy of the lower extremities is discussed through two case studies, chemical analysis, current research and future considerations. NEUROPATHIES: ESSENTIAL OILS SHOW PROMISING RESULTS 3 Neuropathies: Essential oils show promising results in the fight against symptoms. Chronic pain, or pain lasting longer than six months, affects approximately 1.5 billion people worldwide with 116 million of those people residing in the United States of America (American Academy of Pain Medicine [AAPM], 2011). Pain is divided into two categories: nociceptive pain, which includes visceral and somatic pain, and neuropathic pain. In hospice and palliative care settings, bone and cancer pain are also frequently used categories. Of those 116 million people suffering from chronic pain, approximately four million people in the United States of America are currently suffering from neuropathic pain (Dickson, Head, Gitlow, & Osbahr, 2010, p. 1637). (Bennett, 1998, p. 104) Neuropathic pain is defined as a pain caused by a lesion or disease of the somatosensory nervous system and can be further divided into central and peripheral neuropathic pain (International Association for the Study of Pain [IASP], 2011). The causes, symptoms, diagnosing, current treatment and ongoing research of neuropathic pain will be discussed in this paper. Two case studies will also be reviewed along with analysis of essential oils used for symptom management and possible future considerations for applied aromatherapy research.",
"title": ""
},
{
"docid": "3501fb3d0fa953a4a4cd94733c4bdc36",
"text": "BACKGROUND\nCerebral palsy is the most common cause of physical disability in childhood. While some children have only a motor disorder, others have a range of problems and associated health issues.\n\n\nOBJECTIVE\nThis article describes the known causes of cerebral palsy, the classification of motor disorders and associated disabilities, health maintenance, and the consequences of the motor disorder. The importance of multidisciplinary assessment and treatment in enabling children to achieve their optimal potential and independence is highlighted.\n\n\nDISCUSSION\nGeneral practitioners play an important role in the management of children with cerebral palsy. Disability is a life-long problem which impacts on the child, their parents and their siblings. After transition to adult services, the GP may be the only health professional that has known the young person over an extended period, providing important continuity of care.",
"title": ""
},
{
"docid": "ad5b52646f1dbd3c75726544229621d3",
"text": "Immunotherapy has great potential to treat cancer and prevent future relapse by activating the immune system to recognize and kill cancer cells. A variety of strategies are continuing to evolve in the laboratory and in the clinic, including therapeutic noncellular (vector-based or subunit) cancer vaccines, dendritic cell vaccines, engineered T cells, and immune checkpoint blockade. Despite their promise, much more research is needed to understand how and why certain cancers fail to respond to immunotherapy and to predict which therapeutic strategies, or combinations thereof, are most appropriate for each patient. Underlying these challenges are technological needs, including methods to rapidly and thoroughly characterize the immune microenvironment of tumors, predictive tools to screen potential therapies in patient-specific ways, and sensitive, information-rich assays that allow patient monitoring of immune responses, tumor regression, and tumor dissemination during and after therapy. The newly emerging field of immunoengineering is addressing some of these challenges, and there is ample opportunity for engineers to contribute their approaches and tools to further facilitate the clinical translation of immunotherapy. Here we highlight recent technological advances in the diagnosis, therapy, and monitoring of cancer in the context of immunotherapy, as well as ongoing challenges.",
"title": ""
},
{
"docid": "a6d3ef01f3a2908e8ed9ff3fa60f8a6a",
"text": "Muqarnas is one of the most beautiful elements of Persian architecture which was used to cover the great height of entrance spaces or domes in old mosques or religious schools. Although rapid growth of digital design software brought lots of innovations and ease of use to the world of architecture, this specific vernacular art reached the state of abandonment. This article focuses on modelling Persian patterns by using Grasshopper3D, a Rhinoceros plug-in and by demonstrating the process it hopes to create a basis for a full 3d parametric muqarnas application. Utilizing such software, it is probable to generate desired patterns with the help of today’s algorithmic technology and revitalize muqarnas and other Persian patterns and define them as contemporary architectural elements of Persian architecture.",
"title": ""
},
{
"docid": "f195e7f1018e1e1a6836c9d110ce1de4",
"text": "Motivated by the goal of obtaining more-anthropomorphic walking in bipedal robots, this paper considers a hybrid model of a 3D hipped biped with feet and locking knees. The main observation of this paper is that functional Routhian Reduction can be used to extend two-dimensional walking to three dimensions—even in the presence of periods of underactuation—by decoupling the sagittal and coronal dynamics of the 3D biped. Specifically, we assume the existence of a control law that yields stable walking for the 2D sagittal component of the 3D biped. The main result of the paper is that utilizing this controller together with “reduction control laws” yields walking in three dimensions. This result is supported through simulation.",
"title": ""
},
{
"docid": "fd9461aeac51be30c9d0fbbba298a79b",
"text": "Disaster management is a crucial and urgent research issue. Emergency communication networks (ECNs) provide fundamental functions for disaster management, because communication service is generally unavailable due to large-scale damage and restrictions in communication services. Considering the features of a disaster (e.g., limited resources and dynamic changing of environment), it is always a key problem to use limited resources effectively to provide the best communication services. Big data analytics in the disaster area provides possible solutions to understand the situations happening in disaster areas, so that limited resources can be optimally deployed based on the analysis results. In this paper, we survey existing ECNs and big data analytics from both the content and the spatial points of view. From the content point of view, we survey existing data mining and analysis techniques, and further survey and analyze applications and the possibilities to enhance ECNs. From the spatial point of view, we survey and discuss the most popular methods and further discuss the possibility to enhance ECNs. Finally, we highlight the remaining challenging problems after a systematic survey and studies of the possibilities.",
"title": ""
},
{
"docid": "eb1313075f4870dd0c123233ea297fd1",
"text": "This work summarizes our research on the topic of the application of unsupervised learning algorithms to the problem of intrusion detection, and in particular our main research results in network intrusion detection. We proposed a novel, two tier architecture for network intrusion detection, capable of clustering packet payloads and correlating anomalies in the packet stream. We show the experiments we conducted on such architecture, we give performance results, and we compare our achievements with other comparable existing systems.",
"title": ""
},
{
"docid": "b0b206c172cdbaef42e455c713e67c54",
"text": "In this paper, a framework using deep learning approach is proposed to identify two subtypes of human colorectal carcinoma cancer. The identification process uses information from gene expression and clinical data which is obtained from data integration process. One of deep learning architecture, multimodal Deep Boltzmann Machines (DBM) is used for data integration process. The joint representation gene expression and clinical is later used as Restricted Boltzmann Machines (RBM) input for cancer subtype identification. Kaplan Meier survival analysis is employed to evaluate the identification result. The curves on survival plot obtained from Kaplan Meier analysis are tested using three statistic tests to ensure that there is a significant difference between those curves. According to Log Rank, Generalized Wilcoxon and Tarone-Ware, the two groups of patients with different cancer subtypes identified using the proposed framework are significantly different.",
"title": ""
},
{
"docid": "d2cefbafb0d0ab30daa17630bc800026",
"text": "To assess the feasibility, technical success, and effectiveness of high-resolution magnetic resonance (MR)-guided posterior femoral cutaneous nerve (PFCN) blocks. A retrospective analysis of 12 posterior femoral cutaneous nerve blocks in 8 patients [6 (75 %) female, 2 (25 %) male; mean age, 47 years; range, 42–84 years] with chronic perineal pain suggesting PFCN neuropathy was performed. Procedures were performed with a clinical wide-bore 1.5-T MR imaging system. High-resolution MR imaging was utilized for visualization and targeting of the PFCN. Commercially available, MR-compatible 20-G needles were used for drug delivery. Variables assessed were technical success (defined as injectant surrounding the targeted PFCN on post-intervention MR images) effectiveness, (defined as post-interventional regional anesthesia of the target area innervation downstream from the posterior femoral cutaneous nerve block), rate of complications, and length of procedure time. MR-guided PFCN injections were technically successful in 12/12 cases (100 %) with uniform perineural distribution of the injectant. All blocks were effective and resulted in post-interventional regional anesthesia of the expected areas (12/12, 100 %). No complications occurred during the procedure or during follow-up. The average total procedure time was 45 min (30–70) min. Our initial results demonstrate that this technique of selective MR-guided PFCN blocks is feasible and suggest high technical success and effectiveness. Larger studies are needed to confirm our initial results.",
"title": ""
},
{
"docid": "ed9a851d10d73d7944221ee73a68f685",
"text": "Clustering is an important data mining task Data mining often concerns large and high dimensional data but unfortunately most of the clustering algorithms in the literature are sensitive to largeness or high dimensionality or both Di erent features a ect clusters di erently some are important for clusters while others may hinder the clustering task An e cient way of handling it is by selecting a subset of important features It helps in nding clusters e ciently understanding the data better and reducing data size for e cient storage collection and process ing The task of nding original important features for unsupervised data is largely untouched Traditional feature selection algorithms work only for supervised data where class information is available For unsuper vised data without class information often principal components PCs are used but PCs still require all features and they may be di cult to understand Our approach rst features are ranked according to their importance on clustering and then a subset of important features are selected For large data we use a scalable method using sampling Em pirical evaluation shows the e ectiveness and scalability of our approach for benchmark and synthetic data sets",
"title": ""
},
{
"docid": "0cd3c63e5d67a89613de0357020239c0",
"text": "The best music. .. is essentially there to provide you something to face the world with. —Bruce Springsteen Music can change the world. —Ludwig van Beethoven Music is spiritual. The music business is not. —Van Morrison Although much of the debate about the effects of media on youth revolves around television, music is very important to children and adolescents. Try to change the radio station in the car after your child has set it, and you will quickly see that they have very clear and deeply held opinions. In a survey of junior and senior high school students in northern California (Roberts & Henriksen, 1990), students were asked what media they would choose to take with them if they were stranded on a desert island. They were allowed to nominate a first, second, and third choice from a list including: and music recordings and the means to play them. Because radio is almost exclusively a music medium for adolescents, radio and recordings were combined into a single \" music \" category. As Table 8.1 displays, at all grade levels, music media were preferred over television (which placed second overall), and this preference increased with age. Over 80 percent of the total sample nominated music as one of their first three choices. By eleventh grade, music was selected first by a margin of two to one.",
"title": ""
},
{
"docid": "b575cc4b98ab5c0f704b92e0bf50ed5f",
"text": "The emerging Asian market of Korean broadcasting programs is pushing forward a new phase of cultural marketing. The Korean trend in Asia brought issues such as cultural proximity, and the issues have been analyzed by structural analysis. This article suggests which kind of program Asians adopted as the favorites based on the factors of cultural frame in the aspect of performance. The results of analysis shows that Korean programs satisfy Asian emotional needs as being easy to assimilate to similar life styles, cultural proximity and expressiveness. The preference of Korean programs shows that Asians express sympathy for Asian culture frames including family morals, high morality, love and sacrifice. Additionally, as a case study this paper analyzes the characteristics of the most favorite Korean programs in Asia using five categories: harmony, tension, compromise, participation and agreement. The result of the case study showed that Asian people have a similar culture frame and like stories dealing with love, harmony oriented stories, stories with tension in daily life, low participation and the agreement and reinforcement with their traditional values.",
"title": ""
},
{
"docid": "e0fff766f9ae7834d94ef8e6d444363c",
"text": "Air-gap data is important for the security of computer systems. The injection of the computer virus is limited but possible, however data communication channel is necessary for the transmission of stolen data. This paper considers BFSK digital modulation applied to brightness changes of screen for unidirectional transmission of valuable data. Experimental validation and limitations of the proposed technique are provided.",
"title": ""
},
{
"docid": "2e2dc51bc059d7d40cdae22e1e36776e",
"text": "In this thesis we present an approach to neural machine translation (NMT) that supports multiple domains in a single model and allows switching between the domains when translating. The core idea is to treat text domains as distinct languages and use multilingual NMT methods to create multi-domain translation systems; we show that this approach results in significant translation quality gains over fine-tuning. We also propose approach of unsupervised domain assignment and explore whether the knowledge of pre-specified text domains is necessary; turns out that it is after all, but also that when it is not known quite high translation quality can be reached, and even higher than with known domains in some cases. Additionally, we explore the possibility of intra-language style adaptation through zero shot translation. We show that this approach is able to style adapt, however, with unresolved text deterioration issues.",
"title": ""
},
{
"docid": "dbb3ffab2b2a8619ccfdef04be155496",
"text": "Online discussion communities play an important role in the development of relationships and the transfer of knowledge within and across organizations. Their underlying technologies enhance these processes by providing infrastructures through which group-based communication can occur. Community administrators often make decisions about technologies with the goal of enhancing the user experience, but the impact of such decisions on how a community develops must also be considered. To shed light on this complex and underresearched phenomenon, we offer a model of key latent constructs influenced by technology choices and possible causal paths by which they have dynamic effects on communities. Two important community characteristics that can be impacted are community size (number of members) and community resilience (membership that is willing to remain involved with the community in spite of variability and change in the topics discussed). To model community development, we build on attraction–selection–attrition (ASA) theory, introducing two new concepts: participation costs (how much time and effort are required to engage with content provided in a community) and topic consistency cues (how strongly a community signals that topics that may appear in the future will be consistent with what it has hosted in the past). We use the proposed ASA theory of online communities (OCASA) to develop a simulation model of community size and resilience that affirms some conventional wisdom and also has novel and counterintuitive implications. Analysis of the model leads to testable new propositions about the causal paths by which technology choices affect the emergence of community size and community resilience, and associated implications for community sustainability. 1",
"title": ""
},
{
"docid": "471579f955f8b68a357c8780a7775cc9",
"text": "In addition to practitioners who care for male patients, with the increased use of high-resolution anoscopy, practitioners who care for women are seeing more men in their practices as well. Some diseases affecting the penis can impact on their sexual partners. Many of the lesions and neoplasms of the penis occur on the vulva as well. In addition, there are common and rare lesions unique to the penis. A review of the scope of penile lesions and neoplasms that may present in a primary care setting is presented to assist in developing a differential diagnosis if such a patient is encountered, as well as for practitioners who care for their sexual partners. A familiarity will assist with recognition, as well as when consultation is needed.",
"title": ""
},
{
"docid": "ad0340644e4b2c95765c8875498da1af",
"text": "With the development of deep learning, word vectors (i.e., word embeddings) have been extensively explored and applied to many Natural Language Processing tasks (e.g., parsing, Named Entity Recognition, etc). However, the semantic word vectors learned from context have insufficient sentiment information for performing sentiment analysis at different text levels. In this work, we present three Convolutional Neural Network (CNN)-based models to learn sentiment word vectors (SWV), which integrate sentiment information with semantic and syntactic information into word representations in three different strategies. Experimental results on benchmark datasets showed that sentiment word vectors are able to capture both sentiment and semantic information and outperform semantic word vectors for word-level and sentence-level sentiment analysis. Moreover, in combination with traditional NLP features, the sentiment word vectors achieve the best performance so far.",
"title": ""
},
{
"docid": "2a61fe60671ec73cee769be4d8c59e0c",
"text": "With the rise of social media in our life, several decision makers have worked on these networks to make better decisions. In order to benefit from the data issued from these media, many researchers focused on helping companies understand how to perform a social media competitive analysis and transform these data into knowledge for decision makers. A high number of users interact at any time on different ways in social media such as by expressing their opinions about products, services or transaction related to the organization which can prove very helpful for making better projections. In this paper, we provide a literature review on data warehouse design approaches from social media. More precisely, we start by introducing the main concepts of data warehouse and social media. We also propose two classes of data warehouse design approaches from social media (behavior analysis and integration of sentiment analysis in data warehouse schema) and expose for each one the most representative existing works. Afterward, we propose a comparative study of the existing works.",
"title": ""
},
{
"docid": "9f6ab40fb1f1c331e72b275e3cf614e3",
"text": "The Internet of things (IoT) is still in its infancy and has attracted much interest in many industrial sectors including medical fields, logistics tracking, smart cities and automobiles. However as a paradigm, it is susceptible to a range of significant intrusion threats. This paper presents a threat analysis of the IoT and uses an Artificial Neural Network (ANN) to combat these threats. A multi-level perceptron, a type of supervised ANN, is trained using internet packet traces, then is assessed on its ability to thwart Distributed Denial of Service (DDoS/DoS) attacks. This paper focuses on the classification of normal and threat patterns on an IoT Network. The ANN procedure is validated against a simulated IoT network. The experimental results demonstrate 99.4% accuracy and can successfully detect various DDoS/DoS attacks.",
"title": ""
}
] |
scidocsrr
|
71b812323d569d1a6e6a04c16dc86090
|
Towards Automatic & Personalised Mobile Health Interventions: An Interactive Machine Learning Perspective
|
[
{
"docid": "ff076ca404a911cc523af1aa51da8f47",
"text": "Most Machine Learning (ML) researchers focus on automatic Machine Learning (aML) where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from the availability of “big data”. However, sometimes, for example in health informatics, we are confronted not a small number of data sets or rare events, and with complex problems where aML-approaches fail or deliver unsatisfactory results. Here, interactive Machine Learning (iML) may be of help and the “human-in-the-loop” approach may be beneficial in solving computationally hard problems, where human expertise can help to reduce an exponential search space through heuristics. In this paper, experiments are discussed which help to evaluate the effectiveness of the iML-“human-in-the-loop” approach, particularly in opening the “black box”, thereby enabling a human to directly and indirectly manipulating and interacting with an algorithm. For this purpose, we selected the Ant Colony Optimization (ACO) framework, and use it on the Traveling Salesman Problem (TSP) which is of high importance in solving many practical problems in health informatics, e.g. in the study of proteins.",
"title": ""
},
{
"docid": "3ee8ad2c9e07c33781fc53ac2e11cd6e",
"text": "Tapping into the \"folk knowledge\" needed to advance machine learning applications.",
"title": ""
}
] |
[
{
"docid": "75fcc3987407274148485394acf8856b",
"text": "Here we critically review studies that used electroencephalography (EEG) or event-related potential (ERP) indices as a biomarker of Alzheimer's disease. In the first part we overview studies that relied on visual inspection of EEG traces and spectral characteristics of EEG. Second, we survey analysis methods motivated by dynamical systems theory (DST) as well as more recent network connectivity approaches. In the third part we review studies of sleep. Next, we compare the utility of early and late ERP components in dementia research. In the section on mismatch negativity (MMN) studies we summarize their results and limitations and outline the emerging field of computational neurology. In the following we overview the use of EEG in the differential diagnosis of the most common neurocognitive disorders. Finally, we provide a summary of the state of the field and conclude that several promising EEG/ERP indices of synaptic neurotransmission are worth considering as potential biomarkers. Furthermore, we highlight some practical issues and discuss future challenges as well.",
"title": ""
},
{
"docid": "be64b2dc295c61a2978221b4838e4469",
"text": "In sentiment analysis of product reviews, one important problem is to produce a summary of opinions based on product features/attributes (also called aspects). However, for the same feature, people can express it with many different words or phrases. To produce a useful summary, these words and phrases, which are domain synonyms, need to be grouped under the same feature group. Although several methods have been proposed to extract product features from reviews, limited work has been done on clustering or grouping of synonym features. This paper focuses on this task. Classic methods for solving this problem are based on unsupervised learning using some forms of distributional similarity. However, we found that these methods do not do well. We then model it as a semi-supervised learning problem. Lexical characteristics of the problem are exploited to automatically identify some labeled examples. Empirical evaluation shows that the proposed method outperforms existing state-of-the-art methods by a large margin.",
"title": ""
},
{
"docid": "2e7dd876af56a4698d3e79d3aa5f2eff",
"text": "Although there are numerous aetiologies for coccygodynia described in the medical literature, precoccygeal epidermal inclusion cyst presenting as a coccygodynia has not been reported. We report a 30-year-old woman with intractable coccygodynia. Magnetic resonance imaging showed a circumscribed precoccygeal cystic lesion. The removed cyst was pearly-white in appearance and contained cheesy material. Histological evaluation established the diagnosis of epidermal inclusion cyst with mild nonspecific inflammation. The patient became asymptomatic and remained so at two years follow-up. This report suggests that precoccygeal epidermal inclusion cyst should be considered as one of the differential diagnosis of coccygodynia. Our experience suggests that patients with intractable coccygodynia should have a magnetic resonance imaging to rule out treatable causes of coccygodynia.",
"title": ""
},
{
"docid": "0051a8eae3f4889fccd54b6e9f6a4b5f",
"text": "We propose a simple model for textual matching problems. Starting from a Siamese architecture, we augment word embeddings with two features based on exact and paraphrase match between words in the two sentences being considered. We train the model using four types of regularization on datasets for textual entailment, paraphrase detection and semantic relatedness. Our model performs comparably or better than more complex architectures; achieving state-of-the-art results for paraphrase detection on the SICK dataset and for textual entailment on the SNLI dataset.",
"title": ""
},
{
"docid": "de3ff51b6344fae401f22f8ccc0c290a",
"text": "Attention-based sequence-to-sequence models for automatic speech recognition jointly train an acoustic model, language model, and alignment mechanism. Thus, the language model component is only trained on transcribed audio-text pairs. This leads to the use of shallow fusion with an external language model at inference time. Shallow fusion refers to log-linear interpolation with a separately trained language model at each step of the beam search. In this work, we investigate the behavior of shallow fusion across a range of conditions: different types of language models, different decoding units, and different tasks. On Google Voice Search, we demonstrate that the use of shallow fusion with an neural LM with wordpieces yields a 9.1% relative word error rate reduction (WERR) over our competitive attention-based sequence-to-sequence model, obviating the need for second-pass rescoring.",
"title": ""
},
{
"docid": "7a01ddcc25e25e64b231fcee2c8b96b3",
"text": "In the following pages, I shall demonstrate that there is a psychological technique which makes it possible to interpret dreams, and that on the application of this technique, every dream will reveal itself as a psychological structure, full of significance, and one which may be assigned to a specific place in the psychic activities of the waking state. Further, I shall endeavour to elucidate the processes which underlie the strangeness and obscurity of dreams, and to deduce from these processes the nature of the psychic forces whose conflict or co-operation is responsible for our dreams.",
"title": ""
},
{
"docid": "4672d5db9adefac25658d369380de485",
"text": "Of the considerable research on data streams, relatively little deals with classification where only some of the instances in the stream are labeled. Most state-of-the-art data-stream algorithms do not have an effective way of dealing with unlabeled instances from the same domain. In this paper we explore deep learning techniques that provide important advantages such as the ability to learn incrementally in constant memory, and from unlabeled examples. We develop two deep learning methods and explore empirically via a series of empirical evaluations the application to several data streams scenarios based on real data. We find that our methods can offer competitive accuracy as compared with existing popular data-stream learners.",
"title": ""
},
{
"docid": "d315350d34bd29857864489f9b2bbf06",
"text": "The metabolism of glucagon-like peptide-1 (GLP-1) has not been studied in detail, but it is known to be rapidly cleared from the circulation. Measurement by RIA is hampered by the fact that most antisera are side-viewing or C-terminally directed, and recognize both intact GLP-1 and biologically inactive. N-terminally truncated fragments. Using high pressure liquid chromatography in combination with RIAs, methodology allowing specific determination of both intact GLP-1 and its metabolites was developed. Human plasma was shown to degrade GLP-1-(7-36)amide, forming an N-terminally truncated peptide with a t1/2 of 20.4 +/- 1.4 min at 37 C (n = 6). This was unaffected by EDTA or aprotinin. Inhibitors of dipeptidyl peptidase-IV or low temperature (4 C) completely prevented formation of the metabolite, which was confirmed to be GLP-1-(9-36)amide by mass spectrometry and sequence analysis. High pressure liquid chromatography revealed the concentration of GLP-1-(9-36)amide to be 53.5 +/- 13.7% of the concentration of endogenous intact GLP-1 in the fasted state, which increased to 130.8 +/- 10.0% (P < 0.01; n = 6) 1 h postprandially. Metabolism at the C-terminus was not observed. This study suggests that dipeptidyl peptidase-IV is the primary mechanism for GLP-1 degradation in human plasma in vitro and may have a role in inactivating the peptide in vivo.",
"title": ""
},
{
"docid": "fd7cc098fa84fb725d5326e110ec9048",
"text": "The cluster assumption is exploited by most semi-supervised learning (SSL) methods. However, if the unlabeled data is merely weakly related to the target classes, it becomes questionable whether driving the decision boundary to the low density regions of the unlabeled data will help the classification. In such case, the cluster assumption may not be valid; and consequently how to leverage this type of unlabeled data to enhance the classification accuracy becomes a challenge. We introduce “Semi-supervised Learning with Weakly-Related Unlabeled Data” (SSLW), an inductive method that builds upon the maximum-margin approach, towards a better usage of weakly-related unlabeled information. Although the SSLW could improve a wide range of classification tasks, in this paper, we focus on text categorization with a small training pool. The key assumption behind this work is that, even with different topics, the word usage patterns across different corpora tends to be consistent. To this end, SSLW estimates the optimal wordcorrelation matrix that is consistent with both the co-occurrence information derived from the weakly-related unlabeled documents and the labeled documents. For empirical evaluation, we present a direct comparison with a number of stateof-the-art methods for inductive semi-supervised learning and text categorization. We show that SSLW results in a significant improvement in categorization accuracy, equipped with a small training set and an unlabeled resource that is weakly related to the test domain.",
"title": ""
},
{
"docid": "d26ce319db7b1583347d34ff8251fbc0",
"text": "The study of metacognition can shed light on some fundamental issues about consciousness and its role in behavior. Metacognition research concerns the processes by which people self reflect on their own cognitive and memory processes (monitoring), and how they put their metaknowledge to use in regulating their information processing and behavior (control). Experimental research on metacognition has addressed the following questions: First, what are the bases of metacognitive judgments that people make in monitoring their learning, remembering, and performance? Second, how valid are such judgments and what are the factors that affect the correspondence between subjective and objective indexes of knowing? Third, what are the processes that underlie the accuracy and inaccuracy of metacognitive judgments? Fourth, how does the output of metacognitive monitoring contribute to the strategic regulation of learning and remembering? Finally, how do the metacognitive processes of monitoring and control affect actual performance? Research addressing these questions is reviewed, emphasizing its implication for issues concerning consciousness, in particular, the genesis of subjective experience, the function of self-reflective consciousness, and the cause-and-effect relation between subjective experience and behavior.",
"title": ""
},
{
"docid": "9c82588d5e82df20e2156ca1bda91f09",
"text": "Lean and simulation analysis are driven by the same objective, how to better design and improve processes making the companies more competitive. The adoption of lean has been widely spread in companies from public to private sectors and simulation is nowadays becoming more and more popular. Several authors have pointed out the benefits of combining simulation and lean, however, they are still rarely used together in practice. Optimization as an additional technique to this combination is even a more powerful approach especially when designing and improving complex processes with multiple conflicting objectives. This paper presents the mutual benefits that are gained when combining lean, simulation and optimization and how they overcome each other's limitations. A framework including the three concepts, some of the barriers for its implementation and a real-world industrial example are also described.",
"title": ""
},
{
"docid": "8756ef13409ae696ffaf034c873fdaf6",
"text": "This paper addresses a data-driven prognostics method for the estimation of the Remaining Useful Life (RUL) and the associated confidence value of bearings. The proposed method is based on the utilization of the Wavelet Packet Decomposition (WPD) technique, and the Mixture of Gaussians Hidden Markov Models (MoG-HMM). The method relies on two phases: an off-line phase, and an on-line phase. During the first phase, the raw data provided by the sensors are first processed to extract features in the form of WPD coefficients. The extracted features are then fed to dedicated learning algorithms to estimate the parameters of a corresponding MoG-HMM, which best fits the degradation phenomenon. The generated model is exploited during the second phase to continuously assess the current health state of the physical component, and to estimate its RUL value with the associated confidence. The developed method is tested on benchmark data taken from the “NASA prognostics data repository” related to several experiments of failures on bearings done under different operating conditions. Furthermore, the method is compared to traditional time-feature prognostics and simulation results are given at the end of the paper. The results of the developed prognostics method, particularly the estimation of the RUL, can help improving the availability, reliability, and security while reducing the maintenance costs. Indeed, the RUL and associated confidence value are relevant information which can be used to take appropriate maintenance and exploitation decisions. In practice, this information may help the maintainers to prepare the necessary material and human resources before the occurrence of a failure. Thus, the traditional maintenance policies involving corrective and preventive maintenance can be replaced by condition based maintenance.",
"title": ""
},
{
"docid": "bfe89b9e50e09b2b70450d540a7931e1",
"text": "Social networking websites allow users to create and share content. Big information cascades of post resharing can form as users of these sites reshare others' posts with their friends and followers. One of the central challenges in understanding such cascading behaviors is in forecasting information outbreaks, where a single post becomes widely popular by being reshared by many users. In this paper, we focus on predicting the final number of reshares of a given post. We build on the theory of self-exciting point processes to develop a statistical model that allows us to make accurate predictions. Our model requires no training or expensive feature engineering. It results in a simple and efficiently computable formula that allows us to answer questions, in real-time, such as: Given a post's resharing history so far, what is our current estimate of its final number of reshares? Is the post resharing cascade past the initial stage of explosive growth? And, which posts will be the most reshared in the future?\n We validate our model using one month of complete Twitter data and demonstrate a strong improvement in predictive accuracy over existing approaches. Our model gives only 15% relative error in predicting final size of an average information cascade after observing it for just one hour.",
"title": ""
},
{
"docid": "dda75fe19f987d41f87c859fb364d7ae",
"text": "The PolyU team participated in the Chinese Short Text Conversation (STC) subtask of the NTCIR-13, the core task of NTCIR-13. At NTCIR-13, generation-based approaches and their evaluations are firstly introduced into the task. This minority report describes our methods to solving the STC problem including four retrieval-based and two generationbased typical approaches. We compare and discuss the official results.",
"title": ""
},
{
"docid": "33126812301dfc04b475ecbc9c8ae422",
"text": "From fishtail to princess braids, these intricately woven structures define an important and popular class of hairstyle, frequently used for digital characters in computer graphics. In addition to the challenges created by the infinite range of styles, existing modeling and capture techniques are particularly constrained by the geometric and topological complexities. We propose a data-driven method to automatically reconstruct braided hairstyles from input data obtained from a single consumer RGB-D camera. Our approach covers the large variation of repetitive braid structures using a family of compact procedural braid models. From these models, we produce a database of braid patches and use a robust random sampling approach for data fitting. We then recover the input braid structures using a multi-label optimization algorithm and synthesize the intertwining hair strands of the braids. We demonstrate that a minimal capture equipment is sufficient to effectively capture a wide range of complex braids with distinct shapes and structures.",
"title": ""
},
{
"docid": "a5c054899abf8aa553da4a576577678e",
"text": "Developmental programming resulting from maternal malnutrition can lead to an increased risk of metabolic disorders such as obesity, insulin resistance, type 2 diabetes and cardiovascular disorders in the offspring in later life. Furthermore, many conditions linked with developmental programming are also known to be associated with the aging process. This review summarizes the available evidence about the molecular mechanisms underlying these effects, with the potential to identify novel areas of therapeutic intervention. This could also lead to the discovery of new treatment options for improved patient outcomes.",
"title": ""
},
{
"docid": "3eff8dca65a9a119a9f5c38dbf8dc978",
"text": "Advances in predicting in vivo performance of drug products has the potential to change how drug products are developed and reviewed. Modeling and simulation methods are now more commonly used in drug product development and regulatory drug review. These applications include, but are not limited to: the development of biorelevant specifications, the determination of bioequivalence metrics for modified release products with rapid therapeutic onset, the design of in vitro-in vivo correlations in a mechanistic framework, and prediction of food effect. As new regulatory concepts such as quality by design require better application of biopharmaceutical modeling in drug product development, regulatory challenges in bioequivalence demonstration of complex drug products also present exciting opportunities for creative modeling and simulation approaches. A collaborative effort among academia, government and industry in modeling and simulation will result in improved safe and effective new/generic drugs to the American public.",
"title": ""
},
{
"docid": "48774da3dd848f6e7dc0b63fdf89694e",
"text": "Near Field Communication (NFC) offers intuitive interactions between humans and vehicles. In this paper we explore different NFC based use cases in an automotive context. Nearly all described use cases have been implemented in a BMW vehicle to get experiences of NFC in a real in-car environment. We describe the underlying soft- and hardware architecture and our experiences in setting up the prototype.",
"title": ""
},
{
"docid": "11ddbce61cb175e9779e0fcb5622436f",
"text": "When rewards are sparse and efficient exploration essential, deep Q-learning with -greedy exploration tends to fail. This poses problems for otherwise promising domains such as task-oriented dialog systems, where the primary reward signal, indicating successful completion, typically occurs only at the end of each episode but depends on the entire sequence of utterances. A poor agent encounters such successful dialogs rarely, and a random agent may never stumble upon a successful outcome in reasonable time. We present two techniques that significantly improve the efficiency of exploration for deep Q-learning agents in dialog systems. First, we demonstrate that exploration by Thompson sampling, using Monte Carlo samples from a Bayes-by-Backprop neural network, yields marked improvement over standard DQNs with Boltzmann or -greedy exploration. Second, we show that spiking the replay buffer with a small number of successes, as are easy to harvest for dialog tasks, can make Q-learning feasible when it might otherwise fail catastrophically.",
"title": ""
},
{
"docid": "175d7462d86eae131358c005a32ecdab",
"text": "Software architectures are often constructed through a series of design decisions. In particular, architectural tactics are selected to satisfy specific quality concerns such as reliability, performance, and security. However, the knowledge of these tactical decisions is often lost, resulting in a gradual degradation of architectural quality as developers modify the code without fully understanding the underlying architectural decisions. In this paper we present a machine learning approach for discovering and visualizing architectural tactics in code, mapping these code segments to tactic traceability patterns, and monitoring sensitive areas of the code for modification events in order to provide users with up-to-date information about underlying architectural concerns. Our approach utilizes a customized classifier which is trained using code extracted from fifty performance-centric and safety-critical open source software systems. Its performance is compared against seven off-the-shelf classifiers. In a controlled experiment all classifiers performed well; however our tactic detector outperformed the other classifiers when used within the larger context of the Hadoop Distributed File System. We further demonstrate the viability of our approach for using the automatically detected tactics to generate viable and informative messages in a simulation of maintenance events mined from Hadoop's change management system.",
"title": ""
}
] |
scidocsrr
|
b703ca9cf76a998b7004dfc19c16021f
|
0.35mm pitch wafer level package board level reliability: Studying effect of ball de-population with varying ball size
|
[
{
"docid": "8eace30c00d9b118635dc8a2e383f36b",
"text": "Wafer Level Packaging (WLP) has the highest potential for future single chip packages because the WLP is intrinsically a chip size package. The package is completed directly on the wafer then singulated by dicing for the assembly. All packaging and testing operations of the dice are replaced by whole wafer fabrication and wafer level testing. Therefore, it becomes more cost-effective with decreasing die size or increasing wafer size. However, due to the intrinsic mismatch of the coefficient of thermal expansion (CTE) between silicon chip and plastic PCB material, solder ball reliability subject to temperature cycling becomes the weakest point of the technology. In this paper some fundamental principles in designing WLP structure to achieve the robust reliability are demonstrated through a comprehensive study of a variety of WLP technologies. The first principle is the 'structural flexibility' principle. The more flexible a WLP structure is, the less the stresses that are applied on the solder balls will be. Ball on polymer WLP, Cu post WLP, polymer core solder balls are such examples to achieve better flexibility of overall WLP structure. The second principle is the 'local enhancement' at the interface region of solder balls where fatigue failures occur. Polymer collar WLP, and increasing solder opening size are examples to reduce the local stress level. In this paper, the reliability improvements are discussed through various existing and tested WLP technologies at silicon level and ball level, respectively. The fan-out wafer level packaging is introduced, which is expected to extend the standard WLP to the next stage with unlimited potential applications in future.",
"title": ""
},
{
"docid": "9e91f7e57e074ec49879598c13035d70",
"text": "Wafer Level Package (WLP) technology has seen tremendous advances in recent years and is rapidly being adopted at the 65nm Low-K silicon node. For a true WLP, the package size is same as the die (silicon) size and the package is usually mounted directly on to the Printed Circuit Board (PCB). Board level reliability (BLR) is a bigger challenge on WLPs than the package level due to a larger CTE mismatch and difference in stiffness between silicon and the PCB [1]. The BLR performance of the devices with Low-K dielectric silicon becomes even more challenging due to their fragile nature and lower mechanical strength. A post fab re-distribution layer (RDL) with polymer stack up provides a stress buffer resulting in an improved board level reliability performance. Drop shock (DS) and temperature cycling test (TCT) are the most commonly run tests in the industry to gauge the BLR performance of WLPs. While a superior drop performance is required for devices targeting mobile handset applications, achieving acceptable TCT performance on WLPs can become challenging at times. BLR performance of WLP is sensitive to design features such as die size, die aspect ratio, ball pattern and ball density etc. In this paper, 65nm WLPs with a post fab Cu RDL have been studied for package and board level reliability. Standard JEDEC conditions are applied during the reliability testing. Here, we present a detailed reliability evaluation on multiple WLP sizes and varying ball patterns. Die size ranging from 10 mm2 to 25 mm2 were studied along with variation in design features such as die aspect ratio and the ball density (fully populated and de-populated ball pattern). All test vehicles used the aforementioned 65nm fab node.",
"title": ""
}
] |
[
{
"docid": "8d5222e552ffcd47595c5ec6d3d1f0fe",
"text": "The main purpose of this paper is to highlight the features of Artificial Intelligence (AI), how it was developed, and some of its main applications. John McCarthy, one of the founders of artificial intelligence research, once defined the field as “getting a computer to do things which, when done by people, are said to involve intelligence.” The point of the definition was that he felt perfectly comfortable about carrying on his research without first having to defend any particular philosophical view of what the word “intelligence” means. The beginnings of artificial intelligence are traced to philosophy, fiction, and imagination. Early inventions in electronics, engineering, and many other disciplines have influenced AI. Some early milestones include work in problems solving which included basic work in learning, knowledge representation, and inference as well as demonstration programs in language understanding, translation, theorem proving, associative memory, and knowledge-based systems.",
"title": ""
},
{
"docid": "f1e5f8ab0b2ce32553dd5e08f1113b36",
"text": "We examined the hypothesis that an excess accumulation of intramuscular lipid (IMCL) is associated with insulin resistance and that this may be mediated by the oxidative capacity of muscle. Nine sedentary lean (L) and 11 obese (O) subjects, 8 obese subjects with type 2 diabetes mellitus (D), and 9 lean, exercise-trained (T) subjects volunteered for this study. Insulin sensitivity (M) determined during a hyperinsulinemic (40 mU x m(-2)min(-1)) euglycemic clamp was greater (P < 0.01) in L and T, compared with O and D (9.45 +/- 0.59 and 10.26 +/- 0.78 vs. 5.51 +/- 0.61 and 1.15 +/- 0.83 mg x min(-1)kg fat free mass(-1), respectively). IMCL in percutaneous vastus lateralis biopsy specimens by quantitative image analysis of Oil Red O staining was approximately 2-fold higher in D than in L (3.04 +/- 0.39 vs. 1.40 +/- 0.28% area as lipid; P < 0.01). IMCL was also higher in T (2.36 +/- 0.37), compared with L (P < 0.01). The oxidative capacity of muscle determined with succinate dehydrogenase staining of muscle fibers was higher in T, compared with L, O, and D (50.0 +/- 4.4, 36.1 +/- 4.4, 29.7 +/- 3.8, and 33.4 +/- 4.7 optical density units, respectively; P < 0.01). IMCL was negatively associated with M (r = -0.57, P < 0.05) when endurance-trained subjects were excluded from the analysis, and this association was independent of body mass index. However, the relationship between IMCL and M was not significant when trained individuals were included. There was a positive association between the oxidative capacity and M among nondiabetics (r = 0.37, P < 0.05). In summary, skeletal muscle of trained endurance athletes is markedly insulin sensitive and has a high oxidative capacity, despite having an elevated lipid content. In conclusion, the capacity for lipid oxidation may be an important mediator of the association between excess muscle lipid accumulation and insulin resistance.",
"title": ""
},
{
"docid": "444a9192398374227c9cd93ec253f139",
"text": "185 Abstract— The concept of sequence Data Mining was first introduced by Rakesh Agrawal and Ramakrishnan Srikant in the year 1995. The problem was first introduced in the context of market analysis. It aimed to retrieve frequent patterns in the sequences of products purchased by customers through time ordered transactions. Later on its application was extended to complex applications like telecommunication, network detection, DNA research, etc. Several algorithms were proposed. The very first was Apriori algorithm, which was put forward by the founders themselves. Later more scalable algorithms for complex applications were developed. E.g. GSP, Spade, PrefixSpan etc. The area underwent considerable advancements since its introduction in a short span. In this paper, a systematic survey of the sequential pattern mining algorithms is performed. This paper investigates these algorithms by classifying study of sequential pattern-mining algorithms into two broad categories. First, on the basis of algorithms which are designed to increase efficiency of mining and second, on the basis of various extensions of sequential pattern mining designed for certain application. At the end, comparative analysis is done on the basis of important key features supported by various algorithms and current research challenges are discussed in this field of data mining.",
"title": ""
},
{
"docid": "76d514ee806b154b4fef2fe2c63c8b27",
"text": "Attacks on systems and organisations increasingly exploit human actors, for example through social engineering, complicating their formal treatment and automatic identification. Formalisation of human behaviour is difficult at best, and attacks on socio-technical systems are still mostly identified through brainstorming of experts. In this work we formalize attack tree generation including human factors; based on recent advances in system models we develop a technique to identify possible attacks analytically, including technical and human factors. Our systematic attack generation is based on invalidating policies in the system model by identifying possible sequences of actions that lead to an attack. The generated attacks are precise enough to illustrate the threat, and they are general enough to hide the details of individual steps.",
"title": ""
},
{
"docid": "e6d99d126e42697da3f37dd26ac02524",
"text": "The authors developed, tested, and replicated a model in which safety-specific transformational leadership predicted occupational injuries in 2 separate studies. Data from 174 restaurant workers (M age = 26.75 years, range = 15-64) were analyzed using structural equation modeling (LISREL 8; K. G. Jöreskog & D. Sörbom, 1993) and provided strong support for a model whereby safety-specific transformational leadership predicted occupational injuries through the effects of perceived safety climate, safety consciousness, and safety-related events. Study 2 replicated and extended this model with data from 164 young workers from diverse jobs (M age = 19.54 years, range = 14-24). Safety-specific transformational leadership and role overload were related to occupational injuries through the effects of perceived safety climate, safety consciousness, and safety-related events.",
"title": ""
},
{
"docid": "8ed8886668eef29d9574be5f6f058959",
"text": "We present a fully trainable solution for binarization of degraded document images using extremely randomized trees. Unlike previous attempts that often use simple features, our method encodes all heuristics about whether or not a pixel is foreground text into a high-dimensional feature vector and learns a more complicated decision function. We introduce two novel features, the Logarithm Intensity Percentile (LIP) and the Relative Darkness Index (RDI), and combine them with low level features, and reformulated features from existing binarization methods. Experimental results show that using small sample size (about 1.5% of all available training data), we can achieve a binarization performance comparable to manually-tuned, state-of-the-art methods. Additionally, the trained document binarization classifier shows good generalization capabilities on out-of-domain data.",
"title": ""
},
{
"docid": "52f414bea50c9a7f78fcbf198b6caf4c",
"text": "Searchable encryption (SE) allows a client to outsource a dataset to an untrusted server while enabling the server to answer keyword queries in a private manner. SE can be used as a building block to support more expressive private queries such as range/point and boolean queries, while providing formal security guarantees. To scale SE to big data using external memory, new schemes with small locality have been proposed, where locality is defined as the number of non-continuous reads that the server makes for each query. Previous space-efficient SE schemes achieve optimal locality by increasing the read efficiency-the number of additional memory locations (false positives) that the server reads per result item. This can hurt practical performance.\n In this work, we design, formally prove secure, and evaluate the first SE scheme with tunable locality and linear space. Our first scheme has optimal locality and outperforms existing approaches (that have a slightly different leakage profile) by up to 2.5 orders of magnitude in terms of read efficiency, for all practical database sizes. Another version of our construction with the same leakage as previous works can be tuned to have bounded locality, optimal read efficiency and up to 60x more efficient end-to-end search time. We demonstrate that our schemes work fast in in-memory as well, leading to search time savings of up to 1 order of magnitude when compared to the most practical in-memory SE schemes. Finally, our construction can be tuned to achieve trade-offs between space, read efficiency, locality, parallelism and communication overhead.",
"title": ""
},
{
"docid": "64f15815e4c1c94c3dfd448dec865b85",
"text": "Modern software systems are typically large and complex, making comprehension of these systems extremely difficult. Experienced programmers comprehend code by seamlessly processing synonyms and other word relations. Thus, we believe that automated comprehension and software tools can be significantly improved by leveraging word relations in software. In this paper, we perform a comparative study of six state of the art, English-based semantic similarity techniques and evaluate their effectiveness on words from the comments and identifiers in software. Our results suggest that applying English-based semantic similarity techniques to software without any customization could be detrimental to the performance of the client software tools. We propose strategies to customize the existing semantic similarity techniques to software, and describe how various program comprehension tools can benefit from word relation information.",
"title": ""
},
{
"docid": "48ba3cad9e20162b6dcbb28ead47d997",
"text": "This paper compares the accuracy of several variations of the B LEU algorithm when applied to automatically evaluating student essays. The different configurations include closed-class word removal, stemming, two baseline wordsense disambiguation procedures, and translating the texts into a simple semantic representation. We also prove empirically that the accuracy is kept when the student answers are translated automatically. Although none of the representations clearly outperform the others, some conclusions are drawn from the results.",
"title": ""
},
{
"docid": "7c36d7f2a9604470e0e97bd2425bbf0c",
"text": "Gamification, the use of game mechanics in non-gaming applications, has been applied to various systems to encourage desired user behaviors. In this paper, we examine patterns of user activity in an enterprise social network service after the removal of a points-based incentive system. Our results reveal that the removal of the incentive scheme did reduce overall participation via contribution within the SNS. We also describe the strategies by point leaders and observe that users geographically distant from headquarters tended to comment on profiles outside of their home country. Finally, we describe the implications of the removal of extrinsic rewards, such as points and badges, on social software systems, particularly those deployed within an enterprise.",
"title": ""
},
{
"docid": "8fe823702191b4a56defaceee7d19db6",
"text": "We propose a method of stacking multiple long short-term memory (LSTM) layers for modeling sentences. In contrast to the conventional stacked LSTMs where only hidden states are fed as input to the next layer, our architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. Thus the proposed stacked LSTM architecture modulates the amount of information to be delivered not only in horizontal recurrence but also in vertical connections, from which useful features extracted from lower layers are effectively conveyed to upper layers. We dub this architecture Cell-aware Stacked LSTM (CAS-LSTM) and show from experiments that our models achieve state-of-the-art results on benchmark datasets for natural language inference, paraphrase detection, and sentiment classification.",
"title": ""
},
{
"docid": "9688efb8845895d49029c07d397a336b",
"text": "Familial hypercholesterolaemia (FH) leads to elevated plasma levels of LDL-cholesterol and increased risk of premature atherosclerosis. Dietary treatment is recommended to all patients with FH in combination with lipid-lowering drug therapy. Little is known about how children with FH and their parents respond to dietary advice. The aim of the present study was to characterise the dietary habits in children with FH. A total of 112 children and young adults with FH and a non-FH group of children (n 36) were included. The children with FH had previously received dietary counselling. The FH subjects were grouped as: 12-14 years (FH (12-14)) and 18-28 years (FH (18-28)). Dietary data were collected by SmartDiet, a short self-instructing questionnaire on diet and lifestyle where the total score forms the basis for an overall assessment of the diet. Clinical and biochemical data were retrieved from medical records. The SmartDiet scores were significantly improved in the FH (12-14) subjects compared with the non-FH subjects (SmartDiet score of 31 v. 28, respectively). More FH (12-14) subjects compared with non-FH children consumed low-fat milk (64 v. 18 %, respectively), low-fat cheese (29 v. 3%, respectively), used margarine with highly unsaturated fat (74 v. 14 %, respectively). In all, 68 % of the FH (12-14) subjects and 55 % of the non-FH children had fish for dinner twice or more per week. The FH (18-28) subjects showed the same pattern in dietary choices as the FH (12-14) children. In contrast to the choices of low-fat dietary items, 50 % of the FH (12-14) subjects consumed sweet spreads or sweet drinks twice or more per week compared with only 21 % in the non-FH group. In conclusion, ordinary out-patient dietary counselling of children with FH seems to have a long-lasting effect, as the diet of children and young adults with FH consisted of more products that are favourable with regard to the fatty acid composition of the diet.",
"title": ""
},
{
"docid": "5ba72505e19ded19685f43559868bfdf",
"text": "In this paper, we present an optimally-modi#ed log-spectral amplitude (OM-LSA) speech estimator and a minima controlled recursive averaging (MCRA) noise estimation approach for robust speech enhancement. The spectral gain function, which minimizes the mean-square error of the log-spectra, is obtained as a weighted geometric mean of the hypothetical gains associated with the speech presence uncertainty. The noise estimate is given by averaging past spectral power values, using a smoothing parameter that is adjusted by the speech presence probability in subbands. We introduce two distinct speech presence probability functions, one for estimating the speech and one for controlling the adaptation of the noise spectrum. The former is based on the time–frequency distribution of the a priori signal-to-noise ratio. The latter is determined by the ratio between the local energy of the noisy signal and its minimum within a speci6ed time window. Objective and subjective evaluation under various environmental conditions con6rm the superiority of the OM-LSA and MCRA estimators. Excellent noise suppression is achieved, while retaining weak speech components and avoiding the musical residual noise phenomena. ? 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "7e047b7c0a0ded44106ce6b50726d092",
"text": "Skeleton-based action recognition task is entangled with complex spatio-temporal variations of skeleton joints, and remains challenging for Recurrent Neural Networks (RNNs). In this work, we propose a temporal-then-spatial recalibration scheme to alleviate such complex variations, resulting in an end-to-end Memory Attention Networks (MANs) which consist of a Temporal Attention Recalibration Module (TARM) and a Spatio-Temporal Convolution Module (STCM). Specifically, the TARM is deployed in a residual learning module that employs a novel attention learning network to recalibrate the temporal attention of frames in a skeleton sequence. The STCM treats the attention calibrated skeleton joint sequences as images and leverages the Convolution Neural Networks (CNNs) to further model the spatial and temporal information of skeleton data. These two modules (TARM and STCM) seamlessly form a single network architecture that can be trained in an end-to-end fashion. MANs significantly boost the performance of skeleton-based action recognition and achieve the best results on four challenging benchmark datasets: NTU RGB+D, HDM05, SYSU-3D and UT-Kinect.1",
"title": ""
},
{
"docid": "311d186966b7d697731e4c2450289418",
"text": "PURPOSE OF REVIEW\nThe goal of this paper is to review current literature on nutritional ketosis within the context of weight management and metabolic syndrome, namely, insulin resistance, lipid profile, cardiovascular disease risk, and development of non-alcoholic fatty liver disease. We provide background on the mechanism of ketogenesis and describe nutritional ketosis.\n\n\nRECENT FINDINGS\nNutritional ketosis has been found to improve metabolic and inflammatory markers, including lipids, HbA1c, high-sensitivity CRP, fasting insulin and glucose levels, and aid in weight management. We discuss these findings and elaborate on potential mechanisms of ketones for promoting weight loss, decreasing hunger, and increasing satiety. Humans have evolved with the capacity for metabolic flexibility and the ability to use ketones for fuel. During states of low dietary carbohydrate intake, insulin levels remain low and ketogenesis takes place. These conditions promote breakdown of excess fat stores, sparing of lean muscle, and improvement in insulin sensitivity.",
"title": ""
},
{
"docid": "6a40a7cf6690ac39d8b73048dad51e97",
"text": "Power-flow modeling of a unified power-flow controller (UPFC) increases the complexities of the computer program codes for a Newton-Raphson load-flow (NRLF) analysis. This is due to the fact that modifications of the existing codes are needed for computing power injections, and the elements of the Jacobian matrix to take into account the contributions of the series and shunt voltage sources of the UPFC. Additionally, new codes for computing the UPFC real-power injection terms as well as the associated Jacobian matrix need to be developed. To reduce this complexity of programming codes, in this paper, an indirect yet exact UPFC model is proposed. In the proposed model, an existing power system installed with UPFC is transformed into an augmented equivalent network without any UPFC. Due to the absence of any UPFC, the augmented network can easily be solved by reusing the existing NRLF computer codes to obtain the solution of the original network containing UPFC(s). As a result, substantial reduction in the complexities of the computer program codes takes place. Additionally, the proposed model can also account for various practical device limit constraints of the UPFC.",
"title": ""
},
{
"docid": "b92484f67bf2d3f71d51aee9fb7abc86",
"text": "This research addresses the kinds of matching elements that determine analogical relatedness and literal similarity. Despite theoretical agreement on the importance of relational match, the empirical evidence is neither systematic nor definitive. In 3 studies, participants performed online evaluations of relatedness of sentence pairs that varied in either the object or relational match. Results show a consistent focus on relational matches as the main determinant of analogical acceptance. In addition, analogy does not require strict overall identity of relational concepts. Semantically overlapping but nonsynonymous relations were commonly accepted, but required more processing time. Finally, performance in a similarity rating task partly paralleled analogical acceptance; however, relatively more weight was given to object matches. Implications for psychological theories of analogy and similarity are addressed.",
"title": ""
},
{
"docid": "4bc73a7e6a6975ba77349cac62a96c18",
"text": "BACKGROUND\nIn May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear.\n\n\nMETHODS\nThe study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Mean and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play.\n\n\nRESULTS\nOn average, the reviewed apps included 5 behavior change techniques (range 2-8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores were found.\n\n\nCONCLUSIONS\nThe present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions.",
"title": ""
},
{
"docid": "61304d369ea790d80b24259336d6974c",
"text": "After searching for the keywords “information privacy” in ABI/Informs focusing on scholarly articles, we obtained a listing of 340 papers. We first eliminated papers that were anonymous, table of contents, interviews with experts, or short opinion pieces. We also removed articles not related to our focus on information privacy research in IS literature. A total of 218 articles were removed as explained in Table A1.",
"title": ""
},
{
"docid": "3c7d25c85b837a3337c93ca2e1e54af4",
"text": "BACKGROUND\nThe treatment of acne scars with fractional CO(2) lasers is gaining increasing impact, but has so far not been compared side-by-side to untreated control skin.\n\n\nOBJECTIVE\nIn a randomized controlled study to examine efficacy and adverse effects of fractional CO(2) laser resurfacing for atrophic acne scars compared to no treatment.\n\n\nMETHODS\nPatients (n = 13) with atrophic acne scars in two intra-individual areas of similar sizes and appearances were randomized to (i) three monthly fractional CO(2) laser treatments (MedArt 610; 12-14 W, 48-56 mJ/pulse, 13% density) and (ii) no treatment. Blinded on-site evaluations were performed by three physicians on 10-point scales. Endpoints were change in scar texture and atrophy, adverse effects, and patient satisfaction.\n\n\nRESULTS\nPreoperatively, acne scars appeared with moderate to severe uneven texture (6.15 ± 1.23) and atrophy (5.72 ± 1.45) in both interventional and non-interventional control sites, P = 1. Postoperatively, lower scores of scar texture and atrophy were obtained at 1 month (scar texture 4.31 ± 1.33, P < 0.0001; atrophy 4.08 ± 1.38, P < 0.0001), at 3 months (scar texture 4.26 ± 1.97, P < 0.0001; atrophy 3.97 ± 2.08, P < 0.0001), and at 6 months (scar texture 3.89 ± 1.7, P < 0.0001; atrophy 3.56 ± 1.76, P < 0.0001). Patients were satisfied with treatments and evaluated scar texture to be mild or moderately improved. Adverse effects were minor.\n\n\nCONCLUSIONS\nIn this single-blinded randomized controlled trial we demonstrated that moderate to severe atrophic acne scars can be safely improved by ablative fractional CO(2) laser resurfacing. The use of higher energy levels might have improved the results and possibly also induced significant adverse effects.",
"title": ""
}
] |
scidocsrr
|
0e1f09486660910ae6ea5eade46134d2
|
SurfelWarp: Efficient Non-Volumetric Single View Dynamic Reconstruction
|
[
{
"docid": "ff61cb8a69d5c8dc35931e41bc4c50fa",
"text": "This paper introduces DART, a general framework for tracking articulated objects composed of rigid bodies connected through a kinematic tree. DART covers a broad set of objects encountered in indoor environments, including furniture and tools, and human and robot bodies, hands and manipulators. To achieve efficient and robust tracking, DART extends the signed distance function representation to articulated objects and takes full advantage of highly parallel GPU algorithms for data association and pose optimization. We demonstrate the capabilities of DART on different types of objects that have each required dedicated tracking techniques in the past.",
"title": ""
},
{
"docid": "2b2398bf61847843e18d1f9150a1bccc",
"text": "We present a robust method for capturing articulated hand motions in realtime using a single depth camera. Our system is based on a realtime registration process that accurately reconstructs hand poses by fitting a 3D articulated hand model to depth images. We register the hand model using depth, silhouette, and temporal information. To effectively map low-quality depth maps to realistic hand poses, we regularize the registration with kinematic and temporal priors, as well as a data-driven prior built from a database of realistic hand poses. We present a principled way of integrating such priors into our registration optimization to enable robust tracking without severely restricting the freedom of motion. A core technical contribution is a new method for computing tracking correspondences that directly models occlusions typical of single-camera setups. To ensure reproducibility of our results and facilitate future research, we fully disclose the source code of our implementation.",
"title": ""
},
{
"docid": "80bdfdc77ca41d7a02bc0af82cae6f65",
"text": "We introduce a geometry-driven approach for real-time 3D reconstruction of deforming surfaces from a single RGB-D stream without any templates or shape priors. To this end, we tackle the problem of non-rigid registration by level set evolution without explicit correspondence search. Given a pair of signed distance fields (SDFs) representing the shapes of interest, we estimate a dense deformation field that aligns them. It is defined as a displacement vector field of the same resolution as the SDFs and is determined iteratively via variational minimization. To ensure it generates plausible shapes, we propose a novel regularizer that imposes local rigidity by requiring the deformation to be a smooth and approximately Killing vector field, i.e. generating nearly isometric motions. Moreover, we enforce that the level set property of unity gradient magnitude is preserved over iterations. As a result, KillingFusion reliably reconstructs objects that are undergoing topological changes and fast inter-frame motion. In addition to incrementally building a model from scratch, our system can also deform complete surfaces. We demonstrate these capabilities on several public datasets and introduce our own sequences that permit both qualitative and quantitative comparison to related approaches.",
"title": ""
}
] |
[
{
"docid": "a2cbc2b95b1988dae97d501c141e161d",
"text": "We present a fast and simple method to compute bundled layouts of general graphs. For this, we first transform a given graph drawing into a density map using kernel density estimation. Next, we apply an image sharpening technique which progressively merges local height maxima by moving the convolved graph edges into the height gradient flow. Our technique can be easily and efficiently implemented using standard graphics acceleration techniques and produces graph bundlings of similar appearance and quality to state-of-the-art methods at a fraction of the cost. Additionally, we show how to create bundled layouts constrained by obstacles and use shading to convey information on the bundling quality. We demonstrate our method on several large graphs.",
"title": ""
},
{
"docid": "65d9ed5f0f3d8789ebce6c3fbad31760",
"text": "The paper briefly outlines DLR's experience with real space robot missions (ROTEX and ETS VII). It then discusses forthcoming projects, e.g., free-flying systems in low or geostationary orbit and robot systems around the space station ISS, where the telerobotic system MARCO might represent a common baseline. Finally it describes our efforts in developing a new generation of \"mechatronic\" ultra-light weight arms with multifingered hands. The third arm generation is operable now (approaching present-day technical limits). In a similar way DLR's four-fingered hand II was a big step towards higher reliability and yet better performance. Artificial robonauts for space are a central goal now for the Europeans as well as for NASA, and the first verification tests of DLR's joint components are supposed to fly already end of 93 on the space station.",
"title": ""
},
{
"docid": "4507ae69ed021941ff7b0e39d8d50d22",
"text": "In the last few years a new research area, called stream reasoning, emerged to bridge the gap between reasoning and stream processing. While current reasoning approaches are designed to work on mainly static data, the Web is, on the other hand, extremely dynamic: information is frequently changed and updated, and new data is continuously generated from a huge number of sources, often at high rate. In other words, fresh information is constantly made available in the form of streams of new data and updates. Despite some promising investigations in the area, stream reasoning is still in its infancy, both from the perspective of models and theories development, and from the perspective of systems and tools design and implementation. The aim of this paper is threefold: (i) we identify the requirements coming from different application scenarios, and we isolate the problems they pose; (ii) we survey existing approaches and proposals in the area of stream reasoning, highlighting their strengths and limitations; (iii) we draw a research agenda to guide the future research and development of stream reasoning. In doing so, we also analyze related research fields to extract algorithms, models, techniques, and solutions that could be useful in the area of stream reasoning.",
"title": ""
},
{
"docid": "8eb4ccc0a28dc7de2c1adb1f508ae5ca",
"text": "Compressed sensing (CS) is an emerging signal processing paradigm that enables the sub-Nyquist processing of sparse signals; i.e., signals with significant redundancy. Electrocardiogram (ECG) signals show significant time-domain sparsity that can be exploited using CS techniques to reduce energy consumption in an adaptive data acquisition scheme. A measurement matrix of random values is central to CS computation. Signal-to-quantization noise ratio (SQNR) results with ECG signals show that 5- and 6-bit Gaussian random coefficients are sufficient for compression factors up to 6X and from 8X-16X, respectively, whereas 6-bit uniform random coefficients are needed for 2X-16X compression ratios.",
"title": ""
},
{
"docid": "5487dd1976a164447c821303b53ebdf8",
"text": "Rapid and pervasive digitization of innovation processes and outcomes has upended extant theories on innovation management by calling into question fundamental assumptions about the definitional boundaries for innovation, agency for innovation, and the relationship between innovation processes and outcomes. There is a critical need for novel theorizing on digital innovation management that does not rely on such assumptions and draws on the rich and rapidly emerging research on digital technologies. We offer suggestions for such theorizing in the form of four new theorizing logics, or elements, that are likely to be valuable in constructing more accurate explanations of innovation processes and outcomes in an increasingly digital world. These logics can open new avenues for researchers to contribute to this important area. Our suggestions in this paper, coupled with the six research notes included in the special issue on digital innovation management, seek to offer a broader foundation for reinventing innovation management research in a digital world.",
"title": ""
},
{
"docid": "e5691e6bb32f06a34fab7b692539d933",
"text": "Öz Supplier evaluation and selection includes both qualitative and quantitative criteria and it is considered as a complex Multi Criteria Decision Making (MCDM) problem. Uncertainty and impreciseness of data is an integral part of decision making process for a real life application. The fuzzy set theory allows making decisions under uncertain environment. In this paper, a trapezoidal type 2 fuzzy multicriteria decision making methods based on TOPSIS is proposed to select convenient supplier under vague information. The proposed method is applied to the supplier selection process of a textile firm in Turkey. In addition, the same problem is solved with type 1 fuzzy TOPSIS to confirm the findings of type 2 fuzzy TOPSIS. A sensitivity analysis is conducted to observe how the decision changes under different scenarios. Results show that the presented type 2 fuzzy TOPSIS method is more appropriate and effective to handle the supplier selection in uncertain environment. Tedarikçi değerlendirme ve seçimi, nitel ve nicel çok sayıda faktörün değerlendirilmesini gerektiren karmaşık birçok kriterli karar verme problemi olarak görülmektedir. Gerçek hayatta, belirsizlikler ve muğlaklık bir karar verme sürecinin ayrılmaz bir parçası olarak karşımıza çıkmaktadır. Bulanık küme teorisi, belirsizlik durumunda karar vermemize imkân sağlayan metotlardan bir tanesidir. Bu çalışmada, ikizkenar yamuk tip 2 bulanık TOPSIS yöntemi kısaca tanıtılmıştır. Tanıtılan yöntem, Türkiye’de bir tekstil firmasının tedarikçi seçimi problemine uygulanmıştır. Ayrıca, tip 2 bulanık TOPSIS yönteminin sonuçlarını desteklemek için aynı problem tip 1 bulanık TOPSIS ile de çözülmüştür. Duyarlılık analizi yapılarak önerilen çözümler farklı senaryolar altında incelenmiştir. Duyarlılık analizi sonuçlarına göre tip 2 bulanık TOPSIS daha efektif ve uygun çözümler üretmektedir.",
"title": ""
},
{
"docid": "338d3b05db192186bb6caf6f36904dd0",
"text": "The threat of malicious insiders to organizations is persistent and increasing. We examine 15 real cases of insider threat sabotage of IT systems to identify several key points in the attack time-line, such as when the insider clearly became disgruntled, began attack preparations, and carried out the attack. We also determine when the attack stopped, when it was detected, and when action was taken on the insider. We found that 7 of the insiders we studied clearly became disgruntled more than 28 days prior to attack, but 9 did not carry out malicious acts until less than a day prior to attack. Of the 15 attacks, 8 ended within a day, 12 were detected within a week, and in 10 cases action was taken on the insider within a month. This exercise is a proof-of-concept for future work on larger data sets, and in this paper we detail our study methods and results, discuss challenges we faced, and identify potential new research directions.",
"title": ""
},
{
"docid": "d253029f47fe3afb6465a71e966fdbd5",
"text": "With the development of the social economy, more and more appliances have been presented in a house. It comes out a problem that how to manage and control these increasing various appliances efficiently and conveniently so as to achieve more comfortable, security and healthy space at home. In this paper, a smart control system base on the technologies of internet of things has been proposed to solve the above problem. The smart home control system uses a smart central controller to set up a radio frequency 433 MHz wireless sensor and actuator network (WSAN). A series of control modules, such as switch modules, radio frequency control modules, have been developed in the WSAN to control directly all kinds of home appliances. Application servers, client computers, tablets or smart phones can communicate with the smart central controller through a wireless router via a Wi-Fi interface. Since it has WSAN as the lower control layer, a appliance can be added into or withdrawn from the control system very easily. The smart control system embraces the functions of appliance monitor, control and management, home security, energy statistics and analysis.",
"title": ""
},
{
"docid": "e6300989e5925d38d09446b3e43092e5",
"text": "Cloud computing provides resources as services in pay-as-you-go mode to customers by using virtualization technology. As virtual machine (VM) is hosted on physical server, great energy is consumed by maintaining the servers in data center. More physical servers means more energy consumption and more money cost. Therefore, the VM placement (VMP) problem is significant in cloud computing. This paper proposes an approach based on ant colony optimization (ACO) to solve the VMP problem, named as ACO-VMP, so as to effectively use the physical resources and to reduce the number of running physical servers. The number of physical servers is the same as the number of the VMs at the beginning. Then the ACO approach tries to reduce the physical server one by one. We evaluate the performance of the proposed ACO-VMP approach in solving VMP with the number of VMs being up to 600. Experimental results compared with the ones obtained by the first-fit decreasing (FFD) algorithm show that ACO-VMP can solve VMP more efficiently to reduce the number of physical servers significantly, especially when the number of VMs is large.",
"title": ""
},
{
"docid": "2146687ac4ac019be6cfc828208187a9",
"text": "Researchers and program developers in medical education presently face the challenge of implementing and evaluating curricula that teach medical students and house staff how to effectively and respectfully deliver health care to the increasingly diverse populations of the United States. Inherent in this challenge is clearly defining educational and training outcomes consistent with this imperative. The traditional notion of competence in clinical training as a detached mastery of a theoretically finite body of knowledge may not be appropriate for this area of physician education. Cultural humility is proposed as a more suitable goal in multicultural medical education. Cultural humility incorporates a lifelong commitment to self-evaluation and self-critique, to redressing the power imbalances in the patient-physician dynamic, and to developing mutually beneficial and nonpaternalistic clinical and advocacy partnerships with communities on behalf of individuals and defined populations.",
"title": ""
},
{
"docid": "8b512d57c7c96c82855927e2f222ec58",
"text": "The current Internet of Things (IoT) has made it very convenient to obtain information about a product from a single data node. However, in many industrial applications, information about a single product can be distributed in multiple different data nodes, and aggregating the information from these nodes has become a common task. In this paper, we provide a distributed service-oriented architecture for this task. In this architecture, each manufacturer provides service for their own products, and data nodes keep the information collected by themselves. Semantic technologies are adopted to handle problems of heterogeneity and serve as the foundation to support different applications. Finally, as an example, we illustrate the use of this architecture to solve the problem of product tracing. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5aed256aaca0a1f2fe8a918e6ffb62bd",
"text": "Zero-shot learning (ZSL) enables solving a task without the need to see its examples. In this paper, we propose two ZSL frameworks that learn to synthesize parameters for novel unseen classes. First, we propose to cast the problem of ZSL as learning manifold embeddings from graphs composed of object classes, leading to a flexible approach that synthesizes “classifiers” for the unseen classes. Then, we define an auxiliary task of synthesizing “exemplars” for the unseen classes to be used as an automatic denoising mechanism for any existing ZSL approaches or as an effective ZSL model by itself. On five visual recognition benchmark datasets, we demonstrate the superior performances of our proposed frameworks in various scenarios of both conventional and generalized ZSL. Finally, we provide valuable insights through a series of empirical analyses, among which are a comparison of semantic representations on the full ImageNet benchmark as well as a comparison of metrics used in generalized ZSL. Our code and data are publicly available at https: //github.com/pujols/Zero-shot-learning-journal. Soravit Changpinyo Google AI E-mail: schangpi@google.com Wei-Lun Chao Cornell University, Department of Computer Science E-mail: weilunchao760414@gmail.com Boqing Gong Tencent AI Lab E-mail: boqinggo@outlook.com Fei Sha University of Southern California, Department of Computer Science E-mail: feisha@usc.edu",
"title": ""
},
{
"docid": "39568ad13dd4ed58180b42e323996574",
"text": "Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.",
"title": ""
},
{
"docid": "ee3815cd041ff70bcefd7b3c7accbfa0",
"text": "Prior research shows that database system performance is dominated by off-chip data stalls, resulting in a concerted effort to bring data into on-chip caches. At the same time, high levels of integration have enabled the advent of chip multiprocessors and increasingly large (and slow) on-chip caches. These two trends pose the imminent technical and research challenge of adapting high-performance data management software to a shifting hardware landscape. In this paper we characterize the performance of a commercial database server running on emerging chip multiprocessor technologies. We find that the major bottleneck of current software is data cache stalls, with L2 hit stalls rising from oblivion to become the dominant execution time component in some cases. We analyze the source of this shift and derive a list of features for future database designs to attain maximum",
"title": ""
},
{
"docid": "9665328d7993e2b1298a2c849c987979",
"text": "The case study presented here, deals with the subject of second language acquisition making at the same time an effort to show as much as possible how L1 was acquired and the ways L1 affected L2, through the process of examining a Greek girl who has been exposed to the English language from the age of eight. Furthermore, I had the chance to analyze the method used by the frontistirio teachers and in what ways this method helps or negatively influences children regarding their performance in the four basic skills. We will evaluate the evidence acquired by the girl by studying briefly the basic theories provided by important figures in the field of L2. Finally, I will also include my personal suggestions and the improvement of the child’s abilities and I will state my opinion clearly.",
"title": ""
},
{
"docid": "c115af8ea687edb9769e7cef48a938ac",
"text": "High resolution imaging radars have come a long way since the early 90's, starting with an FAA Synthetic Vision System program at 35/94 GHz. These systems were heavy and bulky, carried a price tag of about $500K, and were only suitable for larger aircrafts at very small quantity production. Size, weight, and power constraints make 94 GHz still a preferred choice for many situational awareness applications ranging from landing in poor visibility due to fog or brown-out, to cable warning & obstacle avoidance and sense and avoid for unmanned aerial systems. Using COTS components and highly integrated MMIC modules, a complete radar breadboard has been demonstrated in 9 months in one line replacement unit with a total weight of 20 lbs. The new generation of this 94 GHz FMCW imaging sensor will be on the order of 15 lbs or less including the entire radar signal processor. The size and weight achievements of this sensor open up the potential market for rotorcrafts and general aviation.",
"title": ""
},
{
"docid": "d0f14357e0d675c99d4eaa1150b9c55e",
"text": "Purpose – The purpose of this research is to investigate if, and in that case, how and what the egovernment field can learn from user participation concepts and theories in general IS research. We aim to contribute with further understanding of the importance of citizen participation and involvement within the e-government research body of knowledge and when developing public eservices in practice. Design/Methodology/Approach – The analysis in the article is made from a comparative, qualitative case study of two e-government projects. Three analysis themes are induced from the literature review; practice of participation, incentives for participation, and organization of participation. These themes are guiding the comparative analysis of our data with a concurrent openness to interpretations from the field. Findings – The main results in this article are that the e-government field can get inspiration and learn from methods and approaches in traditional IS projects concerning user participation, but in egovernment we also need methods to handle the challenges that arise when designing public e-services for large, heterogeneous user groups. Citizen engagement cannot be seen as a separate challenge in egovernment, but rather as an integrated part of the process of organizing, managing, and performing egovernment projects. Our analysis themes of participation generated from literature; practice, incentives and organization can be used in order to highlight, analyze, and discuss main issues regarding the challenges of citizen participation within e-government. This is an important implication based on our study that contributes both to theory on and practice of e-government. Practical implications – Lessons to learn from this study concern that many e-government projects have a public e-service as one outcome and an internal e-administration system as another outcome. A dominating internal, agency perspective in such projects might imply that citizens as the user group of the e-service are only seen as passive receivers of the outcome – not as active participants in the development. By applying the analysis themes, proposed in this article, citizens as active participants can be thoroughly discussed when initiating (or evaluating) an e-government project. Originality/value – This article addresses challenges regarding citizen participation in e-government development projects. User participation is well-researched within the IS discipline, but the egovernment setting implies new challenges, that are not explored enough.",
"title": ""
},
{
"docid": "50f09f5b2e579e878f041f136bafe07e",
"text": "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.",
"title": ""
},
{
"docid": "41df9902a1b88da0943ae8641541acc0",
"text": "The computational and robotic synthesis of language evolution is emerging as a new exciting field of research. The objective is to come up with precise operational models of how communities of agents, equipped with a cognitive apparatus, a sensori-motor system, and a body, can arrive at shared grounded communication systems. Such systems may have similar characteristics to animal communication or human language. Apart from its technological interest in building novel applications in the domain of human-robot or robot-robot interaction, this research is of interest to the many disciplines concerned with the origins and evolution of language and communication.",
"title": ""
},
{
"docid": "5c2f115e0159d15a87904e52879c1abf",
"text": "Current approaches for visual--inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at high rate, hence, leading to the fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated inertial measurement unit model can be seamlessly integrated into a visual--inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modeling effort leads to an accurate state estimation in real time, outperforming state-of-the-art approaches.",
"title": ""
}
] |
scidocsrr
|
f602f434fdfce8cebb3125e4700fd51a
|
Sub-nanosecond pulse generator for through-the-wall radar application
|
[
{
"docid": "5a85c72c5b9898b010f047ee99dba133",
"text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures. The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.",
"title": ""
},
{
"docid": "9fe570797e2d3e39ec7e87b45f546721",
"text": "A system for the generation of short electrical pulses based on the minority carrier charge storage and the step recovery effect of bipolar transistors is presented. Electrical pulses of about 90 ps up to 800 ps duration are generated with a maximum amplitude of approximately 7 V at 50. The bipolar transistor is driven into saturation and the base-collector and base-emitter junctions become forward biased. The resulting fast switch-off edge of the transistor’s output signal is the basis for the pulse generation. The fast switching of the transistor occurs as a result of the minority carriers that have been injected and stored across the base-collector junction under forward bias conditions. If the saturated transistor is suddenly reverse biased the pn-junction will appear as a low impedance until the stored charge is depleted. Then the impedance will suddenly increase to its normal high value and the flow of current through the junction will turn to zero, abruptly. A differentiation of the output signal of the transistor results in two short pulses with opposite polarities. The differentiating circuit is implemented by a transmission line network, which mainly acts as a high pass filter. Both the transistor technology (pnp or npn) and the phase of the transfer function of the differentating circuit influence the polarity of the output pulses. The pulse duration depends on the transistor parameters as well as on the transfer function of the pulse shaping network. This way of generating short electrical pulses is a new alternative for conventional comb generators based on steprecovery diodes (SRD). Due to the three-terminal structure of the transistor the isolation problem between the input and the output signal of the transistor network is drastically simplified. Furthermore the transistor is an active element in contrast to a SRD, so that its current gain can be used to minimize the power of the driving signal. Correspondence to: M. Gerding (michael.gerding@ruhr-uni-bochum.de)",
"title": ""
}
] |
[
{
"docid": "5da570fe177e37fe8240895098aa7c03",
"text": "Introduction. Root fractures, defined as fractures involving dentine, cementum, and pulpal and supportive tissues, constitute only 0.5-7% of all dental injuries. Horizontal root fractures are commonly observed in the maxillary anterior region and 75% of these fractures occur in the maxillary central incisors. Methods. A 14-year-old female patient was referred to our clinic three days after a traffic accident. In radiographic examination, the right maxillary central incisor was fractured horizontally in apical thirds. Initially, following local infiltrative anesthetics, the coronal fragment was repositioned and this was radiographically confirmed. Then the stabilization splint was applied and remained for three months. After three weeks, according to the results of the vitality tests, the right and left central incisors were nonvital. For the right central incisor, both the coronal and apical fragments were involved in the endodontic preparation. Results. For the right central tooth, both the coronal and apical root fragments were endodontically treated and obturated at a single visit with white mineral trioxide aggregate whilst the fragments were stabilized internally by insertion of a size 40 Hedstrom stainless-steel endodontic file into the canal. Conclusion. Four-year follow-up examination revealed satisfactory clinical and radiographic findings with hard tissue repair of the fracture line.",
"title": ""
},
{
"docid": "9b8ba583adc6df6e02573620587be68a",
"text": "BACKGROUND\nTraditional one-session exposure therapy (OST) in which a patient is gradually exposed to feared stimuli for up to 3 h in a one-session format has been found effective for the treatment of specific phobias. However, many individuals with specific phobia are reluctant to seek help, and access to care is lacking due to logistic challenges of accessing, collecting, storing, and/or maintaining stimuli. Virtual reality (VR) exposure therapy may improve upon existing techniques by facilitating access, decreasing cost, and increasing acceptability and effectiveness. The aim of this study is to compare traditional OST with in vivo spiders and a human therapist with a newly developed single-session gamified VR exposure therapy application with modern VR hardware, virtual spiders, and a virtual therapist.\n\n\nMETHODS/DESIGN\nParticipants with specific phobia to spiders (N = 100) will be recruited from the general public, screened, and randomized to either VR exposure therapy (n = 50) or traditional OST (n = 50). A behavioral approach test using in vivo spiders will serve as the primary outcome measure. Secondary outcome measures will include spider phobia questionnaires and self-reported anxiety, depression, and quality of life. Outcomes will be assessed using a non-inferiority design at baseline and at 1, 12, and 52 weeks after treatment.\n\n\nDISCUSSION\nVR exposure therapy has previously been evaluated as a treatment for specific phobias, but there has been a lack of high-quality randomized controlled trials. A new generation of modern, consumer-ready VR devices is being released that are advancing existing technology and have the potential to improve clinical availability and treatment effectiveness. The VR medium is also particularly suitable for taking advantage of recent phobia treatment research emphasizing engagement and new learning, as opposed to physiological habituation. This study compares a market-ready, gamified VR spider phobia exposure application, delivered using consumer VR hardware, with the current gold standard treatment. Implications are discussed.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov identifier NCT02533310. Registered on 25 August 2015.",
"title": ""
},
{
"docid": "e754c7c7821703ad298d591a3f7a3105",
"text": "The rapid growth in the population density in urban cities and the advancement in technology demands real-time provision of services and infrastructure. Citizens, especially travelers, want to be reached within time to the destination. Consequently, they require to be facilitated with smart and real-time traffic information depending on the current traffic scenario. Therefore, in this paper, we proposed a graph-oriented mechanism to achieve the smart transportation system in the city. We proposed to deploy road sensors to get the overall traffic information as well as the vehicular network to obtain location and speed information of the individual vehicle. These Internet of Things (IoT) based networks generate enormous volume of data, termed as Big Data, depicting the traffic information of the city. To process incoming Big Data from IoT devices, then generating big graphs from the data, and processing them, we proposed an efficient architecture that uses the Giraph tool with parallel processing servers to achieve real-time efficiency. Later, various graph algorithms are used to achieve smart transportation by making real-time intelligent decisions to facilitate the citizens as well as the metropolitan authorities. Vehicular Datasets from various reliable resources representing the real city traffic are used for analysis and evaluation purpose. The system is implemented using Giraph and Spark tool at the top of the Hadoop parallel nodes to generate and process graphs with near real-time. Moreover, the system is evaluated in terms of efficiency by considering the system throughput and processing time. The results show that the proposed system is more scalable and efficient.",
"title": ""
},
{
"docid": "467e274e1fe01041e46ff2b5ff758fac",
"text": "The development and tuning of denoising algorithms is usually based on readily processed test images that are artificially degraded with additive white Gaussian noise (AWGN). While AWGN allows us to easily generate test data in a repeatable manner, it does not reflect the noise characteristics in a real digital camera. Realistic camera noise is signal-dependent and spatially correlated due to the demosaicking step required to obtain full-color images. Hence, the noise characteristic is fundamentally different from AWGN. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or sub optimal choices in research on denoising algorithms. In this paper, we therefore propose an approach to evaluate denoising algorithms with respect to realistic camera noise: we describe a new camera noise model that includes the full processing chain of a single sensor camera. We determine the visual quality of noisy and denoised test sequences using a subjective test with 18 participants. We show that the noise characteristics have a significant effect on visual quality. Quality metrics, which are required to compare denoising results, are applied, and we evaluate the performance of 10 full-reference metrics and one no-reference metric with our realistic test data. We conclude that a more realistic noise model should be used in future research to improve the quality estimation of digital images and videos and to improve the research on denoising algorithms.",
"title": ""
},
{
"docid": "ffc63d22271d903381a28b9d9c8fe3e8",
"text": "Margaret Atwood is a giant of modern literature who refuses to rest on her laurels. She has anticipated, satirized, and even changed the popular pre-conceptions of our time, and is the rare writer whose work is adored by the public, acclaimed by the critics, and read on university campuses. On stage, Atwood is both serious minded and wickedly funny. A winner of many international literary awards, including the prestigious Booker Prize, Margaret Atwood is the author of more than thirty volumes of poetry, children's literature, fiction, and non-fiction. She is perhaps best known for her novels, which include The Edible Woman, The Handmaid's Tale, The Robber Bride, Alias Grace, The Blind Assassin, Oryx and Crake, and The Year of the Flood. Her non-fiction book Payback: Debt and the Shadow Side of Wealth, part of the Massey Lecture series, was recently made into a documentary. Her new book, Madaddam (the third novel in the Oryx and Crake trilogy), has received rave reviews: \"An extraordinary achievement\" (The Independent); \"A fitting and joyous conclusion\" (The New York Times). \n Atwood's work has been published in more than forty languages, including Farsi, Japanese, Turkish, Finnish, Korean, Icelandic and Estonian. In 2004, she co-invented the LongPen, a remote signing device that allows someone to write in ink anywhere in the world via tablet PC and the internet. She is also a popular personality on Twitter, with over 300,000 followers.\n Atwood was born in 1939 in Ottawa and grew up in northern Ontario, Quebec, and Toronto. She received her undergraduate degree from Victoria College at the University of Toronto and her master's degree from Radcliffe College.",
"title": ""
},
{
"docid": "c13c97749874fd32972f6e8b75fd20d1",
"text": "Text categorization is the task of automatically assigning unlabeled text documents to some predefined category labels by means of an induction algorithm. Since the data in text categorization are high-dimensional, feature selection is broadly used in text categorization systems for reducing the dimensionality. In the literature, there are some widely known metrics such as information gain and document frequency thresholding. Recently, a generative graphical model called latent dirichlet allocation (LDA) that can be used to model and discover the underlying topic structures of textual data, was proposed. In this paper, we use the hidden topic analysis of LDA for feature selection and compare it with the classical feature selection metrics in text categorization. For the experiments, we use SVM as the classifier and tf∗idf weighting for weighting the terms. We observed that almost in all metrics, information gain performs best at all keyword numbers while the LDA-based metrics perform similar to chi-square and document frequency thresholding.",
"title": ""
},
{
"docid": "2cff047c4b2577c99aa66df211b0beda",
"text": "Image denoising is an important pre-processing step in medical image analysis. Different algorithms have been proposed in past three decades with varying denoising performances. More recently, having outperformed all conventional methods, deep learning based models have shown a great promise. These methods are however limited for requirement of large training sample size and high computational costs. In this paper we show that using small sample size, denoising autoencoders constructed using convolutional layers can be used for efficient denoising of medical images. Heterogeneous images can be combined to boost sample size for increased denoising performance. Simplest of networks can reconstruct images with corruption levels so high that noise and signal are not differentiable to human eye.",
"title": ""
},
{
"docid": "9caaf7c3c2e01e8625fc566db4913df1",
"text": "It is established that driver distraction is the result of sharing cognitive resources between the primary task (driving) and any other secondary task. In the case of holding conversations, a human passenger who is aware of the driving conditions can choose to interrupt his speech in situations potentially requiring more attention from the driver, but in-car information systems typically do not exhibit such sensitivity. We have designed and tested such a system in a driving simulation environment. Unlike other systems, our system delivers information via speech (calendar entries with scheduled meetings) but is able to react to signals from the environment to interrupt when the driver needs to be fully attentive to the driving task and subsequently resume its delivery. Distraction is measured by a secondary short-term memory task. In both tasks, drivers perform significantly worse when the system does not adapt its speech, while they perform equally well to control conditions (no concurrent task) when the system intelligently interrupts and resumes.",
"title": ""
},
{
"docid": "dcd8b232bcfbd6531cca91f327ec014b",
"text": "Control-Flow Hijacking attacks are the dominant attack vector to compromise systems. Control-Flow Integrity (CFI) solutions mitigate these attacks on the forward edge, i.e., indirect calls through function pointers and virtual calls. Protecting the backward edge is left to stack canaries, which are easily bypassed through information leaks. Shadow Stacks are a fully precise mechanism for protecting backwards edges, and should be deployed with CFI mitigations. We present a comprehensive analysis of all possible shadow stack mechanisms along three axes: performance, compatibility, and security. Based on our study, we propose a new shadow stack design called Shadesmar that leverages a dedicated register, resulting in low performance overhead, and minimal memory overhead. We present case studies of Shadesmar on Phoronix and Apache to demonstrate the feasibility of dedicating a general purpose register to a security monitor on modern architectures, and Shadesmar’s deployability. Isolating the shadow stack is critical for security, and requires in process isolation of a segment of the virtual address space. We achieve this isolation by repurposing two new Intel x86 extensions for memory protection (MPX), and page table control (MPK). Building on our isolation efforts with MPX and MPK, we present the design requirements for a dedicated hardware mechanism to support intra-process memory isolation, and show how such a mechanism can empower the next wave of highly precise software security mitigations that rely on partially isolated information in a process.",
"title": ""
},
{
"docid": "0c45c5ee2433578fbc29d29820042abe",
"text": "When Andrew John Wiles was 10 years old, he read Eric Temple Bell’s The Last Problem and was so impressed by it that he decided that he would be the first person to prove Fermat’s Last Theorem. This theorem states that there are no nonzero integers a, b, c, n with n > 2 such that an + bn = cn. This object of this paper is to prove that all semistable elliptic curves over the set of rational numbers are modular. Fermat’s Last Theorem follows as a corollary by virtue of work by Frey, Serre and Ribet.",
"title": ""
},
{
"docid": "4acbb4e7de6daec331c8ff8672fa7447",
"text": "This paper describes a machine vision system with back lighting illumination and friendly man-machine interface. Subtraction is used to segment target holes quickly and accurately. The oval obtained after tracing boundary is processed by Generalized Hough Transform to acquire the target's center. Marked-hole's area, perimeter and moment invariants are extracted as cluster features. The auto-scoring software, programmed by Visual C++, has successfully solved the recognition of off-target and overlapped holes through alarming surveillance and bullet tacking programs. The experimental results show that, when the target is distorted obviously, the system can recognize the overlapped holes on real time and also clusters random shape holes on the target correctly. The high accuracy, fast computing speed, easy debugging and low cost make the system can be widely used.",
"title": ""
},
{
"docid": "87aee7d33e78a427edb29126d1ca50c6",
"text": "We present the group fused Lasso for detection of multiple ch ange-points shared by a set of co-occurring one-dimensional signals. Change-points are det cted by approximating the original signals with a constraint on the multidimensional total var iation, leading to piecewise-constant approximations. Fast algorithms are proposed to solve the res ulting optimization problems, either exactly or approximately. Conditions are given for consist ency of both algorithms as the number of signals increases, and empirical evidence is provided to su pport the results on simulated and array comparative genomic hybridization data.",
"title": ""
},
{
"docid": "e587b5954c957f268d21878ede3359f8",
"text": "ing audit logs",
"title": ""
},
{
"docid": "adb4e135440725c1e96c15d95818e359",
"text": "We developed a model of cognitive-behavioralcase formulation and tested several hypotheses abouttherapists' ability to use it to obtaincognitive-behavioral formulations of cases of depressedpatients. We tested whether clinicians, using measures wedeveloped, could correctly identify patients' overtproblems and agree on assessments of patients'underlyingschemas. Clinicians offered cognitive-behavioralformulations for three cases after listening to audiotapesof initial interviews with depressed women conducted bythe first author in her private practice. Therapistsidentified 67% of patients' overt problems. When schema ratings were averaged over five judges,interrater reliability was good (inter-rater reliabilitycoefficients averaged 0.72); single judges showed poorinter-rater agreement on schema ratings (inter-rater reliability coefficients averaged 0.37).Providing therapists with a specific context in which tomake ratings did notimprove schema agreement.Ph.D.-trained therapists were more accurate thannon-Ph.D.-trained therapists in identifying patients' problems.Most findings replicated those obtained in an earlierstudy.",
"title": ""
},
{
"docid": "d669dfcdc2486314bd7234e1f42357de",
"text": "The Luneburg lens (LL) represents a very attractive candidate for many applications such as multibeam antennas, multifrequency scanning, and spatial scanning, due to its focusing properties. Indeed, it is a dielectric sphere on which each surface point is a frequency-independent perfect focusing point. This is produced by its index governing law n, which follows the radial distribution n/sup 2/=2-r/sup 2/, where r is the normalized radial position. Practically, an LL is manufactured as a finite number of concentric homogeneous dielectric shells - this is called a discrete LL. The inaccuracies in the curved shell manufacturing process produce intershell air gaps, which degrade the performance of the lens. Furthermore, this requires different materials whose relative dielectric constant covers the range 1-2. The paper proposes a new LL manufacturing process to avoid these drawbacks. The paper describe the theoretical background and the performance of the obtained lens.",
"title": ""
},
{
"docid": "132bb5b7024de19f4160664edca4b4f5",
"text": "Generic Competitive Strategy: Basically, strategy is about two things: deciding where you want your business to go, and deciding how to get there. A more complete definition is based on competitive advantage, the object of most corporate strategy: “Competitive advantage grows out of value a firm is able to create for its buyers that exceeds the firm's cost of creating it. Value is what buyers are willing to pay, and superior value stems from offering lower prices than competitors for equivalent benefits or providing unique benefits that more than offset a higher price. There are two basic types of competitive advantage: cost leadership and differentiation.” Michael Porter Competitive strategies involve taking offensive or defensive actions to create a defendable position in the industry. Generic strategies can help the organization to cope with the five competitive forces in the industry and do better than other organization in the industry. Generic strategies include ‘overall cost leadership’, ‘differentiation’, and ‘focus’. Generally firms pursue only one of the above generic strategies. However some firms make an effort to pursue only one of the above generic strategies. However some firms make an effort to pursue more than one strategy at a time by bringing out a differentiated product at low cost. Though approaches like these are successful in short term, they are hardly sustainable in the long term. If firms try to maintain cost leadership as well as differentiation at the same time, they may fail to achieve either.",
"title": ""
},
{
"docid": "799904b20f1174f01c0d2dd87c57e097",
"text": "ix",
"title": ""
},
{
"docid": "e67b9b48507dcabae92debdb9df9cb08",
"text": "This paper presents an annotation scheme for events that negatively or positively affect entities (benefactive/malefactive events) and for the attitude of the writer toward their agents and objects. Work on opinion and sentiment tends to focus on explicit expressions of opinions. However, many attitudes are conveyed implicitly, and benefactive/malefactive events are important for inferring implicit attitudes. We describe an annotation scheme and give the results of an inter-annotator agreement study. The annotated corpus is available online.",
"title": ""
},
{
"docid": "f2c058a53fa4aea6febc12e2ce87750b",
"text": "This research aims to develop a multiple-choice Web-based quiz-game-like formative assessment system, named GAM-WATA. The unique design of ‘Ask-Hint Strategy’ turns the Web-based formative assessment into an online quiz game. ‘Ask-Hint Strategy’ is composed of ‘Prune Strategy’ and ‘Call-in Strategy’. ‘Prune Strategy’ removes one incorrect option and turns the original 4-option item into a 3-option one. ‘Call-in Strategy’ provides the rate at which other test takers choose each option when answering a question. This research also compares the effectiveness of three different types of formative assessment in an e-Learning environment: paper-and-pencil test (PPT), normal Web-based test (NWBT) and GAM-WATA. In total, 165 fifth grade elementary students (from six classes) in central Taiwan participated in this research. The six classes of students were then divided into three groups and each group was randomly assigned one type of formative assessment. Overall results indicate that different types of formative assessment have significant impacts on e-Learning effectiveness and that the e-Learning effectiveness of the students in the GAM-WATA group appears to be better. Students in the GAM-WATA group more actively participate in Web-based formative assessment to do self-assessment than students in the N-WBT group. The effectiveness of formative assessment will not be significantly improved only by replacing the paper-and-pencil test with Web-based test. The strategies included in GAMWATA are recommended to be taken into consideration when researchers design Web-based formative assessment systems in the future. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e07a731a2c4fa39be27a13b5b5679593",
"text": "Ocean acidification is rapidly changing the carbonate system of the world oceans. Past mass extinction events have been linked to ocean acidification, and the current rate of change in seawater chemistry is unprecedented. Evidence suggests that these changes will have significant consequences for marine taxa, particularly those that build skeletons, shells, and tests of biogenic calcium carbonate. Potential changes in species distributions and abundances could propagate through multiple trophic levels of marine food webs, though research into the long-term ecosystem impacts of ocean acidification is in its infancy. This review attempts to provide a general synthesis of known and/or hypothesized biological and ecosystem responses to increasing ocean acidification. Marine taxa covered in this review include tropical reef-building corals, cold-water corals, crustose coralline algae, Halimeda, benthic mollusks, echinoderms, coccolithophores, foraminifera, pteropods, seagrasses, jellyfishes, and fishes. The risk of irreversible ecosystem changes due to ocean acidification should enlighten the ongoing CO(2) emissions debate and make it clear that the human dependence on fossil fuels must end quickly. Political will and significant large-scale investment in clean-energy technologies are essential if we are to avoid the most damaging effects of human-induced climate change, including ocean acidification.",
"title": ""
}
] |
scidocsrr
|
d38f4b8c76e47fb0fd1a230435245a72
|
Classifying Conversation in Digital Communication
|
[
{
"docid": "f578c9ea0ac7f28faa3d9864c0e43711",
"text": "Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph convolutional networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.",
"title": ""
}
] |
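The survey passage above groups node-embedding methods into matrix-factorization, random-walk, and graph-convolution families. As a minimal illustration of the matrix-factorization family only, the sketch below embeds the nodes of a toy graph with a truncated SVD of its adjacency matrix; real methods factorize higher-order proximity matrices (e.g., random-walk co-occurrence statistics), and the toy graph is invented for the example.

```python
import numpy as np

def svd_node_embeddings(adj, dim=2):
    """Toy matrix-factorization embedding: truncated SVD of the adjacency matrix.

    Plain adjacency is used only to keep the sketch short; methods covered by the
    review factorize richer proximity matrices.
    """
    u, s, _ = np.linalg.svd(adj, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])   # one embedding row per node

if __name__ == "__main__":
    # Two triangles (nodes 0-2 and 3-5) joined by a single bridge edge.
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    adj = np.zeros((6, 6))
    for i, j in edges:
        adj[i, j] = adj[j, i] = 1.0
    print(np.round(svd_node_embeddings(adj, dim=2), 3))
```

Nodes within the same triangle land close together in the embedding space, which is the basic property downstream machine learning models exploit.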
[
{
"docid": "106b7450136b9eafdddbaca5131be2f5",
"text": "This paper describes the main features of a low cost and compact Ka-band satcom terminal being developed within the ESA-project LOCOMO. The terminal will be compliant with all capacities associated with communication on the move supplying higher quality, better performance and faster speed services than the current available solutions in Ku band. The terminal will be based on a dual polarized low profile Ka-band antenna with TX and RX capabilities.",
"title": ""
},
{
"docid": "3851a77360fb2d6df454c1ee19c59037",
"text": "Plantar fasciitis affects nearly 1 million persons in the United States at any one time. Conservative therapies have been reported to successfully treat 90% of plantar fasciitis cases; however, for the remaining cases, only invasive therapeutic solutions remain. This investigation studied newly emerging technology, low-level laser therapy. From September 2011 to June 2013, 69 subjects were enrolled in a placebo-controlled, randomized, double-blind, multicenter study that evaluated the clinical utility of low-level laser therapy for the treatment of unilateral chronic fasciitis. The volunteer participants were treated twice a week for 3 weeks for a total of 6 treatments and were evaluated at 5 separate time points: before the procedure and at weeks 1, 2, 3, 6, and 8. The pain rating was recorded using a visual analog scale, with 0 representing \"no pain\" and 100 representing \"worst pain.\" Additionally, Doppler ultrasonography was performed on the plantar fascia to measure the fascial thickness before and after treatment. Study participants also completed the Foot Function Index. At the final follow-up visit, the group participants demonstrated a mean improvement in heel pain with a visual analog scale score of 29.6 ± 24.9 compared with the placebo subjects, who reported a mean improvement of 5.4 ± 16.0, a statistically significant difference (p < .001). Although additional studies are warranted, these data have demonstrated that low-level laser therapy is a promising treatment of plantar fasciitis.",
"title": ""
},
{
"docid": "4731a95b14335a84f27993666b192bba",
"text": "Blockchain has been applied to study data privacy and network security recently. In this paper, we propose a punishment scheme based on the action record on the blockchain to suppress the attack motivation of the edge servers and the mobile devices in the edge network. The interactions between a mobile device and an edge server are formulated as a blockchain security game, in which the mobile device sends a request to the server to obtain real-time service or launches attacks against the server for illegal security gains, and the server chooses to perform the request from the device or attack it. The Nash equilibria (NEs) of the game are derived and the conditions that each NE exists are provided to disclose how the punishment scheme impacts the adversary behaviors of the mobile device and the edge server.",
"title": ""
},
{
"docid": "c6eb01a11e88dd686a47ca594b424350",
"text": "Automatic fake news detection is an important, yet very challenging topic. Traditional methods using lexical features have only very limited success. This paper proposes a novel method to incorporate speaker profiles into an attention based LSTM model for fake news detection. Speaker profiles contribute to the model in two ways. One is to include them in the attention model. The other includes them as additional input data. By adding speaker profiles such as party affiliation, speaker title, location and credit history, our model outperforms the state-of-the-art method by 14.5% in accuracy using a benchmark fake news detection dataset. This proves that speaker profiles provide valuable information to validate the credibility of news articles.",
"title": ""
},
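The passage above describes adding speaker profiles to an attention-based LSTM fake-news classifier. The PyTorch sketch below is only a rough approximation of that idea: it encodes the statement with an LSTM and concatenates embedded profile fields (party, title) before the output layer. The field names, dimensions, and the omission of the attention pathway are all assumptions; this is not the authors' model.

```python
import torch
import torch.nn as nn

class ProfileAwareClassifier(nn.Module):
    """Toy sketch: statement encoder + embedded speaker-profile fields -> label logits."""
    def __init__(self, vocab_size, n_parties, n_titles, n_labels=2,
                 emb_dim=64, hidden=64, prof_dim=16):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.party_emb = nn.Embedding(n_parties, prof_dim)
        self.title_emb = nn.Embedding(n_titles, prof_dim)
        self.out = nn.Linear(hidden + 2 * prof_dim, n_labels)

    def forward(self, tokens, party, title):
        _, (h, _) = self.encoder(self.tok_emb(tokens))   # h: (1, batch, hidden)
        feats = torch.cat([h[-1], self.party_emb(party), self.title_emb(title)], dim=1)
        return self.out(feats)

if __name__ == "__main__":
    model = ProfileAwareClassifier(vocab_size=1000, n_parties=5, n_titles=10)
    tokens = torch.randint(1, 1000, (4, 20))   # batch of 4 statements, 20 tokens each
    party = torch.randint(0, 5, (4,))
    title = torch.randint(0, 10, (4,))
    print(model(tokens, party, title).shape)   # torch.Size([4, 2])
```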
{
"docid": "5fe472c30e1dad99628511e03a707aac",
"text": "An automatic program that generates constant profit from the financial market is lucrative for every market practitioner. Recent advance in deep reinforcement learning provides a framework toward end-to-end training of such trading agent. In this paper, we propose an Markov Decision Process (MDP) model suitable for the financial trading task and solve it with the state-of-the-art deep recurrent Q-network (DRQN) algorithm. We propose several modifications to the existing learning algorithm to make it more suitable under the financial trading setting, namely 1. We employ a substantially small replay memory (only a few hundreds in size) compared to ones used in modern deep reinforcement learning algorithms (often millions in size.) 2. We develop an action augmentation technique to mitigate the need for random exploration by providing extra feedback signals for all actions to the agent. This enables us to use greedy policy over the course of learning and shows strong empirical performance compared to more commonly used epsilon-greedy exploration. However, this technique is specific to financial trading under a few market assumptions. 3. We sample a longer sequence for recurrent neural network training. A side product of this mechanism is that we can now train the agent for every T steps. This greatly reduces training time since the overall computation is down by a factor of T. We combine all of the above into a complete online learning algorithm and validate our approach on the spot foreign exchange market.",
"title": ""
},
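The trading passage above highlights an "action augmentation" technique: because the market move is observed regardless of what the agent did, the reward of every admissible action can be computed and fed back. The sketch below illustrates just that feedback computation, under an assumed position set {-1, 0, +1} and a linear transaction-cost model; the surrounding DRQN training loop is omitted.

```python
import numpy as np

ACTIONS = np.array([-1, 0, 1])   # assumed position set: short, flat, long

def augmented_rewards(price_return, prev_position, cost_per_unit=1e-4):
    """Reward that *each* possible action would have earned over one step.

    reward(a) = a * price_return - cost_per_unit * |a - prev_position|
    All three rewards are observable from the same market move, so the agent can
    receive feedback for every action, not only the one it took; the cost model
    here is an assumption made for the sketch.
    """
    return ACTIONS * price_return - cost_per_unit * np.abs(ACTIONS - prev_position)

if __name__ == "__main__":
    # Example: price rose 0.2% while the agent was flat.
    for a, rew in zip(ACTIONS, augmented_rewards(price_return=0.002, prev_position=0)):
        print(f"action {a:+d}: reward {rew:+.6f}")
```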
{
"docid": "c6cb6b1cb964d0e2eb8ad344ee4a62b3",
"text": "Associative classifiers have proven to be very effective in classification problems. Unfortunately, the algorithms used for learning these classifiers are not able to adequately manage big data because of time complexity and memory constraints. To overcome such drawbacks, we propose a distributed association rule-based classification scheme shaped according to the MapReduce programming model. The scheme mines classification association rules (CARs) using a properly enhanced, distributed version of the well-known FP-Growth algorithm. Once CARs have been mined, the proposed scheme performs a distributed rule pruning. The set of survived CARs is used to classify unlabeled patterns. The memory usage and time complexity for each phase of the learning process are discussed, and the scheme is evaluated on seven real-world big datasets on the Hadoop framework, characterizing its scalability and achievable speedup on small computer clusters. The proposed solution for associative classifiers turns to be suitable to practically address ∗Corresponding Author: Tel: +39 05",
"title": ""
},
{
"docid": "11d06fb5474df44a6bc733bd5cd1263d",
"text": "Understanding how materials that catalyse the oxygen evolution reaction (OER) function is essential for the development of efficient energy-storage technologies. The traditional understanding of the OER mechanism on metal oxides involves four concerted proton-electron transfer steps on metal-ion centres at their surface and product oxygen molecules derived from water. Here, using in situ 18O isotope labelling mass spectrometry, we provide direct experimental evidence that the O2 generated during the OER on some highly active oxides can come from lattice oxygen. The oxides capable of lattice-oxygen oxidation also exhibit pH-dependent OER activity on the reversible hydrogen electrode scale, indicating non-concerted proton-electron transfers in the OER mechanism. Based on our experimental data and density functional theory calculations, we discuss mechanisms that are fundamentally different from the conventional scheme and show that increasing the covalency of metal-oxygen bonds is critical to trigger lattice-oxygen oxidation and enable non-concerted proton-electron transfers during OER.",
"title": ""
},
{
"docid": "fd54d540c30968bb8682a4f2eee43c8d",
"text": "This paper presents LISSA (“Learning dashboard for Insights and Support during Study Advice”), a learning analytics dashboard designed, developed, and evaluated in collaboration with study advisers. The overall objective is to facilitate communication between study advisers and students by visualizing grade data that is commonly available in any institution. More specifically, the dashboard attempts to support the dialogue between adviser and student through an overview of study progress, peer comparison, and by triggering insights based on facts as a starting point for discussion and argumentation. We report on the iterative design process and evaluation results of a deployment in 97 advising sessions. We have found that the dashboard supports the current adviser-student dialogue, helps them motivate students, triggers conversation, and provides tools to add personalization, depth, and nuance to the advising session. It provides insights at a factual, interpretative, and reflective level and allows both adviser and student to take an active role during the session.",
"title": ""
},
{
"docid": "0c1001c6195795885604a2aaa24ddb07",
"text": "Recent advances in artificial intelligence (AI) have increased the opportunities for users to interact with the technology. Now, users can even collaborate with AI in creative activities such as art. To understand the user experience in this new user--AI collaboration, we designed a prototype, DuetDraw, an AI interface that allows users and the AI agent to draw pictures collaboratively. We conducted a user study employing both quantitative and qualitative methods. Thirty participants performed a series of drawing tasks with the think-aloud method, followed by post-hoc surveys and interviews. Our findings are as follows: (1) Users were significantly more content with DuetDraw when the tool gave detailed instructions. (2) While users always wanted to lead the task, they also wanted the AI to explain its intentions but only when the users wanted it to do so. (3) Although users rated the AI relatively low in predictability, controllability, and comprehensibility, they enjoyed their interactions with it during the task. Based on these findings, we discuss implications for user interfaces where users can collaborate with AI in creative works.",
"title": ""
},
{
"docid": "7ee4843ff164a7b7fd096a27b25e2c4d",
"text": "Breast cancer remains a significant scientific, clinical and societal challenge. This gap analysis has reviewed and critically assessed enduring issues and new challenges emerging from recent research, and proposes strategies for translating solutions into practice. More than 100 internationally recognised specialist breast cancer scientists, clinicians and healthcare professionals collaborated to address nine thematic areas: genetics, epigenetics and epidemiology; molecular pathology and cell biology; hormonal influences and endocrine therapy; imaging, detection and screening; current/novel therapies and biomarkers; drug resistance; metastasis, angiogenesis, circulating tumour cells, cancer ‘stem’ cells; risk and prevention; living with and managing breast cancer and its treatment. The groups developed summary papers through an iterative process which, following further appraisal from experts and patients, were melded into this summary account. The 10 major gaps identified were: (1) understanding the functions and contextual interactions of genetic and epigenetic changes in normal breast development and during malignant transformation; (2) how to implement sustainable lifestyle changes (diet, exercise and weight) and chemopreventive strategies; (3) the need for tailored screening approaches including clinically actionable tests; (4) enhancing knowledge of molecular drivers behind breast cancer subtypes, progression and metastasis; (5) understanding the molecular mechanisms of tumour heterogeneity, dormancy, de novo or acquired resistance and how to target key nodes in these dynamic processes; (6) developing validated markers for chemosensitivity and radiosensitivity; (7) understanding the optimal duration, sequencing and rational combinations of treatment for improved personalised therapy; (8) validating multimodality imaging biomarkers for minimally invasive diagnosis and monitoring of responses in primary and metastatic disease; (9) developing interventions and support to improve the survivorship experience; (10) a continuing need for clinical material for translational research derived from normal breast, blood, primary, relapsed, metastatic and drug-resistant cancers with expert bioinformatics support to maximise its utility. The proposed infrastructural enablers include enhanced resources to support clinically relevant in vitro and in vivo tumour models; improved access to appropriate, fully annotated clinical samples; extended biomarker discovery, validation and standardisation; and facilitated cross-discipline working. With resources to conduct further high-quality targeted research focusing on the gaps identified, increased knowledge translating into improved clinical care should be achievable within five years.",
"title": ""
},
{
"docid": "e34f38c3c73f3e4c41ac44bc81d86ab7",
"text": "Euler number of a binary image is a fundamental topological feature that remains invariant under translation, rotation, scaling, and rubber-sheet transformation of the image. In this work, a run-based method for computing Euler number is formulated and a new hardware implementation is described. Analysis of time complexity and performance measure is provided to demonstrate the efficiency of the method. The sequential version of the proposed algorithm requires significantly fewer number of pixel accesses compared to the existing methods and tools based on bit-quad counting or quad-tree, both for the worst case and the average case. A pipelined architecture is designed with a single adder tree to implement the algorithm on-chip by exploiting its inherent parallelism. The architecture uses O(N) 2-input gates and requires O(N logN) time to compute the Euler number of an N · N image. The same hardware, with minor modification, can be used to handle arbitrarily large pixel matrices. A standard cell based VLSI implementation of the architecture is also reported. As Euler number is a widely used parameter, the proposed design can be readily used to save computation time in many image processing applications. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
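The passage above formulates a run-based computation of the Euler number. A minimal sequential sketch of the run-based idea, for 4-connectivity, is shown below: E = (number of runs) - (number of overlapping run pairs in adjacent rows). It is meant only to make the formula concrete and says nothing about the paper's pipelined adder-tree hardware.

```python
import numpy as np

def row_runs(row):
    """Runs of 1s in a binary row as (start, end) column indices, end exclusive."""
    padded = np.concatenate(([0], row, [0])).astype(np.int64)
    diff = np.diff(padded)
    return list(zip(np.where(diff == 1)[0], np.where(diff == -1)[0]))

def euler_number_4(img):
    """Euler number (components minus holes) of a binary image under 4-connectivity.

    E = (number of runs) - (number of overlapping run pairs in adjacent rows).
    """
    img = (np.asarray(img) != 0).astype(np.int64)
    total_runs, overlaps, prev = 0, 0, []
    for row in img:
        runs = row_runs(row)
        total_runs += len(runs)
        overlaps += sum(1 for (s1, e1) in prev for (s2, e2) in runs
                        if s1 < e2 and s2 < e1)      # column ranges intersect
        prev = runs
    return total_runs - overlaps

if __name__ == "__main__":
    ring = [[1, 1, 1],
            [1, 0, 1],
            [1, 1, 1]]
    print(euler_number_4(ring))   # one component minus one hole = 0
```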
{
"docid": "bd059d97916f4c34d6f6320c3b168b7d",
"text": "Autophagy degrades cytoplasmic components and is important for development and human health. Although autophagy is known to be influenced by systemic intercellular signals, the proteins that control autophagy are largely thought to function within individual cells. Here, we report that Drosophila macroglobulin complement-related (Mcr), a complement ortholog, plays an essential role during developmental cell death and inflammation by influencing autophagy in neighboring cells. This function of Mcr involves the immune receptor Draper, suggesting a relationship between autophagy and the control of inflammation. Interestingly, Mcr function in epithelial cells is required for macrophage autophagy and migration to epithelial wounds, a Draper-dependent process. This study reveals, unexpectedly, that complement-related from one cell regulates autophagy in neighboring cells via an ancient immune signaling program.",
"title": ""
},
{
"docid": "72c164c281e98386a054a25677c21065",
"text": "The rapid digitalisation of the hospitality industry over recent years has brought forth many new points of attack for consideration. The hasty implementation of these systems has created a reality in which businesses are using the technical solutions, but employees have very little awareness when it comes to the threats and implications that they might present. This gap in awareness is further compounded by the existence of preestablished, often rigid, cultures that drive how hospitality businesses operate. Potential attackers are recognising this and the last two years have seen a huge increase in cyber-attacks within the sector.Attempts at addressing the increasing threats have taken the form of technical solutions such as encryption, access control, CCTV, etc. However, a high majority of security breaches can be directly attributed to human error. It is therefore necessary that measures for addressing the rising trend of cyber-attacks go beyond just providing technical solutions and make provision for educating employees about how to address the human elements of security. Inculcating security awareness amongst hospitality employees will provide a foundation upon which a culture of security can be created to promote the seamless and secured interaction of hotel users and technology.One way that the hospitality industry has tried to solve the awareness issue is through their current paper-based training. This is unengaging, expensive and presents limited ways to deploy, monitor and evaluate the impact and effectiveness of the content. This leads to cycles of constant training, making it very hard to initiate awareness, particularly within those on minimum waged, short-term job roles.This paper presents a structured approach for eliciting industry requirement for developing and implementing an immersive Cyber Security Awareness learning platform. It used a series of over 40 interviews and threat analysis of the hospitality industry to identify the requirements for designing and implementing cyber security program which encourage engagement through a cycle of reward and recognition. In particular, the need for the use of gamification elements to provide an engaging but gentle way of educating those with little or no desire to learn was identified and implemented. Also presented is a method for guiding and monitoring the impact of their employee’s progress through the learning management system whilst monitoring the levels of engagement and positive impact the training is having on the business.",
"title": ""
},
{
"docid": "5781bae1fdda2d2acc87102960dab3ed",
"text": "Several static analysis tools, such as Splint or FindBugs, have been proposed to the software development community to help detect security vulnerabilities or bad programming practices. However, the adoption of these tools is hindered by their high false positive rates. If the false positive rate is too high, developers may get acclimated to violation reports from these tools, causing concrete and severe bugs being overlooked. Fortunately, some violations are actually addressed and resolved by developers. We claim that those violations that are recurrently fixed are likely to be true positives, and an automated approach can learn to repair similar unseen violations. However, there is lack of a systematic way to investigate the distributions on existing violations and fixed ones in the wild, that can provide insights into prioritizing violations for developers, and an effective way to mine code and fix patterns which can help developers easily understand the reasons of leading violations and how to fix them. In this paper, we first collect and track a large number of fixed and unfixed violations across revisions of software. The empirical analyses reveal that there are discrepancies in the distributions of violations that are detected and those that are fixed, in terms of occurrences, spread and categories, which can provide insights into prioritizing violations. To automatically identify patterns in violations and their fixes, we propose an approach that utilizes convolutional neural networks to learn features and clustering to regroup similar instances. We then evaluate the usefulness of the identified fix patterns by applying them to unfixed violations. The results show that developers will accept and merge a majority (69/116) of fixes generated from the inferred fix patterns. It is also noteworthy that the yielded patterns are applicable to four real bugs in the Defects4J major benchmark for software testing and automated repair.",
"title": ""
},
{
"docid": "05d282026dcecb3286c9ffbd88cb72a3",
"text": "Although deep neural networks (DNNs) are state-of-the-art artificial intelligence systems, it is unclear what insights, if any, they provide about human intelligence. We address this issue in the domain of visual perception. After briefly describing DNNs, we provide an overview of recent results comparing human visual representations and performance with those of DNNs. In many cases, DNNs acquire visual representations and processing strategies that are very different from those used by people. We conjecture that there are at least two factors preventing them from serving as better psychological models. First, DNNs are currently trained with impoverished data, such as data lacking important visual cues to three-dimensional structure, data lacking multisensory statistical regularities, and data in which stimuli are unconnected to an observer’s actions and goals. Second, DNNs typically lack adaptations to capacity limits, such as attentional mechanisms, visual working memory, and compressed mental representations biased toward preserving task-relevant abstractions.",
"title": ""
},
{
"docid": "f975aff622406ca7e563f60e8488f6fa",
"text": "Analog-to-digital converter (ADC)-based multi-Gb/s serial link receivers have gained increasing attention in the backplane community due to the desire for higher I/O throughput, ease of design portability, and flexibility. However, the power dissipation in such receivers is dominated by the ADC. ADCs in serial links employ signal-to-noise-and-distortion ratio (SNDR) and effective-number-of-bit (ENOB) as performance metrics as these are the standard for generic ADC design. This paper studies the use of information-based metrics such as bit-error-rate (BER) to design a BER-optimal ADC (BOA) for serial links. Channel parameters such as the m-clustering value and the threshold non-uniformity metric ht are introduced and employed to quantify the BER improvement achieved by a BOA over a conventional uniform ADC (CUA) in a receiver. Analytical expressions for BER improvement are derived and validated through simulations. A prototype BOA is designed, fabricated and tested in a 1.2 V, 90 nm LP CMOS process to verify the results of this study. BOA's variable-threshold and variable-resolution configurations are implemented via an 8-bit single-core, multiple-output passive digital-to-analog converter (DAC), which incurs an additional power overhead of <; 0.1% (approximately 50 μW). Measurement results show examples in which the BER achieved by the 3-bit BOA receiver is lower by a factor of 109 and 1010, as compared to the 4-bit and 3-bit CUA receivers, respectively, at a data rate of 4-Gb/s and a transmitted signal amplitude of 180 mVppd.",
"title": ""
},
{
"docid": "d669dfcdc2486314bd7234e1f42357de",
"text": "The Luneburg lens (LL) represents a very attractive candidate for many applications such as multibeam antennas, multifrequency scanning, and spatial scanning, due to its focusing properties. Indeed, it is a dielectric sphere on which each surface point is a frequency-independent perfect focusing point. This is produced by its index governing law n, which follows the radial distribution n/sup 2/=2-r/sup 2/, where r is the normalized radial position. Practically, an LL is manufactured as a finite number of concentric homogeneous dielectric shells - this is called a discrete LL. The inaccuracies in the curved shell manufacturing process produce intershell air gaps, which degrade the performance of the lens. Furthermore, this requires different materials whose relative dielectric constant covers the range 1-2. The paper proposes a new LL manufacturing process to avoid these drawbacks. The paper describe the theoretical background and the performance of the obtained lens.",
"title": ""
},
{
"docid": "3ed823504a503fd7148daae3f23190db",
"text": "The ultimate goal of most biomedical research is to gain greater insight into mechanisms of human disease or to develop new and improved therapies or diagnostics. Although great advances have been made in terms of developing disease models in animals, such as transgenic mice, many of these models fail to faithfully recapitulate the human condition. In addition, it is difficult to identify critical cellular and molecular contributors to disease or to vary them independently in whole-animal models. This challenge has attracted the interest of engineers, who have begun to collaborate with biologists to leverage recent advances in tissue engineering and microfabrication to develop novel in vitro models of disease. As these models are synthetic systems, specific molecular factors and individual cell types, including parenchymal cells, vascular cells, and immune cells, can be varied independently while simultaneously measuring system-level responses in real time. In this article, we provide some examples of these efforts, including engineered models of diseases of the heart, lung, intestine, liver, kidney, cartilage, skin and vascular, endocrine, musculoskeletal, and nervous systems, as well as models of infectious diseases and cancer. We also describe how engineered in vitro models can be combined with human inducible pluripotent stem cells to enable new insights into a broad variety of disease mechanisms, as well as provide a test bed for screening new therapies.",
"title": ""
},
{
"docid": "a7f72b95da401ee4f710eb019652bb03",
"text": "Recurrent Neural Network (RNN) are a popular choice for modeling temporal and sequential tasks and achieve many state-of-the-art performance on various complex problems. However, most of the state-of-the-art RNNs have millions of parameters and require many computational resources for training and predicting new data. This paper proposes an alternative RNN model to reduce the number of parameters significantly by representing the weight parameters based on Tensor Train (TT) format. In this paper, we implement the TT-format representation for several RNN architectures such as simple RNN and Gated Recurrent Unit (GRU). We compare and evaluate our proposed RNN model with uncompressed RNN model on sequence classification and sequence prediction tasks. Our proposed RNNs with TT-format are able to preserve the performance while reducing the number of RNN parameters significantly up to 40 times smaller.",
"title": ""
},
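The passage above represents RNN weight matrices in Tensor Train (TT) format to cut the parameter count. The sketch below shows the bookkeeping only: it builds random TT cores for an assumed factorization of a weight matrix, contracts them back into the dense matrix to check shapes, and compares parameter counts. The mode sizes and TT ranks are illustrative, and training such a layer is not shown.

```python
import numpy as np

def tt_parameter_count(in_modes, out_modes, ranks):
    """Number of parameters in a TT-matrix with the given mode sizes and TT ranks."""
    r = [1] + list(ranks) + [1]
    return sum(r[k] * in_modes[k] * out_modes[k] * r[k + 1] for k in range(len(in_modes)))

def random_tt_matrix(in_modes, out_modes, ranks, seed=0):
    """Random TT cores G_k with shape (r_k, in_k, out_k, r_{k+1})."""
    rng = np.random.default_rng(seed)
    r = [1] + list(ranks) + [1]
    return [rng.standard_normal((r[k], in_modes[k], out_modes[k], r[k + 1]))
            for k in range(len(in_modes))]

def tt_to_full(cores, in_modes, out_modes):
    """Contract the TT cores back into the full (prod(in_modes), prod(out_modes)) matrix."""
    full = cores[0]                                        # (1, i0, o0, r1)
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))  # chain over the TT ranks
    full = full.reshape(full.shape[1:-1])                  # drop the boundary ranks of size 1
    d = len(in_modes)
    perm = [2 * k for k in range(d)] + [2 * k + 1 for k in range(d)]
    return full.transpose(perm).reshape(int(np.prod(in_modes)), int(np.prod(out_modes)))

if __name__ == "__main__":
    in_modes, out_modes, ranks = (4, 8, 8, 4), (4, 4, 4, 4), (4, 4, 4)   # illustrative only
    W = tt_to_full(random_tt_matrix(in_modes, out_modes, ranks), in_modes, out_modes)
    print("full weight shape:", W.shape)
    print("TT parameters:   ", tt_parameter_count(in_modes, out_modes, ranks))
    print("dense parameters:", int(np.prod(in_modes)) * int(np.prod(out_modes)))
```

With these toy sizes the TT representation stores about a thousand parameters where the dense matrix stores over a quarter million, which is the kind of reduction the abstract refers to.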
{
"docid": "bde9e26746ddcc6e53f442a0e400a57e",
"text": "Aljebreen, Mohammed, \"Implementing a dynamic scaling of web applications in a virtualized cloud computing environment\" (2013). Abstract Cloud computing is becoming more essential day by day. The allure of the cloud is the significant value and benefits that people gain from it, such as reduced costs, increased storage, flexibility, and more mobility. Flexibility is one of the major benefits that cloud computing can provide in terms of scaling up and down the infrastructure of a network. Once traffic has increased on one server within the network, a load balancer instance will route incoming requests to a healthy instance, which is less busy and less burdened. When the full complement of instances cannot handle any more requests, past research has been done by Chieu et. al. that presented a scaling algorithm to address a dynamic scalability of web applications on a virtualized cloud computing environment based on relevant indicators that can increase or decrease servers, as needed. In this project, I implemented the proposed algorithm, but based on CPU Utilization threshold. In addition, two tests were run exploring the capabilities of different metrics when faced with ideal or challenging conditions. The results did find a superior metric that was able to perform successfully under both tests. 3 Dedication I lovingly dedicate this thesis to my gracious and devoted mother for her unwavering love and for always believing in me. 4 Acknowledgments This thesis would not have been possible without the support of many people. My wish is to express humble gratitude to the committee chair, Prof. Sharon Mason, who was perpetually generous in offering her invaluable assistance, support, and guidance. Deepest gratitude is also due to the members of my supervisory committee, Prof. Lawrence Hill and Prof. Jim Leone, without whose knowledge and direction this study would not have been successful. Special thanks also to Prof. Charles Border for his financial support of this thesis and priceless assistance. Profound gratitude to my mother, Moneerah, who has been there from the very beginning, for her support and endless love. I would also like to convey thanks to my wife for her patient and unending encouragement and support throughout the duration of my studies; without my wife's encouragement, I would not have completed this degree. I wish to express my gratitude to my beloved sister and brothers for their kind understanding throughout my studies. Special thanks to my friend, Mohammed Almathami, for his …",
"title": ""
}
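The thesis abstract above implements dynamic scaling driven by a CPU-utilization threshold. The sketch below is a generic threshold-plus-cooldown control loop for that idea; the thresholds, cooldown, and the two placeholder functions standing in for the monitoring and provisioning APIs are assumptions, not the thesis's actual implementation or any specific cloud provider's API.

```python
import time

SCALE_OUT_CPU = 70.0   # assumed thresholds (percent)
SCALE_IN_CPU = 30.0
MIN_INSTANCES, MAX_INSTANCES = 1, 10
COOLDOWN_S = 300       # wait after a scaling action so new instances can absorb load

def average_cpu(instances):
    """Placeholder: return mean CPU utilization of the web tier (monitoring API goes here)."""
    raise NotImplementedError

def set_instance_count(n):
    """Placeholder: ask the cloud provider for n running instances."""
    raise NotImplementedError

def autoscale_loop(poll_s=60):
    instances = MIN_INSTANCES
    set_instance_count(instances)
    while True:
        cpu = average_cpu(instances)
        if cpu > SCALE_OUT_CPU and instances < MAX_INSTANCES:
            instances += 1
            set_instance_count(instances)
            time.sleep(COOLDOWN_S)
        elif cpu < SCALE_IN_CPU and instances > MIN_INSTANCES:
            instances -= 1
            set_instance_count(instances)
            time.sleep(COOLDOWN_S)
        else:
            time.sleep(poll_s)
```

The cooldown after each action is the usual guard against oscillation: without it, a burst of load can trigger several scale-out steps before the first new instance has had any effect.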
] |
scidocsrr
|
e3abbb9fc38b9b18fb88815e3225560d
|
How Social Media Reduces Mass Political Polarization . Evidence from Germany , Spain , and the U
|
[
{
"docid": "f57bcea5431a11cc431f76727ba81a26",
"text": "We develop a Bayesian procedure for estimation and inference for spatial models of roll call voting. This approach is extremely flexible, applicable to any legislative setting, irrespective of size, the extremism of the legislators’ voting histories, or the number of roll calls available for analysis. The model is easily extended to let other sources of information inform the analysis of roll call data, such as the number and nature of the underlying dimensions, the presence of party whipping, the determinants of legislator preferences, and the evolution of the legislative agenda; this is especially helpful since generally it is inappropriate to use estimates of extant methods (usually generated under assumptions of sincere voting) to test models embodying alternate assumptions (e.g., log-rolling, party discipline). A Bayesian approach also provides a coherent framework for estimation and inference with roll call data that eludes extant methods; moreover, via Bayesian simulation methods, it is straightforward to generate uncertainty assessments or hypothesis tests concerning any auxiliary quantity of interest or to formally compare models. In a series of examples we show how our method is easily extended to accommodate theoretically interesting models of legislative behavior. Our goal is to provide a statistical framework for combining the measurement of legislative preferences with tests of models of legislative behavior.",
"title": ""
}
] |
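The passage above estimates spatial roll-call models in a Bayesian framework. As an illustration of the underlying likelihood only, the sketch below evaluates a one-dimensional probit item-response model, P(Yea) = Phi(beta_j * x_i - alpha_j), on simulated votes; the priors and MCMC machinery the paper relies on are omitted, and the parameterization shown is just one common convention.

```python
import numpy as np
from scipy.stats import norm

def roll_call_loglik(votes, ideal_points, discrimination, difficulty):
    """Log-likelihood of a one-dimensional spatial (probit IRT) roll call model.

    P(legislator i votes Yea on item j) = Phi(beta_j * x_i - alpha_j), where x_i is
    the ideal point, beta_j the item discrimination, and alpha_j the item difficulty.
    `votes` is an (I, J) matrix of 0/1 values with NaN marking missing votes.
    """
    eta = np.outer(ideal_points, discrimination) - difficulty      # (I, J)
    p = np.clip(norm.cdf(eta), 1e-12, 1 - 1e-12)
    mask = ~np.isnan(votes)
    v = np.nan_to_num(votes)
    ll = v * np.log(p) + (1 - v) * np.log1p(-p)
    return float(ll[mask].sum())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.standard_normal(20)          # 20 legislators' ideal points
    beta = rng.standard_normal(50)       # 50 roll calls
    alpha = rng.standard_normal(50)
    p_true = norm.cdf(np.outer(x, beta) - alpha)
    votes = (rng.random((20, 50)) < p_true).astype(float)
    print(roll_call_loglik(votes, x, beta, alpha))
```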
[
{
"docid": "f3641cacf284444ac45f0e085c7214bf",
"text": "Recognition that the entire central nervous system (CNS) is highly plastic, and that it changes continually throughout life, is a relatively new development. Until very recently, neuroscience has been dominated by the belief that the nervous system is hardwired and changes at only a few selected sites and by only a few mechanisms. Thus, it is particularly remarkable that Sir John Eccles, almost from the start of his long career nearly 80 years ago, focused repeatedly and productively on plasticity of many different kinds and in many different locations. He began with muscles, exploring their developmental plasticity and the functional effects of the level of motor unit activity and of cross-reinnervation. He moved into the spinal cord to study the effects of axotomy on motoneuron properties and the immediate and persistent functional effects of repetitive afferent stimulation. In work that combined these two areas, Eccles explored the influences of motoneurons and their muscle fibers on one another. He studied extensively simple spinal reflexes, especially stretch reflexes, exploring plasticity in these reflex pathways during development and in response to experimental manipulations of activity and innervation. In subsequent decades, Eccles focused on plasticity at central synapses in hippocampus, cerebellum, and neocortex. His endeavors extended from the plasticity associated with CNS lesions to the mechanisms responsible for the most complex and as yet mysterious products of neuronal plasticity, the substrates underlying learning and memory. At multiple levels, Eccles' work anticipated and helped shape present-day hypotheses and experiments. He provided novel observations that introduced new problems, and he produced insights that continue to be the foundation of ongoing basic and clinical research. This article reviews Eccles' experimental and theoretical contributions and their relationships to current endeavors and concepts. It emphasizes aspects of his contributions that are less well known at present and yet are directly relevant to contemporary issues.",
"title": ""
},
{
"docid": "44b71e1429f731cc2d91f919182f95a4",
"text": "Power management of multi-core processors is extremely important because it allows power/energy savings when all cores are not used. OS directed power management according to ACPI (Advanced Power and Configurations Interface) specifications is the common approach that industry has adopted for this purpose. While operating systems are capable of such power management, heuristics for effectively managing the power are still evolving. The granularity at which the cores are slowed down/turned off should be designed considering the phase behavior of the workloads. Using 3-D, video creation, office and e-learning applications from the SYSmark benchmark suite, we study the challenges in power management of a multi-core processor such as the AMD Quad-Core Opteron\" and Phenom\". We unveil effects of the idle core frequency on the performance and power of the active cores. We adjust the idle core frequency to have the least detrimental effect on the active core performance. We present optimized hardware and operating system configurations that reduce average active power by 30% while reducing performance by an average of less than 3%. We also present complete system measurements and power breakdown between the various systems components using the SYSmark and SPEC CPU workloads. It is observed that the processor core and the disk consume the most power, with core having the highest variability.",
"title": ""
},
{
"docid": "27a8d159f940cd5649a1a8cd9bc19b06",
"text": "In order to improve performance of previous aspect-based sentiment analysis (ABSA) on restaurant reviews in Indonesian language, this paper adapts the research achieving the highest F1 at SemEval 2016. We use feedforward neural network with one-vs-all strategy for aspect category classification (Slot 1), Conditional Random Field (CRF) for opinion target expression extraction (Slot 2), and Convolutional Neural Network (CNN) for sentiment polarity classification (Slot 3). Aside from lexical features we also use additional features learned from neural networks. We train our model on 992 sentences and evaluate them on 382 sentences. Higher performances are achieved for Slot 1 (F1 0.870) and Slot 3 (F1 0.764) but lower on Slot 2 (F1 0.787).",
"title": ""
},
{
"docid": "1e5202850748b0f613807b0452eb89a2",
"text": "This paper introduces a hierarchical image merging scheme based on a multiresolution contrast decomposition (the ratio of low-pass pyramid). The composite images produced by this scheme preserve those details from the input images that are most relevant to visual perception. Some applications of the method are indicated.",
"title": ""
},
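The passage above merges images with a ratio-of-low-pass (contrast) pyramid. The sketch below reimplements that idea in simplified form using OpenCV's pyrDown/pyrUp for the pyramid, picking at each pixel and level the coefficient with the larger contrast |R - 1|; the epsilon offsets, the number of levels, and averaging the coarsest level are illustrative choices, and inputs are assumed to be registered grayscale images of equal size.

```python
import cv2
import numpy as np

def rolp_pyramid(img, levels=4):
    """Ratio-of-low-pass (contrast) pyramid: R_k = G_k / expand(G_{k+1})."""
    g = [img.astype(np.float64) + 1e-6]          # small offset avoids divide-by-zero
    for _ in range(levels):
        g.append(cv2.pyrDown(g[-1]))
    ratios = [g[k] / (cv2.pyrUp(g[k + 1], dstsize=g[k].shape[::-1]) + 1e-6)
              for k in range(levels)]
    return ratios, g[-1]

def fuse_rolp(img_a, img_b, levels=4):
    """Fuse two registered grayscale images by keeping, per pixel and level, the
    coefficient with the larger contrast |R - 1|, then reconstructing top-down."""
    ra, base_a = rolp_pyramid(img_a, levels)
    rb, base_b = rolp_pyramid(img_b, levels)
    fused_ratios = [np.where(np.abs(a - 1) >= np.abs(b - 1), a, b)
                    for a, b in zip(ra, rb)]
    out = 0.5 * (base_a + base_b)                # simple average for the coarsest level
    for k in reversed(range(levels)):
        out = fused_ratios[k] * cv2.pyrUp(out, dstsize=fused_ratios[k].shape[::-1])
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because contrast here is measured relative to the local mean, the selection rule keeps the details that are most visually salient, which is the property the abstract emphasizes.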
{
"docid": "323c9caac8b04b1531071acf74eb189b",
"text": "Many electronic feedback systems have been proposed for writing support. However, most of these systems only aim at supporting writing to communicate instead of writing to learn, as in the case of literature review writing. Trigger questions are potentially forms of support for writing to learn, but current automatic question generation approaches focus on factual question generation for reading comprehension or vocabulary assessment. This article presents a novel Automatic Question Generation (AQG) system, called G-Asks, which generates specific trigger questions as a form of support for students' learning through writing. We conducted a large-scale case study, including 24 human supervisors and 33 research students, in an Engineering Research Method course and compared questions generated by G-Asks with human generated questions. The results indicate that G-Asks can generate questions as useful as human supervisors (‘useful’ is one of five question quality measures) while significantly outperforming Human Peer and Generic Questions in most quality measures after filtering out questions with grammatical and semantic errors. Furthermore, we identified the most frequent question types, derived from the human supervisors’ questions and discussed how the human supervisors generate such questions from the source text. General Terms: Automatic Question Generation, Natural Language Processing, Academic Writing Support",
"title": ""
},
{
"docid": "7ce9319020332a623ee2b39fec8e4971",
"text": "BACKGROUND\nPrevious studies have demonstrated that nociceptin/orphanin FQ (N/OFQ), the endogenous peptide ligand for the G-protein-coupled NOP receptor, inhibits cough in experimental models. SCH 225288 is a nonpeptide, orally active NOP agonist that may provide the foundation for the development of novel treatments for cough.\n\n\nMETHODS\nFirst we characterized the selectivity of SCH 225288 in human receptor binding assays. Afterwards, the antitussive activity of SCH 225288 was studied in three mechanistically distinct cough models. Specifically, we observed the cough-suppressant effect of SCH 225288 in a guinea pig capsaicin irritant-evoked cough model, a feline mechanically induced cough model and finally in a canine Bordetella bronchiseptica disease model.\n\n\nRESULTS\nSCH 225288 selectively binds human NOP receptor (K(i) = 0.38 +/- 0.02 nmol/l) over classical opioid receptors (COR). In a guinea pig capsaicin cough model, SCH 225288 (0.1-1 mg/kg) suppressed cough at 2, 4, and 6 h after oral administration. The antitussive effect of SCH 225288 (3.0 mg/kg, p.o.) was blocked by the NOP antagonist J113397 (12 mg/kg, i.p.) but not by the classical opioid receptor (COR) antagonist, naltrexone (3.0 mg/kg, i.p.). In the anesthetized cat, we evaluated the effects of SCH 225288 given either intravenously or via the intravertebral artery against the increases in cough number and respiratory expiratory and inspiratory muscle (rectus abdominis and parasternal) electromyographic (EMG) activities due to perturbations of the intrathoracic trachea. SCH 225288 (0.03-3.0 mg/kg, i.v.) inhibited both cough number and abdominal EMG amplitudes. Similarly, SCH 225288 (0.001-0.3 mg/kg) administered intra-arterially also diminished cough number and abdominal EMG amplitudes. No significant effect of the drug was noted on parasternal EMG activity. Finally, we studied the antitussive actions of SCH 225288 (1.0 mg/kg) in a canine B. bronchiseptica disease model. In this model, dogs were challenged intranasally with B. bronchiseptica. Comparisons were made between a vehicle group, an SCH 225288 (1.0 mg/kg, p.o., q.d.) and a butorphanol (0.6 mg/kg, p.o., b.i.d.) group on the mean change in cough scores from baseline values and days 6-9 after B. bronchiseptica challenge. SCH 225288 (1.0 mg/kg, p.o., q.d.) displayed a positive antitussive tendency (p = 0.06) to inhibit B. bronchiseptica cough whereas butorphanol (0.6 mg/kg, p.o., b.i.d.) was devoid of antitussive activity.\n\n\nCONCLUSIONS\nTaken together, the present data show that SCH 225288 is a potent and effective antitussive agent in animal models of cough. Furthermore, these findings indicate that NOP agonists represent a promising new therapeutic approach for the treatment of cough without the side effect liabilities associated with opioid antitussives.",
"title": ""
},
{
"docid": "536c739e6f0690580568a242e1d65ef3",
"text": "Intrusion Detection Systems (IDS) are key components for securing critical infrastructures, capable of detecting malicious activities on networks or hosts. However, the efficiency of an IDS depends primarily on both its configuration and its precision. The large amount of network traffic that needs to be analyzed, in addition to the increase in attacks’ sophistication, renders the optimization of intrusion detection an important requirement for infrastructure security, and a very active research subject. In the state of the art, a number of approaches have been proposed to improve the efficiency of intrusion detection and response systems. In this article, we review the works relying on decision-making techniques focused on game theory and Markov decision processes to analyze the interactions between the attacker and the defender, and classify them according to the type of the optimization problem they address. While these works provide valuable insights for decision-making, we discuss the limitations of these solutions as a whole, in particular regarding the hypotheses in the models and the validation methods. We also propose future research directions to improve the integration of game-theoretic approaches into IDS optimization techniques.",
"title": ""
},
{
"docid": "77aea5cc0a74546f5c8fef1dd39770bc",
"text": "Road condition data are important in transportation management systems. Over the last decades, significant progress has been made and new approaches have been proposed for efficient collection of pavement condition data. However, the assessment of unpaved road conditions has been rarely addressed in transportation research. Unpaved roads constitute approximately 40% of the U.S. road network, and are the lifeline in rural areas. Thus, it is important for timely identification and rectification of deformation on such roads. This article introduces an innovative Unmanned Aerial Vehicle (UAV)-based digital imaging system focusing on efficient collection of surface condition data over rural roads. In contrast to other approaches, aerial assessment is proposed by exploring aerial imagery acquired from an unpiloted platform to derive a threedimensional (3D) surface model over a road distress area for distress measurement. The system consists of a lowcost model helicopter equipped with a digital camera, a Global Positioning System (GPS) receiver and an Inertial Navigation System (INS), and a geomagnetic sensor. A set of image processing algorithms has been developed for precise orientation of the acquired images, and generation of 3D road surface models and orthoimages, which allows for accurate measurement of the size and the dimension of the road surface distresses. The developed system has been tested over several test sites ∗To whom correspondence should be addressed. E-mail: chunsunz@ unimelb.edu.au. with roads of various surface distresses. The experiments show that the system is capable for providing 3D information of surface distresses for road condition assessment. Experiment results demonstrate that the system is very promising and provides high accuracy and reliable results. Evaluation of the system using 2D and 3D models with known dimensions shows that subcentimeter measurement accuracy is readily achieved. The comparison of the derived 3D information with the onsite manual measurements of the road distresses reveals differences of 0.50 cm, demonstrating the potential of the presented system for future practice.",
"title": ""
},
{
"docid": "bf78bfc617dfe5a152ad018dacbd5488",
"text": "Identifying and fixing defects is a crucial and expensive part of the software lifecycle. Measuring the quality of bug-fixing patches is a difficult task that affects both functional correctness and the future maintainability of the code base. Recent research interest in automatic patch generation makes a systematic understanding of patch maintainability and understandability even more critical. \n We present a human study involving over 150 participants, 32 real-world defects, and 40 distinct patches. In the study, humans perform tasks that demonstrate their understanding of the control flow, state, and maintainability aspects of code patches. As a baseline we use both human-written patches that were later reverted and also patches that have stood the test of time to ground our results. To address any potential lack of readability with machine-generated patches, we propose a system wherein such patches are augmented with synthesized, human-readable documentation that summarizes their effects and context. Our results show that machine-generated patches are slightly less maintainable than human-written ones, but that trend reverses when machine patches are augmented with our synthesized documentation. Finally, we examine the relationship between code features (such as the ratio of variable uses to assignments) with participants' abilities to complete the study tasks and thus explain a portion of the broad concept of patch quality.",
"title": ""
},
{
"docid": "5c8570045e83b72643f1ac99018351ea",
"text": "OBJECTIVES\nAlthough anxiety exists concerning the perceived risk of transmission of bloodborne viruses after community-acquired needlestick injuries, seroconversion seems to be rare. The objectives of this study were to describe the epidemiology of pediatric community-acquired needlestick injuries and to estimate the risk of seroconversion for HIV, hepatitis B virus, and hepatitis C virus in these events.\n\n\nMETHODS\nThe study population included all of the children presenting with community-acquired needlestick injuries to the Montreal Children's Hospital between 1988 and 2006 and to Hôpital Sainte-Justine between 1995 and 2006. Data were collected prospectively at Hôpital Sainte-Justine from 2001 to 2006. All of the other data were reviewed retrospectively by using a standardized case report form.\n\n\nRESULTS\nA total of 274 patients were identified over a period of 19 years. Mean age was 7.9 +/- 3.4 years. A total of 176 (64.2%) were boys. Most injuries occurred in streets (29.2%) or parks (24.1%), and 64.6% of children purposely picked up the needle. Only 36 patients (13.1%) noted blood on the device. Among the 230 patients not known to be immune for hepatitis B virus, 189 (82.2%) received hepatitis B immunoglobulin, and 213 (92.6%) received hepatitis B virus vaccine. Prophylactic antiretroviral therapy was offered beginning in 1997. Of the 210 patients who presented thereafter, 82 (39.0%) received chemoprophylaxis, of whom 69 (84.1%) completed a 4-week course of therapy. The use of a protease inhibitor was not associated with a significantly higher risk of adverse effects or early discontinuation of therapy. At 6 months, 189 were tested for HIV, 167 for hepatitis B virus, and 159 for hepatitis C virus. There were no seroconversions.\n\n\nCONCLUSIONS\nWe observed no seroconversions in 274 pediatric community-acquired needlestick injuries, thereby confirming that the risk of transmission of bloodborne viruses in these events is very low.",
"title": ""
},
{
"docid": "68387fac4e4e320b522f928c98127e9d",
"text": "Nowadays, industrial robots play an important role automating recurring manufacturing tasks. New trends towards Smart Factory and Industry 4.0 however take a more productdriven approach and demand for more flexibility of the robotic systems. When a varying order of processing steps is required, intra-factory logistics has to cope with the new challenges. To achieve this flexibility, mobile robots can be used for transporting goods, or even mobile manipulators consisting of a mobile platform and a robot arm for independently grasping work pieces and manipulating them while in motion. Working with mobile robots however poses new challenges that did not yet occur for industrial manipulators: First, mobile robots have a greater position inaccuracy and typically work in not fully structured environments, requiring to interpret sensor data and to more often react to events from the environment. Furthermore, independent mobile robots introduce the aspect of distribution. For mobile manipulators, an additional challenge arises from the combination of platform and arm, where platform and arm, but also sensors have to be coordinated to achieve the desired behavior. The main contribution of this work is an approach that allows the object-oriented modeling and coordination of mobile robots, supporting the cooperation of mobile manipulators. Within a mobile manipulator, the approach allows to define real-time reactions to sensor data and to synchronize the different actuators and sensors present, allowing sensor-aware combinations of motions for platform and arm. Moreover, the approach facilitates an easy way of programming, provides means to handle kinematic restrictions or redundancy, and supports advanced capabilities such as impedance control to mitigate position uncertainty. Working with multiple independent mobile robots, each has a different knowledge about its environment, based on the available sensors. These different views are modeled, allowing consistent coordination of robots in applications using the data available on each robot. To cope with geometric uncertainty, sensors are modeled and the relationship between their measurements and geometric aspects is defined. Based on these definitions and incoming sensor data, position estimates are automatically derived. Additionally, the more dynamic environment leads to different possible outcomes of task execution. These are explicitly modeled and can be used to define reactive behavior. The approach was successfully evaluated based on two application examples, ranging from physical interaction between two mobile manipulators handing over a work-piece to gesture control of a quadcopter for carrying goods.",
"title": ""
},
{
"docid": "e9e19edc17e284932e4a09a97a603947",
"text": "In this paper we analyze the process of hypermedia applications design and implementation, focusing in particular on two critical aspects of these applications: the navigational and interface structure. We discuss the way in which we build the navigation and abstract interface models using the Object-Oriented Hypermedia Design Method (OOHDM); we show which concerns must be taken into account for each task by giving examples from a real project we are developing, the Portinari Project. We show which implementation concerns must be considered when defining interface behavior, discussing both a Toolbook and a HTML implementation of the example application.",
"title": ""
},
{
"docid": "1c367cad26436a059e56d000ac0db3c4",
"text": "We propose a goal-driven web navigation as a benchmark task for evaluating an agent with abilities to understand natural language and plan on partially observed environments. In this challenging task, an agent navigates through a web site, which is represented as a graph consisting of web pages as nodes and hyperlinks as directed edges, to find a web page in which a query appears. The agent is required to have sophisticated high-level reasoning based on natural languages and efficient sequential decision making capability to succeed. We release a software tool, called WebNav, that automatically transforms a website into this goal-driven web navigation task, and as an example, we make WikiNav, a dataset constructed from the English Wikipedia containing approximately 5 million articles and more than 12 million queries for training. We evaluate two different agents based on neural networks on the WikiNav and provide the human performance. Our results show the difficulty of the task for both humans and machines. With this benchmark, we expect faster progress in developing artificial agents with natural language understanding and planning skills.",
"title": ""
},
{
"docid": "7716409441fb8e34013d3e9f58d32476",
"text": "Decentralized partially observable Markov decision processes (Dec-POMDPs) are a powerful tool for modeling multi-agent planning and decision-making under uncertainty. Prevalent Dec-POMDP solution techniques require centralized computation given full knowledge of the underlying model. Multi-agent reinforcement learning (MARL) based approaches have been recently proposed for distributed solution of during learning and policy execution are identical. In some practical scenarios this may not be the case. We propose a novel MARL approach in which agents are allowed to rehearse with information that will not be available during policy execution. The key is for the agents to learn policies that do not explicitly rely on these rehearsal features. We also establish a weak convergence result for our algorithm, RLaR, demonstrating that RLaR converges in probability when certain conditions are met. We show experimentally that incorporating rehearsal features can enhance the learning rate compared to non-rehearsalbased learners, and demonstrate fast, (near) optimal performance on many existing benchmark DecPOMDP problems. We also compare RLaR against an existing approximate Dec-POMDP solver which, like RLaR, does not assume a priori knowledge of the model. While RLaR's policy representation is not as scalable, we show that RLaR produces higher quality policies for most problems and horizons studied. & 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "bd007824642b690f568937003ed33d54",
"text": "We present GENeVis, an application to visualize gene expression time series data in a gene regulatory network context. This is a network of regulator proteins that regulate the expression of their respective target genes. The networks are represented as graphs, in which the nodes represent genes, and the edges represent interactions between a gene and its targets. GENeVis adds features that are currently lacking in existing tools, such as mapping of expression value and corresponding p-value (or other statistic) to a single visual attribute, multiple time point visualization, and visual comparison of multiple time series in one view. Various interaction mechanisms, such as panning, zooming, regulator and target highlighting, data selection, and tooltips support data analysis and exploration. Subnetworks can be studied in detail in a separate view that shows the network context, expression data plots, and tables containing the raw expression data. We present a case study, in which gene expression time series data acquired in-house are analyzed by a biological expert using GENeVis. The case study shows that the application fills the gap between present biological interpretation of time series experiments, performed on a gene-by-gene basis, and analysis of global classes of genes whose expression is regulated by regulator proteins.",
"title": ""
},
{
"docid": "f74f66984fec464cf95569d109670e49",
"text": "The paper discusses the so-called sharing economy from an industrial structure perspective. The illustrative cases examined are Airbnb and Uber. The research question raised is concerned with the extent to which transaction cost theory can be used to explain the changing industrial structures in the application areas that the Internet-based platforms are addressing and how other theoretical frameworks can be helpful in understanding these developments. The paper concludes by proposing a theoretical framework for analyzing the structural implications of the sharing economy based on theories on multi-sided platforms, transaction costs, and substitution and complementation.",
"title": ""
},
{
"docid": "2c95ebadb6544904b791cdbbbd70dc1c",
"text": "This report describes a small heartbeat monitoring system using capacitively coupled ECG sensors. Capacitively coupled sensors using an insulated electrode have been proposed to obtain ECG signals without pasting electrodes directly onto the skin. Although the sensors have better usability than conventional ECG sensors, it is difficult to remove noise contamination. Power-line noise can be a severe noise source that increases when only a single electrode is used. However, a multiple electrode system degrades usability. To address this problem, we propose a noise cancellation technique using an adaptive noise feedback approach, which can improve the availability of the capacitive ECG sensor using a single electrode. An instrumental amplifier is used in the proposed method for the first stage amplifier instead of voltage follower circuits. A microcontroller predicts the noise waveform from an ADC output. To avoid saturation caused by power-line noise, the predicted noise waveform is fed back to an amplifier input through a DAC. We implemented the prototype sensor system to evaluate the noise reduction performance. Measurement results using a prototype board show that the proposed method can suppress 28-dB power-line noise.",
"title": ""
},
{
"docid": "389538174613c07818361d014deecd22",
"text": "High range-resolution monopulse (HRRM) tracking radar which maintains wide instantaneous bandwidth through both range and angle error sensing channels provides range, azimuth, elevation, and amplitude for each resolved part of the target. The three-dimensional target detail can be used to improve and extend radar performance in several ways: for improved precision of target location, for target classification and recognition, to counter repeater-type ECM, to improve low-angle multipath tracking, to resolve multiple targets, as a miss-distance measurement capability, and for improved tracking in chaff and clutter. These have been demonstrated qualitatively except for the ECCM to repeater ECM and low-altitude tracking improvement. Initial results from an experimental HRRM radar with 3-ns pulse length show resolution of aircraft into its major parts and precise location of each resolved part accurately in range and angle. Realtime closed-loop tracking is performed on aircraft in flight using high-speed sampled, digitized, and processed HRRM range and angle video data. Clutter rejection capability is also demonstrated.",
"title": ""
},
{
"docid": "70cad4982e42d44eec890faf6ddc5c75",
"text": "Both translation arrest and proteasome stress associated with accumulation of ubiquitin-conjugated protein aggregates were considered as a cause of delayed neuronal death after transient global brain ischemia; however, exact mechanisms as well as possible relationships are not fully understood. The aim of this study was to compare the effect of chemical ischemia and proteasome stress on cellular stress responses and viability of neuroblastoma SH-SY5Y and glioblastoma T98G cells. Chemical ischemia was induced by transient treatment of the cells with sodium azide in combination with 2-deoxyglucose. Proteasome stress was induced by treatment of the cells with bortezomib. Treatment of SH-SY5Y cells with sodium azide/2-deoxyglucose for 15 min was associated with cell death observed 24 h after treatment, while glioblastoma T98G cells were resistant to the same treatment. Treatment of both SH-SY5Y and T98G cells with bortezomib was associated with cell death, accumulation of ubiquitin-conjugated proteins, and increased expression of Hsp70. These typical cellular responses to proteasome stress, observed also after transient global brain ischemia, were not observed after chemical ischemia. Finally, chemical ischemia, but not proteasome stress, was in SH-SY5Y cells associated with increased phosphorylation of eIF2α, another typical cellular response triggered after transient global brain ischemia. Our results showed that short chemical ischemia of SH-SY5Y cells is not sufficient to induce both proteasome stress associated with accumulation of ubiquitin-conjugated proteins and stress response at the level of heat shock proteins despite induction of cell death and eIF2α phosphorylation.",
"title": ""
}
] |
scidocsrr
|
020ab9f6f041b8d4be0184a3bd62574e
|
Robust Large Margin Deep Neural Networks
|
[
{
"docid": "0e6b54a70a1604caf7449c8eb1286d5e",
"text": "Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and nonexpert readers in statistics, computer science, mathematics, and engineering.",
"title": ""
},
{
"docid": "fabc65effd31f3bb394406abfa215b3e",
"text": "Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find detailed description of the theory (including proofs).",
"title": ""
}
] |
[
{
"docid": "d0043eb45257f9eed6d874f4c7aa709c",
"text": "We report the results of our classification-based machine translation model, built upon the framework of a recurrent neural network using gated recurrent units. Unlike other RNN models that attempt to maximize the overall conditional log probability of sentences against sentences, our model focuses a classification approach of estimating the conditional probability of the next word given the input sequence. This simpler approach using GRUs was hoped to be comparable with more complicated RNN models, but achievements in this implementation were modest and there remains a lot of room for improving this classification approach.",
"title": ""
},
{
"docid": "5ef80b057d0dc7e888f369de0556f3b8",
"text": "We extend the theory of boosting for regression problems to the online learning setting. Generalizing from the batch setting for boosting, the notion of a weak learning algorithm is modeled as an online learning algorithm with linear loss functions that competes with a base class of regression functions, while a strong learning algorithm is an online learning algorithm with convex loss functions that competes with a larger class of regression functions. Our main result is an online gradient boosting algorithm which converts a weak online learning algorithm into a strong one where the larger class of functions is the linear span of the base class. We also give a simpler boosting algorithm that converts a weak online learning algorithm into a strong one where the larger class of functions is the convex hull of the base class, and prove its optimality.",
"title": ""
},
{
"docid": "cc6458464cd8bb152683fde0af1e3d23",
"text": "While the application of IoT in smart technologies becomes more and more proliferated, the pandemonium of its protocols becomes increasingly confusing. More seriously, severe security deficiencies of these protocols become evident, as time-to-market is a key factor, which satisfaction comes at the price of a less thorough security design and testing. This applies especially to the smart home domain, where the consumer-driven market demands quick and cheap solutions. This paper presents an overview of IoT application domains and discusses the most important wireless IoT protocols for smart home, which are KNX-RF, EnOcean, Zigbee, Z-Wave and Thread. Finally, it describes the security features of said protocols and compares them with each other, giving advice on whose protocols are more suitable for a secure smart home.",
"title": ""
},
{
"docid": "bc0fa704763199526c4f28e40fa11820",
"text": "GPFS is a distributed file system run on some of the largest supercomputers and clusters. Through it's deployment, the authors have been able to gain a number of key insights into the methodology of developing a distributed file system which can reliably scale and maintain POSIX semantics. Achieving the necessary throughput requires parallel access for reading, writing and updating metadata. It is a process that is accomplished mostly through distributed locking.",
"title": ""
},
{
"docid": "fea31b71829803d78dabf784dfdb0093",
"text": "Tag recommendation is helpful for the categorization and searching of online content. Existing tag recommendation methods can be divided into collaborative filtering methods and content based methods. In this paper, we put our focus on the content based tag recommendation due to its wider applicability. Our key observation is the tag-content co-occurrence, i.e., many tags have appeared multiple times in the corresponding content. Based on this observation, we propose a generative model (Tag2Word), where we generate the words based on the tag-word distribution as well as the tag itself. Experimental evaluations on real data sets demonstrate that the proposed method outperforms several existing methods in terms of recommendation accuracy, while enjoying linear scalability.",
"title": ""
},
{
"docid": "fe82663526f9284243d29acbd5b335f8",
"text": "The potential of lower-limb exoskeletons and powered orthoses in gait assistance applications for patients with locomotive disorders would have a terrific impact in the society of the near future. This paper presents the development and main features of a lower limb exoskeleton being developed as an active orthosis to allow a quadriplegic child to walk. As the patient is not able to move any of her limbs, the device will produce her basic motions in everyday-life activities: stand up, sit down, and walk stably. Synergic biarticular actuation in the ankle, compliance controller based on the force measured by insoles at the feet and the definition of parameterized hip and foot trajectories that allow to choose the characteristics of gait are some of the new features included in this prototype. Experiments validate the improved performance of gait based on the proposed approach.",
"title": ""
},
{
"docid": "4bcc478495702c190d00811732150671",
"text": "We consider an efficient realization of the all-reduce operation with large data sizes in cluster environments, under the assumption that the reduce operator is associative and commutative. We derive a tight lower bound of the amount of data that must be communicated in order to complete this operation and propose a ring-based algorithm that only requires tree connectivity to achieve bandwidth optimality. Unlike the widely used butterfly-like all-reduce algorithm that incurs network contention in SMP/multi-core clusters, the proposed algorithm can achieve contention-free communication in almost all contemporary clusters including SMP/multi-core clusters and Ethernet switched clusters with multiple switches. We demonstrate that the proposed algorithm is more efficient than other algorithms on clusters with different nodal architectures and networking technologies when the data size is sufficiently large.",
"title": ""
},
{
"docid": "c2df8cc7775bd4ec2bfdf4498d136c9f",
"text": "Particle Swarm Optimization is a popular heuristic search algorithm which is inspired by the social learning of birds or fishes. It is a swarm intelligence technique for optimization developed by Eberhart and Kennedy [1] in 1995. Inertia weight is an important parameter in PSO, which significantly affects the convergence and exploration-exploitation trade-off in PSO process. Since inception of Inertia Weight in PSO, a large number of variations of Inertia Weight strategy have been proposed. In order to propose one or more than one Inertia Weight strategies which are efficient than others, this paper studies 15 relatively recent and popular Inertia Weight strategies and compares their performance on 05 optimization test problems.",
"title": ""
},
{
"docid": "db70302a3d7e7e7e5974dd013e587b12",
"text": "In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we propose the SIPHON architecture---a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called \\emph{wormholes} distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, five physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50 000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware.",
"title": ""
},
{
"docid": "d103d7793a9ff39c43dce47d45742905",
"text": "This paper proposes an architecture for an open-domain conversational system and evaluates an implemented system. The proposed architecture is fully composed of modules based on natural language processing techniques. Experimental results using human subjects show that our architecture achieves significantly better naturalness than a retrieval-based baseline and that its naturalness is close to that of a rule-based system using 149K hand-crafted rules.",
"title": ""
},
{
"docid": "b8dbc4c33e51350109bf1fec5ef852ce",
"text": "Stack Overflow is one of the most popular question-and-answer sites for programmers. However, there are a great number of duplicate questions that are expected to be detected automatically in a short time. In this paper, we introduce two approaches to improve the detection accuracy: splitting body into different types of data and using word-embedding to treat word ambiguities that are not contained in the general corpuses. The evaluation shows that these approaches improve the accuracy compared with the traditional method.",
"title": ""
},
{
"docid": "5ad4560383ab74545c494ee722b1c57c",
"text": "In this paper, a sub-dictionary based sparse coding method is proposed for image representation. The novel sparse coding method substitutes a new regularization item for L1-norm in the sparse representation model. The proposed sparse coding method involves a series of sub-dictionaries. Each sub-dictionary contains all the training samples except for those from one particular category. For the test sample to be represented, all the sub-dictionaries should linearly represent it apart from the one that does not contain samples from that label, and this sub-dictionary is called irrelevant sub-dictionary. This new regularization item restricts the sparsity of each sub-dictionary's residual, and this restriction is helpful for classification. The experimental results demonstrate that the proposed method is superior to the previous related sparse representation based classification.",
"title": ""
},
{
"docid": "57ea5e1d282fc47989bdd1c997e07cbf",
"text": "a r t i c l e i n f o This study investigates the moderating effect of recommendation source credibility on the causal relationships between informational factors and recommendation credibility, as well as its moderating effect on the causal relationship between recommendation credibility and recommendation adoption. Using data from 199 responses from a leading online consumer discussion forum in China, we find that recommendation source credibility significantly moderates two informational factors' effects on readers' perception of recommendation credibility, each in a different direction. Further, we find that source credibility negatively moderates the effect of recommendation credibility on recommendation adoption. Traditional word-of-mouth (WOM) has been shown to play an important role on consumers' purchase decisions (e.g., [2]). With the popularization of the Internet, more and more consumers have shared their past consuming experiences (i.e., online consumer recommendation) online, and researchers often refer to this online WOM as electronic word-of-mouth (eWOM). Given the distinct characteristics of Internet communication (e.g., available to individuals without the limitation of time and location, directed to multiple individuals simultaneously), eWOM has conquered known limitations of traditional WOM. In general, eWOM has global reach and influence. In China, many online consumer discussion forums support eWOM, and much previous research [3,7,12,13,21] demonstrates that because eWOM provides indirect purchasing knowledge to readers, the recommendations on these forums can significantly affect their attitudes towards various kinds of consuming targets (e.g., stores, products and services). Various prior studies have postulated large numbers of antecedent factors which can affect information readers' cognition towards the recommendations, and many of them stem from elaboration likelihood model (ELM) (e. that there are two distinct routes that can affect information readers' attitude toward presented information: (1) the central route that considers the attitude formation (or change) as the result of the receivers' diligent consideration of the content of the information (informational factors); and (2) the peripheral route that requires less cognitive work attuned to simple cues in the information to influence attitude (information-irrelevant factors). ELM suggests two factors, named information readers' motivation and ability, can be the significant moderators to shift the effects of central and peripheral factors on readers' perception of information credibility. Other researchers [24,27] posit that the peripheral factor – source credibility – may also have a moderating rather than a direct effect on the causal relationship between the informational factors and the information credibility; this view is consistent with the attribution inference …",
"title": ""
},
{
"docid": "0aca949889a67f3dd21efe372a7f706d",
"text": "Existing research on the formation of employee ethical climate perceptions focuses mainly on organization characteristics as antecedents, and although other constructs have been considered, these constructs have typically been studied in isolation. Thus, our understanding of the context in which ethical climate perceptions develop is incomplete. To address this limitation, we build upon the work of Rupp (Organ Psychol Rev 1:72–94, 2011) to develop and test a multi-experience model of ethical climate which links aspects of the corporate social responsibility (CSR), ethics, justice, and trust literatures and helps to explain how employees’ ethical climate perceptions form. We argue that in forming ethical climate perceptions, employees consider the actions or characteristics of a complex web of actors. Specifically, we propose that employees look (1) outward at how communities are impacted by their organization’s actions (e.g., CSR), (2) upward to make inferences about the ethicality of leaders in their organizations (e.g., ethical leadership), and (3) inward at their own propensity to trust others as they form their perceptions. Using a multiple-wave field study (N = 201) conducted at a privately held US corporation, we find substantial evidence in support of our model.",
"title": ""
},
{
"docid": "673af61761bb63219d3fb1be958560dd",
"text": "The Common Scrambling Algorithm (CSA) is used to encrypt streams of video data in the Digital Video Broadcasting (DVB) system. The algorithm cascades a stream and a block cipher, apparently for a larger security margin. In this paper we set out to analyze the block cipher and the stream cipher separately and give an overview of how they interact with each other. We present a practical attack on the stream cipher. Research on the block cipher so far indicates it to be resistant against linear and algebraic cryptanalysis as well as simple slide attacks.",
"title": ""
},
{
"docid": "cf54284be4dbf970e286a83d3d89d08f",
"text": "The design of a wearable upper extremity therapy robot RUPERT IVtrade (Robotic Upper Extremity Repetitive Trainer) device is presented. It is designed to assist in repetitive therapy tasks related to activities of daily living which has been advocated for being more effective for functional recovery. RUPERTtrade has five actuated degrees of freedom driven by compliant and safe pneumatic muscle actuators (PMA) assisting shoulder elevation, humeral external rotation, elbow extension, forearm supination and wrist/hand extension. The device is designed to extend the arm and move in a 3D space with no gravity compensation, which is a natural setting for practicing day-to-day activities. Because the device is wearable and lightweight, the device is very portable; it can be worn standing or sitting for performing therapy tasks that better mimic activities of daily living. A closed-loop controller combining a PID-based feedback controller and a iterative learning controller (ILC)-based feedforward controller is proposed for RUPERT for passive repetitive task training. This type of control aids in overcoming the highly nonlinear nature of the plant under control, and also helps in adapting easily to different subjects for performing different tasks. The system was tested on two able-bodied subjects to evaluate its performance.",
"title": ""
},
{
"docid": "c68c5df29702e797b758474f4e8b137e",
"text": "Abstract—A miniaturized printed log-periodic fractal dipole antenna is proposed. Tree fractal structure is introduced in an antenna design and evolves the traditional Euclidean log-periodic dipole array into the log-periodic second-iteration tree-dipole array (LPT2DA) for the first time. Main parameters and characteristics of the proposed antenna are discussed. A fabricated proof-of-concept prototype of the proposed antenna is etched on a FR4 substrate with a relative permittivity of 4.4 and volume of 490 mm × 245 mm × 1.5 mm. The impedance bandwidth (measured VSWR < 2) of the fabricated antenna with approximate 40% reduction of traditional log-periodic dipole antenna is from 0.37 to 3.55GHz with a ratio of about 9.59 : 1. Both numerical and experimental results show that the proposed antenna has stable directional radiation patterns and apparently miniaturized effect, which are suitable for various ultra-wideband applications.",
"title": ""
},
{
"docid": "f733b53147ce1765709acfcba52c8bbf",
"text": "BACKGROUND\nIt is important to evaluate the impact of cannabis use on onset and course of psychotic illness, as the increasing number of novice cannabis users may translate into a greater public health burden. This study aims to examine the relationship between adolescent onset of regular marijuana use and age of onset of prodromal symptoms, or first episode psychosis, and the manifestation of psychotic symptoms in those adolescents who use cannabis regularly.\n\n\nMETHODS\nA review was conducted of the current literature for youth who initiated cannabis use prior to the age of 18 and experienced psychotic symptoms at, or prior to, the age of 25. Seventeen studies met eligibility criteria and were included in this review.\n\n\nRESULTS\nThe current weight of evidence supports the hypothesis that early initiation of cannabis use increases the risk of early onset psychotic disorder, especially for those with a preexisting vulnerability and who have greater severity of use. There is also a dose-response association between cannabis use and symptoms, such that those who use more tend to experience greater number and severity of prodromal and diagnostic psychotic symptoms. Those with early-onset psychotic disorder and comorbid cannabis use show a poorer course of illness in regards to psychotic symptoms, treatment, and functional outcomes. However, those with early initiation of cannabis use appear to show a higher level of social functioning than non-cannabis users.\n\n\nCONCLUSIONS\nAdolescent initiation of cannabis use is associated, in a dose-dependent fashion, with emergence and severity of psychotic symptoms and functional impairment such that those who initiate use earlier and use at higher frequencies demonstrate poorer illness and treatment outcomes. These associations appear more robust for adolescents at high risk for developing a psychotic disorder.",
"title": ""
},
{
"docid": "3244dc9475ab3d4e51ce9dee3d5b46b9",
"text": "Dielectric Elastomer Actuators (DEAs) are an emerging actuation technology which are inherent lightweight and compliant in nature, enabling the development of unique and versatile devices, such as the Dielectric Elastomer Minimum Energy Structure (DEMES). We present the development of a multisegment DEMES actuator for use in a deployable microsatellite gripper. The satellite, called CleanSpace One, will demonstrate active debris removal (ADR) in space using a small cost effective system. The inherent flexibility and lightweight nature of the DEMES actuator enables space efficient storage (e.g. in a rolled configuration) of the gripper prior to deployment. Multisegment DEMES have multiple open sections and are an effective way of amplifying bending deformation. We present the evolution of our DEMES actuator design from initial concepts up until the final design, describing briefly the trade-offs associated with each method. We describe the optimization of our chosen design concept and characterize this design in terms on bending angle as a function of input voltage and gripping force. Prior to the characterization the actuator was stored and subsequently deployed from a rolled state, a capability made possible thanks to the fabrication methodology and materials used. A tip angle change of approximately 60o and a gripping force of 0.8 mN (for small deflections from the actuator tip) were achieved. The prototype actuators (approximately 10 cm in length) weigh a maximum of 0.65 g and are robust and mechanically resilient, demonstrating over 80,000 activation cycles.",
"title": ""
},
{
"docid": "84c2b96916ce68245cf81bdf8f4b435c",
"text": "INTRODUCTION\nComplete and accurate coding of injury causes is essential to the understanding of injury etiology and to the development and evaluation of injury-prevention strategies. While civilian hospitals use ICD-9-CM external cause-of-injury codes, military hospitals use codes derived from the NATO Standardization Agreement (STANAG) 2050.\n\n\nDISCUSSION\nThe STANAG uses two separate variables to code injury cause. The Trauma code uses a single digit with 10 possible values to identify the general class of injury as battle injury, intentionally inflicted nonbattle injury, or unintentional injury. The Injury code is used to identify cause or activity at the time of the injury. For a subset of the Injury codes, the last digit is modified to indicate place of occurrence. This simple system contains fewer than 300 basic codes, including many that are specific to battle- and sports-related injuries not coded well by either the ICD-9-CM or the draft ICD-10-CM. However, while falls, poisonings, and injuries due to machinery and tools are common causes of injury hospitalizations in the military, few STANAG codes correspond to these events. Intentional injuries in general and sexual assaults in particular are also not well represented in the STANAG. Because the STANAG does not map directly to the ICD-9-CM system, quantitative comparisons between military and civilian data are difficult.\n\n\nCONCLUSIONS\nThe ICD-10-CM, which will be implemented in the United States sometime after 2001, expands considerably on its predecessor, ICD-9-CM, and provides more specificity and detail than the STANAG. With slight modification, it might become a suitable replacement for the STANAG.",
"title": ""
}
] |
scidocsrr
|
b1f40396ca8ac7965ead86d914095c9a
|
Evaluation of Spoken Language Systems: the ATIS Domain
|
[
{
"docid": "33f610dbc42bd50af0a8da5a6b464c8b",
"text": "Speech research has made tremendous progress in the past using the following paradigm: de ne the research problem, collect a corpus to objectively measure progress, and solve the research problem. Natural language research, on the other hand, has typically progressed without the bene t of any corpus of data with which to test research hypotheses. We describe the Air Travel Information System (ATIS) pilot corpus, a corpus designed to measure progress in Spoken Language Systems that include both a speech and natural language component. This pilot marks the rst full-scale attempt to collect such a corpus and provides guidelines for future e orts.",
"title": ""
}
] |
[
{
"docid": "91b33c8dcc29c8a672b1df3d4ccc5943",
"text": "We propose a new vision-based SLAM (simultaneous localization and mapping) technique using both line and corner features as landmarks in the scene. The proposed SLAM algorithm uses an extended Kalman filter based framework to localize and reconstruct 3D line and corner landmarks at the same time and in real time. It provides more accurate localization and map building results than conventional corner feature only-based techniques. Moreover, the reconstructed 3D line landmarks enhance the performance of the robot relocation when robot's pose remains uncertain with corner information only. Experimental results show that the hybrid landmark based SLAM, using lines and corners, produces better performance than corner only one's",
"title": ""
},
{
"docid": "4d66a85651a78bfd4f7aba290c21f9a7",
"text": "Mobile carrier networks follow an architecture where network elements and their interfaces are defined in detail through standardization, but provide limited ways to develop new network features once deployed. In recent years we have witnessed rapid growth in over-the-top mobile applications and a 10-fold increase in subscriber traffic while ground-breaking network innovation took a back seat. We argue that carrier networks can benefit from advances in computer science and pertinent technology trends by incorporating a new way of thinking in their current toolbox. This article introduces a blueprint for implementing current as well as future network architectures based on a software-defined networking approach. Our architecture enables operators to capitalize on a flow-based forwarding model and fosters a rich environment for innovation inside the mobile network. In this article, we validate this concept in our wireless network research laboratory, demonstrate the programmability and flexibility of the architecture, and provide implementation and experimentation details.",
"title": ""
},
{
"docid": "a45f9042b677c64c8c2a5a0ca4299f23",
"text": "The most exciting development in parallel computer architecture is the convergence of traditionally disparate approaches on a common machine structure. This book explains the forces behind this convergence of shared-memory, message-passing, data parallel, and data-driven computing architectures. It then examines the design issues that are critical to all parallel architecture across the full range of modern design, covering data access, communication performance, coordination of cooperative work, and correct implementation of useful semantics. It not only describes the hardware and software techniques for addressing each of these issues but also explores how these techniques interact in the same system. Examining architecture from an application-driven perspective, it provides comprehensive discussions of parallel programming for high performance and of workload-driven evaluation, based on understanding hardware-software interactions.",
"title": ""
},
{
"docid": "3cc0707cec7af22db42e530399e762a8",
"text": "While watching television, people increasingly consume additional content related to what they are watching. We consider the task of finding video content related to a live television broadcast for which we leverage the textual stream of subtitles associated with the broadcast. We model this task as a Markov decision process and propose a method that uses reinforcement learning to directly optimize the retrieval effectiveness of queries generated from the stream of subtitles. Our dynamic query modeling approach significantly outperforms state-of-the-art baselines for stationary query modeling and for text-based retrieval in a television setting. In particular we find that carefully weighting terms and decaying these weights based on recency significantly improves effectiveness. Moreover, our method is highly efficient and can be used in a live television setting, i.e., in near real time.",
"title": ""
},
{
"docid": "37ef0e97e086975a4b47acd52f58f93f",
"text": "Herb induced liver injury (HILI) and drug induced liver injury (DILI) share the common characteristic of chemical compounds as their causative agents, which were either produced by the plant or synthetic processes. Both, natural and synthetic chemicals are foreign products to the body and need metabolic degradation to be eliminated. During this process, hepatotoxic metabolites may be generated causing liver injury in susceptible patients. There is uncertainty, whether risk factors such as high lipophilicity or high daily and cumulative doses play a pathogenetic role for HILI, as these are under discussion for DILI. It is also often unclear, whether a HILI case has an idiosyncratic or an intrinsic background. Treatment with herbs of Western medicine or traditional Chinese medicine (TCM) rarely causes elevated liver tests (LT). However, HILI can develop to acute liver failure requiring liver transplantation in single cases. HILI is a diagnosis of exclusion, because clinical features of HILI are not specific as they are also found in many other liver diseases unrelated to herbal use. In strikingly increased liver tests signifying severe liver injury, herbal use has to be stopped. To establish HILI as the cause of liver damage, RUCAM (Roussel Uclaf Causality Assessment Method) is a useful tool. Diagnostic problems may emerge when alternative causes were not carefully excluded and the correct therapy is withheld. Future strategies should focus on RUCAM based causality assessment in suspected HILI cases and more regulatory efforts to provide all herbal medicines and herbal dietary supplements used as medicine with strict regulatory surveillance, considering them as herbal drugs and ascertaining an appropriate risk benefit balance.",
"title": ""
},
{
"docid": "7cd4efb34472aa2e7f8019c14137bf4e",
"text": "In theory, the pose of a calibrated camera can be uniquely determined from a minimum of four coplanar but noncollinear points. In practice, there are many applications of camera pose tracking from planar targets and there is also a number of recent pose estimation algorithms which perform this task in real-time, but all of these algorithms suffer from pose ambiguities. This paper investigates the pose ambiguity for planar targets viewed by a perspective camera. We show that pose ambiguities - two distinct local minima of the according error function - exist even for cases with wide angle lenses and close range targets. We give a comprehensive interpretation of the two minima and derive an analytical solution that locates the second minimum. Based on this solution, we develop a new algorithm for unique and robust pose estimation from a planar target. In the experimental evaluation, this algorithm outperforms four state-of-the-art pose estimation algorithms",
"title": ""
},
{
"docid": "62d93b9bcc66f402cd045f8586b0b62f",
"text": "Passive crossbar resistive random access memory (RRAM) arrays require select devices with nonlinear I-V characteristics to address the sneak-path problem. Here, we present a systematical analysis to evaluate the performance requirements of select devices during the read operation of RRAM arrays for the proposed one-selector-one-resistor (1S1R) configuration with serially connected selector/storage element. We found high selector current density is critical and the selector nonlinearity (ON/OFF) requirement can be relaxed at present. Different read schemes were analyzed to achieve high read margin and low power consumption. Design optimizations of the sense resistance and the storage elements are also discussed.",
"title": ""
},
{
"docid": "7124a3336c5c1555e713f2ed9b9d6c5f",
"text": "The search for relevant information can be very frustrating for users who, unintentionally, use too general or inappropriate keywords to express their requests. To overcome this situation, query expansion techniques aim at transforming the user request by adding new terms, referred as expansion features, that better describe the real intent of the users. We propose a method that relies exclusively on relevant structures (as opposed to the use of semantics) found in knowledge bases (KBs) to extract the expansion features. We call our method Structural Query Expansion (SQE). The structural analysis of KBs takes us to propose a set of structural motifs that connect their strongly related entries, which can be used to extract expansion features. In this paper we use Wikipedia as our KB, which is probably one of the largest sources of information. SQE is capable of achieving more than 150% improvement over non expanded queries and is able to identify the expansion features in less than 0.2 seconds in the worst case scenario. Most significantly, we believe that we are contributing to open new research directions in query expansion, proposing a method that is orthogonal to many current systems. For example, SQE improves pseudo-relevance feedback techniques up to 13%.",
"title": ""
},
{
"docid": "59574eb62f7c1473abaa564e022a45ee",
"text": "As deep learning (DL) is being rapidly pushed to edge computing, researchers invented various ways to make inference computation more efficient on mobile/IoT devices, such as network pruning, parameter compression, and etc. Quantization, as one of the key approaches, can effectively offload GPU, and make it possible to deploy DL on fixed-point pipeline. Unfortunately, not all existing networks design are friendly to quantization. For example, the popular lightweight MobileNetV1, while it successfully reduces parameter size and computation latency with separable convolution, our experiment shows its quantized models have large performance gap against its float point models. To resolve this, we analyzed the root cause of quantization loss and proposed a quantization-friendly separable convolution architecture. By evaluating the image classification task on ImageNet2012 dataset, our modified MobileNetV1 model can archive 8-bit inference top-1 accuracy in 68.03%, almost closed the gap to the float pipeline.",
"title": ""
},
{
"docid": "d2c6e2e807376b63828da4037028f891",
"text": "Cortical circuits in the brain are refined by experience during critical periods early in postnatal life. Critical periods are regulated by the balance of excitatory and inhibitory (E/I) neurotransmission in the brain during development. There is now increasing evidence of E/I imbalance in autism, a complex genetic neurodevelopmental disorder diagnosed by abnormal socialization, impaired communication, and repetitive behaviors or restricted interests. The underlying cause is still largely unknown and there is no fully effective treatment or cure. We propose that alteration of the expression and/or timing of critical period circuit refinement in primary sensory brain areas may significantly contribute to autistic phenotypes, including cognitive and behavioral impairments. Dissection of the cellular and molecular mechanisms governing well-established critical periods represents a powerful tool to identify new potential therapeutic targets to restore normal plasticity and function in affected neuronal circuits.",
"title": ""
},
{
"docid": "825640f8ce425a34462b98869758e289",
"text": "Recurrent neural networks scale poorly due to the intrinsic difficulty in parallelizing their state computations. For instance, the forward pass computation of ht is blocked until the entire computation of ht−1 finishes, which is a major bottleneck for parallel computing. In this work, we propose an alternative RNN implementation by deliberately simplifying the state computation and exposing more parallelism. The proposed recurrent unit operates as fast as a convolutional layer and 5-10x faster than cuDNN-optimized LSTM. We demonstrate the unit’s effectiveness across a wide range of applications including classification, question answering, language modeling, translation and speech recognition. We open source our implementation in PyTorch and CNTK1.",
"title": ""
},
{
"docid": "1a325aa676d9f29d3b2698bcb4e0b8be",
"text": "Monitoring of electric power systems in real time for reliability, aging status, and presence of incipient faults requires distributed and centralized processing of large amounts of data from distributed sensor networks. To solve this task, cohesive multidisciplinary efforts are needed from such fields as sensing, signal processing, control, communications, optimization theory, and, more recently, robotics. This review paper focuses on one trend of power system monitoring, namely, mobile monitoring. The developments in robotic maintenance for power systems indicate significant potential of this technological approach. Authors discuss integration of several important relevant sensor technologies that are used to monitor power systems, including acoustic sensing, fringing electric field sensing, and infrared sensing.",
"title": ""
},
{
"docid": "310036a45a95679a612cc9a60e44e2e0",
"text": "A broadband single layer, dual circularly polarized (CP) reflectarrays with linearly polarized feed is introduced in this paper. To reduce the electrical interference between the two orthogonal polarizations of the CP element, a novel subwavelength multiresonance element with a Jerusalem cross and an open loop is proposed, which presents a broader bandwidth and phase range excessing 360° simultaneously. By tuning the x- and y-axis dimensions of the proposed element, an optimization technique is used to minimize the phase errors on both orthogonal components. Then, a single-layer offset-fed 20 × 20-element dual-CP reflectarray has been designed and fabricated. The measured results show that the 1-dB gain and 3-dB axial ratio (AR) bandwidths of the dual-CP reflectarray can reach 12.5% and 50%, respectively, which shows a significant improvement in gain and AR bandwidths as compared to reflectarrays with conventional λ/2 cross-dipole elements.",
"title": ""
},
{
"docid": "245de72c0f333f4814990926e08c13e9",
"text": "Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.",
"title": ""
},
{
"docid": "6c75e6f05411cb746082e583f122194b",
"text": "A smart Vehicle Speed Monitoring and Traffic Routing System (VSMTRS) is proposed using Wireless Sensor Networks to monitor and report about the speeding vehicles and also to regulate the traffic. The system is built up of wireless modules including Crossbow MicaZ mote MPR2400, a 2.4 GHz IEEE 802.15.4, Tiny Wireless Measurement System (TWMS), a data acquisition card MDA320CA and a base station MIB510. The other modules include a microcontroller and a motor. The software used includes Tiny OS-1.11, Crossbow Moteworks (Xsniffer, Moteview, Moteconfig), MATLAB 7.30 for Data Processing PIC 16F877A is used for generating required data. MPLAB Assembler is used for microcontroller programming. This paper explains about the hardware prototype setup for part of VSMTRS, the algorithms used for the purpose, the advantages and the limitations of the entire system. Also the configuration of the setup, the OS and the application software are elaborated.",
"title": ""
},
{
"docid": "b231da0ff32e823bb245328929bdebf3",
"text": "BACKGROUND\nCultivated bananas and plantains are giant herbaceous plants within the genus Musa. They are both sterile and parthenocarpic so the fruit develops without seed. The cultivated hybrids and species are mostly triploid (2n = 3x = 33; a few are diploid or tetraploid), and most have been propagated from mutants found in the wild. With a production of 100 million tons annually, banana is a staple food across the Asian, African and American tropics, with the 15 % that is exported being important to many economies.\n\n\nSCOPE\nThere are well over a thousand domesticated Musa cultivars and their genetic diversity is high, indicating multiple origins from different wild hybrids between two principle ancestral species. However, the difficulty of genetics and sterility of the crop has meant that the development of new varieties through hybridization, mutation or transformation was not very successful in the 20th century. Knowledge of structural and functional genomics and genes, reproductive physiology, cytogenetics, and comparative genomics with rice, Arabidopsis and other model species has increased our understanding of Musa and its diversity enormously.\n\n\nCONCLUSIONS\nThere are major challenges to banana production from virulent diseases, abiotic stresses and new demands for sustainability, quality, transport and yield. Within the genepool of cultivars and wild species there are genetic resistances to many stresses. Genomic approaches are now rapidly advancing in Musa and have the prospect of helping enable banana to maintain and increase its importance as a staple food and cash crop through integration of genetical, evolutionary and structural data, allowing targeted breeding, transformation and efficient use of Musa biodiversity in the future.",
"title": ""
},
{
"docid": "6209ab862101c29f8fdf302bf33684bb",
"text": "In multimedia applications, the text and image components in a web document form a pairwise constraint that potentially indicates the same semantic concept. This paper studies cross-modal learning via the pairwise constraint and aims to find the common structure hidden in different modalities. We first propose a compound regularization framework to address the pairwise constraint, which can be used as a general platform for developing cross-modal algorithms. For unsupervised learning, we propose a multi-modal subspace clustering method to learn a common structure for different modalities. For supervised learning, to reduce the semantic gap and the outliers in pairwise constraints, we propose a cross-modal matching method based on compound ℓ21 regularization. Extensive experiments demonstrate the benefits of joint text and image modeling with semantically induced pairwise constraints, and they show that the proposed cross-modal methods can further reduce the semantic gap between different modalities and improve the clustering/matching accuracy.",
"title": ""
},
{
"docid": "fc821977be6a0c420d73ca76c5249dbd",
"text": "Monitoring volatile organic compound (VOC) pollution levels in indoor environments is of great importance for the health and comfort of individuals, especially considering that people currently spend >80% of their time indoors. The primary aim of this paper is to design a low-power ZigBee sensor network and internode data reception control framework to use in the real-time acquisition and communication of data concerning air pollutant levels from VOCs. The network consists of end device sensors with photoionization detectors, routers that propagate the network over long distances, and a coordinator that communicates with a computer. The design is based on the ATmega16 microcontroller and the Atmel RF230 ZigBee module, which are used to effectively process communication data with low power consumption. Priority is given to power consumption and sensing efficiency, which are achieved by incorporating various smart tasking and power management protocols. The measured data are displayed on a computer monitor through a graphical user interface. The preliminary experimental results demonstrate that the wireless sensor network system can monitor VOC concentrations with a high level of accuracy and is thus suitable for automated environmental monitoring. Both good indoor air quality and energy conservation can be achieved by integrating the VOC monitoring system proposed in this paper with the residential integrated ventilation controller.",
"title": ""
},
{
"docid": "786d1ba82d326370684395eba5ef7cd3",
"text": "A miniaturized dual-band Wilkinson power divider with a parallel LC circuit at the midpoints of two coupled-line sections is proposed in this paper. General design equations for parallel inductor L and capacitor C are derived from even- and odd-mode analysis. Generally speaking, characteristic impedances between even and odd modes are different in two coupled-line sections, and their electrical lengths are also different in inhomogeneous medium. This paper proved that a parallel LC circuit compensates for the characteristic impedance differences and the electrical length differences for dual-band operation. In other words, the proposed model provides self-compensation structure, and no extra compensation circuits are needed. Moreover, the upper limit of the frequency ratio range can be adjusted by two coupling strengths, where loose coupling for the first coupled-line section and tight coupling for the second coupled-line section are preferred for a wider frequency ratio range. Finally, an experimental circuit shows good agreement with the theoretical simulation.",
"title": ""
}
] |
scidocsrr
|
8e1971b93b4a565c54553c9a2a628e4d
|
A Convenient Multicamera Self-Calibration for Virtual Environments
|
[
{
"docid": "c48d4bd9d5fde3fa61e600449411fd25",
"text": "Shape-From-Silhouette (SFS), also known as Visual Hull (VH) construction, is a popular 3D reconstruction method which estimates the shape of an object from multiple silhouette images. The original SFS formulation assumes that all of the silhouette images are captured either at the same time or while the object is static. This assumption is violated when the object moves or changes shape. Hence the use of SFS with moving objects has been restricted to treating each time instant sequentially and independently. Recently we have successfully extended the traditional SFS formulation to refine the shape of a rigidly moving object over time. Here we further extend SFS to apply to dynamic articulated objects. Given silhouettes of a moving articulated object, the process of recovering the shape and motion requires two steps: (1) correctly segmenting (points on the boundary of) the silhouettes to each articulated part of the object, (2) estimating the motion of each individual part using the segmented silhouette. In this paper, we propose an iterative algorithm to solve this simultaneous assignment and alignment problem. Once we have estimated the shape and motion of each part of the object, the articulation points between each pair of rigid parts are obtained by solving a simple motion constraint between the connected parts. To validate our algorithm, we first apply it to segment the different body parts and estimate the joint positions of a person. The acquired kinematic (shape and joint) information is then used to track the motion of the person in new video sequences.",
"title": ""
}
] |
[
{
"docid": "645d828cc2fc16b1f6894e34c6104ea9",
"text": "on behalf of the American Heart Association Statistics Committee and Stroke Statistics Virani, Nathan D. Wong, Daniel Woo and Melanie B. Turner Nina P. Paynter, Pamela J. Schreiner, Paul D. Sorlie, Joel Stein, Tanya N. Turan, Salim S. Darren K. McGuire, Emile R. Mohler, Claudia S. Moy, Michael E. Mussolino, Graham Nichol, Lynda D. Lisabeth, David Magid, Gregory M. Marcus, Ariane Marelli, David B. Matchar, Mark D. Huffman, Brett M. Kissela, Steven J. Kittner, Daniel T. Lackland, Judith H. Lichtman, Heather J. Fullerton, Cathleen Gillespie, Susan M. Hailpern, John A. Heit, Virginia J. Howard, Franco, William B. Borden, Dawn M. Bravata, Shifan Dai, Earl S. Ford, Caroline S. Fox, Sheila Alan S. Go, Dariush Mozaffarian, Véronique L. Roger, Emelia J. Benjamin, Jarett D. Berry, Association 2013 Update : A Report From the American Heart −− Heart Disease and Stroke Statistics",
"title": ""
},
{
"docid": "9a03c5ff214a1a41280e6f4b335c87f1",
"text": "In this paper, we present an automatic abstractive summarization system of meeting conversations. Our system extends a novel multi-sentence fusion algorithm in order to generate abstract templates. It also leverages the relationship between summaries and their source meeting transcripts to select the best templates for generating abstractive summaries of meetings. Our manual and automatic evaluation results demonstrate the success of our system in achieving higher scores both in readability and informativeness.",
"title": ""
},
{
"docid": "2b40c6f6a9fc488524c23e11cd57a00b",
"text": "An overview of the basics of metaphorical thought and language from the perspective of Neurocognition, the integrated interdisciplinary study of how conceptual thought and language work in the brain. The paper outlines a theory of metaphor circuitry and discusses how everyday reason makes use of embodied metaphor circuitry.",
"title": ""
},
{
"docid": "0cf1c430d24a93f5d4da9200fbda41d4",
"text": "For some time I have been involved in efforts to develop computer-controlled systems for instruction. One such effort has been a computer-assistedinstruction (CAI) program for teaching reading in the primary grades (Atkinson, 1974) and another for teaching computer science at the college level (Atkinson, in press). The goal has been to use psychological theory to devise optimal instructional procedures—procedures that make moment-by-moment decisions based on the student's unique response history. To help guide some of the theoretical aspects of this work, research has also been done on the restricted but well-defined problem of optimizing the teaching of a foreign language vocabulary. This is an area in which mathematical models provide an accurate description of learning, and these models can be used in conjunction with the methods of control theory to develop precise algorithms for sequencing instruction among vocabulary items. Some of this work has been published, and those who have read about it know that the optimization schemes are quite effective—far more effective than procedures that permit the learner to make his own instructional decisions (Atkinson, 1972a, 1972b; Atkinson & Paulson, 1972). In conducting these vocabulary learning experiments, I have been struck by the incredible variability in learning rates across subjects. Even Stanford University students, who are a fairly select sample, display impressively large betweensubject differences. These differences may reflect differences in fundamental abilities, but it is easy to demonstrate that they also depend on the strategies that subjects bring to bear on the task. Good learners can introspect with ease about a \"bag of tricks\" for learning vocabulary items, whereas poor",
"title": ""
},
{
"docid": "83d68b205286362096d71c3041ea254b",
"text": "T his session is concerned with the automated creation of fiction or \"literary artefacts\" that might take the form of prose, poetry or drama. Special focus is placed upon those approaches that include the generation of narrative structures and therefore use some kind of story model. First attempts in automated story generation date back to the 1970s, with the implementation of Meehan's TALE-SPIN (1977) based on the achievement of character plans and Klein's automatic novel writer (1973/1979) that simulates the effects of generated events in the narrative universe. Currently, story generators enjoy a phase of revival, both as stand-alone systems or embedded components. Most of them make reference to an explicit model of narrative, but the approaches used are diverse: they range from story grammars in the generative vein to the conceptually inspired engagement-reflection cycle. Real-life applications include the generation of a set of plot plans for screen writers in a commercial entertainment environment, who could use the automatically created story pool as a source of inspiration, and the generation of new kinds of interactive dramas (video games).",
"title": ""
},
{
"docid": "14b6b544144d6c14cb283fd0ac8308d8",
"text": "Disrupted daily or circadian rhythms of lung function and inflammatory responses are common features of chronic airway diseases. At the molecular level these circadian rhythms depend on the activity of an autoregulatory feedback loop oscillator of clock gene transcription factors, including the BMAL1:CLOCK activator complex and the repressors PERIOD and CRYPTOCHROME. The key nuclear receptors and transcription factors REV-ERBα and RORα regulate Bmal1 expression and provide stability to the oscillator. Circadian clock dysfunction is implicated in both immune and inflammatory responses to environmental, inflammatory, and infectious agents. Molecular clock function is altered by exposomes, tobacco smoke, lipopolysaccharide, hyperoxia, allergens, bleomycin, as well as bacterial and viral infections. The deacetylase Sirtuin 1 (SIRT1) regulates the timing of the clock through acetylation of BMAL1 and PER2 and controls the clock-dependent functions, which can also be affected by environmental stressors. Environmental agents and redox modulation may alter the levels of REV-ERBα and RORα in lung tissue in association with a heightened DNA damage response, cellular senescence, and inflammation. A reciprocal relationship exists between the molecular clock and immune/inflammatory responses in the lungs. Molecular clock function in lung cells may be used as a biomarker of disease severity and exacerbations or for assessing the efficacy of chronotherapy for disease management. Here, we provide a comprehensive overview of clock-controlled cellular and molecular functions in the lungs and highlight the repercussions of clock disruption on the pathophysiology of chronic airway diseases and their exacerbations. Furthermore, we highlight the potential for the molecular clock as a novel chronopharmacological target for the management of lung pathophysiology.",
"title": ""
},
{
"docid": "a2223d57a866b0a0ef138e52fb515b84",
"text": "This paper is concerned with paraphrase detection, i.e., identifying sentences that are semantically identical. The ability to detect similar sentences written in natural language is crucial for several applications, such as text mining, text summarization, plagiarism detection, authorship authentication and question answering. Recognizing this importance, we study in particular how to address the challenges with detecting paraphrases in user generated short texts, such as Twitter, which often contain language irregularity and noise, and do not necessarily contain as much semantic information as longer clean texts. We propose a novel deep neural network-based approach that relies on coarse-grained sentence modelling using a convolutional neural network (CNN) and a recurrent neural network (RNN) model, combined with a specific fine-grained word-level similarity matching model. More specifically, we develop a new architecture, called DeepParaphrase, which enables to create an informative semantic representation of each sentence by (1) using CNN to extract the local region information in form of important n-grams from the sentence, and (2) applying RNN to capture the long-term dependency information. In addition, we perform a comparative study on stateof-the-art approaches within paraphrase detection. An important insight from this study is that existing paraphrase approaches perform well when applied on clean texts, but they do not necessarily deliver good performance against noisy texts, and vice versa. In contrast, our evaluation has shown that the proposed DeepParaphrase-based approach achieves good results in both types of texts, thus making it more robust and generic than the existing approaches.",
"title": ""
},
{
"docid": "5bd7df3bfcb5b99f8bcb4a9900af980e",
"text": "A learning model predictive controller for iterative tasks is presented. The controller is reference-free and is able to improve its performance by learning from previous iterations. A safe set and a terminal cost function are used in order to guarantee recursive feasibility and nondecreasing performance at each iteration. This paper presents the control design approach, and shows how to recursively construct terminal set and terminal cost from state and input trajectories of previous iterations. Simulation results show the effectiveness of the proposed control logic.",
"title": ""
},
{
"docid": "8722dcbb49196f0390e4cc439b2ac969",
"text": "A planar array of passive lens elements can be phased to approximate the effect of a curved dielectric lens. The rotational orientation of each element can provide the required phase shift for circular polarization. The array elements must be designed so that the hand of circular polarization changes as the electromagnetic wave passes through the lens. An element is presented that is based on an aperture-coupled microstrip patch antenna, and two lenses are designed. Each lens has a diameter of 254 mm and contains 349 elements. The elements have identical dimensions but the rotational orientation of each element is selected to provide a specific lens function. The first lens is designed to collimate radiation from a feed horn into a beam pointing 20° from broadside. At 12.9 GHz the aperture efficiency is 48%. The second lens acts as a Wollaston-type prism. It splits an incident wave according to its circular polarization components.",
"title": ""
},
{
"docid": "0ffe59ea5705ae6d180cee8976bbffb4",
"text": "We propose an analytical framework for studying parallel repetition, a basic product operation for one-round twoplayer games. In this framework, we consider a relaxation of the value of projection games. We show that this relaxation is multiplicative with respect to parallel repetition and that it provides a good approximation to the game value. Based on this relaxation, we prove the following improved parallel repetition bound: For every projection game G with value at most ρ, the k-fold parallel repetition G⊗k has value at most\n [EQUATION]\n This statement implies a parallel repetition bound for projection games with low value ρ. Previously, it was not known whether parallel repetition decreases the value of such games. This result allows us to show that approximating set cover to within factor (1 --- ε) ln n is NP-hard for every ε > 0, strengthening Feige's quasi-NP-hardness and also building on previous work by Moshkovitz and Raz.\n In this framework, we also show improved bounds for few parallel repetitions of projection games, showing that Raz's counterexample to strong parallel repetition is tight even for a small number of repetitions.\n Finally, we also give a short proof for the NP-hardness of label cover(1, Δ) for all Δ > 0, starting from the basic PCP theorem.",
"title": ""
},
{
"docid": "859c6f75ac740e311da5e68fcd093531",
"text": "PURPOSE\nTo understand the effect of socioeconomic status (SES) on the risk of complications in type 1 diabetes (T1D), we explored the relationship between SES and major diabetes complications in a prospective, observational T1D cohort study.\n\n\nMETHODS\nComplete data were available for 317 T1D persons within 4 years of age 28 (ages 24-32) in the Pittsburgh Epidemiology of Diabetes Complications Study. Age 28 was selected to maximize income, education, and occupation potential and to minimize the effect of advanced diabetes complications on SES.\n\n\nRESULTS\nThe incidences over 1 to 20 years' follow-up of end-stage renal disease and coronary artery disease were two to three times greater for T1D individuals without, compared with those with a college degree (p < .05 for both), whereas the incidence of autonomic neuropathy was significantly greater for low-income and/or nonprofessional participants (p < .05 for both). HbA(1c) was inversely associated only with income level. In sex- and diabetes duration-adjusted Cox models, lower education predicted end-stage renal disease (hazard ratio [HR], 2.9; 95% confidence interval [95% CI], 1.1-7.7) and coronary artery disease (HR, 2.5, 95% CI, 1.3-4.9), whereas lower income predicted autonomic neuropathy (HR, 1.7; 95% CI, 1.0-2.9) and lower-extremity arterial disease (HR, 3.7; 95% CI, 1.1-11.9).\n\n\nCONCLUSIONS\nThese associations, partially mediated by clinical risk factors, suggest that lower SES T1D individuals may have poorer self-management and, thus, greater complications from diabetes.",
"title": ""
},
{
"docid": "63ca8787121e3b392e130f9d451b11ea",
"text": "Frank K.Y. Chan Hong Kong University of Science and Technology",
"title": ""
},
{
"docid": "bd246ca9cea19187daf5d55e70149f4c",
"text": "Voice interactions on mobile phones are most often used to augment or supplement touch based interactions for users' convenience. However, for people with limited hand dexterity caused by various forms of motor-impairments voice interactions can have a significant impact and in some cases even enable independent interaction with a mobile device for the first time. For these users, a Mobile Voice User Interface (M-VUI), which allows for completely hands-free, voice only interaction would provide a high level of accessibility and independence. Implementing such a system requires research to address long standing usability challenges introduced by voice interactions that negatively affect user experience due to difficulty learning and discovering voice commands.\n In this paper we address these concerns reporting on research conducted to improve the visibility and learnability of voice commands of a M-VUI application being developed on the Android platform. Our research confirmed long standing challenges with voice interactions while exploring several methods for improving the onboarding and learning experience. Based on our findings we offer a set of implications for the design of M-VUIs.",
"title": ""
},
{
"docid": "6d60f0cd26681db25f322d77cadfdd34",
"text": "Over the last years, several authors have signaled that state of the art categorization methods fail to perform well when trained and tested on data from different databases. The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it go from max-margin classifiers to learning how to modify the features and obtain a more robust representation. The large majority of these works use BOW feature descriptors, and learning methods based on image-to-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases achieves always very strong performances. We build on this result, and present an NBNN-based domain adaptation algorithm that learns iteratively a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings.",
"title": ""
},
{
"docid": "886975826046787d2c054a7f13205ea7",
"text": "Cyber-secure networked control is modeled, analyzed, and experimentally illustrated in this paper. An attack space defined by the adversary's system knowledge, disclosure, and disruption resources is introduced. Adversaries constrained by these resources are modeled for a networked control system architecture. It is shown that attack scenarios corresponding to replay, zero dynamics, and bias injection attacks can be analyzed using this framework. An experimental setup based on a quadruple-tank process controlled over a wireless network is used to illustrate the attack scenarios, their consequences, and potential counter-measures.",
"title": ""
},
{
"docid": "8f5747a5503c9e5ab1945e2ac42516a4",
"text": "Mental wellbeing is the combination of feeling good and functioning well. Digital technology widens the opportunities for promoting mental wellbeing, particularly among those young people for whom technology is an ordinary part of life. This paper presents an initial review of publicly available apps and websites that have a primary purpose of promoting mental wellbeing. The review was in two stages: first, the interdisciplinary research team identified and reviewed 14 apps/websites, then 13 young people (7 female, 6 male) aged 12–18 years reviewed 11 of the apps/websites. Overall, the reviewers’ views were positive, although some significant criticisms were made. Based on the findings of the study, initial recommendations are offered to improve the design of apps/websites for promoting mental wellbeing among young people aged 12–18 years: highlight any age limits, provide information on mental wellbeing, improve findability, ensure accessibility on school computers, and highlight if young people were involved in design.",
"title": ""
},
{
"docid": "2111199064e824173cbf1322e3fdcd47",
"text": "This work addresses fundamental questions about the nature of cybercriminal organization. We investigate the organization of three underground forums: BlackhatWorld, Carders and L33tCrew to understand the nature of distinct communities within a forum, the structure of organization and the impact of enforcement, in particular banning members, on the structure of these forums. We find that each forum is divided into separate competing communities. Smaller communities are limited to 100-230 members, have a two-tiered hierarchy akin to a gang, and focus on a subset of cybercrime activities. Larger communities may have thousands of members and a complex organization with a distributed multi-tiered hierarchy more akin to a mob; such communities also have a more diverse cybercrime portfolio compared to smaller cohorts. Finally, despite differences in size and cybercrime portfolios, members on a single forum have similar operational practices, for example, they use the same electronic currency.",
"title": ""
},
{
"docid": "89a1e91c2ab1393f28a6381ba94de12d",
"text": "In this paper, a simulation environment encompassing realistic propagation conditions and system parameters is employed in order to analyze the performance of future multigigabit indoor communication systems at tetrahertz frequencies. The influence of high-gain antennas on transmission aspects is investigated. Transmitter position for optimal signal coverage is also analyzed. Furthermore, signal coverage maps and achievable data rates are calculated for generic indoor scenarios with and without furniture for a variety of possible propagation conditions.",
"title": ""
},
{
"docid": "645a1d50394e9cf746e88398ca867ad2",
"text": "In this paper, we conduct a comparative analysis of two associative memory-based pattern recognition algorithms. We compare the established Hopfield network algorithm with our novel Distributed Hierarchical Graph Neuron (DHGN) algorithm. The computational complexity and recall efficiency aspects of these algorithms are discussed. The results show that DHGN offers lower computational complexity with better recall efficiency compared to the Hopfield network.",
"title": ""
},
{
"docid": "443637fcc9f9efcf1026bb64aa0a9c97",
"text": "Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modeling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, exemplifying applications to communication networks are discussed by distinguishing tasks carried out at the edge and at the cloud segments of the network at different layers of the protocol stack, with an emphasis on the physical layer.",
"title": ""
}
] |
scidocsrr
|
fb0e45beefb406e1e8084d57bfe308e4
|
CRF-Filters: Discriminative Particle Filters for Sequential State Estimation
|
[
{
"docid": "4ac3c3fb712a1121e0990078010fe4b0",
"text": "1.1 Introduction Relational data has two characteristics: first, statistical dependencies exist between the entities we wish to model, and second, each entity often has a rich set of features that can aid classification. For example, when classifying Web documents, the page's text provides much information about the class label, but hyperlinks define a relationship between pages that can improve classification [Taskar et al., 2002]. Graphical models are a natural formalism for exploiting the dependence structure among entities. Traditionally, graphical models have been used to represent the joint probability distribution p(y, x), where the variables y represent the attributes of the entities that we wish to predict, and the input variables x represent our observed knowledge about the entities. But modeling the joint distribution can lead to difficulties when using the rich local features that can occur in relational data, because it requires modeling the distribution p(x), which can include complex dependencies. Modeling these dependencies among inputs can lead to intractable models, but ignoring them can lead to reduced performance. A solution to this problem is to directly model the conditional distribution p(y|x), which is sufficient for classification. This is the approach taken by conditional random fields [Lafferty et al., 2001]. A conditional random field is simply a conditional distribution p(y|x) with an associated graphical structure. Because the model is",
"title": ""
}
] |
[
{
"docid": "e5c7026b7970276a2814001c489792df",
"text": "The three level buck converter can offer high efficiency and high power density in VR and POL applications. The gains are made possible by adding a flying capacitor that reduces the MOSFET voltage stress by half allowing for the use of low voltage devices, doubles the effective switching frequency, and decreases the inductor size by reducing the volt-second across the inductor. To achieve high efficiency and power density the flying capacitor must be balanced at half of the input voltage and the circuit must be started up without the MOSFETs seeing the full input voltage for protection purposes. This paper provides a new novel control method to balance the flying capacitor with the use of current control and offers a simple startup solution to protect the MOSFETs during start up. Experimental verification shows the efficiency gains and inductance reduction.",
"title": ""
},
{
"docid": "0f122797e9102c6bab57e64176ee5e84",
"text": "We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"title": ""
},
{
"docid": "e559afd57c31b67f30942a519d079109",
"text": "We show how to use a variational approximation to the logistic function to perform approximate inference in Bayesian networks containing discrete nodes with continuous parents. Essentially, we convert the logistic function to a Gaussian, which facilitates exact inference, and then iteratively adjust the variational parameters to improve the quality of the approximation. We demonstrate experimentally that this approximation is much faster than sampling, but comparable in accuracy. We also introduce a simple new technique for handling evidence, which allows us to handle arbitrary distributionson observed nodes, as well as achieving a significant speedup in networks with discrete variables of large cardinality.",
"title": ""
},
{
"docid": "24f1f6cf89291915b8865be12b02bba1",
"text": "The apple genome sequence and the availability of high-throughput genotyping technologies have initiated a new era where SNP markers are abundant across the whole genome. Genomic selection (GS) is a statistical approach that utilizes all available genome-wide markers simultaneously to estimate breeding values or total genetic values. For breeding programmes, GS is a promising alternative to the traditional marker-assisted selection for manipulating complex polygenic traits often controlled by many small-effect genes. Various factors, such as genetic architecture of selection traits, population size and structure, genetic evaluation systems, density of SNP markers and extent of linkage disequilibrium, have been shown to be the key drivers of the accuracy of GS. In this paper, we provide an overview of the status of these aspects in current apple-breeding programmes. Strategies for GS for fruit quality and disease resistance are discussed, and an update on an empirical genomic selection study in a New Zealand apple-breeding programme is provided, along with a foresight of expected accuracy from such selection.",
"title": ""
},
{
"docid": "efc6c423fa98c012543352db8fb0688a",
"text": "Wireless sensor networks consist of sensor nodes with sensing and communication capabilities. We focus on data aggregation problems in energy constrained sensor networks. The main goal of data aggregation algorithms is to gather and aggregate data in an energy efficient manner so that network lifetime is enhanced. In this paper, we present a survey of data aggregation algorithms in wireless sensor networks. We compare and contrast different algorithms on the basis of performance measures such as lifetime, latency and data accuracy. We conclude with possible future research directions.",
"title": ""
},
{
"docid": "c4a104956ee7e0db325348e683947134",
"text": "Intracellular pH (pH(i)) plays a critical role in the physiological and pathophysiological processes of cells, and fluorescence imaging using pH-sensitive indicators provides a powerful tool to assess the pH(i) of intact cells and subcellular compartments. Here we describe a nanoparticle-based ratiometric pH sensor, comprising a bright and photostable semiconductor quantum dot (QD) and pH-sensitive fluorescent proteins (FPs), exhibiting dramatically improved sensitivity and photostability compared to BCECF, the most widely used fluorescent dye for pH imaging. We found that Förster resonance energy transfer between the QD and multiple FPs modulates the FP/QD emission ratio, exhibiting a >12-fold change between pH 6 and 8. The modularity of the probe enables customization to specific biological applications through genetic engineering of the FPs, as illustrated by the altered pH range of the probe through mutagenesis of the fluorescent protein. The QD-FP probes facilitate visualization of the acidification of endosomes in living cells following polyarginine-mediated uptake. These probes have the potential to enjoy a wide range of intracellular pH imaging applications that may not be feasible with fluorescent proteins or organic fluorophores alone.",
"title": ""
},
{
"docid": "f975a1fa2905f8ae42ced1f13a88a15b",
"text": "This paper presents a new method of detecting and tracking the boundaries of drivable regions in road without road-markings. As unmarked roads connect residential places to public roads, the capability of autonomously driving on such a roadway is important to truly realize self-driving cars in daily driving scenarios. To detect the left and right boundaries of drivable regions, our method first examines the image region at the front of ego-vehicle and then uses the appearance information of that region to identify the boundary of the drivable region from input images. Due to variation in the image acquisition condition, the image features necessary for boundary detection may not be present. When this happens, a boundary detection algorithm working frame-by-frame basis would fail to successfully detect the boundaries. To effectively handle these cases, our method tracks, using a Bayes filter, the detected boundaries over frames. Experiments using real-world videos show promising results.",
"title": ""
},
{
"docid": "de6581719d2bc451695a77d43b091326",
"text": "Keyphrases are useful for a variety of tasks in information retrieval systems and natural language processing, such as text summarization, automatic indexing, clustering/classification, ontology learning and building and conceptualizing particular knowledge domains, etc. However, assigning these keyphrases manually is time consuming and expensive in term of human resources. Therefore, there is a need to automate the task of extracting keyphrases. A wide range of techniques of keyphrase extraction have been proposed, but they are still suffering from the low accuracy rate and poor performance. This paper presents a state of the art of automatic keyphrase extraction approaches to identify their strengths and weaknesses. We also discuss why some techniques perform better than others and how can we improve the task of automatic keyphrase extraction.",
"title": ""
},
{
"docid": "e95bef9aac5bb118109d82dec750da26",
"text": "A novel microstrip circular disc monopole antenna with a reconfigurable 10-dB impedance bandwidth is proposed in this communication for cognitive radios (CRs). The antenna is fed by a microstrip line integrated with a bandpass filter based on a three-line coupled resonator (TLCR). The reconfiguration of the filter enables the monopole antenna to operate at either a wideband state or a narrowband state by using a PIN diode. For the narrowband state, two varactor diodes are employed to change the antenna operating frequency from 3.9 to 4.82 GHz continuously, which is different from previous work using PIN diodes to realize a discrete tuning. Similar radiation patterns with low cross-polarization levels are achieved for the two operating states. Measured results on tuning range, radiation patterns, and realized gains are provided, which show good agreement with numerical simulations.",
"title": ""
},
{
"docid": "076c5e6d8d6822988c64cabf8e6d4289",
"text": "This paper presents the design of a dual-polarized log.-periodic four arm antenna bent on a conical MID substrate. The bending of a planar structure in free space is highlighted and the resulting effects on the input impedance and radiation characteristic are analyzed. The subsequent design of the UWB compliant prototype is introduced. An adequate agreement between simulated and measured performance can be observed. The antenna provides an input matching of better than −8 dB over a frequency range from 3GHz to 9GHz. The antenna pattern is characterized by a radiation with two linear, orthogonal polarizations and a front-to-back ratio of 6 dB. A maximum gain of 5.6 dBi is achieved at 5.5GHz. The pattern correlation coefficients confirm the suitability of this structure for diversity and MIMO applications. The overall antenna diameter and height are 50mm and 24mm respectively. It could therefore be used as a surface mounted or ceiling antenna in buildings, vehicles or aircrafts for communication systems.",
"title": ""
},
{
"docid": "c5cc4da2906670c30fc0bac3040217bd",
"text": "Many popular problems in robotics and computer vision including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA) can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that while being general g2o offers a performance comparable to implementations of state-of-the-art approaches for the specific problems.",
"title": ""
},
{
"docid": "9b32c1ea81eb8d8eb3675c577cc0e2fc",
"text": "Users' addiction to online social networks is discovered to be highly correlated with their social connections in the networks. Dense social connections can effectively help online social networks retain their active users and improve the social network services. Therefore, it is of great importance to make a good prediction of the social links among users. Meanwhile, to enjoy more social network services, users nowadays are usually involved in multiple online social networks simultaneously. Formally, the social networks which share a number of common users are defined as the \"aligned networks\".With the information transferred from multiple aligned social networks, we can gain a more comprehensive knowledge about the social preferences of users in the pre-specified target network, which will benefit the social link prediction task greatly. However, when transferring the knowledge from other aligned source networks to the target network, there usually exists a shift in information distribution between different networks, namely domain difference. In this paper, we study the social link prediction problem of the target network, which is aligned with multiple social networks concurrently. To accommodate the domain difference issue, we project the features extracted for links from different aligned networks into a shared lower-dimensional feature space. Moreover, users in social networks usually tend to form communities and would only connect to a small number of users. Thus, the target network structure has both the low-rank and sparse properties. We propose a novel optimization framework, SLAMPRED, to combine both these two properties aforementioned of the target network and the information of multiple aligned networks with nice domain adaptations. Since the objective function is a linear combination of convex and concave functions involving nondifferentiable regularizers, we propose a novel optimization method to iteratively solve it. Extensive experiments have been done on real-world aligned social networks, and the experimental results demonstrate the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "1804ba10a62f81302f2701cfe0330783",
"text": "We describe a web browser fingerprinting technique based on measuring the onscreen dimensions of font glyphs. Font rendering in web browsers is affected by many factors—browser version, what fonts are installed, and hinting and antialiasing settings, to name a few— that are sources of fingerprintable variation in end-user systems. We show that even the relatively crude tool of measuring glyph bounding boxes can yield a strong fingerprint, and is a threat to users’ privacy. Through a user experiment involving over 1,000 web browsers and an exhaustive survey of the allocated space of Unicode, we find that font metrics are more diverse than User-Agent strings, uniquely identifying 34% of participants, and putting others into smaller anonymity sets. Fingerprinting is easy and takes only milliseconds. We show that of the over 125,000 code points examined, it suffices to test only 43 in order to account for all the variation seen in our experiment. Font metrics, being orthogonal to many other fingerprinting techniques, can augment and sharpen those other techniques. We seek ways for privacy-oriented web browsers to reduce the effectiveness of font metric–based fingerprinting, without unduly harming usability. As part of the same user experiment of 1,000 web browsers, we find that whitelisting a set of standard font files has the potential to more than quadruple the size of anonymity sets on average, and reduce the fraction of users with a unique font fingerprint below 10%. We discuss other potential countermeasures.",
"title": ""
},
{
"docid": "8aefd572e089cb29c13cefc6e59bdda8",
"text": "Different linguistic perspectives causes many diverse segmentation criteria for Chinese word segmentation (CWS). Most existing methods focus on improve the performance for each single criterion. However, it is interesting to exploit these different criteria and mining their common underlying knowledge. In this paper, we propose adversarial multi-criteria learning for CWS by integrating shared knowledge from multiple heterogeneous segmentation criteria. Experiments on eight corpora with heterogeneous segmentation criteria show that the performance of each corpus obtains a significant improvement, compared to single-criterion learning. Source codes of this paper are available on Github1.",
"title": ""
},
{
"docid": "3cde42debc88882e5f0ba6ef6a1de1db",
"text": "With the booming development of tourism, travel security problems are becoming more and more prominent. Congestion, stampedes, fights and other tourism emergency events occurred frequently, which should be a wake-up call for tourism security. Therefore, it is of great research value and application prospect to real-time monitor tourists and detect abnormal events in tourism surveillance video by using computer vision and video intelligent processing technology, which can realize the timely forecast and early warning of tourism emergencies. At present, although most of the video-based abnormal event detection methods work well in simple scenes, there are often problems such as low detection rate and high false positive rate in complex motion scenarios, and the detection of abnormal events can’t be processed in real time. To tackle these issues, we propose an abnormal event detection model in tourism video based on salient spatio-temporal features and sparse combination learning, which has good robustness and timeliness in complex motion scenarios and can be adapted to real-time anomaly detection in practical applications. Specifically, spatio-temporal gradient model is combined with foreground detection to extract 3D gradient features on the foreground target of video sequence as the salient spatio-temporal features, which can eliminate the interference of the background. Sparse combination learning algorithm is used to establish the abnormal event detection model, which can realize the real-time detection of abnormal events. In addition, we construct a new ScenicSpot dataset with 18 video clips (5964 frames) containing both normal and abnormal events. The experimental results on ScenicSpot dataset and two standard benchmark datasets show that our method can realize the automatic detection and recognition of tourists’ abnormal behavior, and has better performance compared with the classical methods.",
"title": ""
},
{
"docid": "357e09114978fc0ac1fb5838b700e6ca",
"text": "Instance level video object segmentation is an important technique for video editing and compression. To capture the temporal coherence, in this paper, we develop MaskRNN, a recurrent neural net approach which fuses in each frame the output of two deep nets for each object instance — a binary segmentation net providing a mask and a localization net providing a bounding box. Due to the recurrent component and the localization component, our method is able to take advantage of long-term temporal structures of the video data as well as rejecting outliers. We validate the proposed algorithm on three challenging benchmark datasets, the DAVIS-2016 dataset, the DAVIS-2017 dataset, and the Segtrack v2 dataset, achieving state-of-the-art performance on all of them.",
"title": ""
},
{
"docid": "44e527e6078a01abd79a5f1f74fa1b78",
"text": "A transformer provides galvanic isolation and grounding of the photovoltaic (PV) array in a PV-fed grid-connected inverter. Inclusion of the transformer, however, may increase the cost and/or bulk of the system. To overcome this drawback, a single-phase, single-stage [no extra converter for voltage boost or maximum power point tracking (MPPT)], doubly grounded, transformer-less PV interface, based on the buck-boost principle, is presented. The configuration is compact and uses lesser components. Only one (undivided) PV source and one buck-boost inductor are used and shared between the two half cycles, which prevents asymmetrical operation and parameter mismatch problems. Total harmonic distortion and DC component of the current supplied to the grid is low, compared to existing topologies and conform to standards like IEEE 1547. A brief review of the existing, transformer-less, grid-connected inverter topologies is also included. It is demonstrated that, as compared to the split PV source topology, the proposed configuration is more effective in MPPT and array utilization. Design and analysis of the inverter in discontinuous conduction mode is carried out. Simulation and experimental results are presented.",
"title": ""
},
{
"docid": "ebc8a48b9664ef2aab9e2933a987ef19",
"text": "We consider the three-stage two-dimensional bin packing problem (2BP) which occurs in real-world applications such as glass, paper, or steel cutting. We present new integer linear programming formulations: Models for a restricted version and the original version of the problem are developed. Both involve polynomial numbers of variables and constraints only and effectively avoid symmetries. Those models are solved using CPLEX. Furthermore, a branch-and-price (B&P) algorithm is presented for a set covering formulation of the unrestricted problem. We consider stabilizing the column generation process of the B&P algorithm using dual-optimal inequalities. Fast column generation is performed by applying a hierarchy of four methods: (a) a fast greedy heuristic, (b) an evolutionary algorithm, (c) solving a restricted form of the pricing problem using CPLEX, and finally (d) solving the complete pricing problem using CPLEX. Computational experiments on standard benchmark instances document the benefits of the new approaches: The restricted version of the ILP model can be used for quickly obtaining nearly optimal solutions. The unrestricted version is computationally more expensive. Column generation provides a strong lower bound for 3-stage 2BP. The combination of all four pricing algorithms and column generation stabilization in the proposed B&P framework yields the best results in terms of the average objective value, the average run-time, and the number of instances solved to proven optimality. 1 This work is supported by the Austrian Science Fund (FWF) under grant P16263-N04. Preprint submitted to Elsevier Science 30 September 2004",
"title": ""
},
{
"docid": "dba5777004cf43d08a58ef3084c25bd3",
"text": "This paper investigates the problem of automatic humour recognition, and provides and in-depth analysis of two of the most frequently observ ed features of humorous text: human-centeredness and negative polarity. T hrough experiments performed on two collections of humorous texts, we show that th ese properties of verbal humour are consistent across different data s ets.",
"title": ""
},
{
"docid": "44e0cd40b9a06abd5a4e54524b214dce",
"text": "A large majority of road accidents are relative to driver fatigue, distraction and drowsiness which are widely believed to be the largest contributors to fatalities and severe injuries, either as a direct cause of falling asleep at the wheel or as a contributing factor in lowering the attention and reaction time of a driver in critical situations. Thus to prevent road accidents, a countermeasure device has to be used. This paper illuminates and highlights the various measures that have been studied to detect drowsiness such as vehicle based, physiological based, and behavioural based measures. The main objective is to develop a real time non-contact system which will be able to identify driver’s drowsiness beforehand. The system uses an IR sensitive monochrome camera that detects the position and state of the eyes to calculate the drowsiness of a driver. Once the driver is detected as drowsy, the system will generate warning signals to alert the driver. In case the signal is not re-established the system will shut off the engine to prevent any mishap. Keywords— Drowsiness, Road Accidents, Eye Detection, Face Detection, Blink Pattern, PERCLOS, MATLAB, Arduino Nano",
"title": ""
}
] |
scidocsrr
|
d5d1c9cbefdd02f1f567aa0bd15db3fd
|
Video game play, attention, and learning: how to shape the development of attention and influence learning?
|
[
{
"docid": "af7803b0061e75659f718d56ba9715b3",
"text": "An emerging body of multidisciplinary literature has documented the beneficial influence of physical activity engendered through aerobic exercise on selective aspects of brain function. Human and non-human animal studies have shown that aerobic exercise can improve a number of aspects of cognition and performance. Lack of physical activity, particularly among children in the developed world, is one of the major causes of obesity. Exercise might not only help to improve their physical health, but might also improve their academic performance. This article examines the positive effects of aerobic physical activity on cognition and brain function, at the molecular, cellular, systems and behavioural levels. A growing number of studies support the idea that physical exercise is a lifestyle factor that might lead to increased physical and mental health throughout life.",
"title": ""
}
] |
[
{
"docid": "133eccbb62434ad3444962dcf091226c",
"text": "We propose a novel multi-sensor system for accurate and power-efficient dynamic car-driver hand-gesture recognition, using a short-range radar, a color camera, and a depth camera, which together make the system robust against variable lighting conditions. We present a procedure to jointly calibrate the radar and depth sensors. We employ convolutional deep neural networks to fuse data from multiple sensors and to classify the gestures. Our algorithm accurately recognizes 10 different gestures acquired indoors and outdoors in a car during the day and at night. It consumes significantly less power than purely vision-based systems.",
"title": ""
},
{
"docid": "abcb9b8feb996917df2dcbd85dbeaff4",
"text": "Nearly all aspects of modern life are in some way being changed by big data and machine learning. Netflix knows what movies people like to watch and Google knows what people want to know based on their search histories. Indeed, Google has recently begun to replace much of its existing non–machine learning technology with machine learning algorithms, and there is great optimism that these techniques can provide similar improvements across many sectors. It isnosurprisethenthatmedicineisawashwithclaims of revolution from the application of machine learning to big health care data. Recent examples have demonstrated that big data and machine learning can create algorithms that perform on par with human physicians.1 Though machine learning and big data may seem mysterious at first, they are in fact deeply related to traditional statistical models that are recognizable to most clinicians. It is our hope that elucidating these connections will demystify these techniques and provide a set of reasonable expectations for the role of machine learning and big data in health care. Machine learning was originally described as a program that learns to perform a task or make a decision automatically from data, rather than having the behavior explicitlyprogrammed.However,thisdefinitionisverybroad and could cover nearly any form of data-driven approach. For instance, consider the Framingham cardiovascular risk score,whichassignspointstovariousfactorsandproduces a number that predicts 10-year cardiovascular risk. Should this be considered an example of machine learning? The answer might obviously seem to be no. Closer inspection oftheFraminghamriskscorerevealsthattheanswermight not be as obvious as it first seems. The score was originally created2 by fitting a proportional hazards model to data frommorethan5300patients,andsothe“rule”wasinfact learnedentirelyfromdata.Designatingariskscoreasamachine learning algorithm might seem a strange notion, but this example reveals the uncertain nature of the original definition of machine learning. It is perhaps more useful to imagine an algorithm as existing along a continuum between fully human-guided vs fully machine-guided data analysis. To understand the degree to which a predictive or diagnostic algorithm can said to be an instance of machine learning requires understanding how much of its structure or parameters were predetermined by humans. The trade-off between human specificationofapredictivealgorithm’spropertiesvslearning those properties from data is what is known as the machine learning spectrum. Returning to the Framingham study, to create the original risk score statisticians and clinical experts worked together to make many important decisions, such as which variables to include in the model, therelationshipbetweenthedependentandindependent variables, and variable transformations and interactions. Since considerable human effort was used to define these properties, it would place low on the machine learning spectrum (#19 in the Figure and Supplement). Many evidence-based clinical practices are based on a statistical model of this sort, and so many clinical decisions in fact exist on the machine learning spectrum (middle left of Figure). On the extreme low end of the machine learning spectrum would be heuristics and rules of thumb that do not directly involve the use of any rules or models explicitly derived from data (bottom left of Figure). 
Suppose a new cardiovascular risk score is created that includes possible extensions to the original model. For example, it could be that risk factors should not be added but instead should be multiplied or divided, or perhaps a particularly important risk factor should square the entire score if it is present. Moreover, if it is not known in advance which variables will be important, but thousands of individual measurements have been collected, how should a good model be identified from among the infinite possibilities? This is precisely what a machine learning algorithm attempts to do. As humans impose fewer assumptions on the algorithm, it moves further up the machine learning spectrum. However, there is never a specific threshold wherein a model suddenly becomes “machine learning”; rather, all of these approaches exist along a continuum, determined by how many human assumptions are placed onto the algorithm. An example of an approach high on the machine learning spectrum has recently emerged in the form of so-called deep learning models. Deep learning models are stunningly complex networks of artificial neurons that were designed expressly to create accurate models directly from raw data. Researchers recently demonstrated a deep learning algorithm capable of detecting diabetic retinopathy (#4 in the Figure, top center) from retinal photographs at a sensitivity equal to or greater than that of ophthalmologists.1 This model learned the diagnosis procedure directly from the raw pixels of the images with no human intervention outside of a team of ophthalmologists who annotated each image with the correct diagnosis. Because they are able to learn the task with little human instruction or prior assumptions, these deep learning algorithms rank very high on the machine learning spectrum (Figure, light blue circles). Though they require less human guidance, deep learning algorithms for image recognition require enormous amounts of data to capture the full complexity, variety, and nuance inherent to real-world images. Consequently, these algorithms often require hundreds of thousands of examples to extract the salient image features that are correlated with the outcome of interest. Higher placement on the machine learning spectrum does not imply superiority, because different tasks require different levels of human involvement. While algorithms high on the spectrum are often very flexible and can learn many tasks, they are often uninterpretable VIEWPOINT",
"title": ""
},
{
"docid": "cd407caad37c33ee5540b079e94782c7",
"text": "Despite the remarkable recent progress, person reidentification (Re-ID) approaches are still suffering from the failure cases where the discriminative body parts are missing. To mitigate such cases, we propose a simple yet effective Horizontal Pyramid Matching (HPM) approach to fully exploit various partial information of a given person, so that correct person candidates can be still identified even even some key parts are missing. Within the HPM, we make the following contributions to produce a more robust feature representation for the Re-ID task: 1) we learn to classify using partial feature representations at different horizontal pyramid scales, which successfully enhance the discriminative capabilities of various person parts; 2) we exploit average and max pooling strategies to account for person-specific discriminative information in a global-local manner. To validate the effectiveness of the proposed HPM, extensive experiments are conducted on three popular benchmarks, including Market-1501, DukeMTMC-ReID and CUHK03. In particular, we achieve mAP scores of 83.1%, 74.5% and 59.7% on these benchmarks, which are the new state-of-the-arts. Our code is available on Github .",
"title": ""
},
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
},
{
"docid": "d59f6325233544b2deaaa60b8743312a",
"text": "Printed documents are vulnerable to forgery through the latest technology development and it becomes extremely important. Most of the forgeries can be resulting loss of personal identity or ownership of a certain valuable object. This paper proposes novel authentication technique and schema for printed document authentication using watermarked QR (Quick Response) code. The technique is based Watermarked QR code generated with embedding logo belongs to the owner of the document which contain validation link, and the schema is checking the validation link of the printed document which linked to the web server and database server through internet connection by scanning it over camera phone and QR code reader, the result from this technique and schema is the validation can be done in real-time using smart phone such as smart phone based Android, Black Berry, and iOS. To get a good performance in extracting and validating printed document, it can be done by preparing in advance the validation link via internet connection to get the authentication of information hidden. Finally, this paper provide experimental results to demonstrate the authenticated of printed documents using watermarked QR code.",
"title": ""
},
{
"docid": "fd9e8a79decf68721fb0dd81f16a5f8b",
"text": "Feeder reconfiguration (FRC) is an important function of distribution automation system. It modifies the topology of distribution network through changing the open/close statuses of tie switches and sectionalizing switches. The change of topology redirects the power flow within the distribution network, in order to obtain a better performance of the system. Various methods have been explored to solve FRC problems. This paper presents a literature survey on distribution system FRC. Among many aspects to be reviewed for a comprehensive study, this paper focuses on FRC objectives and solution methods. The problem definition of FRC is first discussed, the objectives are summarized, and various solution methods are categorized and evaluated.",
"title": ""
},
{
"docid": "31d7a6da7093d50d0d5890cce4cb60cf",
"text": "We introduce a novel Gaussian process based Bayesian model for asymmetric transfer learning. We adopt a two-layer feed-forward deep Gaussian process as the task learner of source and target domains. The first layer projects the data onto a separate non-linear manifold for each task. We perform knowledge transfer by projecting the target data also onto the source domain and linearly combining its representations on the source and target domain manifolds. Our approach achieves the state-of-the-art in a benchmark real-world image categorization task, and improves on it in cross-tissue tumor detection from histopathology tissue slide images.",
"title": ""
},
{
"docid": "05eb1af3e6838640b6dc5c1c128cc78a",
"text": "Predicting the success of referring expressions (RE) is vital for real-world applications such as navigation systems. Traditionally, research has focused on studying Referring Expression Generation (REG) in virtual, controlled environments. In this paper, we describe a novel study of spatial references from real scenes rather than virtual. First, we investigate how humans describe objects in open, uncontrolled scenarios and compare our findings to those reported in virtual environments. We show that REs in real-world scenarios differ significantly to those in virtual worlds. Second, we propose a novel approach to quantifying image complexity when complete annotations are not present (e.g. due to poor object recognition capabitlities), and third, we present a model for success prediction of REs for objects in real scenes. Finally, we discuss implications for Natural Language Generation (NLG) systems and future directions.",
"title": ""
},
{
"docid": "9e42d25a6a7bc3aad562e58721c7650d",
"text": "The purpose of this retrospective study was to illustrate the differences in maternal and paternal filicides in Finland during a 25-year period. In the sample of 200 filicides [neonaticides (n = 56), filicide-suicides (n = 75), other filicides (n = 69)], the incidence was 5.09 deaths per 100,000 live births: 59 percent of filicides were committed by mothers, 39 percent by fathers, and 2 percent by stepfathers. The mean age of the maternal victims (1.6 y) was significantly lower than that of the paternal victims (5.6 y), but no correlation between the sex of the victim and the sex of the perpetrator was found, and the number of female and male victims was equal. The sample of other filicides (n = 65) was studied more closely by forensic psychiatric examination and review of collateral files. Filicidal mothers showed mental distress and often had psychosocial stressors of marital discord and lack of support. They often killed for altruistic reasons and in association with suicide. Maternal perpetrators also dominated in filicide cases in which death was caused by a single episode or recurrent episodes of battering. Psychosis and psychotic depression were diagnosed in 51 percent of the maternal perpetrators, and 76 percent of the mothers were deemed not responsible for their actions by reason of insanity. Paternal perpetrators, on the other hand, were jealous of their mates, had a personality disorder (67%), abused alcohol (45%), or were violent toward their mates. In 18 percent of the cases, they were not held responsible for their actions by reason of insanity. During childhood, most of the perpetrators had endured emotional abuse from their parents or guardians, some of whom also engaged in alcohol abuse and domestic violence. The purpose of this study was to examine the differences between maternal and paternal filicides in a sample of 200 cases in Finland. This report also provides a psychosocial profile of the perpetrator and victim in 65 filicides and a discussion of the influence of diagnoses on decisions regarding criminal responsibility.",
"title": ""
},
{
"docid": "0ca477c017da24940bb5af79b2c8826a",
"text": "Code comprehension is critical in software maintenance. Towards providing tools and approaches to support maintenance tasks, researchers have investigated various research lines related to how software code can be described in an abstract form. So far, studies on change pattern mining, code clone detection, or semantic patch inference have mainly adopted text-, tokenand tree-based representations as the basis for computing similarity among code fragments. Although, in general, existing techniques form clusters of “similar” code, our experience in patch mining has revealed that clusters of patches formed by such techniques do not usually carry explainable semantics that can be associated to bug-fixing patterns. In this paper, we propose a novel, automated approach for mining semantically-relevant fix patterns based on an iterative, three-fold, clustering strategy. Our technique, FixMiner, leverages different tree representations for each round of clustering: the Abstract syntax tree, the edit actions tree, and the code context tree. We have evaluated FixMiner on thousands of software patches collected from open source projects. Preliminary results show that we are able to mine accurate patterns, efficiently exploiting change information in AST diff trees. Eventually, FixMiner yields patterns which can be associated to the semantics of the bugs that the associated patches address. We further leverage the mined patterns to implement an automated program repair pipeline with which we are able to correctly fix 25 bugs from the Defects4J benchmark. Beyond this quantitative performance, we show that the mined fix patterns are sufficiently relevant to produce patches with a high probability of correctness: 80% of FixMiner’s A. Koyuncu, K. Liu, T. F. Bissyandé, D. Kim, J. Klein, K. Kim, and Y. Le Traon SnT, University of Luxembourg E-mail: {firstname.lastname}@uni.lu M. Monperrus KTH Royal Institute of Technology E-mail: martin.monperrus@csc.kth.se ar X iv :1 81 0. 01 79 1v 1 [ cs .S E ] 3 O ct 2 01 8 2 Anil Koyuncu et al. generated plausible patches are correct, while the closest related works, namely HDRepair and SimFix, achieve respectively 26% and 70% of correctness.",
"title": ""
},
{
"docid": "c6645086397ba0825f5f283ba5441cbf",
"text": "Anomalies have broad patterns corresponding to their causes. In industry, anomalies are typically observed as equipment failures. Anomaly detection aims to detect such failures as anomalies. Although this is usually a binary classification task, the potential existence of unseen (unknown) failures makes this task difficult. Conventional supervised approaches are suitable for detecting seen anomalies but not for unseen anomalies. Although, unsupervised neural networks for anomaly detection now detect unseen anomalies well, they cannot utilize anomalous data for detecting seen anomalies even if some data have been made available. Thus, providing an anomaly detector that finds both seen and unseen anomalies well is still a tough problem. In this paper, we introduce a novel probabilistic representation of anomalies to solve this problem. The proposed model defines the normal and anomaly distributions using the analogy between a set and the complementary set. We applied these distributions to an unsupervised variational autoencoder (VAE)-based method and turned it into a supervised VAE-based method. We tested the proposed method with well-known data and real industrial data to show that the proposed method detects seen anomalies better than the conventional unsupervised method without degrading the detection performance for unseen anomalies.",
"title": ""
},
{
"docid": "0159d630fc310d32dc76fd88edac49ef",
"text": "We consider variants on the Prize Collecting Steiner Tree problem and on the primal-dual 2-approximation algorithm devised for it by Goemans and Williamson. We introduce an improved pruning rule for the algorithm that is slightly faster and provides solutions that are at least as good and typically significantly better. On a selection of real-world instances whose underlying graphs are county street maps, the improvement in the standard objective function ranges from 1.7% to 9.2%. Substantially better improvements are obtained for the complementary \"net worth\" objective function and for randomly generated instances. We also show that modifying the growth phase of the GoemansWilliamson algorithm to make it independent of the choice of root vertex does not significantly affect the algorithm's worst-case guarantee or behavior in practice. The resulting algorithm can be fttrther modified so that, without an increase in running time, it becomes a 2-approximation algorithm for finding the best subtree over all choices of root. In the second part of the paper, we consider quota and budget versions of the problem. In the first, one is looking for the tree with minimum edge cost that contains vertices whose total prize is at least a given quota; in the second one is looking for the tree with maximum prize, given that the total edge cost is within a given budget. The quota problem is a generalization of the k-MST problem, and we observe how constant-factor approximation algorithms for that problem can be extended to it. We also show how a (5 ÷ e)approximation algorithm for the (unrooted) budget problem can be derived from Gaxg's 3-approximation algorithm for the k-MST. None of these algorithms are likely to be used in ~ ractice, but we show how the general approach behind them which involves performing multiple runs of the GoemansWiUiamson algorithm using an increasing sequence of prizemultipliers) can be incorporated into a practical heuristic. We also uncover some surprising properties of the cost/prize tradeoff curves generated (and used) by this approach. 1 Prob lem Def in i t ions In the Prize Collecting Steiner Tree\" (PCST) problem, one is given a graph G = (V, E) , a non-negative edge cost c(e) for each edge e 6 E , a non-negative vertex prize p(v) for each vertex v 6 V, and a specified root vertex vo 6 V. In this paper we shall consider four different optimization problems based on this scenario, the first being the one initially studied in [6, 7]: \" ¢ / ~ T Labs, Room C239, 180 Park Avenue, Florham Park, NJ 07932. Email: dsj@research.att.com ?MIT Lab. for Computer Science, 545 Tech Square, Cambridge, MA 02139. Emaih mariam@theory.lcs.mit.edu SAT&T Labs, Room A003, 180 Park Avenue, Florham Park, NJ 07932. Emaih phillips@reseoxch.att.com Steven Phillips 1. The Goemans-WiUiamson Minimization problem: Find a subtree T ' = (V',E') of G tha t minimizes the cost of the edges in the tree plus the prizes of the vertices not in the tree, i.e., tha t minimizes GW(T') = Z c(e) + Z p(v)",
"title": ""
},
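To make the objective concrete, here is a small sketch that evaluates the prize-collecting value GW(T') defined in the passage above for a candidate tree, assuming plain dictionary inputs; it only scores a solution and does not implement the primal-dual Goemans-Williamson algorithm itself.

```python
def gw_objective(tree_edges, tree_vertices, edge_cost, prize, all_vertices):
    """GW(T') = cost of the edges kept in the tree + prizes of the vertices left out."""
    edge_total = sum(edge_cost[e] for e in tree_edges)
    penalty = sum(prize[v] for v in all_vertices if v not in tree_vertices)
    return edge_total + penalty

# Tiny example: keeping vertices r and a pays edge (r, a) and forfeits b's prize.
cost = {("r", "a"): 3, ("a", "b"): 2}
prize = {"r": 0, "a": 4, "b": 1}
print(gw_objective([("r", "a")], {"r", "a"}, cost, prize, prize.keys()))  # 3 + 1 = 4
```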
{
"docid": "0e30b5ffa34b9a065130688f0b7e44da",
"text": "This brief presents a new technique for minimizing reference spurs in a charge-pump phase-locked loop (PLL) while maintaining dead-zone-free operation. The proposed circuitry uses a phase/frequency detector with a variable delay element in its reset path, with the delay length controlled by feedback from the charge-pump. Simulations have been performed with several PLLs to compare the proposed circuitry with previously reported techniques. The proposed approach shows improvements over previously reported techniques of 12 and 16 dB in the two closest reference spurs",
"title": ""
},
{
"docid": "a4e92e4dc5d93aec4414bc650436c522",
"text": "Where you can find the compiling with continuations easily? Is it in the book store? On-line book store? are you sure? Keep in mind that you will find the book in this site. This book is very referred for you because it gives not only the experience but also lesson. The lessons are very valuable to serve for you, that's not about who are reading this compiling with continuations book. It is about this book that will give wellness for all people from many societies.",
"title": ""
},
{
"docid": "b60416c661e1f9c292555955965c7f01",
"text": "A 4.9-6.4-Gb/s two-level SerDes ASIC I/O core employing a four-tap feed-forward equalizer (FFE) in the transmitter and a five-tap decision-feedback equalizer (DFE) in the receiver has been designed in 0.13-/spl mu/m CMOS. The transmitter features a total jitter (TJ) of 35 ps p-p at 10/sup -12/ bit error rate (BER) and can output up to 1200 mVppd into a 100-/spl Omega/ differential load. Low jitter is achieved through the use of an LC-tank-based VCO/PLL system that achieves a typical random jitter of 0.6 ps over a phase noise integration range from 6 MHz to 3.2 GHz. The receiver features a variable-gain amplifier (VGA) with gain ranging from -6to +10dB in /spl sim/1dB steps, an analog peaking amplifier, and a continuously adapted DFE-based data slicer that uses a hybrid speculative/dynamic feedback architecture optimized for high-speed operation. The receiver system is designed to operate with a signal level ranging from 50 to 1200 mVppd. Error-free operation of the system has been demonstrated on lossy transmission line channels with over 32-dB loss at the Nyquist (1/2 Bd rate) frequency. The Tx/Rx pair with amortized PLL power consumes 290 mW of power from a 1.2-V supply while driving 600 mVppd and uses a die area of 0.79 mm/sup 2/.",
"title": ""
},
{
"docid": "7d7c596d334153f11098d9562753a1ee",
"text": "The design of systems for intelligent control of urban traffic is important in providing a safe environment for pedestrians and motorists. Artificial neural networks (ANNs) (learning systems) and expert systems (knowledge-based systems) have been extensively explored as approaches for decision making. While the ANNs compute decisions by learning from successfully solved examples, the expert systems rely on a knowledge base developed by human reasoning for decision making. It is possible to integrate the learning abilities of an ANN and the knowledge-based decision-making ability of the expert system. This paper presents a real-time intelligent decision making system, IDUTC, for urban traffic control applications. The system integrates a backpropagation-based ANN that can learn and adapt to the dynamically changing environment and a fuzzy expert system for decision making. The performance of the proposed intelligent decision-making system is evaluated by mapping the the adaptable traffic light control problem. The application is implemented using the ANN approach, the FES approach, and the proposed integrated system approach. The results of extensive simulations using the three approaches indicate that the integrated system provides better performance and leads to a more efficient implementation than the other two approaches.",
"title": ""
},
{
"docid": "baaf84ec42f3624cb949f37b5cab83e8",
"text": "In this paper, we propose a practical method for user grouping and decoding-order setting in a successive interference canceller (SIC) for downlink non-orthogonal multiple access (NOMA). While the optimal user grouping and decoding order, which depend on the instantaneous channel conditions among users within a cell, are assumed in previous work, the proposed method uses user grouping and a decoding order that are unified among all frequency blocks. The proposed decoding order in the SIC enables the application of NOMA with a SIC to a system where all the elements within a codeword for a user are distributed among multiple frequency blocks (resource blocks). The unified user grouping eases the complexity in the SIC process at the user terminal. The unified user grouping also reduces the complexity of the efficient downlink control signaling in NOMA with a SIC. The unified user grouping and decoding order among frequency blocks in principle reduce the achievable throughput compared to the optimal one. However, based on numerical results, we show that the proposed method does not significantly degrade the system-level throughput in downlink cellular networks.",
"title": ""
},
{
"docid": "1f770561b6f535e36dfb5e43326780a5",
"text": "The Red Brick WarehouseTMis a commercial Relational Database Management System designed specifically for query, decision support, and data warehouse applications. Red Brick Warehouse is a software-only system providing ANSI SQL support in an open cliendserver environment. Red Brick Warehouse is distinguished from traditional RDBMS products by an architecture optimized to deliver high performance in read-mostly, high-intensity query applications. In these applications, the workload is heavily biased toward complex SQL SELECT operations that read but do not update the database. The average unit of work is very large, and typically involves multi-table joins, aggregation, duplicate elimination, and sorting. Multi-user concurrency is moderate, with typical systems supporting 50 to 500 concurrent user sessions. Query databases are often very large, with tables ranging from 100 million to many billion rows and occupying 50 Gigabytes to 2 Terabytes, Databases are populated by massive bulk-load operations on an hourly, daily, or weekly cycle. Time-series and historical data are maintained for months or years. Red Brick Warehouse makes use of parallel processing as well as other specialized algorithms to achieve outstanding performance and scalability on cost-effective hardware platforms.",
"title": ""
},
{
"docid": "745562de56499ff0030f35afa8d84b7f",
"text": "This paper will show how the accuracy and security of SCADA systems can be improved by using anomaly detection to identify bad values caused by attacks and faults. The performance of invariant induction and ngram anomaly-detectors will be compared and this paper will also outline plans for taking this work further by integrating the output from several anomalydetecting techniques using Bayesian networks. Although the methods outlined in this paper are illustrated using the data from an electricity network, this research springs from a more general attempt to improve the security and dependability of SCADA systems using anomaly detection.",
"title": ""
},
{
"docid": "3a798fac488b605c145d3ce171f4dcba",
"text": "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Discrimination in credit, mortgage, insurance, labor market, and education has been investigated by researchers in economics and human sciences. With the advent of automatic decision support systems, such as credit scoring systems, the ease of data collection opens several challenges to data analysts for the fight against discrimination. In this article, we introduce the problem of discovering discrimination through data mining in a dataset of historical decision records, taken by humans or by automatic systems. We formalize the processes of direct and indirect discrimination discovery by modelling protected-by-law groups and contexts where discrimination occurs in a classification rule based syntax. Basically, classification rules extracted from the dataset allow for unveiling contexts of unlawful discrimination, where the degree of burden over protected-by-law groups is formalized by an extension of the lift measure of a classification rule. In direct discrimination, the extracted rules can be directly mined in search of discriminatory contexts. In indirect discrimination, the mining process needs some background knowledge as a further input, for example, census data, that combined with the extracted rules might allow for unveiling contexts of discriminatory decisions. A strategy adopted for combining extracted classification rules with background knowledge is called an inference model. In this article, we propose two inference models and provide automatic procedures for their implementation. An empirical assessment of our results is provided on the German credit dataset and on the PKDD Discovery Challenge 1999 financial dataset.",
"title": ""
}
] |
scidocsrr
|
1d80043a80fefeffecb42585250b4b08
|
Crime prediction and forecasting in Tamilnadu using clustering approaches
|
[
{
"docid": "a9ea1f1f94a26181addac948837c3030",
"text": "Crime tends to clust er geographi cally. This has led to the wide usage of hotspot analysis to identify and visualize crime. Accurately identified crime hotspots can greatly benefit the public by creating accurate threat visualizations, more efficiently allocating police resources, and predicting crime. Yet existing mapping methods usually identify hotspots without considering the underlying correlates of crime. In this study, we introduce a spatial data mining framework to study crime hotspots through their related variables. We use Geospatial Discriminative Patterns (GDPatterns) to capture the significant difference between two classes (hotspots and normal areas) in a geo-spatial dataset. Utilizing GDPatterns, we develop a novel model—Hotspot Optimization Tool (HOT)—to improve the identification of crime hotspots. Finally, based on a similarity measure, we group GDPattern clusters and visualize the distribution and characteristics of crime related variables. We evaluate our approach using a real world dataset collected from a northeast city in the United States. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "169af7f4732197ef8b80a6c032ff37c5",
"text": "Assisting impaired individuals with robotic devices is an emerging and potentially transformative technology. This paper describes the design of an assistive robotic grasping system that allows impaired individuals to interact with the system in a human-in-the-loop manner, including the use of a novel cranio-facial electromyography input device. The system uses an augmented reality interface that allows users to plan grasps online that match their task-oriented intents. The system uses grasp quality measurements that generate more robust grasps by considering the local geometry of the object and the effect of uncertainty during grasp acquisition. This interface is validated by testing with real users, both healthy and impaired. This work forms the foundation for a flexible, fully featured human-in-the-loop system that allows users to grasp known and unknown objects in cluttered spaces using novel, practical human–robot interaction paradigms that have the potential to bring human-in-the-loop assistive devices out of the research environment and into the lives of those that need them.",
"title": ""
},
{
"docid": "dfb13625c6c03932b6dd83a77a782073",
"text": "Location Based Service (LBS), although it greatly benefits the daily life of mobile device users, has introduced significant threats to privacy. In an LBS system, even under the protection of pseudonyms, users may become victims of inference attacks, where an adversary reveals a user's real identity and complete moving trajectory with the aid of side information, e.g., accidental identity disclosure through personal encounters. To enhance privacy protection for LBS users, a common approach is to include extra fake location information associated with different pseudonyms, known as dummy users, in normal location reports. Due to the high cost of dummy generation using resource constrained mobile devices, self-interested users may free-ride on others' efforts. The presence of such selfish behaviors may have an adverse effect on privacy protection. In this paper, we study the behaviors of self-interested users in the LBS system from a game-theoretic perspective. We model the distributed dummy user generation as Bayesian games in both static and timing-aware contexts, and analyze the existence and properties of the Bayesian Nash Equilibria for both models. Based on the analysis, we propose a strategy selection algorithm to help users achieve optimized payoffs. Leveraging a beta distribution generalized from real-world location privacy data traces, we perform simulations to assess the privacy protection effectiveness of our approach. The simulation results validate our theoretical analysis for the dummy user generation game models.",
"title": ""
},
{
"docid": "5806b1bd779c7f39ecf2dac3f51ce267",
"text": "We have conducted two investigations on the ability of human participants to solve challenging collective coordination tasks in a distributed fashion with limited perception and communication capabilities similar to those of a simple ground robot. In these investigations, participants were gathered in a laboratory of networked workstations and were given a series of different collective tasks with varying communication and perception capabilities. Here, we focus on our latest investigation and describe our methodology, platform design considerations, and highlight some interesting observed behaviors. These investigations are the preliminary phase in designing a formal strategy for learning human-inspired behaviors for solving complex distributed multirobot problems, such as pattern formation.",
"title": ""
},
{
"docid": "269add24d3c659694de68b5b5470aae4",
"text": "INTRODUCTION\nPatients subject to major surgery, suffering sepsis, major trauma, or following cardiopulmonary bypass exhibit a systemic inflammatory response. This inflammatory response involves a complex array of inflammatory polypeptide molecules known as cytokines. It is well accepted that the loss of local control of the release of these cytokines leads to systemic inflammation and potentially deleterious consequences including the Systemic Inflammatory Response Syndrome, Multi-Organ Dysfunction Syndrome, shock and death.\n\n\nMETHODS\nThe Medline database was searched for literature on mechanisms involved in the development of SIRS and potential targets for modifying the inflammatory response. We focus on the novel therapy of cytokine adsorption as a promising removal technology.\n\n\nRESULTS\nAccumulating data from human studies and experimental animal models suggests that both pro- and anti- inflammatory cytokines are released following a variety of initiating stimuli including endotoxin release, complement activation, ischaemia reperfusion injury and others.\n\n\nDISCUSSION\nPro-and anti-inflammatory cytokines interact in a complex and unpredictable manner to influence the immune system and eventually cause multiple end organ effects. Cytokine adsorption therapy provides a potential solution to improving outcomes following Systemic Inflammatory Response Syndrome.",
"title": ""
},
{
"docid": "a91add591aacaa333e109d77576ba463",
"text": "It has become essential to scrutinize and evaluate software development methodologies, mainly because of their increasing number and variety. Evaluation is required to gain a better understanding of the features, strengths, and weaknesses of the methodologies. The results of such evaluations can be leveraged to identify the methodology most appropriate for a specific context. Moreover, methodology improvement and evolution can be accelerated using these results. However, despite extensive research, there is still a need for a feature/criterion set that is general enough to allow methodologies to be evaluated regardless of their types. We propose a general evaluation framework which addresses this requirement. In order to improve the applicability of the proposed framework, all the features – general and specific – are arranged in a hierarchy along with their corresponding criteria. Providing different levels of abstraction enables users to choose the suitable criteria based on the context. Major evaluation frameworks for object-oriented, agent-oriented, and aspect-oriented methodologies have been studied and assessed against the proposed framework to demonstrate its reliability and validity.",
"title": ""
},
{
"docid": "6a6691d92503f98331ad7eed61a9c357",
"text": "This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a work horse, which already show remarkable performance improvements over state-of-the-art. CNNs have become the de-facto standard for many tasks in computer vision and machine learning like semantic segmentation or object detection in images, but have no yet led to a true breakthrough for 3D point cloud labelling tasks due to lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides more dense and complete point clouds with much higher overall number of labelled points compared to those already available to the research community. We further provide baseline method descriptions and comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.",
"title": ""
},
{
"docid": "c56daed0cc2320892fad3ac34ce90e09",
"text": "In this paper we describe the open source data analytics platform KNIME, focusing particularly on extensions and modules supporting fuzzy sets and fuzzy learning algorithms such as fuzzy clustering algorithms, rule induction methods, and interactive clustering tools. In addition we outline a number of experimental extensions, which are not yet part of the open source release and present two illustrative examples from real world applications to demonstrate the power of the KNIME extensions.",
"title": ""
},
{
"docid": "2d51392b05cd2072e142563533d109de",
"text": "After having carried out a historical review and identifying the state of the art in relation to the interfaces for the exploration of scientific articles, the authors propose a model based in an immersive virtual environment, natural user interfaces and natural language processing, which provides an excellent experience for the user and allows for better use of some of its capabilities, for example, intuition and cognition in 3-dimensional environments. In this work, the Oculus Rift and Leap Motion Hardware devices are used. This work aims to contribute to the proposal of a tool which would facilitate and optimize the arduous task of reviewing literature in scientific databases. The case study is the exploration and information retrieval of scientific articles using ALICIA (Scientific database of Perú). Finally, conclusions and recommendations for future work are laid out and discussed. Keywords—Immersive virtual environment; human computer interaction; natural user interfaces; natural language processing; Oculus Rift",
"title": ""
},
{
"docid": "ed4fe271f9da453da04829ed8de0f88b",
"text": "With the evolution of HCI (Human-Computer Interaction), the computer vision systems are playing an important role in our lives. Some of the prime areas of computer vision applications include gender detection, face recognition, body tracking and ethnicity identification etc. Automated data analyses techniques help discover regularities and hidden associations in larger volumes of datasets. Classification being a data mining technique is largely used to group categorical data as well as a blend of continuous numeric values and categorical data. A number of classification techniques like decision trees, support vector machine (SVM), nearest neighbors and neural networks etc. have gained popularity in numerous areas of data mining practices. Among these classification techniques, decision trees offer an added advantage of producing easily interpretable rules and logic statements along with generating the classification tree for the given dataset. This study offers a distinct method for gender classification of facial images. We have used a variant of the decision tree algorithm for gender classification of frontal images due to its distinctive features. Our technique demonstrates robustness and relative scale invariance for gender classification. Details of the experimental design and the results are reported herein.",
"title": ""
},
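Since the passage does not spell out its decision-tree variant or feature set, the following is only a hedged sketch of the general setup using scikit-learn; the geometric face features and the labels are synthetic placeholders standing in for measurements that a landmark detector would extract from frontal images.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical geometric face features (ratios of landmark distances).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                      # jaw_width, eye_spacing, brow_height, lip_ratio
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)      # synthetic gender label, for illustration only

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.score(X, y))                             # training accuracy of the toy tree
# The learned tree doubles as the interpretable rules the passage highlights.
print(export_text(clf, feature_names=["jaw_width", "eye_spacing", "brow_height", "lip_ratio"]))
```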
{
"docid": "938395ce421e0fede708e3b4ab7185b5",
"text": "This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.",
"title": ""
},
{
"docid": "46674077de97f82bc543f4e8c0a8243a",
"text": "Recently, multiple formulations of vision problems as probabilistic inversions of generative models based on computer graphics have been proposed. However, applications to 3D perception from natural images have focused on low-dimensional latent scenes, due to challenges in both modeling and inference. Accounting for the enormous variability in 3D object shape and 2D appearance via realistic generative models seems intractable, as does inverting even simple versions of the many-tomany computations that link 3D scenes to 2D images. This paper proposes and evaluates an approach that addresses key aspects of both these challenges. We show that it is possible to solve challenging, real-world 3D vision problems by approximate inference in generative models for images based on rendering the outputs of probabilistic CAD (PCAD) programs. Our PCAD object geometry priors generate deformable 3D meshes corresponding to plausible objects and apply affine transformations to place them in a scene. Image likelihoods are based on similarity in a feature space based on standard mid-level image representations from the vision literature. Our inference algorithm integrates single-site and locally blocked Metropolis-Hastings proposals, Hamiltonian Monte Carlo and discriminative datadriven proposals learned from training data generated from our models. We apply this approach to 3D human pose estimation and object shape reconstruction from single images, achieving quantitative and qualitative performance improvements over state-of-the-art baselines.",
"title": ""
},
{
"docid": "7d507a0b754a8029d28216e795cb7286",
"text": "a Lake Michigan Field Station/Great Lakes Environmental Research Laboratory/NOAA, 1431 Beach St, Muskegon, MI 49441, USA b Great Lakes Environmental Research Laboratory/NOAA, 4840 S. State Rd., Ann Arbor, MI 48108, USA c School Forest Resources, Pennsylvania State University, 434 Forest Resources Building, University Park, PA 16802, USA d School of Natural Resources and Environment, University of Michigan, 440 Church St., Ann Arbor, MI 48109, USA",
"title": ""
},
{
"docid": "ba8625debb88dd0339a343e1a8b817b3",
"text": "Smoothed Particle Hydrodynamics (SPH) has been established as one of the major concepts for fluid animation in computer graphics. While SPH initially gained popularity for interactive free-surface scenarios, it has emerged to be a fully fledged technique for state-of-the-art fluid animation with versatile effects. Nowadays, complex scenes with millions of sampling points, oneand two-way coupled rigid and elastic solids, multiple phases and additional features such as foam or air bubbles can be computed at reasonable expense. This state-of-the-art report summarizes SPH research within the graphics community.",
"title": ""
},
{
"docid": "7d278c1a5359ccd0dfcc236ba3a47614",
"text": "Humanoid robots may require a degree of compliance at joint level for improving efficiency, shock tolerance, and safe interaction with humans. The presence of joint elasticity, however, complexifies the control design of humanoid robots. This paper proposes a control framework to extend momentum based controllers developed for stiff actuation to the case of series elastic actuators. The key point is to consider the motor velocities as an intermediate control input, and then apply high-gain control to stabilise the desired motor velocities achieving momentum control. Simulations carried out on a model of the robot iCub verify the soundness of the proposed approach.",
"title": ""
},
{
"docid": "0f529d7db34417f248a3174ef9feb507",
"text": "The purpose of this research is to conduct a comprehensive and systematic review of the literature in the field of `Supply Chain Risk Management' and identify important research gaps for potential research. Furthermore, a conceptual risk management framework is also proposed that encompasses holistic view of the field. `Systematic Literature Review' method is used to examine quality articles published over a time period of almost 15 years (2000 - June, 2014). The findings of the study are validated through text mining software. Systematic literature review has identified the progress of research based on various descriptive and thematic typologies. The review and text mining analysis have also provided an insight into major research gaps. Based on the identified gaps, a framework is developed that can help researchers model interdependencies between risk factors.",
"title": ""
},
{
"docid": "f72a046fb5e4f1faa74084f1ceea5b90",
"text": "For mobile communication systems, small size and low profile antennas are necessary. Planar printed antennas such as slot antenna and microstrip antenna are attractive for their use in mobile and wireless communication systems due to their low profile compact size [1], [2]. However, because of the electro-magnetic interference, radiation of omni directional antenna such as the slot antenna is remarkably deteriorated if the metal block of RFIC (radio frequency integrated circuit) or body of a car approaches to the back of the antenna. Patch antenna is one of the solutions to overcome these problems [3]. However, radiation efficiency and bandwidth of a patch antenna decreases rapidly as the thickness of the substrate decreases, and also one-sided directional patch antenna has large ground plane. Therefore, miniaturized antennas on thinner substrate are necessary for future 3-dimension packaging techniques in integrating with RF-chips. In our previous works, we presented the design theory of the one-sided directional electrically small antenna (ESA) composed of an impedance matching circuit, a half wavelength (λ/2) top metal and a bottom floating metal layer for IMS (@2.4GHz) application [4].",
"title": ""
},
{
"docid": "4dd690ffa1a73674e1b0488b7656b26e",
"text": "In this paper, we propose the deep reinforcement relevance network (DRRN), a novel deep architecture, to design a model for handling an action space characterized using natural language with applications to text-based games. For a particular class of games, a user must choose among a number of actions described by text, with the goal of maximizing long-term reward. In these games, the best action is typically what fits the current situation best (modeled as a state in the DRRN), also described by text. Because of the exponential complexity of natural language with respect to sentence length, there is typically an unbounded set of unique actions. Even with a constrained vocabulary, the action space is very large and sparse, posing challenges for learning. To address this challenge, the DRRN extracts separate high-level embedding vectors from the texts that describe states and actions, respectively, using a general interaction function, such as inner product, bilinear, and DNN interaction, between these embedding vectors to approximate the Qfunction. We evaluate the DRRN on two popular text games, showing superior performance over other deep Q-learning architectures.",
"title": ""
},
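A minimal sketch of the interaction the passage above describes: separate embedding towers for the state text and each candidate action text, with the inner product of the two embeddings used as the Q-value. The bag-of-words encoder and the layer sizes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class DRRN(nn.Module):
    """Toy deep reinforcement relevance network: state and action texts get
    separate towers, and Q(s, a) is the inner product of their embeddings."""

    def __init__(self, vocab_size, embed_dim=64, hidden=64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)   # simple bag-of-words text encoder
        self.state_tower = nn.Sequential(nn.Linear(embed_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, hidden))
        self.action_tower = nn.Sequential(nn.Linear(embed_dim, hidden), nn.ReLU(),
                                          nn.Linear(hidden, hidden))

    def forward(self, state_tokens, action_tokens):
        # state_tokens: (1, Ls) token ids; action_tokens: (A, La) ids for A candidate actions.
        s = self.state_tower(self.embed(state_tokens))         # (1, hidden)
        a = self.action_tower(self.embed(action_tokens))       # (A, hidden)
        return a @ s.t()                                       # (A, 1) Q-values, one per action

q_net = DRRN(vocab_size=1000)
q_values = q_net(torch.randint(0, 1000, (1, 12)), torch.randint(0, 1000, (4, 6)))
print(q_values.shape)  # torch.Size([4, 1]); the argmax action would be issued to the game
```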
{
"docid": "5d6a2b9d6d200a4fde3e00290d6e128b",
"text": "Evolutionary Algorithms are a common probabilistic optimization method based on the model of natural evolution One important oper ator in these algorithms is the selection scheme for which in this paper a new description model based on tness distributions is introduced blickle tik ee ethz ch y thiele tik ee ethz ch",
"title": ""
},
{
"docid": "ded8b8390c3f74473feb35d6af45ec00",
"text": "Overwhelming evidence supports the importance of sleep for memory consolidation. Medical students are often deprived of sufficient sleep due to large amounts of clinical duties and university load, we therefore investigated how study and sleep habits influence university performance. We performed a questionnaire-based study with 31 medical students of the University of Munich (second and third clinical semesters; surgery and internal medicine). The students kept a diary (in 30-min bins) on their daily schedules (times when they studied by themselves, attended classes, slept, worked on their thesis, or worked to earn money). The project design involved three 2-wk periods (A: during the semester; B: directly before the exam period--pre-exam; C: during the subsequent semester break). Besides the diaries, students completed once questionnaires about their sleep quality (Pittsburgh Sleep Quality Index [PSQI]), their chronotype (Munich Chronotype Questionnaire [MCTQ]), and their academic history (previous grades, including the previously achieved preclinical board exam [PBE]). Analysis revealed significant correlations between the actual sleep behavior during the semester (MS(diary); mid-sleep point averaged from the sleep diaries) during the pre-exam period and the achieved grade (p = 0.002) as well as between the grades of the currently taken exam and the PBE (p = 0.002). A regression analysis with MS(diary) pre-exam and PBE as predictors in a model explained 42.7% of the variance of the exam grade (effect size 0.745). Interestingly, MS(diary)--especially during the pre-exam period-was the strongest predictor for the currently achieved grade, along with the preclinical board exam as a covariate, whereas the chronotype did not significantly influence the exam grade.",
"title": ""
},
{
"docid": "cd0e7cace1b89af72680f9d8ef38bdf3",
"text": "Analyzing stock market trends and sentiment is an interdisciplinary area of research being undertaken by many disciplines such as Finance, Computer Science, Statistics, and Economics. It has been well established that real time news plays a strong role in the movement of stock prices. With the advent of electronic and online news sources, analysts have to deal with enormous amounts of real-time, unstructured streaming data. In this paper, we present an automated text mining based approach to aggregate news stories from diverse sources and create a News Corpus. The Corpus is filtered down to relevant sentences and analyzed using Natural Language Processing (NLP) techniques. A sentiment metric, called NewsSentiment, utilizing the count of positive and negative polarity words is proposed as a measure of the sentiment of the overall news corpus. We have used various open source packages and tools to develop the news collection and aggregation engine as well as the sentiment evaluation engine. Extensive experimentation has been done using news stories about various stocks. The time variation of NewsSentiment shows a very strong correlation with the actual stock price movement. Our proposed metric has many applications in analyzing current news stories and predicting stock trends for specific companies and sectors of the economy.",
"title": ""
}
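The passage above defines NewsSentiment as a count-based polarity measure; the toy sketch below computes one plausible form of it, with the word lists and the normalisation to [-1, 1] as assumptions, since the exact formula is not given in the passage.

```python
def news_sentiment(sentences, positive, negative):
    """Balance of positive and negative polarity words over the news corpus, in [-1, 1]."""
    pos = sum(w in positive for s in sentences for w in s.lower().split())
    neg = sum(w in negative for s in sentences for w in s.lower().split())
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

# Illustrative polarity lexicons; real systems would use a full financial word list.
positive = {"gain", "beat", "strong"}
negative = {"loss", "miss", "weak"}
corpus = ["Quarterly profit beat expectations on strong demand",
          "Analysts warn of a weak outlook"]
print(news_sentiment(corpus, positive, negative))  # 2 positive vs 1 negative hit -> 0.33
```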
] |
scidocsrr
|
bad241a36888f158c2ae67237fbae2e6
|
Performance Analysis of Load Balancing Architectures in Cloud Computing
|
[
{
"docid": "272281eafb06f6c9dd030897e846fd00",
"text": "Cloud computing is emerging as a new paradigm of large-scale distributed computing. It is a framework for enabling convenient, on-demand network access to a shared pool of computing resources. Load balancing is one of the main challenges in cloud computing which is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overwhelmed. It helps in optimal utilization of resources and hence in enhancing the performance of the system. The goal of load balancing is to minimize the resource consumption which will further reduce energy consumption and carbon emission rate that is the dire need of cloud computing. This determines the need of new metrics, energy consumption and carbon emission for energy-efficient load balancing in cloud computing. This paper discusses the existing load balancing techniques in cloud computing and further compares them based on various parameters like performance, scalability, associated overhead etc. that are considered in different techniques. It further discusses these techniques from energy consumption and carbon emission perspective.",
"title": ""
},
{
"docid": "8a7cf92704d06baee24cb6f2a551094d",
"text": "Network bandwidth and hardware technology are developing rapidly, resulting in the vigorous development of the Internet. A new concept, cloud computing, uses low-power hosts to achieve high reliability. The cloud computing, an Internet-based development in which dynamically scalable and often virtualized resources are provided as a service over the Internet has become a significant issue. The cloud computing refers to a class of systems and applications that employ distributed resources to perform a function in a decentralized manner. Cloud computing is to utilize the computing resources (service nodes) on the network to facilitate the execution of complicated tasks that require large-scale computation. Thus, the selecting nodes for executing a task in the cloud computing must be considered, and to exploit the effectiveness of the resources, they have to be properly selected according to the properties of the task. However, in this study, a two-phase scheduling algorithm under a three-level cloud computing network is advanced. The proposed scheduling algorithm combines OLB (Opportunistic Load Balancing) and LBMM (Load Balance Min-Min) scheduling algorithms that can utilize more better executing efficiency and maintain the load balancing of system.",
"title": ""
}
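As a hedged illustration of the Min-Min style assignment mentioned in the passage above, the sketch below repeatedly places the task with the smallest best-case completion time on the node that finishes it earliest; the single-level setting and the workload/speed model are simplifications of the paper's three-level OLB + LBMM scheme.

```python
def lbmm_assign(tasks, node_speeds):
    """Toy Min-Min pass: tasks is {task_id: workload units}, node_speeds is
    {node_id: units/second}. Returns {task_id: node_id}, tracking each node's
    accumulated finish time so later tasks see the updated load."""
    load = {n: 0.0 for n in node_speeds}          # current finish time per node
    assignment = {}
    remaining = dict(tasks)
    while remaining:
        # completion time of task t on node n = node's current load + workload / speed
        finish, t, n = min((load[n] + w / node_speeds[n], t, n)
                           for t, w in remaining.items() for n in node_speeds)
        assignment[t] = n
        load[n] = finish
        del remaining[t]
    return assignment

print(lbmm_assign({"t1": 4, "t2": 8, "t3": 2}, {"n1": 1.0, "n2": 1.5}))
```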
] |
[
{
"docid": "460a296de1bd13378d71ce19ca5d807a",
"text": "Many books discuss applications of data mining. For financial data analysis and financial modeling, see Benninga and Czaczkes [BC00] and Higgins [Hig03]. For retail data mining and customer relationship management, see books by Berry and Linoff [BL04] and Berson, Smith, and Thearling [BST99], and the article by Kohavi [Koh01]. For telecommunication-related data mining, see the book by Mattison [Mat97]. Chen, Hsu, and Dayal [CHD00] reported their work on scalable telecommunication tandem traffic analysis under a data warehouse/OLAP framework. For bioinformatics and biological data analysis, there are a large number of introductory references and textbooks. An introductory overview of bioinformatics for computer scientists was presented by Cohen [Coh04]. Recent textbooks on bioinformatics include Krane and Raymer [KR03], Jones and Pevzner [JP04], Durbin, Eddy, Krogh and Mitchison [DEKM98], Setubal and Meidanis [SM97], Orengo, Jones, and Thornton [OJT03], and Pevzner [Pev03]. Summaries of biological data analysis methods and algorithms can also be found in many other books, such as Gusfield [Gus97], Waterman [Wat95], Baldi and Brunak [BB01], and Baxevanis and Ouellette [BO04]. There are many books on scientific data analysis, such as Grossman, Kamath, Kegelmeyer, et al. (eds.) [GKK01]. For geographic data mining, see the book edited by Miller and Han [MH01]. Valdes-Perez [VP99] discusses the principles of human-computer collaboration for knowledge discovery in science. For intrusion detection, see Barbará [Bar02] and Northcutt and Novak [NN02].",
"title": ""
},
{
"docid": "34bd9a54a1aeaf82f7c4b27047cb2f49",
"text": "Choosing a good location when opening a new store is crucial for the future success of a business. Traditional methods include offline manual survey, which is very time consuming, and analytic models based on census data, which are unable to adapt to the dynamic market. The rapid increase of the availability of big data from various types of mobile devices, such as online query data and offline positioning data, provides us with the possibility to develop automatic and accurate data-driven prediction models for business store placement. In this paper, we propose a Demand Distribution Driven Store Placement (D3SP) framework for business store placement by mining search query data from Baidu Maps. D3SP first detects the spatial-temporal distributions of customer demands on different business services via query data from Baidu Maps, the largest online map search engine in China, and detects the gaps between demand and supply. Then we determine candidate locations via clustering such gaps. In the final stage, we solve the location optimization problem by predicting and ranking the number of customers. We not only deploy supervised regression models to predict the number of customers, but also learn to rank models to directly rank the locations. We evaluate our framework on various types of businesses in real-world cases, and the experiments results demonstrate the effectiveness of our methods. D3SP as the core function for store placement has already been implemented as a core component of our business analytics platform and could be potentially used by chain store merchants on Baidu Nuomi.",
"title": ""
},
{
"docid": "1d956bafdb6b7d4aa2afcfeb77ac8cbb",
"text": "In this paper, we propose a novel model for high-dimensional data, called the Hybrid Orthogonal Projection and Estimation (HOPE) model, which combines a linear orthogonal projection and a finite mixture model under a unified generative modeling framework. The HOPE model itself can be learned unsupervised from unlabelled data based on the maximum likelihood estimation as well as discriminatively from labelled data. More interestingly, we have shown the proposed HOPE models are closely related to neural networks (NNs) in a sense that each hidden layer can be reformulated as a HOPE model. As a result, the HOPE framework can be used as a novel tool to probe why and how NNs work, more importantly, to learn NNs in either supervised or unsupervised ways. In this work, we have investigated the HOPE framework to learn NNs for several standard tasks, including image recognition on MNIST and speech recognition on TIMIT. Experimental results have shown that the HOPE framework yields significant performance gains over the current state-of-the-art methods in various types of NN learning problems, including unsupervised feature learning, supervised or semi-supervised learning.",
"title": ""
},
{
"docid": "cb7fdb129ad9c27cadd3227deeb8d13a",
"text": "The study was undertaken to evaluate the efficacy and safety of a posterolateral reversed L-shaped knee joint incision for treating the posterolateral tibial plateau fracture. Knee specimens from eight fresh, frozen adult corpses were dissected bilaterally using a posterolateral reversed L-shaped approach. During the dissection, the exposure range was observed, and important parameters of anatomical structure were measured, including the parameters of common peroneal nerve (CPN) to ameliorate the incision and the distances between bifurcation of main vessels and the tibial articular surface to clear risk awareness. The posterolateral aspect of the tibial plateau from the proximal tibiofibular joint to the tibial insertion of the posterior cruciate ligament was exposed completely. There was no additional damage to other vital structures and no evidence of fibular osteotomy or posterolateral corner complex injury. The mean length of the exposed CPN was 56.48 mm. The CPN sloped at a mean angle of 14.7° toward the axis of the fibula. It surrounded the neck of the fibula an average of 42.18 mm from the joint line. The mean distance between the opening of the interosseous membrane and the joint line was 48.78 mm. The divergence of the fibular artery from the posterior tibial artery was on average 76.46 mm from articular surface. This study confirmed that posterolateral reversed L-shaped approach could meet the requirements of anatomical reduction and buttress fixation for posterolateral tibial plateau fracture. Exposure of the CPN can be minimized or even avoided by modifying the skin incision. Care is needed to dissect distally and deep through the approach as vital vascular bifurcations are concentrated in this region. Placement of a posterior buttressing plate carries a high vascular risk when the plate is implanted beneath these vessels.",
"title": ""
},
{
"docid": "d54e33049b3f5170ec8bd09d8f17c05c",
"text": "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higherlevel representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P (x) is structurally related to some task of interest, say predicting P (y|x). This paper focusses on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.",
"title": ""
},
{
"docid": "c4be39977487cdebc8127650c8eda433",
"text": "Unfavorable wake and separated flow from the hull might cause a dramatic decay of the propeller performance in single-screw propelled vessels such as tankers, bulk carriers and containers. For these types of vessels, special attention has to be paid to the design of the stern region, the occurrence of a good flow towards the propeller and rudder being necessary to avoid separation and unsteady loads on the propeller blades and, thus, to minimize fuel consumption and the risk for cavitation erosion and vibrations. The present work deals with the analysis of the propeller inflow in a single-screw chemical tanker vessel affected by massive flow separation in the stern region. Detailed flow measurements by Laser Doppler Velocimetry (LDV) were performed in the propeller region at model scale, in the Large Circulating Water Channel of CNR-INSEAN. Tests were undertaken with and without propeller in order to investigate its effect on the inflow characteristics and the separation mechanisms. In this regard, the study concerned also a phase locked analysis of the propeller perturbation at different distances upstream of the propulsor. The study shows the effectiveness of the 3 order statistical moment (i.e. skewness) for describing the topology of the wake and accurately identifying the portion affected by the detached flow.",
"title": ""
},
{
"docid": "5f606838b7158075a4b13871c5b6ec89",
"text": "The sentence is a standard textual unit in natural language processing applications. In many languages the punctuation mark that indicates the end-of-sentence boundary is ambiguous; thus the tokenizers of most NLP systems must be equipped with special sentence boundary recognition rules for every new text collection. As an alternative, this article presents an efficient, trainable system for sentence boundary disambiguation. The system, called Satz, makes simple estimates of the parts of speech of the tokens immediately preceding and following each punctuation mark, and uses these estimates as input to a machine learning algorithm that then classifies the punctuation mark. Satz is very fast both in training and sentence analysis, and its combined robustness and accuracy surpass existing techniques. The system needs only a small lexicon and training corpus, and has been shown to transfer quickly and easily from English to other languages, as demonstrated on French and German.",
"title": ""
},
{
"docid": "13da027ebd361c644481d17afc71898f",
"text": "Many usability guidelines have been created in numerous areas and mobile devices application is included as well. However, there is not much published works in relation to the usability guidelines that comes up together with metric. Although a number of measurement models (e.g. The Metric for Usability Standard in Computing [MUSiC]) have been produced for evaluating usability, they are not focusing on mobile application. This paper will make an attempt to review the existing measurement models and will further explain the development of usability metric using GQM approach. Further research will firstly develop a set of usability guidelines for mobile application which will be used to develop a metric for usability measurement.",
"title": ""
},
{
"docid": "63e4183beadb30244730de8ac86b20ee",
"text": "Softwares use cryptographic algorithms to secure their communications and to protect their internal data. However the algorithm choice, its implementation design and the generation methods of its input parameters may have dramatic consequences on the security of the data it was initially supposed to protect. Therefore to assess the security of a binary program involving cryptography, analysts need to check that none of these points will cause a system vulnerability. It implies, as a first step, to precisely identify and locate the cryptographic code in the binary program. Since binary analysis is a difficult and cumbersome task, it is interesting to devise a method to automatically retrieve cryptographic primitives and their parameters.\n In this paper, we present a novel approach to automatically identify symmetric cryptographic algorithms and their parameters inside binary code. Our approach is static and based on DFG isomorphism. To cope with binary codes produced from different source codes and by different compilers and options, the DFG is normalized using code rewrite mechanisms. Our approach differs from previous works, that either use statistical criteria leading to imprecise results, or rely on heavy dynamic instrumentation. To validate our approach, we present experimental results on a set of synthetic samples including several cryptographic algorithms, binary code of well-known cryptographic libraries and reference source implementation compiled using different compilers and options.",
"title": ""
},
{
"docid": "109a1276cd743a522b9e0a36b9b58f32",
"text": "This study examined the effects of a virtual reality distraction intervention on chemotherapy-related symptom distress levels in 16 women aged 50 and older. A cross-over design was used to answer the following research questions: (1) Is virtual reality an effective distraction intervention for reducing chemotherapy-related symptom distress levels in older women with breast cancer? (2) Does virtual reality have a lasting effect? Chemotherapy treatments are intensive and difficult to endure. One way to cope with chemotherapy-related symptom distress is through the use of distraction. For this study, a head-mounted display (Sony PC Glasstron PLM - S700) was used to display encompassing images and block competing stimuli during chemotherapy infusions. The Symptom Distress Scale (SDS), Revised Piper Fatigue Scale (PFS), and the State Anxiety Inventory (SAI) were used to measure symptom distress. For two matched chemotherapy treatments, one pre-test and two post-test measures were employed. Participants were randomly assigned to receive the VR distraction intervention during one chemotherapy treatment and received no distraction intervention (control condition) during an alternate chemotherapy treatment. Analysis using paired t-tests demonstrated a significant decrease in the SAI (p = 0.10) scores immediately following chemotherapy treatments when participants used VR. No significant changes were found in SDS or PFS values. There was a consistent trend toward improved symptoms on all measures 48 h following completion of chemotherapy. Evaluation of the intervention indicated that women thought the head mounted device was easy to use, they experienced no cybersickness, and 100% would use VR again.",
"title": ""
},
{
"docid": "879fab81526e15e40eae938153b951c6",
"text": "This paper presents an analysis and empirical evaluation of techniques developed to support focus and context awareness in tasks involving visualization of time lines. It focuses on time lines that display discrete events and their temporal relationships. The most common form of representation for such time lines is the Gantt chart. Although ubiquitous in event visualization and project planning applications, Gantt charts are inherently space-consuming, and suffer from shortcomings in providing focus and context awareness when a large number of tasks and events needs to be displayed. In an attempt to address this problem, we implemented and adapted a number of focus and context awareness techniques for an interactive task scheduling system in combination with the standard Gantt chart and an alternative space-filling mosaic approach to time line visualization. A controlled user trial compared user performance at interpreting representations of hierarchical task scheduling, assessing different methods across various conditions resulting from interactive explorations of the Gantt and the mosaic interfaces. Results suggested a number of possible improvements to these interactive visualization techniques. The implementation of some of these improvements is also presented and discussed.",
"title": ""
},
{
"docid": "197dfd6fdcb600c2dec6aefcbf8dfd1f",
"text": "In this paper, We propose a formalized method to improve the performance of Contextual Anomaly Detection (CAD) for detecting stock market manipulation using Big Data techniques. The method aims to improve the CAD algorithm by capturing the expected behaviour of stocks through sentiment analysis of tweets about stocks. The extracted insights are aggregated per day for each stock and transformed to a time series. The time series is used to eliminate false positives from anomalies that are detected by CAD. We present a case study and explore developing sentiment analysis models to improve anomaly detection in the stock market. The experimental results confirm the proposed method is effective in improving CAD through removing irrelevant anomalies by correctly identifying 28% of false positives.",
"title": ""
},
{
"docid": "5ceb415b17cc36e9171ddc72a860ccc8",
"text": "Word embeddings and convolutional neural networks (CNN) have attracted extensive attention in various classification tasks for Twitter, e.g. sentiment classification. However, the effect of the configuration used to generate the word embeddings on the classification performance has not been studied in the existing literature. In this paper, using a Twitter election classification task that aims to detect election-related tweets, we investigate the impact of the background dataset used to train the embedding models, as well as the parameters of the word embedding training process, namely the context window size, the dimensionality and the number of negative samples, on the attained classification performance. By comparing the classification results of word embedding models that have been trained using different background corpora (e.g. Wikipedia articles and Twitter microposts), we show that the background data should align with the Twitter classification dataset both in data type and time period to achieve significantly better performance compared to baselines such as SVM with TF-IDF. Moreover, by evaluating the results of word embedding models trained using various context window sizes and dimensionalities, we find that large context window and dimension sizes are preferable to improve the performance. However, the number of negative samples parameter does not significantly affect the performance of the CNN classifiers. Our experimental results also show that choosing the correct word embedding model for use with CNN leads to statistically significant improvements over various baselines such as random, SVM with TF-IDF and SVM with word embeddings. Finally, for out-of-vocabulary (OOV) words that are not available in the learned word embedding models, we show that a simple OOV strategy to randomly initialise the OOV words without any prior knowledge is sufficient to attain a good classification performance among the current OOV strategies (e.g. a random initialisation using statistics of the pre-trained word embedding models).",
"title": ""
},
{
"docid": "e2a1ceadf01443a36af225b225e4d521",
"text": "Event detection remains a challenge because of the difficulty of encoding the word semantics in various contexts. Previous approaches have heavily depended on language-specific knowledge and preexisting natural language processing tools. However, not all languages have such resources and tools available compared with English language. A more promising approach is to automatically learn effective features from data, without relying on language-specific resources. In this study, we develop a language-independent neural network to capture both sequence and chunk information from specific contexts and use them to train an event detector for multiple languages without any manually encoded features. Experiments show that our approach can achieve robust, efficient and accurate results for various languages. In the ACE 2005 English event detection task, our approach achieved a 73.4% F-score with an average of 3.0% absolute improvement compared with state-of-the-art. Additionally, our experimental results are competitive for Chinese and Spanish.",
"title": ""
},
{
"docid": "b212a4b4e249e4da8e6193c9b4221bbf",
"text": "Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org .",
"title": ""
},
{
"docid": "ddddca65683572ff97f8f878e529b32d",
"text": "Human beliefs have remarkable robustness in the face of disconfirmation. This robustness is often explained as the product of heuristics or motivated reasoning. However, robustness can also arise from purely rational principles when the reasoner has recourse to ad hoc auxiliary hypotheses. Auxiliary hypotheses primarily function as the linking assumptions connecting different beliefs to one another and to observational data, but they can also function as a \"protective belt\" that explains away disconfirmation by absorbing some of the blame. The present article traces the role of auxiliary hypotheses from philosophy of science to Bayesian models of cognition and a host of behavioral phenomena, demonstrating their wide-ranging implications.",
"title": ""
},
{
"docid": "24e9f079fe1fd0155c2f6a948f021da4",
"text": "Current blood glucose monitoring (BGM) techniques are invasive as they require a finger prick blood sample, a repetitively painful process that creates the risk of infection. BGM is essential to avoid complications arising due to abnormal blood glucose levels in diabetic patients. Laser light-based sensors have demonstrated a superior potential for BGM. Existing near-infrared (NIR)-based BGM techniques have shortcomings, such as the absorption of light in human tissue, higher signal-to-noise ratio, and lower accuracy, and these disadvantages have prevented NIR techniques from being employed for commercial BGM applications. A simple, compact, and cost-effective non-invasive device using visible red laser light of wavelength 650 nm for BGM (RL-BGM) is implemented in this paper. The RL-BGM monitoring device has three major technical advantages over NIR. Unlike NIR, red laser light has ~30 times better transmittance through human tissue. Furthermore, when compared with NIR, the refractive index of laser light is more sensitive to the variations in glucose level concentration resulting in faster response times ~7–10 s. Red laser light also demonstrates both higher linearity and accuracy for BGM. The designed RL-BGM device has been tested for both in vitro and in vivo cases and several experimental results have been generated to ensure the accuracy and precision of the proposed BGM sensor.",
"title": ""
},
{
"docid": "362ce6581dee5023c9d548b634153345",
"text": "In most practical situations, the compression or transmission of images and videos creates distortions that will eventually be perceived by a human observer. Vice versa, image and video restoration techniques, such as inpainting or denoising, aim to enhance the quality of experience of human viewers. Correctly assessing the similarity between an image and an undistorted reference image as subjectively experienced by a human viewer can thus lead to significant improvements in any transmission, compression, or restoration system. This paper introduces the Haar wavelet-based perceptual similarity index (HaarPSI), a novel and computationally inexpensive similarity measure for full reference image quality assessment. The HaarPSI utilizes the coefficients obtained from a Haar wavelet decomposition to assess local similarities between two images, as well as the relative importance of image areas. The consistency of the HaarPSI with the human quality of experience was validated on four large benchmark databases containing thousands of differently distorted images. On these databases, the HaarPSI achieves higher correlations with human opinion scores than state-of-the-art full reference similarity measures like the structural similarity index (SSIM), the feature similarity index (FSIM), and the visual saliency-based index (VSI). Along with the simple computational structure and the short execution time, these experimental results suggest a high applicability of the HaarPSI in real world tasks.",
"title": ""
},
{
"docid": "e31b5b120d485d77e8743132f028d8b3",
"text": "In this paper, we consider the problem of linking users across multiple online communities. Specifically, we focus on the alias-disambiguation step of this user linking task, which is meant to differentiate users with the same usernames. We start quantitatively analyzing the importance of the alias-disambiguation step by conducting a survey on 153 volunteers and an experimental analysis on a large dataset of About.me (75,472 users). The analysis shows that the alias-disambiguation solution can address a major part of the user linking problem in terms of the coverage of true pairwise decisions (46.8%). To the best of our knowledge, this is the first study on human behaviors with regards to the usages of online usernames. We then cast the alias-disambiguation step as a pairwise classification problem and propose a novel unsupervised approach. The key idea of our approach is to automatically label training instances based on two observations: (a) rare usernames are likely owned by a single natural person, e.g. pennystar88 as a positive instance; (b) common usernames are likely owned by different natural persons, e.g. tank as a negative instance. We propose using the n-gram probabilities of usernames to estimate the rareness or commonness of usernames. Moreover, these two observations are verified by using the dataset of Yahoo! Answers. The empirical evaluations on 53 forums verify: (a) the effectiveness of the classifiers with the automatically generated training data and (b) that the rareness and commonness of usernames can help user linking. We also analyze the cases where the classifiers fail.",
"title": ""
},
{
"docid": "f2ba236803a453c2b351aa910fdfa32d",
"text": "This study presents PV power based cuk converter for dc load application. The maximum power from the sun radiation is obtained by sun tracking and Maximum Power Point Tracking (MPPT). The sun tracking is implemented by the stepper motor control and MPPT is implemented by the Cuk converter and the load voltage is maintained constant irrespective of the variation in solar power. This technique improves the dynamic and steady state characteristics of the system. The simulation was done in MATLAB simulink and the experiments are carried out and the results are presented.",
"title": ""
}
] |
scidocsrr
|
d87ad46fe567c258a79a1aee87a55ba2
|
Deep Hashing Network for Efficient Similarity Retrieval
|
[
{
"docid": "9f746a67a960b01c9e33f6cd0fcda450",
"text": "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.",
"title": ""
},
{
"docid": "02621546c67e6457f350d0192b616041",
"text": "Binary embedding of high-dimensional data requires long codes to preserve the discriminative power of the input space. Traditional binary coding methods often suffer from very high computation and storage costs in such a scenario. To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix. The circulant structure enables the use of Fast Fourier Transformation to speed up the computation. Compared to methods that use unstructured matrices, the proposed method improves the time complexity from O(d) to O(d log d), and the space complexity from O(d) to O(d) where d is the input dimensionality. We also propose a novel time-frequency alternating optimization to learn data-dependent circulant projections, which alternatively minimizes the objective in original and Fourier domains. We show by extensive experiments that the proposed approach gives much better performance than the state-of-the-art approaches for fixed time, and provides much faster computation with no performance degradation for fixed number of bits.",
"title": ""
}
] |
[
{
"docid": "ebc77c29a8f761edb5e4ca588b2e6fb5",
"text": "Gigantomastia by definition means bilateral benign progressive breast enlargement to a degree that requires breast reduction surgery to remove more than 1800 g of tissue on each side. It is seen at puberty or during pregnancy. The etiology for this condition is still not clear, but surgery remains the mainstay of treatment. We present a unique case of Gigantomastia, which was neither related to puberty nor pregnancy and has undergone three operations so far for recurrence.",
"title": ""
},
{
"docid": "7973cb32f19b61b0cc88671e4939e32b",
"text": "Trolling behaviors are extremely diverse, varying by context, tactics, motivations, and impact. Definitions, perceptions of, and reactions to online trolling behaviors vary. Since not all trolling is equal or deviant, managing these behaviors requires context sensitive strategies. This paper describes appropriate responses to various acts of trolling in context, based on perceptions of college students in North America. In addition to strategies for dealing with deviant trolling, this paper illustrates the complexity of dealing with socially and politically motivated trolling.",
"title": ""
},
{
"docid": "db111db8aaaf1185d9dc99ba53e6e828",
"text": "Topic model uncovers abstract topics within texts documents, which is an essential task in text analysis in social networks. However, identifying topics in text documents in social networks is challenging since the texts are short, unlabeled, and unstructured. For this reason, we propose a topic classification system regarding the features of text documents in social networks. The proposed system is based on several machine-learning algorithms and voting system. The accuracy of the system has been tested using text documents that were classified into three topics. The experiment results show that the proposed system guarantees high accuracy rates in documents topic classification.",
"title": ""
},
{
"docid": "6ebf60b36d9a13c5ae6ded91ee7d95fe",
"text": "In this paper, a novel approach for Kannada, Telugu and Devanagari handwritten numerals recognition based on global and local structural features is proposed. Probabilistic Neural Network (PNN) Classifier is used to classify the Kannada, Telugu and Devanagari numerals separately. Algorithm is validated with Kannada, Telugu and Devanagari numerals dataset by setting various radial values of PNN classifier under different experimental setup. The experimental results obtained are encouraging and comparable with other methods found in literature survey. The novelty of the proposed method is free from thinning and size",
"title": ""
},
{
"docid": "21e235169d37658afee28d5f3f7c831b",
"text": "Two studies assessed the effects of a training procedure (Goal Management Training, GMT), derived from Duncan's theory of goal neglect, on disorganized behavior following TBI. In Study 1, patients with traumatic brain injury (TBI) were randomly assigned to brief trials of GMT or motor skills training. GMT, but not motor skills training, was associated with significant gains on everyday paper-and-pencil tasks designed to mimic tasks that are problematic for patients with goal neglect. In Study 2, GMT was applied in a postencephalitic patient seeking to improve her meal-preparation abilities. Both naturalistic observation and self-report measures revealed improved meal preparation performance following GMT. These studies provide both experimental and clinical support for the efficacy of GMT toward the treatment of executive functioning deficits that compromise independence in patients with brain damage.",
"title": ""
},
{
"docid": "e1c298ea1c0a778a91e302202b8e1463",
"text": "Computational topology has recently seen an important development toward data analysis, giving birth to the field of topological data analysis. Topological persistence, or persistent homology, appears as a fundamental tool in this field. In this paper, we study topological persistence in general metric spaces, with a statistical approach. We show that the use of persistent homology can be naturally considered in general statistical frameworks and that persistence diagrams can be used as statistics with interesting convergence properties. Some numerical experiments are performed in various contexts to illustrate our results.",
"title": ""
},
{
"docid": "cfacfeecc0eee3fb6a0228ca6ed67be4",
"text": "This paper describes the results of our evaluation of a pedestrian's radio wave reflection characteristics. The reflection characteristics of radio waves from a pedestrian were measured as part of the effort to improve the pedestrian detection performance of the radar sensor. A pedestrian's radio wave reflection intensity is low, at about 15-20dB less than that of the rear of a vehicle, and can vary by as much as 20dB. Evaluating these characteristics in detail is a prerequisite to the development of a radar sensor that is capable of detecting pedestrians reliably.",
"title": ""
},
{
"docid": "09dbfbd77307b0cd152772618c40e083",
"text": "Textbook Question Answering (TQA) [1] is a newly proposed task to answer arbitrary questions in middle school curricula, which has particular challenges to understand the long essays in additional to the images. Bilinear models [2], [3] are effective at learning high-level associations between questions and images, but are inefficient to handle the long essays. In this paper, we propose an Essay-anchor Attentive Multi-modal Bilinear pooling (EAMB), a novel method to encode the long essays into the joint space of the questions and images. The essay-anchors, embedded from the keywords, represent the essay information in a latent space. We propose a novel network architecture to pay special attention on the keywords in the questions, consequently encoding the essay information into the question features, and thus the joint space with the images. We then use the bilinear models to extract the multi-modal interactions to obtain the answers. EAMB successfully utilizes the redundancy of the pre-trained word embedding space to represent the essay-anchors. This avoids the extra learning difficulties from exploiting large network structures. Quantitative and qualitative experiments show the outperforming effects of EAMB on the TQA dataset.",
"title": ""
},
{
"docid": "39cde8c4da81d72d7a0ff058edb71409",
"text": "One glaring weakness of Java for numerical programming is its lack of support for complex numbers. Simply creating a Complex number class leads to poor performance relative to Fortran. We show in this paper, however, that the combination of such aComplex class and a compiler that understands its semantics does indeed lead to Fortran-like performance. This performance gain is achieved while leaving the Java language completely unchanged and maintaining full compatibility with existing Java Virtual Machines . We quantify the effectiveness of our approach through experiments with linear algebra, electromagnetics, and computational fluid-dynamics kernels.",
"title": ""
},
{
"docid": "08fcc60aad5e9183c9c9440698317bcd",
"text": "This paper proposes a small-scale agile wall climbing robot able to navigate on smooth surfaces of any orientation, including vertical and inverted surfaces, which uses adhesive elastomer materials for attachment. Using two actuated legs with rotary motion and two passive revolute joints at each foot the robot can climb and steer in any orientation. Due to its compact design, a high degree of miniaturization is possible. It has onboard power, sensing, computing, and wireless communication which allow for semi-autonomous operation. Various aspects of a functioning prototype design and performance are discussed in detail, including leg and feet design and gait control. The current prototype can climb 90deg slopes at a speed of 6 cm/s and steer to any angle. This robot is intended for inspection and surveillance applications and, ultimately, space missions",
"title": ""
},
{
"docid": "e8edb58537ada97ee5da365fa096ae2d",
"text": "In this paper, we present a novel semi-supervised learning framework based on `1 graph. The `1 graph is motivated by that each datum can be reconstructed by the sparse linear superposition of the training data. The sparse reconstruction coefficients, used to deduce the weights of the directed `1 graph, are derived by solving an `1 optimization problem on sparse representation. Different from conventional graph construction processes which are generally divided into two independent steps, i.e., adjacency searching and weight selection, the graph adjacency structure as well as the graph weights of the `1 graph is derived simultaneously and in a parameter-free manner. Illuminated by the validated discriminating power of sparse representation in [16], we propose a semi-supervised learning framework based on `1 graph to utilize both labeled and unlabeled data for inference on a graph. Extensive experiments on semi-supervised face recognition and image classification demonstrate the superiority of our proposed semi-supervised learning framework based on `1 graph over the counterparts based on traditional graphs.",
"title": ""
},
{
"docid": "d3c3195b8272bd41d0095e236ddb1d96",
"text": "The extension of in vivo optical imaging for disease screening and image-guided surgical interventions requires brightly emitting, tissue-specific materials that optically transmit through living tissue and can be imaged with portable systems that display data in real-time. Recent work suggests that a new window across the short-wavelength infrared region can improve in vivo imaging sensitivity over near infrared light. Here we report on the first evidence of multispectral, real-time short-wavelength infrared imaging offering anatomical resolution using brightly emitting rare-earth nanomaterials and demonstrate their applicability toward disease-targeted imaging. Inorganic-protein nanocomposites of rare-earth nanomaterials with human serum albumin facilitated systemic biodistribution of the rare-earth nanomaterials resulting in the increased accumulation and retention in tumour tissue that was visualized by the localized enhancement of infrared signal intensity. Our findings lay the groundwork for a new generation of versatile, biomedical nanomaterials that can advance disease monitoring based on a pioneering infrared imaging technique.",
"title": ""
},
{
"docid": "8a1d0d2767a35235fa5ac70818ec92e7",
"text": "This work demonstrates two 94 GHz SPDT quarter-wave shunt switches using saturated SiGe HBTs. A new mode of operation, called reverse saturation, using the emitter at the RF output node of the switch, is utilized to take advantage of the higher emitter doping and improved isolation from the substrate. The switches were designed in a 180 nm SiGe BiCMOS technology featuring 90 nm SiGe HBTs (selective emitter shrink) with fT/fmax of 250/300+ GHz. The forward-saturated switch achieves an insertion loss and isolation at 94 GHz of 1.8 dB and 19.3 dB, respectively. The reverse-saturated switch achieves a similar isolation, but reduces the insertion loss to 1.4 dB. This result represents a 30% improvement in insertion loss in comparison to the best CMOS SPDT at 94 GHz.",
"title": ""
},
{
"docid": "5a2c1c1362b543a1da3fe4d3e786a368",
"text": "We describe a fully automated system for the classification of acral volar melanomas. We used a total of 213 acral dermoscopy images (176 nevi and 37 melanomas). Our automatic tumor area extraction algorithm successfully extracted the tumor in 199 cases (169 nevi and 30 melanomas), and we developed a diagnostic classifier using these images. Our linear classifier achieved a sensitivity (SE) of 100%, a specificity (SP) of 95.9%, and an area under the receiver operating characteristic curve (AUC) of 0.993 using a leave-one-out cross-validation strategy (81.1% SE, 92.1% SP; considering 14 unsuccessful extraction cases as false classification). In addition, we developed three pattern detectors for typical dermoscopic structures such as parallel ridge, parallel furrow, and fibrillar patterns. These also achieved good detection accuracy as indicated by their AUC values: 0.985, 0.931, and 0.890, respectively. The features used in the melanoma-nevus classifier and the parallel ridge detector have significant overlap.",
"title": ""
},
{
"docid": "8e302428a1fd6f7331f5546c22bf4d73",
"text": "Automatic extraction of synonyms and/or semantically related words has various applications in Natural Language Processing (NLP). There are currently two mainstream extraction paradigms, namely, lexicon-based and distributional approaches. The former usually suffers from low coverage, while the latter is only able to capture general relatedness rather than strict synonymy. In this paper, two rule-based extraction methods are applied to definitions from a machine-readable dictionary. Extracted synonyms are evaluated in two experiments by solving TOEFL synonym questions and being compared against existing thesauri. The proposed approaches have achieved satisfactory results in both evaluations, comparable to published studies or even the state of the art.",
"title": ""
},
{
"docid": "ae2d7dd00c5cae7f5e66403e25b76631",
"text": "This paper applies discriminative multinomial Naïve Bayes with various filtering analysis in order to build a network intrusion detection system. For our experimental analysis, we used the new NSL-KDD dataset, which is considered as a modified dataset for KDDCup 1999 intrusion detection benchmark dataset. We perform 2 class classifications with 10-fold cross validation for building our proposed model. The experimental results show that the proposed approach is very accurate with low false positive rate and takes less time in comparison to other existing approaches while building an efficient network intrusion detection system.",
"title": ""
},
{
"docid": "6fc6167d1ef6b96d239fea03b9653865",
"text": "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. In order to reduce this cost, several quantization schemes have gained attention recently with some focusing on weight quantization, and others focusing on quantizing activations. This paper proposes novel techniques that target weight and activation quantizations separately resulting in an overall quantized neural network (QNN). The activation quantization technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. The weight quantization scheme, statistics-aware weight binning (SAWB), finds the optimal scaling factor that minimizes the quantization error based on the statistical characteristics of the distribution of weights without the need for an exhaustive search. The combination of PACT and SAWB results in a 2-bit QNN that achieves state-of-the-art classification accuracy (comparable to full precision networks) across a range of popular models and datasets.",
"title": ""
},
{
"docid": "d4f7a87891fc1c626d033be09cdf45b7",
"text": "Type-2 fuzzy sets, which are characterized by membership functions (MFs) that are themselves fuzzy, have been attracting interest. This paper focuses on advancing the understanding of interval type-2 fuzzy logic controllers (FLCs). First, a type-2 FLC is evolved using Genetic Algorithms (GAs). The type-2 FLC is then compared with another three GA evolved type-1 FLCs that have different design parameters. The objective is to examine the amount by which the extra degrees of freedom provided by antecedent type-2 fuzzy sets is able to improve the control performance. Experimental results show that better control can be achieved using a type-2 FLC with fewer fuzzy sets/rules so one benefit of type-2 FLC is a lower trade-off between modeling accuracy and interpretability. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cfef14e5b89aab3b56823cbfda1f0ddc",
"text": "High utility sequential pattern (HUSP) mining is an emerging topic in pattern mining, and only a few algorithms have been proposed to address it. In practice, most sequence databases usually grow over time, and it is inefficient for existing algorithms to mine HUSPs from scratch when databases grow with a small portion of updates. In view of this, we propose the IncUSP-Miner+ algorithm to mine HUSPs incrementally. Specifically, to avoid redundant re-computations, we propose a tighter upper bound of the utility of a sequence, called Tight Sequence Utility (TSU), and then we design a novel data structure, called the candidate pattern tree, to buffer the sequences whose TSU values are greater than or equal to the minimum utility threshold in the original database. Accordingly, to avoid keeping a huge amount of utility information for each sequence, a set of concise utility information is designed to be stored in each tree node. To improve the mining efficiency, several strategies are proposed to reduce the amount of computation for utility update and the scopes of database scans. Moreover, several strategies are also proposed to properly adjust the candidate pattern tree for the support of multiple database updates. Experimental results on some real and synthetic datasets show that IncUSP-Miner+ is able to efficiently mine HUSPs incrementally.",
"title": ""
}
] |
scidocsrr
|
a81d77eda544c85154fef5117434757b
|
Algorithms for orthogonal nonnegative matrix factorization
|
[
{
"docid": "570bc6b72db11c32292f705378042089",
"text": "In this paper, we propose a novel method, called local nonnegative matrix factorization (LNMF), for learning spatially localized, parts-based subspace representation of visual patterns. An objective function is defined to impose localization constraint, in addition to the non-negativity constraint in the standard NMF [1]. This gives a set of bases which not only allows a non-subtractive (part-based) representation of images but also manifests localized features. An algorithm is presented for the learning of such basis components. Experimental results are presented to compare LNMF with the NMF and PCA methods for face representation and recognition, which demonstrates advantages of LNMF.",
"title": ""
},
{
"docid": "c00e78121637ee9bcf1640c41204afd0",
"text": "In this paper we present a methodology for analyzing polyphonic musical passages comprised by notes that exhibit a harmonically fixed spectral profile (such as piano notes). Taking advantage of this unique note structure we can model the audio content of the musical passage by a linear basis transform and use non-negative matrix decomposition methods to estimate the spectral profile and the temporal information of every note. This approach results in a very simple and compact system that is not knowledge-based, but rather learns notes by observation.",
"title": ""
}
] |
[
{
"docid": "99faeab3adcf89a3f966b87547cea4e7",
"text": "In-service structural health monitoring of composite aircraft structures plays a key role in the assessment of their performance and integrity. In recent years, Fibre Optic Sensors (FOS) have proved to be a potentially excellent technique for real-time in-situ monitoring of these structures due to their numerous advantages, such as immunity to electromagnetic interference, small size, light weight, durability, and high bandwidth, which allows a great number of sensors to operate in the same system, and the possibility to be integrated within the material. However, more effort is still needed to bring the technology to a fully mature readiness level. In this paper, recent research and applications in structural health monitoring of composite aircraft structures using FOS have been critically reviewed, considering both the multi-point and distributed sensing techniques.",
"title": ""
},
{
"docid": "51eb0e35baa92a85a620b9bf15cbfca0",
"text": "The detection of bad weather conditions is crucial for meteorological centers, specially with demand for air, sea and ground traffic management. In this article, a system based on computer vision is presented which detects the presence of rain or snow. To separate the foreground from the background in image sequences, a classical Gaussian Mixture Model is used. The foreground model serves to detect rain and snow, since these are dynamic weather phenomena. Selection rules based on photometry and size are proposed in order to select the potential rain streaks. Then a Histogram of Orientations of rain or snow Streaks (HOS), estimated with the method of geometric moments, is computed, which is assumed to follow a model of Gaussian-uniform mixture. The Gaussian distribution represents the orientation of the rain or the snow whereas the uniform distribution represents the orientation of the noise. An algorithm of expectation maximization is used to separate these two distributions. Following a goodness-of-fit test, the Gaussian distribution is temporally smoothed and its amplitude allows deciding the presence of rain or snow. When the presence of rain or of snow is detected, the HOS makes it possible to detect the pixels of rain or of snow in the foreground images, and to estimate the intensity of the precipitation of rain or of snow. The applications of the method are numerous and include the detection of critical weather conditions, the observation of weather, the reliability improvement of video-surveillance systems and rain rendering.",
"title": ""
},
{
"docid": "4f5bc7305614149ff9dc178d60bba721",
"text": "Love is a wondrous state, deep, tender, and rewarding. Because of its intimate and personal nature it is regarded by some as an improper topic for experimental research. But, whatever our personal feelings may be, our assigned mission as psychologists is to analyze all facets of human and animal behavior into their component variables. So far as love or affection is concerned, psychologists have failed in this mission. The little we know about love does not transcend simple observation, and the little we write about it has been written better by poets and novelists. But of greater concern is the fact that psychologists tend to give progressively less attention to a motive which pervades our entire lives. Psychologists, at least psychologists who write textbooks, not only show no interest in the origin and development of love or affection, but they seem to be unaware of its very existence.",
"title": ""
},
{
"docid": "f6b49f33720ef789cf085a5ab8154ed4",
"text": "Several artificial neural network (ANN) models with a feed-forward, back-propagation network structure and various training algorithms, are developed to forecast daily and monthly river flow discharges in Manwan Reservoir. In order to test the applicability of these models, they are compared with a conventional time series flow prediction model. Results indicate that the ANN models provide better accuracy in forecasting river flow than does the auto-regression time series model. In particular, the scaled conjugate gradient algorithm furnishes the highest correlation coefficient and the smallest root mean square error. This ANN model is finally employed in the advanced water resource project of Yunnan Power Group.",
"title": ""
},
{
"docid": "028070222acb092767aadfdd6824d0df",
"text": "The autism spectrum disorders (ASDs) are a group of conditions characterized by impairments in reciprocal social interaction and communication, and the presence of restricted and repetitive behaviours. Individuals with an ASD vary greatly in cognitive development, which can range from above average to intellectual disability. Although ASDs are known to be highly heritable (∼90%), the underlying genetic determinants are still largely unknown. Here we analysed the genome-wide characteristics of rare (<1% frequency) copy number variation in ASD using dense genotyping arrays. When comparing 996 ASD individuals of European ancestry to 1,287 matched controls, cases were found to carry a higher global burden of rare, genic copy number variants (CNVs) (1.19 fold, P = 0.012), especially so for loci previously implicated in either ASD and/or intellectual disability (1.69 fold, P = 3.4 × 10-4). Among the CNVs there were numerous de novo and inherited events, sometimes in combination in a given family, implicating many novel ASD genes such as SHANK2, SYNGAP1, DLGAP2 and the X-linked DDX53–PTCHD1 locus. We also discovered an enrichment of CNVs disrupting functional gene sets involved in cellular proliferation, projection and motility, and GTPase/Ras signalling. Our results reveal many new genetic and functional targets in ASD that may lead to final connected pathways.",
"title": ""
},
{
"docid": "d68147bf8637543adf3053689de740c3",
"text": "In this paper, we do a research on the keyword extraction method of news articles. We build a candidate keywords graph model based on the basic idea of TextRank, use Word2Vec to calculate the similarity between words as transition probability of nodes' weight, calculate the word score by iterative method and pick the top N of the candidate keywords as the final results. Experimental results show that the weighted TextRank algorithm with correlation of words can improve performance of keyword extraction generally.",
"title": ""
},
{
"docid": "54722f4851707c2bf51d18910728a31c",
"text": "Many modern companies wish to maintain knowledge in the form of a corporate knowledge graph and to use and manage this knowledge via a knowledge graph management system (KGMS). We formulate various requirements for a fully-fledged KGMS. In particular, such a system must be capable of performing complex reasoning tasks but, at the same time, achieve efficient and scalable reasoning over Big Data with an acceptable computational complexity. Moreover, a KGMS needs interfaces to corporate databases, the web, and machine-learning and analytics packages. We present KRR formalisms and a system achieving these goals. To this aim, we use specific suitable fragments from the Datalog± family of languages, and we introduce the vadalog system, which puts these swift logics into action. This system exploits the theoretical underpinning of relevant Datalog± languages and combines it with existing and novel techniques from database and AI practice.",
"title": ""
},
{
"docid": "d27d17176181b09a74c9c8115bc6a66e",
"text": "In this chapter, we provide definitions of Business Intelligence (BI) and outline the development of BI over time, particularly carving out current questions of BI. Different scenarios of BI applications are considered and business perspectives and views of BI on the business process are identified. Further, the goals and tasks of BI are discussed from a management and analysis point of view and a method format for BI applications is proposed. This format also gives an outline of the book’s contents. Finally, examples from different domain areas are introduced which are used for demonstration in later chapters of the book. 1.1 Definition of Business Intelligence If one looks for a definition of the term Business Intelligence (BI) one will find the first reference already in 1958 in a paper of H.P. Luhn (cf. [14]). Starting from the definition of the terms “Intelligence” as “the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal” and “Business” as “a collection of activities carried on for whatever purpose, be it science, technology, commerce, industry, law, government, defense, et cetera”, he specifies a business intelligence system as “[an] automatic system [that] is being developed to disseminate information to the various sections of any industrial, scientific or government organization.” The main task of Luhn’s system was automatic abstracting of documents and delivering this information to appropriate so-called action points. This definition did not come into effect for 30 years, and in 1989Howard Dresner coined the term Business Intelligence (BI) again. He introduced it as an umbrella term for a set of concepts and methods to improve business decision making, using systems based on facts. Many similar definitions have been given since. In Negash [18], important aspects of BI are emphasized by stating that “. . . business intelligence systems provide actionable information delivered at the right time, at the right location, and in the right form to assist decision makers.” Today one can find many different definitions which show that at the top level the intention of BI has not changed so much. For example, in [20] BI is defined as “an integrated, company-specific, IT-based total approach for managerial decision © Springer-Verlag Berlin Heidelberg 2015 W. Grossmann, S. Rinderle-Ma, Fundamentals of Business Intelligence, Data-Centric Systems and Applications, DOI 10.1007/978-3-662-46531-8_1 1",
"title": ""
},
{
"docid": "3e845c9a82ef88c7a1f4447d57e35a3e",
"text": "Link prediction is a key problem for network-structured data. Link prediction heuristics use some score functions, such as common neighbors and Katz index, to measure the likelihood of links. They have obtained wide practical uses due to their simplicity, interpretability, and for some of them, scalability. However, every heuristic has a strong assumption on when two nodes are likely to link, which limits their effectiveness on networks where these assumptions fail. In this regard, a more reasonable way should be learning a suitable heuristic from a given network instead of using predefined ones. By extracting a local subgraph around each target link, we aim to learn a function mapping the subgraph patterns to link existence, thus automatically learning a “heuristic” that suits the current network. In this paper, we study this heuristic learning paradigm for link prediction. First, we develop a novel γ-decaying heuristic theory. The theory unifies a wide range of heuristics in a single framework, and proves that all these heuristics can be well approximated from local subgraphs. Our results show that local subgraphs reserve rich information related to link existence. Second, based on the γ-decaying theory, we propose a new method to learn heuristics from local subgraphs using a graph neural network (GNN). Its experimental results show unprecedented performance, working consistently well on a wide range of problems.",
"title": ""
},
{
"docid": "7b385edcbb0e3fa5bfffca2e1a9ecf13",
"text": "A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.",
"title": ""
},
{
"docid": "038e48bcae7346ef03a318bb3a280bcc",
"text": "Low back pain (LBP) is a problem worldwide with a lifetime prevalence reported to be as high as 84%. The lifetime prevalence of low back pain is reported to be as high as 84%, and the prevalence of chronic low back pain is about 23%, with 11–12% of the population being disabled by low back pain [1]. LBP is defined as pain experienced between the twelfth rib and the inferior gluteal fold, with or without associated leg pain [2]. Based on the etiology LBP is classified as Specific Low Back Pain and Non-specific Low Back Pain. Of all the LBP patients 10% are attributed to Specific and 90% are attributed to NonSpecific Low Back Pain (NSLBP) [3]. Specific LBP are those back pains which have specific etiology causes like Sponylolisthesis, Spondylosis, Ankylosing Spondylitis, Prolapsed disc etc.",
"title": ""
},
{
"docid": "3f206b161dc55aea204dda594127bf3d",
"text": "A key challenge in fine-grained recognition is how to find and represent discriminative local regions. Recent attention models are capable of learning discriminative region localizers only from category labels with reinforcement learning. However, not utilizing any explicit part information, they are not able to accurately find multiple distinctive regions. In this work, we introduce an attribute-guided attention localization scheme where the local region localizers are learned under the guidance of part attribute descriptions. By designing a novel reward strategy, we are able to learn to locate regions that are spatially and semantically distinctive with reinforcement learning algorithm. The attribute labeling requirement of the scheme is more amenable than the accurate part location annotation required by traditional part-based fine-grained recognition methods. Experimental results on the CUB-200-2011 dataset [1] demonstrate the superiority of the proposed scheme on both fine-grained recognition and attribute recognition.",
"title": ""
},
{
"docid": "18aa98d42150adb110632b20118909e4",
"text": "In recent times, 60 GHz millimeter wave systems have become increasingly attractive due to the escalating demand for multi-Gb/s wireless communication. Recent works have demonstrated the ability to realize a 60 GHz transceiver by means of a cost-effective CMOS process. This paper aims to give the most up-to-date status of the 60 GHz wireless transceiver development, with an emphasis on realizing low power consumption and small form factor that is applicable for mobile terminals. To make 60 GHz wireless more robust and ease of use in various applications, broadband propagation and interference characteristics are measured at the 60 GHz band in an application-oriented office environment, considering the concurrent use of multiple frequency channels and multiple terminals. Moreover, this paper gives an overview of future millimeter wave systems.",
"title": ""
},
{
"docid": "6508fc8732fd22fde8c8ac180a2e19e3",
"text": "The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task.",
"title": ""
},
{
"docid": "50b316a52bdfacd5fe319818d0b22962",
"text": "Artificial neural networks (ANN) are used to predict 1) degree program completion, 2) earned hours, and 3) GPA for college students. The feed forward neural net architecture is used with the back propagation learning function and the logistic activation function. The database used for training and validation consisted of 17,476 student transcripts from Fall 1983 through Fall 1994. It is shown that age, gender, race, ACT scores, and reading level are significant in predicting the degree program completion, earned hours, and GPA. Of the three, earned hours proved the most difficult to predict.",
"title": ""
},
{
"docid": "3cd19e73aade3e99fff4b213afd3c678",
"text": "We describe the dialogue model for the virtual humans developed at the Institute for Creative Technologies at the University of Southern California. The dialogue model contains a rich set of information state and dialogue moves to allow a wide range of behaviour in multimodal, multiparty interaction. We extend this model to enable non-team negotiation, using ideas from social science literature on negotiation and implemented strategies and dialogue moves for this area. We present a virtual human doctor who uses this model to engage in multimodal negotiation dialogue with people from other organisations. The doctor is part of the SASO-ST system, used for training for non-team interactions.",
"title": ""
},
{
"docid": "132bb5b7024de19f4160664edca4b4f5",
"text": "Generic Competitive Strategy: Basically, strategy is about two things: deciding where you want your business to go, and deciding how to get there. A more complete definition is based on competitive advantage, the object of most corporate strategy: “Competitive advantage grows out of value a firm is able to create for its buyers that exceeds the firm's cost of creating it. Value is what buyers are willing to pay, and superior value stems from offering lower prices than competitors for equivalent benefits or providing unique benefits that more than offset a higher price. There are two basic types of competitive advantage: cost leadership and differentiation.” Michael Porter Competitive strategies involve taking offensive or defensive actions to create a defendable position in the industry. Generic strategies can help the organization to cope with the five competitive forces in the industry and do better than other organization in the industry. Generic strategies include ‘overall cost leadership’, ‘differentiation’, and ‘focus’. Generally firms pursue only one of the above generic strategies. However some firms make an effort to pursue only one of the above generic strategies. However some firms make an effort to pursue more than one strategy at a time by bringing out a differentiated product at low cost. Though approaches like these are successful in short term, they are hardly sustainable in the long term. If firms try to maintain cost leadership as well as differentiation at the same time, they may fail to achieve either.",
"title": ""
},
{
"docid": "45ef23f40fd4241b58b8cb0810695785",
"text": "Two-wheeled wheelchairs are considered highly nonlinear and complex systems. The systems mimic a double-inverted pendulum scenario and will provide better maneuverability in confined spaces and also to reach higher level of height for pick and place tasks. The challenge resides in modeling and control of the two-wheeled wheelchair to perform comparably to a normal four-wheeled wheelchair. Most common modeling techniques have been accomplished by researchers utilizing the basic Newton's Laws of motion and some have used 3D tools to model the system where the models are much more theoretical and quite far from the practical implementation. This article is aimed at closing the gap between the conventional mathematical modeling approaches where the integrated 3D modeling approach with validation on the actual hardware implementation was conducted. To achieve this, both nonlinear and a linearized model in terms of state space model were obtained from the mathematical model of the system for analysis and, thereafter, a 3D virtual prototype of the wheelchair was developed, simulated, and analyzed. This has increased the confidence level for the proposed platform and facilitated the actual hardware implementation of the two-wheeled wheelchair. Results show that the prototype developed and tested has successfully worked within the specific requirements established.",
"title": ""
},
{
"docid": "f7b8956748e8c19468490f35ed764e4e",
"text": "We show how the database community’s notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data-reduction tool; networking approaches, however, have focused on application specific solutions, whereas our innetwork aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and",
"title": ""
}
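An illustrative sketch of the partial-state idea behind in-network aggregation of a decomposable aggregate such as AVG, as described in the passage above: each node merges its children's (sum, count) pairs and forwards a single partial state toward the root. The tree topology and sensor readings are invented.

```python
# Illustrative sketch of in-network AVG aggregation with partial-state records (sum, count).
# The routing tree and readings are made up; this is not the actual sensor-network code.

def init_state(reading):
    return (reading, 1)                      # partial state for AVG: (sum, count)

def merge(a, b):
    return (a[0] + b[0], a[1] + b[1])        # merge two partial states

def evaluate(state):
    s, c = state
    return s / c                             # final AVG computed once, at the root

# parent -> children in the routing tree; per-node readings (e.g. temperatures)
children = {"root": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}
readings = {"root": 20.0, "a": 22.5, "b": 19.0, "c": 23.5, "d": 21.0}

def aggregate(node):
    state = init_state(readings[node])
    for child in children[node]:
        state = merge(state, aggregate(child))   # each child sends only its partial state
    return state

print("network-wide AVG:", evaluate(aggregate("root")))
```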
] |
scidocsrr
|
4db247936928f32d3ea3ade0f9e8e8a9
|
Indonesian News Classification using Support Vector Machine
|
[
{
"docid": "c698f7d6b487cc7c87d7ff215d7f12b2",
"text": "This paper reports a controlled study with statistical signi cance tests on ve text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classi er, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a Naive Bayes (NB) classier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF signi cantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are su ciently common (over 300 instances).",
"title": ""
},
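A toy, hedged illustration of the kind of comparison the passage above reports: training an SVM, kNN and Naive Bayes classifier on synthetic data with a skewed class distribution and scoring with macro-averaged F1 so rare categories count equally. It is not a reproduction of the study's corpora or settings.

```python
# Toy comparison of classifiers under a skewed class distribution (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           n_classes=3, weights=[0.9, 0.07, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("SVM", LinearSVC()), ("kNN", KNeighborsClassifier(5)), ("NB", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    # macro-F1 weights rare categories equally, exposing weakness on low-frequency classes
    print(name, round(f1_score(y_te, clf.predict(X_te), average="macro"), 3))
```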
{
"docid": "52a5f4c15c1992602b8fe21270582cc6",
"text": "This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO’s computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On realworld sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.",
"title": ""
}
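A hedged sketch of the analytic two-multiplier step at the heart of SMO, following the usual textbook formulation: box bounds L and H, curvature eta along the constraint, a clipped update for the second multiplier, and the first recovered from the equality constraint. The numeric inputs are made up.

```python
# Illustrative single SMO step: analytically optimize two Lagrange multipliers
# while keeping sum_i alpha_i * y_i fixed (values below are made up).
import numpy as np

def smo_pair_step(a1, a2, y1, y2, E1, E2, K11, K12, K22, C):
    # Bounds L, H keep both multipliers inside the box [0, C]
    if y1 != y2:
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    eta = K11 + K22 - 2.0 * K12           # second derivative of the objective along the constraint
    if eta <= 0 or L == H:
        return a1, a2                     # skip degenerate pairs in this simple sketch
    a2_new = np.clip(a2 + y2 * (E1 - E2) / eta, L, H)
    a1_new = a1 + y1 * y2 * (a2 - a2_new)  # restore the equality constraint
    return a1_new, a2_new

print(smo_pair_step(a1=0.2, a2=0.5, y1=1, y2=-1, E1=0.3, E2=-0.1,
                    K11=1.0, K12=0.2, K22=1.0, C=1.0))
```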
] |
[
{
"docid": "6aa717b48ddfc1ba8512e19f577a3b76",
"text": "Human activity information can be used in a lot of applications such as fitness monitoring. In this paper an activity classification wearable device system is designed which can provide the activity information of the users via a three-axis kinematic sensor. The features in time domain and frequency domain of acceleration data are extracted, and Decision Tree algorithm is applied. The training module is performed once off line to generate the classification model, while the classification can be executed real time on the STM32L low-power microcontroller. The wearable device was worn in watch style in experiment and it offered activity information in acceptable accuracy.",
"title": ""
},
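An illustrative sketch, with invented signals and labels, of the pipeline the passage above outlines: extract time-domain and frequency-domain features from 3-axis acceleration windows and train a decision tree.

```python
# Synthetic accelerometer windows -> time/frequency features -> decision tree (illustrative).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

def make_window(active):
    """Synthetic 2 s, 3-axis window: higher dominant frequency when 'active'."""
    t = np.linspace(0, 2, 100)
    freq = 2.5 if active else 0.3
    return np.stack([np.sin(2 * np.pi * freq * t + phase) + 0.1 * rng.standard_normal(100)
                     for phase in (0.0, 1.0, 2.0)])

def features(win):
    fft_mag = np.abs(np.fft.rfft(win, axis=1))[:, 1:]          # drop the DC component
    return np.concatenate([win.mean(axis=1),                   # time domain: per-axis mean
                           win.std(axis=1),                    # time domain: per-axis std
                           fft_mag.max(axis=1)])               # frequency domain: peak magnitude

labels = np.array([i % 2 for i in range(200)])                 # 1 = active, 0 = resting (toy)
X = np.array([features(make_window(bool(label))) for label in labels])

clf = DecisionTreeClassifier(max_depth=3).fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```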
{
"docid": "8f0da69d48c3d5098018b2e5046b6e8e",
"text": "Halogenated aliphatic compounds have many technical uses, but substances within this group are also ubiquitous environmental pollutants that can affect the ozone layer and contribute to global warming. The establishment of quantitative structure-property relationships is of interest not only to fill in gaps in the available database but also to validate experimental data already acquired. The three-dimensional structures of 240 compounds were modeled with molecular mechanics prior to the generation of empirical descriptors. Two bilinear projection methods, principal component analysis (PCA) and partial-least-squares regression (PLSR), were used to identify outliers. PLSR was subsequently used to build a multivariate calibration model by extracting the latent variables that describe most of the covariation between the molecular structure and the boiling point. Boiling points were also estimated with an extension of the group contribution method of Stein and Brown.",
"title": ""
},
{
"docid": "321ae36452b9aac47c833db017116bb5",
"text": "Near Field Communication, NFCis one of the latest short range wireless communication technologies. NFC provides safe communication between electronic gadgets. NFC-enabled devices can just be pointed or touched by the users of their devices to other NFC-enabled devices to communicate with them. With NFC technology, communication is established when an NFC-compatible device is brought within a few centimetres of another i.e. around 20 cm theoretically (4cm is practical). The immense benefit of the short transmission range is that it prevents eavesdropping on NFC-enabled dealings. NFC technology enables several innovative usage scenarios for mobile devices. NFC technology works on the basis of RFID technology which uses magnetic field induction to commence communication between electronic devices in close vicinity. NFC operates at 13.56MHz and has 424kbps maximum data transfer rate. NFC is complementary to Bluetooth and 802.11 with their long distance capabilities. In card emulation mode NFC devices can offer contactless/wireless smart card standard. This technology enables smart phones to replace traditional plastic cards for the purpose of ticketing, payment, etc. Sharing (share files between phones), service discovery i.e. get information by touching smart phones etc. are other possible applications of NFC using smart phones. This paper provides an overview of NFC technology in a detailed manner including working principle, transmission details, protocols and standards, application scenarios, future market, security standards and vendor’s chipsets which are available for this standard. This comprehensive survey should serve as a useful guide for students, researchers and academicians who are interested in NFC Technology and its applications [1].",
"title": ""
},
{
"docid": "32f96ae1a99ed2ade25df0792d8d3779",
"text": "The success of software development depends on the proper estimation of the effort required to develop the software. Project managers require a reliable approach for software effort estimation. It is especially important during the early stages of the software development life cycle. Accurate software effort estimation is a major concern in software industries. Stochastic Gradient Boosting (SGB) is one of the machine learning techniques that helps in getting improved estimated values. SGB is used for improving the accuracy of models built on decision trees. In this paper, the main goal is to estimate the effort required to develop various software projects using the class point approach. Then, optimization of the effort parameters is achieved using the SGB technique to obtain better prediction accuracy. Further- more, performance comparisons of the models obtained using the SGB technique with the Multi Layer Perceptron and the Radial Basis Function Network are presented in order to highlight the performance achieved by each method.",
"title": ""
},
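A hedged sketch of stochastic gradient boosting applied to effort estimation with scikit-learn; the "class point"-style features and effort values are synthetic, and the hyperparameters are illustrative rather than the paper's.

```python
# Illustrative sketch of stochastic gradient boosting for effort estimation.
# The class-point-like features and effort values below are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_projects = 120
X = rng.integers(1, 30, size=(n_projects, 3)).astype(float)   # classes of low/avg/high complexity (assumed)
effort = 4.0 * X[:, 0] + 7.0 * X[:, 1] + 12.0 * X[:, 2] + rng.normal(0, 10, n_projects)

# subsample < 1.0 is what makes the boosting "stochastic"
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3, subsample=0.7, random_state=0)
scores = cross_val_score(model, X, effort, cv=5, scoring="neg_mean_absolute_error")
print("MAE per fold:", -scores.round(1))
```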
{
"docid": "b42b17131236abc1ee3066905025aa8c",
"text": "The planet Mars, while cold and arid today, once possessed a warm and wet climate, as evidenced by extensive fluvial features observable on its surface. It is believed that the warm climate of the primitive Mars was created by a strong greenhouse effect caused by a thick CO2 atmosphere. Mars lost its warm climate when most of the available volatile CO2 was fixed into the form of carbonate rock due to the action of cycling water. It is believed, however, that sufficient CO2 to form a 300 to 600 mb atmosphere may still exist in volatile form, either adsorbed into the regolith or frozen out at the south pole. This CO2 may be released by planetary warming, and as the CO2 atmosphere thickens, positive feedback is produced which can accelerate the warming trend. Thus it is conceivable, that by taking advantage of the positive feedback inherent in Mars' atmosphere/regolith CO2 system, that engineering efforts can produce drastic changes in climate and pressure on a planetary scale. In this paper we propose a mathematical model of the Martian CO2 system, and use it to produce analysis which clarifies the potential of positive feedback to accelerate planetary engineering efforts. It is shown that by taking advantage of the feedback, the requirements for planetary engineering can be reduced by about 2 orders of magnitude relative to previous estimates. We examine the potential of various schemes for producing the initial warming to drive the process, including the stationing of orbiting mirrors, the importation of natural volatiles with high greenhouse capacity from the outer solar system, and the production of artificial halocarbon greenhouse gases on the Martian surface through in-situ industry. If the orbital mirror scheme is adopted, mirrors with dimension on the order or 100 km radius are required to vaporize the CO2 in the south polar cap. If manufactured of solar sail like material, such mirrors would have a mass on the order of 200,000 tonnes. If manufactured in space out of asteroidal or Martian moon material, about 120 MWe-years of energy would be needed to produce the required aluminum. This amount of power can be provided by near-term multimegawatt nuclear power units, such as the 5 MWe modules now under consideration for NEP spacecraft. Orbital transfer of very massive bodies from the outer solar system can be accomplished using nuclear thermal rocket engines using the asteroid's volatile material as propellant. Using major planets for gravity assists, the rocket ∆V required to move an outer solar system asteroid onto a collision trajectory with Mars can be as little as 300 m/s. If the asteroid is made of NH3, specific impulses of about 400 s can be attained, and as little as 10% of the asteroid will be required for propellant. Four 5000 MWt NTR engines would require a 10 year burn time to push a 10 billion tonne asteroid through a ∆V of 300 m/s. About 4 such objects would be sufficient to greenhouse Mars. Greenhousing Mars via the manufacture of halocarbon gases on the planet's surface may well be the most practical option. Total surface power requirements to drive planetary warming using this method are calculated and found to be on the order of 1000 MWe, and the required times scale for climate and atmosphere modification is on the order of 50 years. It is concluded that a drastic modification of Martian conditions can be achieved using 21st century technology. The Mars so produced will closely resemble the conditions existing on the primitive Mars. 
Humans operating on the surface of such a Mars would require breathing gear, but pressure suits would be unnecessary. With outside atmospheric pressures raised, it will be possible to create large dwelling areas by means of very large inflatable structures. Average temperatures could be above the freezing point of water for significant regions during portions of the year, enabling the growth of plant life in the open. The spread of plants could produce enough oxygen to make Mars habitable for animals in several millennia. More rapid oxygenation would require engineering efforts supported by multi-terawatt power sources. It is speculated that the desire to speed the terraforming of Mars will be a driver for developing such technologies, which in turn will define a leap in human power over nature as dramatic as that which accompanied the creation of post-Renaissance industrial civilization.",
"title": ""
},
{
"docid": "41b1a0c362c7bdb77b7dbcc20adcd532",
"text": "Augmented reality involves the use of models and their associated renderings to supplement information in a real scene. In order for this information to be relevant or meaningful, the models must be positioned and displayed in such a way that they align with their corresponding real objects. For practical reasons this alignment cannot be known a priori, and cannot be hard-wired into a system. Instead a simple, reliable alignment or calibration process is performed so that computer models can be accurately registered with their real-life counterparts. We describe the design and implementation of such a process and we show how it can be used to create convincing interactions between real and virtual objects.",
"title": ""
},
{
"docid": "c64d2ed872227ead9069a52428bf1d7e",
"text": "A file provenance system supports the automatic collection and management of provenance i.e. the complete processing history of a data object. File system level provenance provides functionality unavailable in the existing provenance systems. In this paper, we discuss the design objectives for a flexible and efficient file provenance system and then propose the design of such a system, called FiPS. We design FiPS as a thin stackable file system for capturing provenance in a portable manner. FiPS can capture provenance at various degrees of granularity, can transform provenance records into secure information, and can direct the resulting provenance data to various persistent storage systems.",
"title": ""
},
{
"docid": "cb5f866f2977c7c0d66d75bea1094375",
"text": "A single-switch nonisolated dc/dc converter for a stand-alone photovoltaic (PV)-battery-powered pump system is proposed in this paper. The converter is formed by combining a buck converter with a buck-boost converter. This integration also resulted in reduced repeated power processing, hence improving the conversion efficiency. With only a single transistor, the converter is able to perform three tasks simultaneously, namely, maximum-power-point tracking (MPPT), battery charging, and driving the pump at constant flow rate. To achieve these control objectives, the two inductors operate in different modes such that variable switching frequency control and duty cycle control can be used to manage MPPT and output voltage regulation, respectively. The battery in the converter provides a more steady dc-link voltage as compared to that of a conventional single-stage converter and hence mitigates the high voltage stress problem. Experimental results of a 14-W laboratory prototype converter with a maximum efficiency of 92% confirmed the performance of the proposed converter when used in a PV-battery pump system.",
"title": ""
},
{
"docid": "2400a51363b36f97c12d9aaa17d3badc",
"text": "If you really want to be smarter, reading can be one of the lots ways to evoke and realize. Many people who like reading will have more knowledge and experiences. Reading can be a way to gain information from economics, politics, science, fiction, literature, religion, and many others. As one of the part of book categories, understanding the digital economy data tools and research always becomes the most wanted book. Many people are absolutely searching for this book. It means that many love to read this kind of book.",
"title": ""
},
{
"docid": "29649adbb39f182af1d84aab476ff8bf",
"text": "Users of the online shopping site Amazon are encouraged to post reviews of the products that they purchase. Little attempt is made by Amazon to restrict or limit the content of these reviews. The number of reviews for different products varies, but the reviews provide accessible and plentiful data for relatively easy analysis for a range of applications. This paper seeks to apply and extend the current work in the field of natural language processing and sentiment analysis to data retrieved from Amazon. Naive Bayes and decision list classifiers are used to tag a given review as positive or negative. The number of stars a user gives a product is used as training data to perform supervised machine learning. A corpus contains 50,000 product review from 15 products serves as the dataset of study. Top selling and reviewed books on the site are the primary focus of the experiments, but useful features of them that aid in accurate classification are compared to those most useful in classification of other media products. The features, such as bag-of-words and bigrams, are compared to one another in their effectiveness in correctly tagging reviews. Errors in classification and general difficulties regarding the selection of features are analyzed and discussed.",
"title": ""
},
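A minimal sketch of the supervision scheme the passage above describes: star ratings converted to positive/negative labels, bag-of-words versus bigram features, and a multinomial Naive Bayes classifier. The reviews here are invented.

```python
# Illustrative sketch: star ratings as weak labels, unigram vs. bigram features, Naive Bayes.
# The reviews are invented; the study used real Amazon product reviews.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = [("Loved this book, a gripping read", 5),
           ("Terrible plot and flat characters", 1),
           ("Pretty good overall, would recommend", 4),
           ("Waste of money, not worth reading", 2)]
texts = [t for t, _ in reviews]
labels = [1 if stars >= 4 else 0 for _, stars in reviews]    # >= 4 stars -> positive

for ngrams in [(1, 1), (1, 2)]:                              # unigrams vs. unigrams+bigrams
    clf = make_pipeline(CountVectorizer(ngram_range=ngrams), MultinomialNB())
    clf.fit(texts, labels)
    print(ngrams, clf.predict(["not worth the money", "a gripping story"]))
```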
{
"docid": "bbf2497352c90e52ed5e47ed0ef2dea1",
"text": "In this paper, we present a high-efficiency nonplanar Yagi-Uda antenna array consisting of six wire bonds for on-chip radio applications. The measured and simulated results are presented for the radiation pattern, gain, and radiation efficiency of the antenna at 40 GHz. The array has been characterized with two different ground-plane sizes that are 0.32¿2 and 1.4¿2. The arrays achieved the measured gains of 6.0 and 8.1 dBi for the small and large ground-plane variations, respectively, after numerically removing the loss from the feed line. The efficiencies of the antennas were extracted using the simulated directivity values. The efficiency of the large ground antenna was 82.5%, whereas that of the small ground antenna was 72.0%. Single wire-bond antennas are also characterized when integrated directly on a silicon-germanium complementary-metal-oxide-semiconductor transceiver chip with -1.4-dBi gain and 51% efficiency. This paper builds on measurements of a single wire-bond antenna over ground-plane sizes ranging from 0.5¿2 to 2.1¿2 , which achieved gains of 0.4-3.6 dBi and efficiencies of 51%-84%.",
"title": ""
},
{
"docid": "4460d7bba3565b99a9da65bcc8112a12",
"text": "Urban search and rescue missions raise special requirements on robotic systems. Small aerial systems provide essential support to human task forces in situation assessment and surveillance. As external infrastructure for navigation and communication is usually not available, robotic systems must be able to operate autonomously. A limited payload of small aerial systems poses a great challenge to the system design. The optimal tradeoff between flight performance, sensors, and computing resources has to be found. Communication to external computers cannot be guaranteed; therefore, all processing and decision making has to be done on board. In this article, we present an unmanned aircraft system design fulfilling these requirements. The components of our system are structured into groups to encapsulate their functionality and interfaces. We use both laser and stereo vision odometry to enable seamless indoor and outdoor navigation. The odometry is fused with an inertial measurement unit in an extended Kalman filter. Navigation is supported by a module that recognizes known objects in the environment. A distributed computation approach is adopted to address the computational requirements of the used algorithms. The capabilities of the system are validated in flight experiments, using a quadrotor.",
"title": ""
},
{
"docid": "fc5a04c795fbfdd2b6b53836c9710e4d",
"text": "In the last few years, deep convolutional neural networks have become ubiquitous in computer vision, achieving state-of-the-art results on problems like object detection, semantic segmentation, and image captioning. However, they have not yet been widely investigated in the document analysis community. In this paper, we present a word spotting system based on convolutional neural networks. We train a network to extract a powerful image representation, which we then embed into a word embedding space. This allows us to perform word spotting using both query-by-string and query-by-example in a variety of word embedding spaces, both learned and handcrafted, for verbatim as well as semantic word spotting. Our novel approach is versatile and the evaluation shows that it outperforms the previous state-of-the-art for word spotting on standard datasets.",
"title": ""
},
{
"docid": "7e788eb9ff8fd10582aa94a89edb10a2",
"text": "This paper recasts the problem of feature location in source code as a decision-making problem in the presence of uncertainty. The solution to the problem is formulated as a combination of the opinions of different experts. The experts in this work are two existing techniques for feature location: a scenario-based probabilistic ranking of events and an information-retrieval-based technique that uses latent semantic indexing. The combination of these two experts is empirically evaluated through several case studies, which use the source code of the Mozilla Web browser and the Eclipse integrated development environment. The results show that the combination of experts significantly improves the effectiveness of feature location as compared to each of the experts used independently",
"title": ""
},
{
"docid": "2d774ec62cdac08997cb8b86e73fe015",
"text": "This paper focuses on modeling resolving and simulations of the inverse kinematics of an anthropomorphic redundant robotic structure with seven degrees of freedom and a workspace similar to human arm. Also the kinematical model and the kinematics equations of the robotic arm are presented. A method of resolving the redundancy of seven degrees of freedom robotic arm is presented using Fuzzy Logic toolbox from MATLAB®.",
"title": ""
},
{
"docid": "b07f47274cc8d7a0a7afdcf3d25486aa",
"text": "Fifth-generation (5G) networks will be the first generation to benefit from location information that is sufficiently precise to be leveraged in wireless network design and optimization. We argue that location information can aid in addressing several of the key challenges in 5G, complementary to existing and planned technological developments. These challenges include an increase in traffic and number of devices, robustness for mission-critical services, and a reduction in total energy consumption and latency. This article gives a broad overview of the growing research area of location-aware communications across different layers of the protocol stack. We highlight several promising trends, tradeoffs, and pitfalls.",
"title": ""
},
{
"docid": "fb7961117dae98e770e0fe84c33673b9",
"text": "Named-Entity Recognition (NER) aims at identifying the fragments of a given text that mention a given entity of interest. This manuscript presents our Minimal named-Entity Recognizer (MER), designed with flexibility, autonomy and efficiency in mind. To annotate a given text, MER only requires a lexicon (text file) with the list of terms representing the entities of interest; and a GNU Bash shell grep and awk tools. MER was deployed in a cloud infrastructure using multiple Virtual Machines to work as an annotation server and participate in the Technical Interoperability and Performance of annotation Servers (TIPS) task of BioCreative V.5. Preliminary results show that our solution processed each document (text retrieval and annotation) in less than 3 seconds on average without using any type of cache. MER is publicly available in a GitHub repository (https://github.com/lasigeBioTM/MER) and through a RESTful Web service (http://labs.fc.ul.pt/mer/).",
"title": ""
},
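The tool described above is a Bash pipeline built on grep and awk; the following is only a hedged sketch, in Python, of the same lexicon-matching idea (case-insensitive whole-word matches, longest terms first, reporting character offsets). The lexicon and sentence are made up.

```python
# Minimal sketch of lexicon-based entity matching in the spirit of MER
# (the real tool is a shell pipeline over grep and awk; this is only an illustration).
import re

lexicon = ["aspirin", "acetylsalicylic acid", "ibuprofen"]   # made-up term list
text = "The patient took acetylsalicylic acid (aspirin) but not ibuprofen."

# Longest terms first so multi-word entities win over their sub-terms.
pattern = re.compile(
    r"\b(" + "|".join(re.escape(t) for t in sorted(lexicon, key=len, reverse=True)) + r")\b",
    re.IGNORECASE)

for m in pattern.finditer(text):
    print(m.start(), m.end(), m.group(0))   # character offsets and matched entity
```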
{
"docid": "fbd413241603459451b79d0ab9580932",
"text": "Document-level sentiment classification is a fundamental problem which aims to predict a user’s overall sentiment about a product in a document. Several methods have been proposed to tackle the problem whereas most of them fail to consider the influence of users who express the sentiment and products which are evaluated. To address the issue, we propose a deep memory network for document-level sentiment classification which could capture the user and product information at the same time. To prove the effectiveness of our algorithm, we conduct experiments on IMDB and Yelp datasets and the results indicate that our model can achieve better performance than several existing methods.",
"title": ""
},
{
"docid": "a545496b8cd0a8083830ece25d0f6634",
"text": "Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used or minimize the number or total size of accepted items. We consider off-line and on-line variants of the problems. For the off-line variant, we require that there be an ordering of the bins, so that no item in a later bin fits in an earlier bin. We find the approximation ratios of two natural approximation algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1 k on the item sizes, for some integer k. ∗The work of Boyar, Favrholdt, Kohrt, and Larsen was supported in part by the Danish Natural Science Research Council (SNF). The work of Epstein was supported in part by the Israel Science Foundation (ISF). A preliminary version of this paper appeared in the proceedings of the Fifteenth International Symposium on Fundamentals of Computation Theory, 2005.",
"title": ""
},
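An illustrative implementation of First-Fit with items presented in increasing versus decreasing order, the two orderings analyzed in the passage above; the item sizes and unit bin capacity are arbitrary.

```python
# Illustrative First-Fit packing with items presented in increasing vs. decreasing size order.
# Item sizes are arbitrary; bins have capacity 1.

def first_fit(items, capacity=1.0):
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-9:   # place item in the first bin it fits
                b.append(size)
                break
        else:
            bins.append([size])                    # otherwise open a new bin
    return bins

items = [0.6, 0.5, 0.4, 0.3, 0.2, 0.2]
for name, order in [("First-Fit-Increasing", sorted(items)),
                    ("First-Fit-Decreasing", sorted(items, reverse=True))]:
    print(name, "->", len(first_fit(order)), "bins")
```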
{
"docid": "06d9ad4c3c4f4e07c2173cd1ae5cf608",
"text": "The standard Local Binary Pattern (LBP) is considered among the most computationally efficient remote sensing (RS) image descriptors in the framework of large-scale content based RS image retrieval (CBIR). However, it has limited discrimination capability for characterizing high dimensional RS images with complex semantic content. There are several LBP variants introduced in computer vision that can be extended to RS CBIR to efficiently overcome the above-mentioned problem. To this end, this paper presents a comparative study in order to analyze and compare advanced LBP variants in RS CBIR domain. We initially introduce a categorization of the LBP variants based on the specific CBIR problems in RS, and analyze the most recent methodological developments associated to each category. All the considered LBP variants are introduced for the first time in the framework of RS image retrieval problems, and have been experimentally compared in terms of their: 1) discrimination capability to model high-level semantic information present in RS images (and thus the retrieval performance); and 2) computational complexities associated to retrieval and feature extraction time.",
"title": ""
}
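A sketch of the standard LBP code that the passage above takes as its baseline, computed over an 8-neighbourhood with NumPy; no RS-specific LBP variant is implemented here and the input image is random noise.

```python
# Sketch of the standard 8-neighbour Local Binary Pattern on a grayscale image.
import numpy as np

def lbp_standard(img):
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.int32)
    center = img[1:-1, 1:-1]
    # 8 neighbours in a fixed clockwise order, each contributing one bit of the code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out += (neigh >= center).astype(np.int32) << bit
    return out

img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
codes = lbp_standard(img)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))   # 256-bin LBP descriptor
print(hist[:8])
```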
] |
scidocsrr
|
1c9bf3185e399658c2d21ea6dd76bcd7
|
When Should Software Firms Commercialize New Products via Freemium Business Models ?
|
[
{
"docid": "b4880ddb59730f465f585f3686d1d2b1",
"text": "The authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site and compare it with traditional marketing vehicles. Because social network sites record the electronic invitations sent out by existing members, outbound WOM may be precisely tracked. WOM, along with traditional marketing, can then be linked to the number of new members subsequently joining the site (signups). Due to the endogeneity among WOM, new signups, and traditional marketing activity, the authors employ a Vector Autoregression (VAR) modeling approach. Estimates from the VAR model show that word-ofmouth referrals have substantially longer carryover effects than traditional marketing actions. The long-run elasticity of signups with respect to WOM is estimated to be 0.53 (substantially larger than the average advertising elasticities reported in the literature) and the WOM elasticity is about 20 times higher than the elasticity for marketing events, and 30 times that of media appearances. Based on revenue from advertising impressions served to a new member, the monetary value of a WOM referral can be calculated; this yields an upper bound estimate for the financial incentives the firm might offer to stimulate word-of-mouth.",
"title": ""
},
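A hedged sketch of the modelling approach described above: fitting a small vector autoregression to daily series of WOM referrals, marketing activity and signups, then inspecting impulse responses. The data-generating process is invented, and the statsmodels impulse-response indexing convention is flagged as an assumption in the comments.

```python
# Illustrative VAR on synthetic daily series (WOM referrals, marketing, signups).
# The data-generating process is invented; it only mimics the modelling approach.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 300
wom = rng.poisson(50, n).astype(float)
marketing = rng.poisson(5, n).astype(float)
signups = np.zeros(n)
for t in range(1, n):
    # signups respond more strongly (and more persistently) to lagged WOM than to marketing
    signups[t] = 0.4 * signups[t - 1] + 0.8 * wom[t - 1] + 0.3 * marketing[t - 1] + rng.normal(0, 5)

df = pd.DataFrame({"wom": wom, "marketing": marketing, "signups": signups})
res = VAR(df).fit(maxlags=7, ic="aic")
irf = res.irf(14)   # impulse responses over two weeks
# Indexing assumed as irfs[period, response, impulse]: response of signups to a WOM shock.
print(irf.irfs[:, df.columns.get_loc("signups"), df.columns.get_loc("wom")][:5])
```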
{
"docid": "c7d629a83de44e17a134a785795e26d8",
"text": "How can firms profitably give away free products? This paper provides a novel answer and articulates tradeoffs in a space of information product design. We introduce a formal model of two-sided network externalities based in textbook economics—a mix of Katz & Shapiro network effects, price discrimination, and product differentiation. Externality-based complements, however, exploit a different mechanism than either tying or lock-in even as they help to explain many recent strategies such as those of firms selling operating systems, Internet browsers, games, music, and video. The model presented here argues for three simple but useful results. First, even in the absence of competition, a firm can rationally invest in a product it intends to give away into perpetuity. Second, we identify distinct markets for content providers and end consumers and show that either can be a candidate for a free good. Third, product coupling across markets can increase consumer welfare even as it increases firm profits. The model also generates testable hypotheses on the size and direction of network effects while offering insights to regulators seeking to apply antitrust law to network markets. ACKNOWLEDGMENTS: We are grateful to participants of the 1999 Workshop on Information Systems and Economics, the 2000 Association for Computing Machinery SIG E-Commerce, the 2000 International Conference on Information Systems, the 2002 Stanford Institute for Theoretical Economics (SITE) workshop on Internet Economics, the 2003 Insitut D’Economie Industrielle second conference on “The Economics of the Software and Internet Industries,” as well as numerous participants at university seminars. We wish to thank Tom Noe for helpful observations on oligopoly markets, Lones Smith, Kai-Uwe Kuhn, and Jovan Grahovac for corrections and model generalizations, Jeff MacKie-Mason for valuable feedback on model design and bundling, and Hal Varian for helpful comments on firm strategy and model implications. Frank Fisher provided helpful advice on and knowledge of the Microsoft trial. Jean Tirole provided useful suggestions and examples, particularly in regard to credit card markets. Paul Resnick proposed the descriptive term “internetwork” externality to describe two-sided network externalities. Tom Eisenmann provided useful feedback and examples. We also thank Robert Gazzale, Moti Levi, and Craig Newmark for their many helpful observations. This research has been supported by NSF Career Award #IIS 9876233. For an earlier version of the paper that also addresses bundling and competition, please see “Information Complements, Substitutes, and Strategic Product Design,” November 2000, http://ssrn.com/abstract=249585.",
"title": ""
}
] |
[
{
"docid": "c99b28aefb425a076dac01b2a0861087",
"text": "Theoretical, psychoanalytical constructs referring to the unconscious, the superego, and id, enjoy an autonomy within the I. As such, this study contemplates the discussion of these foreign entities that inhabit the interior of the I, producing an effect of foreignness. In the first section, I will develop a reflection on the state of foreignness of the unconscious. I will begin with an analogy used by Freud, which addresses the thesis of universality of consciousness with the psychoanalytical thesis of the subconscience within the I. Affirmation of consciousness in the other may be used analogously for affirm the idea of another inhabiting our own being. I shall continue, seeking to understand how the process of unconscious repression produces the effect of foreignness. The idea of a moral censor present in the entity of the superego constitutes the theme of the second section. The superego follows the principle of otherness in its constitution and in its effects on the I. Finally, a reflection on the dimension of otherness in the Id seems urgent to me, as with this concept, Freud radicalized in the idea of the foreign as the origin of the subject.",
"title": ""
},
{
"docid": "3a9d3a285c6828510e3c57d13b8648db",
"text": "Predicting system failures can be of great benefit to managers that get a better command over system performance. Data that systems generate in the form of logs is a valuable source of information to predict system reliability. As such, there is an increasing demand of tools to mine logs and provide accurate predictions. However, interpreting information in logs poses some challenges. This study discusses how to effectively mining sequences of logs and provide correct predictions. The approach integrates different machine learning techniques to control for data brittleness, provide accuracy of model selection and validation, and increase robustness of classification results. We apply the proposed approach to log sequences of 25 different applications of a software system for telemetry and performance of cars. On this system, we discuss the ability of three well-known support vector machines - multilayer perceptron, radial basis function and linear kernels - to fit and predict defective log sequences. Our results show that a good analysis strategy provides stable, accurate predictions. Such strategy must at least require high fitting ability of models used for prediction. We demonstrate that such models give excellent predictions both on individual applications - e.g., 1 % false positive rate, 94 % true positive rate, and 95 % precision - and across system applications - on average, 9 % false positive rate, 78 % true positive rate, and 95 % precision. We also show that these results are similarly achieved for different degree of sequence defectiveness. To show how good are our results, we compare them with recent studies in system log analysis. We finally provide some recommendations that we draw reflecting on our study.",
"title": ""
},
{
"docid": "42134f9aa4474d66882287f2b6f26b3d",
"text": "We have developed parallel optical interconnect technologies designed to support terabit/s-class chip-to-chip data transfer through polymer waveguides integrated in printed circuit boards (PCBs). The board-level links represent a highly integrated packaging approach based on a novel parallel optical module, or Optomodule, with 16 transmitter and 16 receiver channels. Optomodules with 16 Tx+16 Rx channels have been assembled and fully characterized, with transmitters operating at data rates up to 20 Gb/s for a 27-1 PRBS pattern. Receivers characterized as fiber-coupled 16-channel transmitter-to-receiver links operated error-free up to 15 Gb/s, providing a 240 Gb/s aggregate bidirectional data rate. The low-profile Optomodule is directly surface mounted to a circuit board using convention ball grid array (BGA) solder process. Optical coupling to a dense array of polymer waveguides fabricated on the PCB is facilitated by turning mirrors and lens arrays integrated into the optical PCB. A complete optical link between two Optomodules interconnected through 32 polymer waveguides has been demonstrated with each unidirectional link operating at 10 Gb/s achieving a 160 Gb/s bidirectional data rate. The full module-to-module link provides the fastest, widest, and most integrated multimode optical bus demonstrated to date.",
"title": ""
},
{
"docid": "db2702205e1b5a6368bcb549d20e1191",
"text": "As a result of good modeling capabilities, neural networks have been used extensively for a number of chemical engineering applications such as sensor data analysis, fault detection and nonlinear process identi®cation. However, only in recent years, with the upsurge in the research on nonlinear control, has its use in process control been widespread. This paper intend to provide an extensive review of the various applications utilizing neural networks for chemical process control, both in simulation and online implementation. We have categorized the review under three major control schemes; predictive control, inverse-model-based control, and adaptive control methods, respectively. In each of these categories, we summarize the major applications as well as the objectives and results of the work. The review reveals the tremendous prospect of using neural networks in process control. It also shows the multilayered neural network as the most popular network for such process control applications and also shows the lack of actual successful online applications at the present time. q 1998 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "72af95617ff081cf773674ed5aaf7a07",
"text": "Reputation systems are crucial for distributed applications in which users have to be made accountable for their actions, such as ecommerce websites. However, existing systems often disclose the identity of the raters, which might deter honest users from submitting reviews out of fear of retaliation from the ratees. While many privacy-preserving reputation systems have been proposed, we observe that none of them is simultaneously truly decentralized, trustless, and suitable for real world usage in, for example, e-commerce applications. In this paper, we present a blockchain based decentralized privacy-preserving reputation system. We demonstrate that our system provides correctness and security while eliminating the need for users to trust any third parties or even fellow users.",
"title": ""
},
{
"docid": "ede8a7a2ba75200dce83e17609ec4b5b",
"text": "We present a complimentary objective for training recurrent neural networks (RNN) with gating units that helps with regularization and interpretability of the trained model. Attention-based RNN models have shown success in many difficult sequence to sequence classification problems with long and short term dependencies, however these models are prone to overfitting. In this paper, we describe how to regularize these models through an L1 penalty on the activation of the gating units, and show that this technique reduces overfitting on a variety of tasks while also providing to us a human-interpretable visualization of the inputs used by the network. These tasks include sentiment analysis, paraphrase recognition, and question answering.",
"title": ""
},
{
"docid": "92bb2deffacbe9699ca2e1b6bad27e83",
"text": "Horizontal DNA transfer (HDT) is a pervasive mechanism of diversification in many microbial species, but its primary evolutionary role remains controversial. Much recent research has emphasised the adaptive benefit of acquiring novel DNA, but here we argue instead that intragenomic conflict provides a coherent framework for understanding the evolutionary origins of HDT. To test this hypothesis, we developed a mathematical model of a clonally descended bacterial population undergoing HDT through transmission of mobile genetic elements (MGEs) and genetic transformation. Including the known bias of transformation toward the acquisition of shorter alleles into the model suggested it could be an effective means of counteracting the spread of MGEs. Both constitutive and transient competence for transformation were found to provide an effective defence against parasitic MGEs; transient competence could also be effective at permitting the selective spread of MGEs conferring a benefit on their host bacterium. The coordination of transient competence with cell-cell killing, observed in multiple species, was found to result in synergistic blocking of MGE transmission through releasing genomic DNA for homologous recombination while simultaneously reducing horizontal MGE spread by lowering the local cell density. To evaluate the feasibility of the functions suggested by the modelling analysis, we analysed genomic data from longitudinal sampling of individuals carrying Streptococcus pneumoniae. This revealed the frequent within-host coexistence of clonally descended cells that differed in their MGE infection status, a necessary condition for the proposed mechanism to operate. Additionally, we found multiple examples of MGEs inhibiting transformation through integrative disruption of genes encoding the competence machinery across many species, providing evidence of an ongoing \"arms race.\" Reduced rates of transformation have also been observed in cells infected by MGEs that reduce the concentration of extracellular DNA through secretion of DNases. Simulations predicted that either mechanism of limiting transformation would benefit individual MGEs, but also that this tactic's effectiveness was limited by competition with other MGEs coinfecting the same cell. A further observed behaviour we hypothesised to reduce elimination by transformation was MGE activation when cells become competent. Our model predicted that this response was effective at counteracting transformation independently of competing MGEs. Therefore, this framework is able to explain both common properties of MGEs, and the seemingly paradoxical bacterial behaviours of transformation and cell-cell killing within clonally related populations, as the consequences of intragenomic conflict between self-replicating chromosomes and parasitic MGEs. The antagonistic nature of the different mechanisms of HDT over short timescales means their contribution to bacterial evolution is likely to be substantially greater than previously appreciated.",
"title": ""
},
{
"docid": "dfcc931d9cd7d084bbbcf400f44756a5",
"text": "In this paper we address the problem of aligning very long (often more than one hour) audio files to their corresponding textual transcripts in an effective manner. We present an efficient recursive technique to solve this problem that works well even on noisy speech signals. The key idea of this algorithm is to turn the forced alignment problem into a recursive speech recognition problem with a gradually restricting dictionary and language model. The algorithm is tolerant to acoustic noise and errors or gaps in the text transcript or audio tracks. We report experimental results on a 3 hour audio file containing TV and radio broadcasts. We will show accurate alignments on speech under a variety of real acoustic conditions such as speech over music and speech over telephone lines. We also report results when the same audio stream has been corrupted with white additive noise or compressed using a popular web encoding format such as RealAudio. This algorithm has been used in our internal multimedia indexing project. It has processed more than 200 hours of audio from varied sources, such as WGBH NOVA documentaries and NPR web audio files. The system aligns speech media content in about one to five times realtime, depending on the acoustic conditions of the audio signal.",
"title": ""
},
{
"docid": "f35007fdca9c35b4c243cb58bd6ede7a",
"text": "Photovoltaic Thermal Collector (PVT) is a hybrid generator which converts solar radiation into useful electric and thermal energies simultaneously. This paper gathers all PVT sub-models in order to form a unique dynamic model that reveals PVT parameters interactions. As PVT is a multi-input/output/output system, a state space model based on energy balance equations is developed in order to analyze and assess the parameters behaviors and correlations of PVT constituents. The model simulation is performed using LabVIEW Software. The simulation shows the impact of the fluid flow rate variation on the collector efficiencies (thermal and electrical).",
"title": ""
},
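A deliberately simplified lumped-parameter sketch in the spirit of the passage above: two temperature states (cell and fluid), irradiance and flow rate as inputs, integrated with Euler steps. All coefficients are arbitrary placeholders, not the paper's energy-balance parameters.

```python
# Highly simplified lumped state-space sketch of a PVT collector: states are cell and
# fluid temperatures, inputs are irradiance and coolant flow rate. All coefficients are
# arbitrary placeholders, not the paper's energy-balance model.
import numpy as np

def pvt_step(x, irradiance, flow, dt=1.0):
    t_cell, t_fluid = x
    t_amb = 25.0
    # cell heats up with irradiance, loses heat to ambient and to the fluid
    d_cell = 0.02 * irradiance - 0.05 * (t_cell - t_amb) - 0.08 * (t_cell - t_fluid)
    # fluid gains heat from the cell; higher flow pulls it back toward the inlet temperature
    d_fluid = 0.08 * (t_cell - t_fluid) - 0.5 * flow * (t_fluid - t_amb)
    return np.array([t_cell + dt * d_cell, t_fluid + dt * d_fluid])

x = np.array([25.0, 25.0])
for step in range(600):                  # 10 minutes of 1 s Euler steps
    flow = 0.02 if step < 300 else 0.10  # increase flow rate halfway through
    x = pvt_step(x, irradiance=800.0, flow=flow)
    if step in (299, 599):
        print(f"t={step + 1:4d}s  cell={x[0]:.1f}C  fluid={x[1]:.1f}C")
```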
{
"docid": "33b2c5abe122a66b73840506aa3b443e",
"text": "Semantic role labeling, the computational identification and labeling of arguments in text, has become a leading task in computational linguistics today. Although the issues for this task have been studied for decades, the availability of large resources and the development of statistical machine learning methods have heightened the amount of effort in this field. This special issue presents selected and representative work in the field. This overview describes linguistic background of the problem, the movement from linguistic theories to computational practice, the major resources that are being used, an overview of steps taken in computational systems, and a description of the key issues and results in semantic role labeling (as revealed in several international evaluations). We assess weaknesses in semantic role labeling and identify important challenges facing the field. Overall, the opportunities and the potential for useful further research in semantic role labeling are considerable.",
"title": ""
},
{
"docid": "d7671e3c1124d3011744b5d35a8b0ac9",
"text": "Due to its potential for multi-gigabit and low latency wireless links, millimeter wave (mmWave) technology is expected to play a central role in 5th generation (5G) cellular systems. While there has been considerable progress in understanding the mmWave physical layer, innovations will be required at all layers of the protocol stack, in both the access and the core network. Discrete-event network simulation is essential for end-to-end, cross-layer research and development. This paper provides a tutorial on a recently developed full-stack mmWave module integrated into the widely used open-source ns–3 simulator. The module includes a number of detailed statistical channel models as well as the ability to incorporate real measurements or ray-tracing data. The physical and medium access control layers are modular and highly customizable, making it easy to integrate algorithms or compare orthogonal frequency division multiplexing numerologies, for example. The module is interfaced with the core network of the ns–3 Long Term Evolution (LTE) module for full-stack simulations of end-to-end connectivity, and advanced architectural features, such as dual-connectivity, are also available. To facilitate the understanding of the module, and verify its correct functioning, we provide several examples that show the performance of the custom mmWave stack as well as custom congestion control algorithms designed specifically for efficient utilization of the mmWave channel.",
"title": ""
},
{
"docid": "0c6518365ed9b2d47cec55a146e54866",
"text": "The conversational nature of intelligent personal assistants (IPAs) has the potential to trigger personification tendencies in users, which in turn can translate into consumer loyalty and satisfaction. We conducted a study of Amazon Alexa usage and explored the manifestations and possible correlates of users' personification of Alexa. The data were collected via diary instrument from nineteen Alexa users over four days. Less than half of the participants reported personification behaviors. Most of the personification reports can be characterized as mindless politeness (saying 'thank you' and 'please' to Alexa). Two participants expressed deeper personification by confessing their love and reprimanding Alexa. A new study is underway to understand whether expressions of personifications are caused by users' emotional attachments or skepticism about technology's intelligence.",
"title": ""
},
{
"docid": "1e934aef7999b592971b393e40395994",
"text": "Over recent years, as the popularity of mobile phone devices has increased, Short Message Service (SMS) has grown into a multi-billion dollars industry. At the same time, reduction in the cost of messaging services has resulted in growth in unsolicited commercial advertisements (spams) being sent to mobile phones. In parts of Asia, up to 30% of text messages were spam in 2012. Lack of real databases for SMS spams, short length of messages and limited features, and their informal language are the factors that may cause the established email filtering algorithms to underperform in their classification. In this project, a database of real SMS Spams from UCI Machine Learning repository is used, and after preprocessing and feature extraction, different machine learning techniques are applied to the database. Finally, the results are compared and the best algorithm for spam filtering for text messaging is introduced. Final simulation results using 10-fold cross validation shows the best classifier in this work reduces the overall error rate of best model in original paper citing this dataset by more than half.",
"title": ""
},
{
"docid": "cc05a6d24114423052f68268df56c64e",
"text": "Species knowledge is essential for protecting biodiversity. The identification of plants by conventional keys is complex, time consuming, and due to the use of specific botanical terms frustrating for non-experts. This creates a hard to overcome hurdle for novices interested in acquiring species knowledge. Today, there is an increasing interest in automating the process of species identification. The availability and ubiquity of relevant technologies, such as, digital cameras and mobile devices, the remote access to databases, new techniques in image processing and pattern recognition let the idea of automated species identification become reality. This paper is the first systematic literature review with the aim of a thorough analysis and comparison of primary studies on computer vision approaches for plant species identification. We identified 120 peer-reviewed studies, selected through a multi-stage process, published in the last 10 years (2005-2015). After a careful analysis of these studies, we describe the applied methods categorized according to the studied plant organ, and the studied features, i.e., shape, texture, color, margin, and vein structure. Furthermore, we compare methods based on classification accuracy achieved on publicly available datasets. Our results are relevant to researches in ecology as well as computer vision for their ongoing research. The systematic and concise overview will also be helpful for beginners in those research fields, as they can use the comparable analyses of applied methods as a guide in this complex activity.",
"title": ""
},
{
"docid": "2cb0c74e57dea6fead692d35f8a8fac6",
"text": "Matching local image descriptors is a key step in many computer vision applications. For more than a decade, hand-crafted descriptors such as SIFT have been used for this task. Recently, multiple new descriptors learned from data have been proposed and shown to improve on SIFT in terms of discriminative power. This paper is dedicated to an extensive experimental evaluation of learned local features to establish a single evaluation protocol that ensures comparable results. In terms of matching performance, we evaluate the different descriptors regarding standard criteria. However, considering matching performance in isolation only provides an incomplete measure of a descriptors quality. For example, finding additional correct matches between similar images does not necessarily lead to a better performance when trying to match images under extreme viewpoint or illumination changes. Besides pure descriptor matching, we thus also evaluate the different descriptors in the context of image-based reconstruction. This enables us to study the descriptor performance on a set of more practical criteria including image retrieval, the ability to register images under strong viewpoint and illumination changes, and the accuracy and completeness of the reconstructed cameras and scenes. To facilitate future research, the full evaluation pipeline is made publicly available.",
"title": ""
},
{
"docid": "d87e9a6c62c100142523baddc499320c",
"text": "Intelligent behaviour in the real-world requires the ability to acquire new knowledge from an ongoing sequence of experiences while preserving and reusing past knowledge. We propose a novel algorithm for unsupervised representation learning from piece-wise stationary visual data: Variational Autoencoder with Shared Embeddings (VASE). Based on the Minimum Description Length principle, VASE automatically detects shifts in the data distribution and allocates spare representational capacity to new knowledge, while simultaneously protecting previously learnt representations from catastrophic forgetting. Our approach encourages the learnt representations to be disentangled, which imparts a number of desirable properties: VASE can deal sensibly with ambiguous inputs, it can enhance its own representations through imagination-based exploration, and most importantly, it exhibits semantically meaningful sharing of latents between different datasets. Compared to baselines with entangled representations, our approach is able to reason beyond surface-level statistics and perform semantically meaningful cross-domain inference.",
"title": ""
},
{
"docid": "e6ba843b871f6783fb486ab598fd1027",
"text": "To prevent the further loss of species from landscapes used for productive enterprises such as agriculture, forestry, and grazing, it is necessary to determine the composition, quantity, and configuration of landscape elements required to meet the needs of the species present. I present a multi-species approach for defining the attributes required to meet the needs of the biota in a landscape and the management regimes that should be applied. The approach builds on the concept of umbrella species, whose requirements are believed to encapsulate the needs of other species. It identifies a suite of “focal species,” each of which is used to define different spatial and compositional attributes that must be present in a landscape and their appropriate management regimes. All species considered at risk are grouped according to the processes that threaten their persistence. These threats may include habitat loss, habitat fragmentation, weed invasion, and fire. Within each group, the species most sensitive to the threat is used to define the minimum acceptable level at which that threat can occur. For example, the area requirements of the species most limited by the availability of particular habitats will define the minimum suitable area of those habitat types; the requirements of the most dispersal-limited species will define the attributes of connecting vegetation; species reliant on critical resources will define essential compositional attributes; and species whose populations are limited by processes such as fire, predation, or weed invasion will define the levels at which these processes must be managed. For each relevant landscape parameter, the species with the most demanding requirements for that parameter is used to define its minimum acceptable value. Because the most demanding species are selected, a landscape designed and managed to meet their needs will encompass the requirements of all other species. Especies Focales: Una Sombrilla Multiespecífica para Conservar la Naturaleza Resumen: Para evitar mayores pérdidas de especies en paisajes utilizados para actividades productivas como la agricultura, la ganadería y el pastoreo, es necesario determinar la composición, cantidad y configuración de elementos del paisaje que se requieren para satisfacer las necesidades de las especies presentes. Propongo un enfoque multiespecífico para definir los atributos requeridos para satisfacer las necesidades de la biota en un paisaje y los regímenes de manejo que deben ser aplicados. El enfoque se basa en el concepto de las especies sombrilla, de las que se piensa que sus requerimientos engloban a las necesidades de otras especies. El concepto identifica una serie de “especies focales”, cada una de las cuales se utiliza para definir distintos atributos espaciales y de composición que deben estar presentes en un paisaje, así como sus requerimientos adecuados de manejo. Todas las especies consideradas en riesgo se agrupan de acuerdo con los procesos que amenazan su persistencia. Estas amenazas pueden incluir pérdida de hábitat, fragmentación de hábitat, invasión de hierbas y fuego. Dentro de cada grupo, se utiliza a la especie más sensible a la amenaza para definir el nivel mínimo aceptable en que la amenaza ocurre. 
Por ejemplo, los requerimientos espaciales de especies limitadas por la disponibilidad de hábitats particulares definirán el área mínima adecuada de esos tipos de hábitat; los requerimientos de la especie más limitada en su dispersión definirán los atributos de la vegetación conectante, las especies dependientes de recursos críticos definirán los atributos de composición esenciales; y especies cuyas poblaciones están limitadas por procesos como el fuego, la depredación o invasión de hierbas definirán los niveles en que deberán manejarse estos procesos. Para cada parámetro relevante del Paper submitted September 19, 1996; revised manuscript accepted February 24, 1997. 850 Focal Species for Nature Conservation Lambeck Conservation Biology Volume 11, No. 4, August 1997 Introduction Throughout the world, changing patterns of land use have resulted in the loss of natural habitat and the increasing fragmentation of that which remains. Not only have these changes altered habitat composition and configuration, but they have modified the rates and intensities of many ecological processes essential for ecosystems to retain their integrity. As a consequence, many landscapes that are being used for productive purposes such as agriculture, grazing, and forestry, are suffering species declines and losses (Saunders 1989; Saunders et al. 1991; Hobbs et al. 1993). Attempts to prevent further loss of biological diversity from such landscapes requires a capacity to define the spatial, compositional, and functional attributes that must be present if the needs of the plants and animals are to be met. There has been considerable debate in the ecological literature about whether the requirements of single species should serve as the basis for defining conservation requirements or whether the analysis of landscape pattern and process should underpin conservation planning (Franklin 1993; Hansen et al. 1993; Orians 1993; Franklin 1994; Hobbs 1994; Tracy & Brussard 1994). Speciesbased approaches have taken the form of either singlespecies studies, often targeted at rare or vulnerable species, or the study of groups of species considered to represent components of biodiversity (Soulé & Wilcox 1980; Simberloff 1988; Wilson & Peter 1988; Pimm & Gilpin 1989; Brussard 1991; Kohm 1991). Species-based approaches have been criticized on the grounds that they do not provide whole-landscape solutions to conservation problems, that they cannot be conducted at a rate sufficient to deal with the urgency of the threats, and that they consume a disproportionate amount of conservation funding (Franklin 1993; Hobbs 1994; Walker 1995). Consequently, critics of single-species studies are calling for approaches that consider higher levels of organization such as ecosystems and landscapes (Noss 1983; Noss & Harris 1986; Noss 1987; Gosselink et al . 1990; Dyer & Holland 1991; Salwasser 1991; Franklin 1993; Hobbs 1994). These alternative approaches place a greater emphasis on the relationship between landscape pattern and processes and community measures such as species diversity or species richness (Janzen 1983; Newmark 1985; Saunders et al . 1991; Anglestam 1992; Hobbs 1993, 1994). Although approaches that consider pattern and processes at a landscape scale help to identify the elements that need to be present in a landscape, they are unable to define the appropriate quantity and distribution of those elements. Such approaches have tended, by and large, to be descriptive. 
They can identify relationships between landscape patterns and measures such as species richness, but they are unable to define the composition, configuration, and quantity of landscape features required for a landscape to retain its biota. Ultimately, questions such as what type of pattern is required in a landscape, or at what rate a given process should proceed, cannot be answered without reference to the needs of the species in that landscape. Therefore, we cannot ignore the requirements of species if we wish to define the characteristics of a landscape that will ensure their retention. The challenge then is to find an efficient means of meeting the needs of all species without studying each one individually. In order to overcome this dilemma, proponents of single-species studies have developed the concept of umbrella species (Murphy & Wilcox 1986; Noss 1990; Cutler 1991; Ryti 1992; Hanley 1993; Launer & Murphy 1994; Williams & Gaston 1994). These are species whose requirements for persistence are believed to encapsulate those of an array of additional species. The attractiveness of umbrella species to land managers is obvious. If it is indeed possible to manage a whole community or ecosystem by focusing on the needs of one or a few species, then the seemingly intractable problem of considering the needs of all species is resolved. Species as diverse as Spotted Owls (Franklin 1994), desert tortoises (Tracy & Brussard 1994), black-tailed deer (Hanley 1993) and butterflies (Launer & Murphy 1994) have been proposed to serve an umbrella function for the ecosystems in which they occur. But given that the majority of species within an ecosystem have widely differing habitat requirements, it seems unlikely that any single species could serve as an umbrella for all others. As Franklin (1994) points out, landscapes designed and managed around the needs of single species may fail to capture other critical elements of the ecosystems in which they occur. It would therefore appear that if the concept of umbrella species is to be useful, it will be necessary to search for multi-species approaches that identify a set of species whose spatial, compositional, and functional requirements encompass those of all other species in the region. I present a method for selecting, from the total pool of species in a landscape, a subset of “focal species” whose paisaje, se utiliza a la especies con los mayores requerimientos para ese parámetro para definir su valor aceptable mínimo. Debido a que se seleccionan las especies más demandantes, un paisaje diseñado y manejado para satisfacer sus necesidades abarcará los requerimientos de todas las demás especies.",
"title": ""
},
{
"docid": "a1a8dc4d3c1c0d2d76e0f1cd0cb039d2",
"text": "73 generalized vertex median of a weighted graph, \" Operations Res., pp. 955-961, July 1967. and 1973, respectively. He spent two and a half years at Bell Laboratories , Murray Hill, NJ, developing telemetrized automatic surveillance and control systems. He is now Manager at Data Communications Systems, Vienna, VA, where he has major responsibilities in research and development of network analysis and design capabilities, and has applied these capabilities in the direction of projects ranging from feasability analysis and design of front end processors for the Navy to development of network architectures for the FAA. NY, responsible for contributing to the ongoing research in the areas of large network design, topological optimization for terminal access, the concentrator location problem, and flow and congestion control strategies for packet switching networks. At present, Absfruct-An algorithm is defined for establishing routing tables in the individual nodes of a data network. The routing fable at a node i specifies, for each other node j , what fraction of the traffic destined far node j should leave node i on each of the links emanating from node i. The algorithm is applied independently at each node and successively updates the routing table at that node based on information communicated between adjacent nodes about the marginal delay to each destination. For stationary input traffic statistics, the average delay per message through the network converges, with successive updates of the routing tables, to the minimum average delay over all routing assignments. The algorithm has the additional property that the traffic to each destination is guaranteed to be loop free at each iteration of the algorithm. In addition, a new global convergence theorem for non-continuous iteration algorithms is developed. INTRODUCTION T HE problem of routing assignments has been one of the most intensively studied areas in the field of data networks in recent years. These routing problems can be roughly classified as static routing, quasi-static routing, and dynamic routing. Static routing can be typified by the following type of problem. One wishes to establish a new data network and makes various assumptions about the node locations, the link locations, and the capacities of the links. Given the traffic between each source and destination, one can calculate the traffic on each link as a function of the routing of the traffic. If one approximates the queueing delays on each link as a function of the link traffic, one can …",
"title": ""
},
{
"docid": "1e30d2f8e11bfbd868fdd0dfc0ea4179",
"text": "In this paper, I study how companies can use their personnel data and information from job satisfaction surveys to predict employee quits. An important issue discussed at length in the paper is how employers can ensure the anonymity of employees in surveys used for management and HR analytics. I argue that a simple mechanism where the company delegates the implementation of job satisfaction surveys to an external consulting company can be optimal. In the subsequent empirical analysis, I use a unique combination of firm-level data (personnel records) and information from job satisfaction surveys to assess the benefits for companies using data in their decision-making. Moreover, I show how companies can move from a descriptive to a predictive approach.",
"title": ""
},
{
"docid": "d0e3e1a5d5bfaa2aecc046dbd9be8e48",
"text": "Wind power generation studies of slow phenomena using a detailed model can be difficult to perform with a conventional offline simulation program. Due to the computational power and high-speed input and output, a real-time simulator is capable of conducting repetitive simulations of wind profiles in a short time with detailed models of critical components and allows testing of prototype controllers through hardware-in-the-loop (HIL). This paper discusses methods to overcome the challenges of real-time simulation of wind systems, characterized by their complexity and high-frequency switching. A hybrid flow-battery supercapacitor energy storage system (ESS), coupled in a wind turbine generator to smooth wind power, is studied by real-time HIL simulation. The prototype controller is embedded in one real-time simulator, while the rest of the system is implemented in another independent simulator. The simulation results of the detailed wind system model show that the hybrid ESS has a lower battery cost, higher battery longevity, and improved overall efficiency over its reference ESS.",
"title": ""
}
] |
scidocsrr
|
51147a318341b36fad9d091ee252ecf1
|
Who Leads the Clothing Fashion: Style, Color, or Texture? A Computational Study
|
[
{
"docid": "e77dc44a5b42d513bdbf4972d62a74f9",
"text": "Clothing recognition is an extremely challenging problem due to wide variation in clothing item appearance, layering, and style. In this paper, we tackle the clothing parsing problem using a retrieval based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to parse the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse masks (paper doll item transfer) from retrieved examples. Experimental evaluation shows that our approach significantly outperforms state of the art in parsing accuracy.",
"title": ""
},
{
"docid": "b17fdc300edc22ab855d4c29588731b2",
"text": "Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on human body in unconstrained images. We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system.",
"title": ""
}
] |
[
{
"docid": "0fa55762a86f658aa2936cd63f2db838",
"text": "Mindfulness has received considerable attention as a correlate of psychological well-being and potential mechanism for the success of mindfulness-based interventions (MBIs). Despite a common emphasis of mindfulness, at least in name, among MBIs, mindfulness proves difficult to assess, warranting consideration of other common components. Self-compassion, an important construct that relates to many of the theoretical and practical components of MBIs, may be an important predictor of psychological health. The present study compared ability of the Self-Compassion Scale (SCS) and the Mindful Attention Awareness Scale (MAAS) to predict anxiety, depression, worry, and quality of life in a large community sample seeking self-help for anxious distress (N = 504). Multivariate and univariate analyses showed that self-compassion is a robust predictor of symptom severity and quality of life, accounting for as much as ten times more unique variance in the dependent variables than mindfulness. Of particular predictive utility are the self-judgment and isolation subscales of the SCS. These findings suggest that self-compassion is a robust and important predictor of psychological health that may be an important component of MBIs for anxiety and depression.",
"title": ""
},
{
"docid": "98c64622f9a22f89e3f9dd77c236f310",
"text": "After a development process of many months, the TLS 1.3 specification is nearly complete. To prevent past mistakes, this crucial security protocol must be thoroughly scrutinised prior to deployment. In this work we model and analyse revision 10 of the TLS 1.3 specification using the Tamarin prover, a tool for the automated analysis of security protocols. We specify and analyse the interaction of various handshake modes for an unbounded number of concurrent TLS sessions. We show that revision 10 meets the goals of authenticated key exchange in both the unilateral and mutual authentication cases. We extend our model to incorporate the desired delayed client authentication mechanism, a feature that is likely to be included in the next revision of the specification, and uncover a potential attack in which an adversary is able to successfully impersonate a client during a PSK-resumption handshake. This observation was reported to, and confirmed by, the IETF TLS Working Group. Our work not only provides the first supporting evidence for the security of several complex protocol mode interactions in TLS 1.3, but also shows the strict necessity of recent suggestions to include more information in the protocol's signature contents.",
"title": ""
},
{
"docid": "ff04301675ffa651e9cbdfbb9c6ab75d",
"text": "It is challenging to detect and track the ball from the broadcast soccer video. The feature-based tracking methods to judge if a sole object is a target are inadequate because the features of the balls change fast over frames and we cannot differ the ball from other objects by them. This paper proposes a new framework to find the ball position by creating and analyzing the trajectory. The ball trajectory is obtained from the candidate collection by use of the heuristic false candidate reduction, the Kalman filterbased trajectory mining, and the trajectory evaluation. The ball trajectory is extended via a localized Kalman filter-based model matching procedure. The experimental results on two consecutive 1000-frame sequences illustrate that the proposed framework is very effective and can obtain a very high accuracy that is much better than existing methods.",
"title": ""
},
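The passage above describes its Kalman filter-based trajectory mining only at a high level. As an illustrative sketch (not the authors' code), the fragment below implements a plain constant-velocity Kalman filter over 2D ball-candidate positions in Python; the transition model and all noise values are assumptions chosen for demonstration rather than values from the paper.

```python
import numpy as np

# Constant-velocity Kalman filter over 2D image positions; state is (x, y, vx, vy).
# All noise parameters below are illustrative assumptions, not values from the paper.
dt = 1.0                                     # one frame per step
F = np.array([[1, 0, dt, 0],                 # state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                  # we observe only (x, y)
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                         # process noise
R = np.eye(2) * 4.0                          # measurement noise (pixels^2)

def track(measurements):
    """Filter a sequence of (x, y) candidate positions into a smooth trajectory."""
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4) * 100.0
    trajectory = []
    for z in measurements:
        # Predict the next state
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the observed candidate position
        z = np.asarray(z, dtype=float)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        trajectory.append(x[:2].copy())
    return np.array(trajectory)

if __name__ == "__main__":
    noisy = [(10 + 3 * t + np.random.randn(), 200 - 2 * t + np.random.randn())
             for t in range(30)]
    print(track(noisy)[:5])
```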
{
"docid": "f4859226e52f7c9d2b2dc4ac8a0255de",
"text": "Imbalanced data learning is one of the challenging problems in data mining; among this matter, founding the right model assessment measures is almost a primary research issue. Skewed class distribution causes a misreading of common evaluation measures as well it lead a biased classification. This article presents a set of alternative for imbalanced data learning assessment, using a combined measures (G-means, likelihood ratios, Discriminant power, F-Measure Balanced Accuracy, Youden index, Matthews correlation coefficient), and graphical performance assessment (ROC curve, Area Under Curve, Partial AUC, Weighted AUC, Cumulative Gains Curve and lift chart, Area Under Lift AUL), that aim to provide a more credible evaluation. We analyze the applications of these measures in churn prediction models evaluation, a well known application of imbalanced data",
"title": ""
},
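The combined measures listed in the passage above can all be derived from a confusion matrix. As an illustrative sketch only (not code from the article), a few of them (G-mean, balanced accuracy, Youden index, positive likelihood ratio, and the Matthews correlation coefficient) can be computed as follows with scikit-learn and NumPy; the toy skewed labels are made up for the example.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef, roc_auc_score

def imbalance_report(y_true, y_pred, y_score=None):
    """Alternative assessment measures for binary, imbalanced classification."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)             # recall on the minority (positive) class
    specificity = tn / (tn + fp)
    report = {
        "g_mean": np.sqrt(sensitivity * specificity),
        "balanced_accuracy": (sensitivity + specificity) / 2,
        "youden_index": sensitivity + specificity - 1,
        "mcc": matthews_corrcoef(y_true, y_pred),
        "positive_likelihood_ratio": (sensitivity / (1 - specificity)
                                      if specificity < 1 else float("inf")),
    }
    if y_score is not None:
        report["auc"] = roc_auc_score(y_true, y_score)
    return report

if __name__ == "__main__":
    y_true = [0] * 95 + [1] * 5              # a 95:5 skewed class distribution
    y_pred = [0] * 93 + [1, 1] + [0, 0, 1, 1, 1]
    print(imbalance_report(y_true, y_pred))
```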
{
"docid": "f941c1f5e5acd9865e210b738ff1745a",
"text": "We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.",
"title": ""
},
{
"docid": "4a22a7dbcd1515e2b1b6e7748ffa3e02",
"text": "Average public feedback scores given to sellers have increased strongly over time in an online labor market. Changes in marketplace composition or improved seller performance cannot fully explain this trend. We propose that two factors inflated reputations: (1) it costs more to give bad feedback than good feedback and (2) this cost to raters is increasing in the cost to sellers from bad feedback. Together, (1) and (2) can lead to an equilibrium where feedback is always positive, regardless of performance. In response, the marketplace encouraged buyers to additionally give private feedback. This private feedback was substantially more candid and more predictive of future worker performance. When aggregates of private feedback about each job applicant were experimentally provided to employers as a private feedback score, employers used these scores when making screening and hiring decisions.",
"title": ""
},
{
"docid": "e727b64ba45852732f836808ff330940",
"text": "Deep learning researches on the transformation problems for image and text have raised great attention. However, present methods for music feature transfer using neural networks are far from practical application. In this paper, we initiate a novel system for transferring the texture of music, and release it as an open source project. Its core algorithm is composed of a converter which represents sounds as texture spectra, a corresponding reconstructor and a feed-forward transfer network. We evaluate this system from multiple perspectives, and experimental results reveal that it achieves convincing results in both sound effects and computational performance.",
"title": ""
},
{
"docid": "b568dae2d11ca8c28c0b7268368ce53d",
"text": "The Box and Block Test, a test of manual dexterity, has been used by occupational therapists and others to evaluate physically handicapped individuals. Because the test lacked normative data for adults, the results of the test have been interpreted subjectively. The purpose of this study was to develop normative data for adults. Test subjects were 628 Normal adults (310 males and 318 females)from the seven-county Milwaukee area. Data on males and females 20 to 94 years old were divided into 12 age groups. Means, standard deviations, standard error, and low and high scores are reported for each five-year age group. These data will enable clinicians to objectively compare a patient's score to a normal population parameter. Occupational therapists are frequently involved with increasing the manual dexterity of their patients. Often, these patients are unable to perform tests offine manual or finger dexterity, such as the Purdue Pegboard Test or the Crawford Small Parts Dexterity Test. Tests of manual dexterity, such as the Minnesota Rate of Manipulation Test, have limited clinical application because a) they require lengthy administration time, b) a standardized standing position must be used for testing, and c) the tests use normative samples that poorly represent the wide range of clinical patients. Because of the limitations of such standardized tests, therapists often evaluate dexterity subjectively. The Box and Block Test has been suggested as a measure of gross manual dexterity (1) and as a prevocational test for handicapped people (2). Norms have been collected on adults with neuromuscular involvement (2) and on normal children (7, 8, and 9 years old) (3). Standardized instructions along with reliability and validity data, are reported in the literature (2,3), but there are no norms for the normal adult population. Therefore, the purpose of this study was to collect normative data for adults. Methods",
"title": ""
},
{
"docid": "9f3966e64089594b261e1cd9dca8eef1",
"text": "We examine how control over a technology platform can increase profits and innovation. By choosing how much to open and when to bundle enhancements, platform sponsors can influence choices of ecosystem partners. Platform openness invites developer participation but sacrifices direct sales. Bundling enhancements early drives developers away but bundling late delays platform growth. Ironically, developers can prefer sponsored platforms to unmanaged open standards despite giving up their applications. Results can inform antitrust law and innovation strategy.",
"title": ""
},
{
"docid": "609bc0aa7dcd9ffc97e753642bec8c82",
"text": "Current trends in energy power generation are leading efforts related to the development of more reliable, sustainable sources and technologies for energy harvesting. Solar energy is one of these renewable energy resources, widely available in nature. Most of the solar panels used today to convert solar energy into chemical energy, and then to electrical energy, are stationary. Energy efficiency studies have shown that more electrical energy can be retrieved from solar panels if they are organized in arrays and then placed on a solar tracker that can then follow the sun as it moves during the day from east to west, and as it moves from north to south during the year, as seasons change. Adding more solar panels to solar tracker structures will improve its yield. It would also add more challenges when it comes to managing the overall weight of such structures, and their strength and reliability under different weather conditions, such as wind, changes in temperature, and atmospheric conditions. Hence, careful structural design and simulation is needed to establish the most optimal parameters in order for solar trackers to withstand all environmental conditions and to function with a high reliability for long periods of time.",
"title": ""
},
{
"docid": "3c79b81af0d84dcbfebb2108f3078dc4",
"text": "This paper reviews the available literature on computational modelling in two areas of bone biomechanics: fracture and healing. Bone is a complex material, with a multiphasic, heterogeneous and anisotropic microstructure. The processes of fracture and healing can only be understood in terms of the underlying bone structure and its mechanical role. Bone fracture analysis attempts to predict the failure of musculoskeletal structures by several possible mechanisms under different loading conditions. However, as opposed to structurally inert materials, bone is a living tissue that can repair itself. An exciting new field of research is being developed to better comprehend these mechanisms and the mechanical behaviour of bone tissue. One of the main goals of this work is to demonstrate, after a review of computational models, the main similarities and differences between normal engineering materials and bone tissue from a structural point of view. We also underline the importance of computational simulations in biomechanics due to the difficulty of obtaining experimental or clinical results. 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "65fd482ac37852214fc82b4bc05c6f72",
"text": "This paper examines important factors for link prediction in networks and provides a general, high-performance framework for the prediction task. Link prediction in sparse networks presents a significant challenge due to the inherent disproportion of links that can form to links that do form. Previous research has typically approached this as an unsupervised problem. While this is not the first work to explore supervised learning, many factors significant in influencing and guiding classification remain unexplored. In this paper, we consider these factors by first motivating the use of a supervised framework through a careful investigation of issues such as network observational period, generality of existing methods, variance reduction, topological causes and degrees of imbalance, and sampling approaches. We also present an effective flow-based predicting algorithm, offer formal bounds on imbalance in sparse network link prediction, and employ an evaluation method appropriate for the observed imbalance. Our careful consideration of the above issues ultimately leads to a completely general framework that outperforms unsupervised link prediction methods by more than 30% AUC.",
"title": ""
},
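The passage above argues for treating link prediction as supervised classification but does not give its framework in code. The sketch below is only an illustration of that general idea in Python: candidate node pairs become labelled examples described by simple topological features. The toy graph, the feature set, and the random-forest classifier are all assumptions chosen for demonstration, not choices taken from the paper.

```python
import math
import random
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def pair_features(G, u, v):
    """Simple topological features for a candidate edge (u, v)."""
    cn = list(nx.common_neighbors(G, u, v))
    union = len(set(G[u]) | set(G[v]))
    jaccard = len(cn) / union if union else 0.0
    adamic_adar = sum(1.0 / math.log(G.degree(w)) for w in cn if G.degree(w) > 1)
    return [len(cn), jaccard, adamic_adar, G.degree(u) * G.degree(v)]

# Toy setup: hide some true edges as positives, sample non-edges as negatives.
random.seed(0)
G_full = nx.barabasi_albert_graph(300, 3, seed=0)
edges = list(G_full.edges())
random.shuffle(edges)
hidden = edges[:150]
G_obs = G_full.copy()
G_obs.remove_edges_from(hidden)
negatives = random.sample(list(nx.non_edges(G_full)), 150)

pairs = hidden + negatives
X = [pair_features(G_obs, u, v) for u, v in pairs]
y = [1] * len(hidden) + [0] * len(negatives)

# Interleaved split keeps both classes in the train and test halves.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[::2], y[::2])
print("held-out AUC:", roc_auc_score(y[1::2], clf.predict_proba(X[1::2])[:, 1]))
```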
{
"docid": "efd79ed4f8fba97f0ee4a2774f40da6a",
"text": "This paper presents a new algorithm for the extrinsic calibration of a perspective camera and an invisible 2D laser-rangefinder (LRF). The calibration is achieved by freely moving a checkerboard pattern in order to obtain plane poses in camera coordinates and depth readings in the LRF reference frame. The problem of estimating the rigid displacement between the two sensors is formulated as one of registering a set of planes and lines in the 3D space. It is proven for the first time that the alignment of three plane-line correspondences has at most eight solutions that can be determined by solving a standard p3p problem and a linear system of equations. This leads to a minimal closed-form solution for the extrinsic calibration that can be used as hypothesis generator in a RANSAC paradigm. Our calibration approach is validated through simulation and real experiments that show the superiority with respect to the current state-of-the-art method requiring a minimum of five input planes.",
"title": ""
},
{
"docid": "fb898ef1b13d68ca3b5973b77237de74",
"text": "We present a nonrigid alignment algorithm for aligning high-resolution range data in the presence of low-frequency deformations, such as those caused by scanner calibration error. Traditional iterative closest points (ICP) algorithms, which rely on rigid-body alignment, fail in these cases because the error appears as a nonrigid warp in the data. Our algorithm combines the robustness and efficiency of ICP with the expressiveness of thin-plate splines to align high-resolution scanned data accurately, such as scans from the Digital Michelangelo Project [M. Levoy et al. (2000)]. This application is distinguished from previous uses of the thin-plate spline by the fact that the resolution and size of warping are several orders of magnitude smaller than the extent of the mesh, thus requiring especially precise feature correspondence.",
"title": ""
},
{
"docid": "cf222e0f90538d150cc45ae30edf696c",
"text": "Workflows are a widely used abstraction for representing large scientific applications and executing them on distributed systems such as clusters, clouds, and grids. However, workflow systems have been largely silent on the question of precisely what environment each task in the workflow is expected to run in. As a result, a workflow may run correctly in the environment in which it was designed, but when moved to another machine, is highly likely to fail due to differences in the operating system, installed applications, available data, and so forth. Lightweight container technology has recently arisen as a potential solution to this problem, by providing a well-defined execution environments at the operating system level. In this paper, we consider how to best integrate container technology into an existing workflow system, using Makeflow, Work Queue, and Docker as examples of current technology. A brief performance study of Docker shows very little overhead in CPU and I/O performance, but significant costs in creating and deleting containers. Taking this into account, we describe four different methods of connecting containers to different points of the infrastructure, and explain several methods of managing the container images that must be distributed to executing tasks. We explore the performance of a large bioinformatics workload on a Docker-enabled cluster, and observe the best configuration to be locally-managed containers that are shared between multiple tasks.",
"title": ""
},
{
"docid": "d2401987609efcb5a7fe420d48dfec1b",
"text": "Good sparse approximations are essential for practical inference in Gaussian Processes as the computational cost of exact methods is prohibitive for large datasets. The Fully Independent Training Conditional (FITC) and the Variational Free Energy (VFE) approximations are two recent popular methods. Despite superficial similarities, these approximations have surprisingly different theoretical properties and behave differently in practice. We thoroughly investigate the two methods for regression both analytically and through illustrative examples, and draw conclusions to guide practical application.",
"title": ""
},
{
"docid": "8e3eec62b02a9cf7a56803775757925f",
"text": "Emotional states of individuals, also known as moods, are central to the expression of thoughts, ideas and opinions, and in turn impact attitudes and behavior. As social media tools are increasingly used by individuals to broadcast their day-to-day happenings, or to report on an external event of interest, understanding the rich ‘landscape’ of moods will help us better interpret and make sense of the behavior of millions of individuals. Motivated by literature in psychology, we study a popular representation of human mood landscape, known as the ‘circumplex model’ that characterizes affective experience through two dimensions: valence and activation. We identify more than 200 moods frequent on Twitter, through mechanical turk studies and psychology literature sources, and report on four aspects of mood expression: the relationship between (1) moods and usage levels, including linguistic diversity of shared content (2) moods and the social ties individuals form, (3) moods and amount of network activity of individuals, and (4) moods and participatory patterns of individuals such as link sharing and conversational engagement. Our results provide at-scale naturalistic assessments and extensions of existing conceptualizations of human mood in social media contexts.",
"title": ""
},
{
"docid": "427028ef819df3851e37734e5d198424",
"text": "The code that provides solutions to key software requirements, such as security and fault-tolerance, tends to be spread throughout (or cross-cut) the program modules that implement the “primary functionality” of a software system. Aspect-oriented programming is an emerging programming paradigm that supports implementing such cross-cutting requirements into named program units called “aspects”. To construct a system as an aspect-oriented program (AOP), one develops code for primary functionality in traditional modules and code for cross-cutting functionality in aspect modules. Compiling and running an AOP requires that the aspect code be “woven” into the code. Although aspect-oriented programming supports the separation of concerns into named program units, explicit and implicit dependencies of both aspects and traditional modules will result in systems with new testing challenges, which include new sources for program faults. This paper introduces a candidate fault model, along with associated testing criteria, for AOPs based on interactions that are unique to AOPs. The paper also identifies key issues relevant to the systematic testing of AOPs.",
"title": ""
},
{
"docid": "774938c175781ed644327db1dae9d1d4",
"text": "It is widely accepted that sizing or predicting the volumes of various kinds of software deliverable items is one of the first and most dominant aspects of software cost estimating. Most of the cost estimation model or techniques usually assume that software size or structural complexity is the integral factor that influences software development effort. Although sizing and complexity measure is a very critical due to the need of reliable size estimates in the utilization of existing software project cost estimation models and complex problem for software cost estimating, advances in sizing technology over the past 30 years have been impressive. This paper attempts to review the 12 object-oriented software metrics proposed in 90s’ by Chidamber, Kemerer and Li.",
"title": ""
}
] |
scidocsrr
|
6ee0e5ad4cafa28c40085e8b4726c7d2
|
Towards Memory-Efficient Allocation of CNNs on Processing-in-Memory Architecture
|
[
{
"docid": "59ba2709e4f3653dcbd3a4c0126ceae1",
"text": "Processing-in-memory (PIM) is a promising solution to address the \"memory wall\" challenges for future computer systems. Prior proposed PIM architectures put additional computation logic in or near memory. The emerging metal-oxide resistive random access memory (ReRAM) has showed its potential to be used for main memory. Moreover, with its crossbar array structure, ReRAM can perform matrix-vector multiplication efficiently, and has been widely studied to accelerate neural network (NN) applications. In this work, we propose a novel PIM architecture, called PRIME, to accelerate NN applications in ReRAM based main memory. In PRIME, a portion of ReRAM crossbar arrays can be configured as accelerators for NN applications or as normal memory for a larger memory space. We provide microarchitecture and circuit designs to enable the morphable functions with an insignificant area overhead. We also design a software/hardware interface for software developers to implement various NNs on PRIME. Benefiting from both the PIM architecture and the efficiency of using ReRAM for NN computation, PRIME distinguishes itself from prior work on NN acceleration, with significant performance improvement and energy saving. Our experimental results show that, compared with a state-of-the-art neural processing unit design, PRIME improves the performance by ~2360× and the energy consumption by ~895×, across the evaluated machine learning benchmarks.",
"title": ""
},
{
"docid": "9897f5e64b4a5d6d80fadb96cb612515",
"text": "Deep convolutional neural networks (CNNs) are rapidly becoming the dominant approach to computer vision and a major component of many other pervasive machine learning tasks, such as speech recognition, natural language processing, and fraud detection. As a result, accelerators for efficiently evaluating CNNs are rapidly growing in popularity. The conventional approaches to designing such CNN accelerators is to focus on creating accelerators to iteratively process the CNN layers. However, by processing each layer to completion, the accelerator designs must use off-chip memory to store intermediate data between layers, because the intermediate data are too large to fit on chip. In this work, we observe that a previously unexplored dimension exists in the design space of CNN accelerators that focuses on the dataflow across convolutional layers. We find that we are able to fuse the processing of multiple CNN layers by modifying the order in which the input data are brought on chip, enabling caching of intermediate data between the evaluation of adjacent CNN layers. We demonstrate the effectiveness of our approach by constructing a fused-layer CNN accelerator for the first five convolutional layers of the VGGNet-E network and comparing it to the state-of-the-art accelerator implemented on a Xilinx Virtex-7 FPGA. We find that, by using 362KB of on-chip storage, our fused-layer accelerator minimizes off-chip feature map data transfer, reducing the total transfer by 95%, from 77MB down to 3.6MB per image.",
"title": ""
}
] |
[
{
"docid": "70f8d5a6d6ff36dd669403d7865bab94",
"text": "Addressing the problem of information overload, automatic multi-document summarization (MDS) has been widely utilized in the various real-world applications. Most of existing approaches adopt term-based representation for documents which limit the performance of MDS systems. In this paper, we proposed a novel unsupervised pattern-enhanced topic model (PETMSum) for the MDS task. PETMSum combining pattern mining techniques with LDA topic modelling could generate discriminative and semantic rich representations for topics and documents so that the most representative, non-redundant, and topically coherent sentences can be selected automatically to form a succinct and informative summary. Extensive experiments are conducted on the data of document understanding conference (DUC) 2006 and 2007. The results prove the effectiveness and efficiency of our proposed approach.",
"title": ""
},
{
"docid": "38382c04e7dc46f5db7f2383dcae11fb",
"text": "Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological, and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. Simulation results and actual mobile robot experiments demonstrate the feasibility of this approach.",
"title": ""
},
{
"docid": "c4ff647b5962d3d713577c16a7a9cae5",
"text": "In this paper we propose the use of an illumination invariant transform to improve many aspects of visual localisation, mapping and scene classification for autonomous road vehicles. The illumination invariant colour space stems from modelling the spectral properties of the camera and scene illumination in conjunction, and requires only a single parameter derived from the image sensor specifications. We present results using a 24-hour dataset collected using an autonomous road vehicle, demonstrating increased consistency of the illumination invariant images in comparison to raw RGB images during daylight hours. We then present three example applications of how illumination invariant imaging can improve performance in the context of vision-based autonomous vehicles: 6-DoF metric localisation using monocular cameras over a 24-hour period, life-long visual localisation and mapping using stereo, and urban scene classification in changing environments. Our ultimate goal is robust and reliable vision-based perception and navigation an attractive proposition for low-cost autonomy for road vehicles.",
"title": ""
},
{
"docid": "1e3bb73929d076a031b8571499da2f42",
"text": "Mesocrystals are 3D ordered nanoparticle superstructures, often with internal porosity, which receive much recent research interest. While more and more mesocrystal systems are found in biomineralization or synthesized, their potential as material still needs to be explored. It needs to be revealed, which new chemical and physical properties arise from the mesocrystal structure, or how they change by the ordered aggregation of nanoparticles to fully exploit the promising potential of mesocrystals. Also, the mechanisms for mesocrystal synthesis need to be explored to adapt it to a wide class of materials. The last three years have seen remarkable progress, which is summarized here. Also potential future directions of this reaserch field are discussed. This shows the importance of mesocrystals not only for the field of materials research and allows the appliction of mesocrystals in advanced materials synthesis or property improvement of existing materials. It also outlines attractive research directions in this field.",
"title": ""
},
{
"docid": "9e536b6a4d66583659c26d9a3a913254",
"text": "OBJECTIVES\nThree accumulative tracers, iodine-123-labeled N-isopropyl-p-iodoamphetamine (I-123-IMP), technetium-99m-labeled hexamethylpropyleneamineoxime (Tc-99m-HMPAO), and technetium-99m-labeled ethyl cysteinate dimer (Tc-99m-ECD) are widely used to measure cerebral blood flow (CBF) in single-photon emission computed tomography (SPECT). In the present study, normal regional distribution of CBF measured with three different SPECT tracers was entered into a database and compared with regional distribution of CBF measured by positron emission tomography (PET) with H2(15)O. The regional distribution of tissue fractions of gray matter determined by voxel-based morphometry was also compared with SPECT and PET CBF distributions.\n\n\nMETHODS\nSPECT studies with I-123-IMP, Tc-99m-HMPAO, and Tc-99m-ECD were performed on 11, 20, and 17 healthy subjects, respectively. PET studies were performed on 11 healthy subjects. Magnetic resonance (MR) imaging studies for voxel-based morphometry were performed on 43 of the 48 subjects who underwent SPECT study. All SPECT, PET, and MR images were transformed into the standard brain format with the SPM2 system. The voxel values of each SPECT and PET image were globally normalized to 50 ml/100 ml/min. Gray matter, white matter, and cerebrospinal fluid images were segmented and extracted from all transformed MR images by applying voxel-based morphometry methods with the SPM2 system.\n\n\nRESULTS\nRegional distribution of all three SPECT tracers differed from that of H2150 in the pons, midbrain, thalamus, putamen, parahippocampal gyrus, posterior cingulate gyrus, temporal cortex, and occipital cortex. No significant correlations were observed between the tissue fraction of gray matter and CBF with any tracer.\n\n\nCONCLUSION\nDifferences in regional distribution of SPECT tracers were considered to be caused mainly by differences in the mechanism of retention of tracers in the brain. Regional distribution of CBF was independent of regional distribution of gray matter fractions, and consequently the blood flow per gray matter volume differed for each brain region.",
"title": ""
},
{
"docid": "b417b412334d8d5ce931f93f564df528",
"text": "The field of dataset shift has received a growing amount of interest in the last few years. The fact that most real-world applications have to cope with some form of shift makes its study highly relevant. The literature on the topic is mostly scattered, and different authors use different names to refer to the same concepts, or use the same name for different concepts. With this work, we attempt to present a unifying framework through the review and comparison of some of the most important works in the",
"title": ""
},
{
"docid": "4839938502248899c8adc9b6ef359c52",
"text": "This paper introduces an overview and positioning of the contemporary brand experience in the digital context. With technological advances in games, gamification and emerging technologies, such as Virtual Reality (VR) and Artificial Intelligence (AI), it is possible that brand experiences are getting more pervasive and seamless. In this paper, we review the current theories around multi-sensory brand experience and the role of new technologies in the whole consumer journey, including pre-purchase, purchase and post-purchase stages. After this analysis, we introduce a conceptual framework that promotes a continuous loop of consumer experience and engagement from different and new touch points, which could be augmented by games, gamification and emerging technologies. Based on the framework, we conclude this paper with propositions, examples and recommendations for future research in contemporary brand management, which could help brand managers and designers to deal with technological challenges posed by the contemporary society.",
"title": ""
},
{
"docid": "70be8e5a26cb56fdd2c230cf36e00364",
"text": "If investors are not fully rational, what can smart money do? This paper provides an example in which smart money can strategically take advantage of investors’ behavioral biases and manipulate the price process to make profit. The paper considers three types of traders, behavior-driven investors who are less willing to sell losers than to sell winners (dispositional effect), arbitrageurs, and a manipulator who can influence asset prices. We show that, due to the investors’ behavioral biases and the limit of arbitrage, the manipulator can profit from a “pump-and-dump” trading strategy by accumulating the speculative asset while pushing the asset price up, and then selling the asset at high prices. Since nobody has private information, manipulation here is completely trade-based. The paper also endogenously derives several asset-pricing anomalies, including excess volatility, momentum and reversal. As an empirical test, the paper presents some empirical evidence from the U.S. SEC prosecution of “pump-and-dump” manipulation cases that are consistent with our model. JEL: G12, G18",
"title": ""
},
{
"docid": "33bbff16549f405aebec8b0400da878c",
"text": "Lexicon-Based approaches to Sentiment Analysis (SA) differ from the more common machine-learning based approaches in that the former rely solely on previously generated lexical resources that store polarity information for lexical items, which are then identified in the texts, assigned a polarity tag, and finally weighed, to come up with an overall score for the text. Such SA systems have been proved to perform on par with supervised, statistical systems, with the added benefit of not requiring a training set. However, it remains to be seen whether such lexically-motivated systems can cope equally well with extremely short texts, as generated on social networking sites, such as Twitter. In this paper we perform such an evaluation using Sentitext, a lexicon-based SA tool for Spanish.",
"title": ""
},
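Sentitext itself is not shown in the passage above. The fragment below is only a toy Python illustration of the general lexicon-based scheme it describes (look up the polarity of lexical items, weigh them, and aggregate an overall score); the miniature lexicon and the simple negation and intensifier handling are made-up assumptions, not the tool's actual resources.

```python
# Toy illustration of lexicon-based polarity scoring (not Sentitext itself).
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}   # made-up entries
INTENSIFIERS = {"very": 1.5, "slightly": 0.5}
NEGATORS = {"not", "never"}

def score(text):
    """Sum weighted polarities of known lexical items in a short text."""
    total, weight, negate = 0.0, 1.0, False
    for tok in text.lower().split():
        if tok in NEGATORS:
            negate = True
        elif tok in INTENSIFIERS:
            weight *= INTENSIFIERS[tok]
        elif tok in LEXICON:
            polarity = LEXICON[tok] * weight
            total += -polarity if negate else polarity
            weight, negate = 1.0, False          # reset modifiers after each hit
    return total

print(score("not very good"), score("great but slightly terrible"))
```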
{
"docid": "28e0bd104c8654ed9ad007c66bae0461",
"text": "Today, journalist, information analyst, and everyday news consumers are tasked with discerning and fact-checking the news. This task has became complex due to the ever-growing number of news sources and the mixed tactics of maliciously false sources. To mitigate these problems, we introduce the The News Landscape (NELA) Toolkit: an open source toolkit for the systematic exploration of the news landscape. NELA allows users to explore the credibility of news articles using well-studied content-based markers of reliability and bias, as well as, filter and sort through article predictions based on the users own needs. In addition, NELA allows users to visualize the media landscape at different time slices using a variety of features computed at the source level. NELA is built with a modular, pipeline design, to allow researchers to add new tools to the toolkit with ease. Our demo is an early transition of automated news credibility research to assist human fact-checking efforts and increase the understanding of the news ecosystem as a whole. To use this tool, go to http://nelatoolkit.science",
"title": ""
},
{
"docid": "951ad18af2b3c9b0ca06147b0c804f65",
"text": "Food photos are widely used in food logs for diet monitoring and in social networks to share social and gastronomic experiences. A large number of these images are taken in restaurants. Dish recognition in general is very challenging, due to different cuisines, cooking styles, and the intrinsic difficulty of modeling food from its visual appearance. However, contextual knowledge can be crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and location of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then, we reformulate the problem using a probabilistic model connecting dishes, restaurants, and locations. We apply that model in three different tasks: dish recognition, restaurant recognition, and location refinement. Experiments on six datasets show that by integrating multiple evidences (visual, location, and external knowledge) our system can boost the performance in all tasks.",
"title": ""
},
{
"docid": "a39c0db041f31370135462af467426ed",
"text": "Part of the ventral temporal lobe is thought to be critical for face perception, but what determines this specialization remains unknown. We present evidence that expertise recruits the fusiform gyrus 'face area'. Functional magnetic resonance imaging (fMRI) was used to measure changes associated with increasing expertise in brain areas selected for their face preference. Acquisition of expertise with novel objects (greebles) led to increased activation in the right hemisphere face areas for matching of upright greebles as compared to matching inverted greebles. The same areas were also more activated in experts than in novices during passive viewing of greebles. Expertise seems to be one factor that leads to specialization in the face area.",
"title": ""
},
{
"docid": "7e846a58cbf49231c41789d1190bce67",
"text": "We study the problem of zero-shot classification in which we don't have labeled data in target domain. Existing approaches learn a model from source domain and apply it without adaptation to target domain, which is prone to domain shift problem. To solve the problem, we propose a novel Learning Discriminative Instance Attribute(LDIA) method. Specifically, we learn a projection matrix for both the source and target domain jointly and also use prototype in the attribute space to regularise the learned projection matrix. Therefore, the information of the source domain can be effectively transferred to the target domain. The experimental results on two benchmark datasets demonstrate that the proposed LDIA method exceeds competitive approaches for zero-shot classification task.",
"title": ""
},
{
"docid": "da2f91adcb64786177733357a2cd0da7",
"text": "Object-oriented programming is as much a different way of designing programs as it is a different way of designing programming languages. This paper describes what it is like to design systems in Smalltalk. In particular, since a major motivation for object-oriented programming is software reuse, this paper describes how classes are developed so that they will be reusable.",
"title": ""
},
{
"docid": "b4dd76179734fb43e74c9c1daef15bbf",
"text": "Breast cancer represents one of the diseases that make a high number of deaths every year. It is the most common type of all cancers and the main cause of women’s deaths worldwide. Classification and data mining methods are an effective way to classify data. Especially in medical field, where those methods are widely used in diagnosis and analysis to make decisions. In this paper, a performance comparison between different machine learning algorithms: Support Vector Machine (SVM), Decision Tree (C4.5), Naive Bayes (NB) and k Nearest Neighbors (k-NN) on the Wisconsin Breast Cancer (original) datasets is conducted. The main objective is to assess the correctness in classifying data with respect to efficiency and effectiveness of each algorithm in terms of accuracy, precision, sensitivity and specificity. Experimental results show that SVM gives the highest accuracy (97.13%) with lowest error rate. All experiments are executed within a simulation environment and conducted in WEKA data mining tool. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the Conference Program Chairs.",
"title": ""
},
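The comparison in the passage above was run in WEKA on the original Wisconsin dataset. Purely as an illustrative sketch (a different toolkit and a different Wisconsin variant, so the reported 97.13% should not be expected to reproduce), the same four classifier families can be compared in Python with scikit-learn's bundled Wisconsin diagnostic data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)   # Wisconsin *diagnostic* data, not the original set

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "C4.5-like decision tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "Naive Bayes": GaussianNB(),
    "k-NN (k=5)": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name:25s} mean accuracy = {scores.mean():.4f}")
```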
{
"docid": "c1f9740f056ceb7653fe37c4902f62b6",
"text": "This work explores the use of Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) for automatic language identification (LID). The use of RNNs is motivated by their better ability in modeling sequences with respect to feed forward networks used in previous works. We show that LSTM RNNs can effectively exploit temporal dependencies in acoustic data, learning relevant features for language discrimination purposes. The proposed approach is compared to baseline i-vector and feed forward Deep Neural Network (DNN) systems in the NIST Language Recognition Evaluation 2009 dataset. We show LSTM RNNs achieve better performance than our best DNN system with an order of magnitude fewer parameters. Further, the combination of the different systems leads to significant performance improvements (up to 28%).",
"title": ""
},
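The passage above does not include the authors' implementation. The sketch below is an illustrative PyTorch model in the same spirit: an LSTM over sequences of acoustic feature frames followed by a linear layer producing per-language logits. The feature dimension, layer sizes, and number of languages are arbitrary assumptions for the example.

```python
import torch
import torch.nn as nn

class LSTMLanguageID(nn.Module):
    """Illustrative LSTM classifier over sequences of acoustic feature frames."""
    def __init__(self, feat_dim=39, hidden=256, n_languages=8):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_languages)

    def forward(self, frames):                 # frames: (batch, time, feat_dim)
        _, (h_n, _) = self.lstm(frames)        # h_n: (num_layers, batch, hidden)
        return self.out(h_n[-1])               # logits from the top layer's final state

if __name__ == "__main__":
    model = LSTMLanguageID()
    x = torch.randn(4, 300, 39)                # 4 utterances, 300 frames each
    logits = model(x)
    loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 3]))
    print(logits.shape, loss.item())
```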
{
"docid": "e0db3c5605ea2ea577dda7d549e837ae",
"text": "This paper presents a system based on new operators for handling sets of propositional clauses represented by means of ZBDDs. The high compression power of such data structures allows efficient encodings of structured instances. A specialized operator for the distribution of sets of clauses is introduced and used for performing multi-resolution on clause sets. Cut eliminations between sets of clauses of exponential size may then be performed using polynomial size data structures. The ZRES system, a new implementation of the Davis-Putnam procedure of 1960, solves two hard problems for resolution, that are currently out of the scope of the best SAT provers.",
"title": ""
},
{
"docid": "796625110c6e97f4ff834cfe04c784fe",
"text": "This paper addresses the large-scale visual font recognition (VFR) problem, which aims at automatic identification of the typeface, weight, and slope of the text in an image or photo without any knowledge of content. Although visual font recognition has many practical applications, it has largely been neglected by the vision community. To address the VFR problem, we construct a large-scale dataset containing 2,420 font classes, which easily exceeds the scale of most image categorization datasets in computer vision. As font recognition is inherently dynamic and open-ended, i.e., new classes and data for existing categories are constantly added to the database over time, we propose a scalable solution based on the nearest class mean classifier (NCM). The core algorithm is built on local feature embedding, local feature metric learning and max-margin template selection, which is naturally amenable to NCM and thus to such open-ended classification problems. The new algorithm can generalize to new classes and new data at little added cost. Extensive experiments demonstrate that our approach is very effective on our synthetic test images, and achieves promising results on real world test images.",
"title": ""
},
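The passage above builds on the nearest class mean (NCM) classifier, which is itself very simple. The sketch below is a plain NumPy illustration of the NCM rule and of why adding a brand-new class is cheap; it deliberately omits the local feature embedding, metric learning, and template selection components described in the paper, and the synthetic data is made up for the example.

```python
import numpy as np

class NearestClassMean:
    """Minimal NCM classifier: one mean vector per class, nearest mean wins."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def partial_update(self, X_new, y_new):
        """Adding a new class costs only one mean computation (open-ended setting)."""
        for c in np.unique(y_new):
            if c not in self.classes_:
                self.classes_ = np.append(self.classes_, c)
                self.means_ = np.vstack([self.means_, X_new[y_new == c].mean(axis=0)])
        return self

    def predict(self, X):
        d = ((X[:, None, :] - self.means_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(i, 0.3, size=(50, 16)) for i in range(3)])
    y = np.repeat([0, 1, 2], 50)
    clf = NearestClassMean().fit(X, y)
    print("train accuracy:", (clf.predict(X) == y).mean())

    X_new = rng.normal(5.0, 0.3, size=(20, 16))     # a previously unseen class
    clf.partial_update(X_new, np.full(20, 3))
    print("new-class predictions:", clf.predict(X_new[:3]))
```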
{
"docid": "0c1013c474edc2d43b749ceab2c51c13",
"text": "Effective training of neural networks requires a lot of data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data Augmentation alleviates this by using existing data more effectively. However standard data augmentation produces only limited plausible alternative data. Given there is potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation. The model, based on image conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and augment it by generating other within-class data items. As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data. We demonstrate that a Data Augmentation Generative Adversarial Network (DAGAN) augments classifiers very well on Omniglot, EMNIST and VGG-Face.",
"title": ""
}
] |
scidocsrr
|
0f6b8994271059a46829401fe06391e7
|
Trusted Execution Environment: What It is, and What It is Not
|
[
{
"docid": "b0709248d08564b7d1a1f23243aa0946",
"text": "TrustZone-based Real-time Kernel Protection (TZ-RKP) is a novel system that provides real-time protection of the OS kernel using the ARM TrustZone secure world. TZ-RKP is more secure than current approaches that use hypervisors to host kernel protection tools. Although hypervisors provide privilege and isolation, they face fundamental security challenges due to their growing complexity and code size. TZ-RKP puts its security monitor, which represents its entire Trusted Computing Base (TCB), in the TrustZone secure world; a safe isolated environment that is dedicated to security services. Hence, the security monitor is safe from attacks that can potentially compromise the kernel, which runs in the normal world. Using the secure world for kernel protection has been crippled by the lack of control over targets that run in the normal world. TZ-RKP solves this prominent challenge using novel techniques that deprive the normal world from the ability to control certain privileged system functions. These functions are forced to route through the secure world for inspection and approval before being executed. TZ-RKP's control of the normal world is non-bypassable. It can effectively stop attacks that aim at modifying or injecting kernel binaries. It can also stop attacks that involve modifying the system memory layout, e.g, through memory double mapping. This paper presents the implementation and evaluation of TZ-RKP, which has gone through rigorous and thorough evaluation of effectiveness and performance. It is currently deployed on the latest models of the Samsung Galaxy series smart phones and tablets, which clearly demonstrates that it is a practical real-world system.",
"title": ""
},
{
"docid": "c35a4278aa4a084d119238fdd68d9eb6",
"text": "ARM TrustZone, which provides a Trusted Execution Environment (TEE), normally plays a role in keeping security-sensitive resources safe. However, to properly control access to the resources, it is not enough to just isolate them from the Rich Execution Environment (REE). In addition to the isolation, secure communication should be guaranteed between security-critical resources in the TEE and legitimate REE processes that are permitted to use them. Even though there is a TEE security solution — namely, a kernel-integrity monitor — it aims to protect the REE kernel’s static regions, not to secure communication between the REE and TEE. We propose SeCReT to ameliorate this problem. SeCReT is a framework that builds a secure channel between the REE and TEE by enabling REE processes to use session keys in the REE that is regarded as unsafe region. SeCReT provides the session key to a requestor process only when the requestor’s code and control flow integrity are verified. To prevent the key from being exposed to an attacker who already compromised the REE kernel, SeCReT flushes the key from the memory every time the processor switches into kernel mode. In this paper, we present the design and implementation of SeCReT to show how it protects the key in the REE. Our prototype is implemented on Arndale board, which offers a Cortex-A15 dual-core processor with TrustZone as its security extension. We performed a security analysis by using a kernel rootkit and also ran LMBench microbenchmark to evaluate the performance overhead imposed by SeCReT.",
"title": ""
}
] |
[
{
"docid": "8d6cb15882c3a08ce8e2726ed65bf3cb",
"text": "Natural language processing systems (NLP) that extract clinical information from textual reports were shown to be effective for limited domains and for particular applications. Because an NLP system typically requires substantial resources to develop, it is beneficial if it is designed to be easily extendible to multiple domains and applications. This paper describes multiple extensions of an NLP system called MedLEE, which was originally developed for the domain of radiological reports of the chest, but has subsequently been extended to mammography, discharge summaries, all of radiology, electrocardiography, echocardiography, and pathology.",
"title": ""
},
{
"docid": "4236e1b86150a9557b518b789418f048",
"text": "Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns to each 30 s of the signal of a sleep stage, based on the visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decisions trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields the state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance measured with balanced accuracy is to use 6 EEG with 2 EOG (left and right) and 3 EMG chin channels. Also exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels are available. As sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver the state-of-the-art classification performance with a small computational cost.",
"title": ""
},
{
"docid": "d5008ed5c6c41c55759bd87dacb82c08",
"text": "Attestation is a mechanism used by a trusted entity to validate the software integrity of an untrusted platform. Over the past few years, several attestation techniques have been proposed. While they all use variants of a challenge-response protocol, they make different assumptions about what an attacker can and cannot do. Thus, they propose intrinsically divergent validation approaches. We survey in this article the different approaches to attestation, focusing in particular on those aimed at Wireless Sensor Networks. We discuss the motivations, challenges, assumptions, and attacks of each approach. We then organise them into a taxonomy and discuss the state of the art, carefully analysing the advantages and disadvantages of each proposal. We also point towards the open research problems and give directions on how to address them.",
"title": ""
},
{
"docid": "391c34e983c99af1cc0a06f6f1d4a6bf",
"text": "Network protocol reverse engineering of botnet command and control (C&C) is a challenging task, which requires various manual steps and a significant amount of domain knowledge. Furthermore, most of today's C&C protocols are encrypted, which prevents any analysis on the traffic without first discovering the encryption algorithm and key. To address these challenges, we present an end-to-end system for automatically discovering the encryption algorithm and keys, generating a protocol specification for the C&C traffic, and crafting effective network signatures. In order to infer the encryption algorithm and key, we enhance state-of-the-art techniques to extract this information using lightweight binary analysis. In order to generate protocol specifications we infer field types purely by analyzing network traffic. We evaluate our approach on three prominent malware families: Sality, ZeroAccess and Ramnit. Our results are encouraging: the approach decrypts all three protocols, detects 97% of fields whose semantics are supported, and infers specifications that correctly align with real protocol specifications.",
"title": ""
},
{
"docid": "93c928adef35a409acaa9b371a1498f3",
"text": "The acquisition of a new motor skill is characterized first by a short-term, fast learning stage in which performance improves rapidly, and subsequently by a long-term, slower learning stage in which additional performance gains are incremental. Previous functional imaging studies have suggested that distinct brain networks mediate these two stages of learning, but direct comparisons using the same task have not been performed. Here we used a task in which subjects learn to track a continuous 8-s sequence demanding variable isometric force development between the fingers and thumb of the dominant, right hand. Learning-associated changes in brain activation were characterized using functional MRI (fMRI) during short-term learning of a novel sequence, during short-term learning after prior, brief exposure to the sequence, and over long-term (3 wk) training in the task. Short-term learning was associated with decreases in activity in the dorsolateral prefrontal, anterior cingulate, posterior parietal, primary motor, and cerebellar cortex, and with increased activation in the right cerebellar dentate nucleus, the left putamen, and left thalamus. Prefrontal, parietal, and cerebellar cortical changes were not apparent with short-term learning after prior exposure to the sequence. With long-term learning, increases in activity were found in the left primary somatosensory and motor cortex and in the right putamen. Our observations extend previous work suggesting that distinguishable networks are recruited during the different phases of motor learning. While short-term motor skill learning seems associated primarily with activation in a cortical network specific for the learned movements, long-term learning involves increased activation of a bihemispheric cortical-subcortical network in a pattern suggesting \"plastic\" development of new representations for both motor output and somatosensory afferent information.",
"title": ""
},
{
"docid": "a798db9dfcfec4b8149de856c7e69b48",
"text": "Compared to scanned images, document pictures captured by camera can suffer from distortions due to perspective and page warping. It is necessary to restore a frontal planar view of the page before other OCR techniques can be applied. In this paper we describe a novel approach for flattening a curved document in a single picture captured by an uncalibrated camera. To our knowledge this is the first reported method able to process general curved documents in images without camera calibration. We propose to model the page surface by a developable surface, and exploit the properties (parallelism and equal line spacing) of the printed textual content on the page to recover the surface shape. Experiments show that the output images are much more OCR friendly than the original ones. While our method is designed to work with any general developable surfaces, it can be adapted for typical special cases including planar pages, scans of thick books, and opened books.",
"title": ""
},
{
"docid": "89e11f208c3c96e2b55b00ffbf7da59b",
"text": "In data mining, regression analysis is a computational tool that predicts continuous output variables from a number of independent input variables, by approximating their complex inner relationship. A large number of methods have been successfully proposed, based on various methodologies, including linear regression, support vector regression, neural network, piece-wise regression etc. In terms of piece-wise regression, the existing methods in literature are usually restricted to problems of very small scale, due to their inherent non-linear nature. In this work, a more efficient piece-wise regression method is introduced based on a novel integer linear programming formulation. The proposed method partitions one input variable into multiple mutually exclusive segments, and fits one multivariate linear regression function per segment to minimise the total absolute error. Assuming both the single partition feature and the number of regions are known, the mixed integer linear model is proposed to simultaneously determine the locations of multiple break-points and regression coefficients for each segment. Furthermore, an efficient heuristic procedure is presented to identify the key partition feature and final number of break-points. 7 real world problems covering several application domains have been used to demon∗Corresponding author: Tel.: +442076792563; Fax.: +442073882348 Email addresses: lingjian.yang.10@ucl.ac.uk (Lingjian Yang), s.liu@ucl.ac.uk (Songsong Liu), sophia.tsoka@kcl.ac.uk (Sophia Tsoka), l.papageorgiou@ucl.ac.uk (Lazaros G. Papageorgiou) Preprint submitted to Journal of Expert Systems with Applications August 12, 2015 Accepted Manuscript of a publised work apears its final form in Expert Systems with Applications. To access the final edited and published work see http://dx.doi.org/10.1016/j.eswa.2015.08.034. Open Access under CC-BY-NC-ND user license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "166a0aaa57fb6d7297f1c604f4a1caa8",
"text": "Neural networks designed for the task of classification have become a commodity in recent years. Many works target the development of better networks, which results in a complexification of their architectures with more layers, multiple sub-networks, or even the combination of multiple classifiers. In this paper, we show how to redesign a simple network to reach excellent performances, which are better than the results reproduced with CapsNet on several datasets, by replacing a layer with a Hit-or-Miss layer. This layer contains activated vectors, called capsules, that we train to hit or miss a central capsule by tailoring a specific centripetal loss function. We also show how our network, named HitNet, is capable of synthesizing a representative sample of the images of a given class by including a reconstruction network. This possibility allows to develop a data augmentation step combining information from the data space and the feature space, resulting in a hybrid data augmentation process. In addition, we introduce the possibility for HitNet, to adopt an alternative to the true target when needed by using the new concept of ghost capsules, which is used here to detect potentially mislabeled images in the training data.",
"title": ""
},
{
"docid": "35d220680e18898d298809272619b1d6",
"text": "This paper proposes the use of a least mean fourth (LMF)-based algorithm for single-stage three-phase grid-integrated solar photovoltaic (SPV) system. It consists of an SPV array, voltage source converter (VSC), three-phase grid, and linear/nonlinear loads. This system has an SPV array coupled with a VSC to provide three-phase active power and also acts as a static compensator for the reactive power compensation. It also conforms to an IEEE-519 standard on harmonics by improving the quality of power in the three-phase distribution network. Therefore, this system serves to provide harmonics alleviation, load balancing, power factor correction and regulating the terminal voltage at the point of common coupling. In order to increase the efficiency and maximum power to be extracted from the SPV array at varying environmental conditions, a single-stage system is used along with perturb and observe method of maximum power point tracking (MPPT) integrated with the LMF-based control technique. The proposed system is modeled and simulated using MATLAB/Simulink with available simpower system toolbox and the behaviour of the system under different loads and environmental conditions are verified experimentally on a developed system in the laboratory.",
"title": ""
},
{
"docid": "59a0eb620744f0e53d5a50fab0fcd708",
"text": "Suppose that we wish to learn from examples and counterexamples a criterion for recognizing whether an assembly of wooden blocks constitutes an arch. Suppose also that we have preprogrammed recognizers for various relationships e.g. on-top-of(z, y), above(z, y), etc. and believe that some quantified expression in terms of these base relationships should suffice to approximate the desired notion of an arch. How can we formulate such a relational learning problem so as to exploit the benefits that are demonstrable in propositional learning, such as attribute-efficient learning by linear separators, and error-resilient learning ? We believe that learning in a general setting that allows for multiple objects and relations in this way is paradigmatic of the more fundamental questions that need to be addressed if one is to resolve the following dilemma that arises in the design of intelligent systems: Mathematical logic is an attractive language of description because it has clear semantics and sound proof procedures. However, as a basis for large programmed systems it leads to brittleness because, in practice, consistent usage of the various predicate names throughout a system cannot be guaranteed, except in application areas such as mathematics where the viability of the axiomatic method has been demonstrated independently. In this paper we develop the following approach to circumventing this problem. We suggest that brittleness can be overcome by using a new kind of logic in which each statement is learnable. By allowing thesystem to learn rules empirically from the environment, relative to any particular programs it may have for recognizing some base predicates, we enable the system to acquire a set of statements approximately consistent with each other and with the world, without the need for a globally knowledgeable and consistent p*OgrUllll~~. We illustrate this approach by describing a simple logic that hzs a sound and efficient proof procedure for reasoning about instances, and that is rendered robust by having the rules learnable. The complexity and accuracy of both ~o,,yri~,,, ACM ,999 l-581 13.067.8199105...%5.00 learning and deduction are provably polynomial bounded.",
"title": ""
},
{
"docid": "50c931cc73cbb3336d24707dcb5e938a",
"text": "Endochondral ossification, the mechanism responsible for the development of the long bones, is dependent on an extremely stringent coordination between the processes of chondrocyte maturation in the growth plate, vascular expansion in the surrounding tissues, and osteoblast differentiation and osteogenesis in the perichondrium and the developing bone center. The synchronization of these processes occurring in adjacent tissues is regulated through vigorous crosstalk between chondrocytes, endothelial cells and osteoblast lineage cells. Our knowledge about the molecular constituents of these bidirectional communications is undoubtedly incomplete, but certainly some signaling pathways effective in cartilage have been recognized to play key roles in steering vascularization and osteogenesis in the perichondrial tissues. These include hypoxia-driven signaling pathways, governed by the hypoxia-inducible factors (HIFs) and vascular endothelial growth factor (VEGF), which are absolutely essential for the survival and functioning of chondrocytes in the avascular growth plate, at least in part by regulating the oxygenation of developing cartilage through the stimulation of angiogenesis in the surrounding tissues. A second coordinating signal emanating from cartilage and regulating developmental processes in the adjacent perichondrium is Indian Hedgehog (IHH). IHH, produced by pre-hypertrophic and early hypertrophic chondrocytes in the growth plate, induces the differentiation of adjacent perichondrial progenitor cells into osteoblasts, thereby harmonizing the site and time of bone formation with the developmental progression of chondrogenesis. Both signaling pathways represent vital mediators of the tightly organized conversion of avascular cartilage into vascularized and mineralized bone during endochondral ossification.",
"title": ""
},
{
"docid": "ee785105669d58052ad3b3a3954ba9fb",
"text": "Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today's sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license.",
"title": ""
},
{
"docid": "d0bb735eadd569508827d9a55ff492f5",
"text": "The emergence of social media has had a significant impact on how people communicate and socialize. Teens use social media to make and maintain social connections with friends and build their reputation. However, the way of analyzing the characteristics of teens in social media has mostly relied on ethnographic accounts or quantitative analyses with small datasets. This paper shows the possibility of detecting age information in user profiles by using a combination of textual and facial recognition methods and presents a comparative study of 27K teens and adults in Instagram. Our analysis highlights that (1) teens tend to post fewer photos but highly engage in adding more tags to their own photos and receiving more Likes and comments about their photos from others, and (2) to post more selfies and express themselves more than adults, showing a higher sense of self-representation. We demonstrate the application of our novel method that shows clear trends of age differences as well as substantiates previous insights in social media.",
"title": ""
},
{
"docid": "cf54533bc317b960fc80f22baa26d7b1",
"text": "The state-of-the-art named entity recognition (NER) systems are statistical machine learning models that have strong generalization capability (i.e., can recognize unseen entities that do not appear in training data) based on lexical and contextual information. However, such a model could still make mistakes if its features favor a wrong entity type. In this paper, we utilize Wikipedia as an open knowledge base to improve multilingual NER systems. Central to our approach is the construction of high-accuracy, highcoverage multilingual Wikipedia entity type mappings. These mappings are built from weakly annotated data and can be extended to new languages with no human annotation or language-dependent knowledge involved. Based on these mappings, we develop several approaches to improve an NER system. We evaluate the performance of the approaches via experiments on NER systems trained for 6 languages. Experimental results show that the proposed approaches are effective in improving the accuracy of such systems on unseen entities, especially when a system is applied to a new domain or it is trained with little training data (up to 18.3 F1 score improvement).",
"title": ""
},
{
"docid": "2be35e0e63316137b3426fffd397111c",
"text": "Face detection is essential to facial analysis tasks, such as facial reenactment and face recognition. Both cascade face detectors and anchor-based face detectors have translated shining demos into practice and received intensive attention from the community. However, cascade face detectors often suffer from a low detection accuracy, while anchor-based face detectors rely heavily on very large neural networks pre-trained on large-scale image classification datasets such as ImageNet, which is not computationally efficient for both training and deployment. In this paper, we devise an efficient anchor-based cascade framework called anchor cascade. To improve the detection accuracy by exploring contextual information, we further propose a context pyramid maxout mechanism for anchor cascade. As a result, anchor cascade can train very efficient face detection models with a high detection accuracy. Specifically, compared with a popular convolutional neural network (CNN)-based cascade face detector MTCNN, our anchor cascade face detector greatly improves the detection accuracy, e.g., from 0.9435 to 0.9704 at $1k$ false positives on FDDB, while it still runs in comparable speed. Experimental results on two widely used face detection benchmarks, FDDB and WIDER FACE, demonstrate the effectiveness of the proposed framework.",
"title": ""
},
{
"docid": "7f02090e896afacd6b70537c03956078",
"text": "Although the literature on Asian Americans and racism has been emerging, few studies have examined how coping influences one's encounters with racism. To advance the literature, the present study focused on the psychological impact of Filipino Americans' experiences with racism and the role of coping as a mediator using a community-based sample of adults (N = 199). Two multiple mediation models were used to examine the mediating effects of active, avoidance, support-seeking, and forbearance coping on the relationship between perceived racism and psychological distress and self-esteem, respectively. Separate analyses were also conducted for men and women given differences in coping utilization. For men, a bootstrap procedure indicated that active, support-seeking, and avoidance coping were mediators of the relationship between perceived racism and psychological distress. Active coping was negatively associated with psychological distress, whereas both support seeking and avoidance were positively associated with psychological distress. A second bootstrap procedure for men indicated that active and avoidance coping mediated the relationship between perceived racism and self-esteem such that active coping was positively associated with self-esteem, and avoidance was negatively associated with self-esteem. For women, only avoidance coping had a significant mediating effect that was associated with elevations in psychological distress and decreases in self-esteem. The results highlight the importance of examining the efficacy of specific coping responses to racism and the need to differentiate between the experiences of men and women.",
"title": ""
},
{
"docid": "8e64738b0d21db1ec5ef0220507f3130",
"text": "Automatic clothes search in consumer photos is not a trivial problem as photos are usually taken under completely uncontrolled realistic imaging conditions. In this paper, a novel framework is presented to tackle this issue by leveraging low-level features (e.g., color) and high-level features (attributes) of clothes. First, a content-based image retrieval(CBIR) approach based on the bag-of-visual-words (BOW) model is developed as our baseline system, in which a codebook is constructed from extracted dominant color patches. A reranking approach is then proposed to improve search quality by exploiting clothes attributes, including the type of clothes, sleeves, patterns, etc. The experiments on photo collections show that our approach is robust to large variations of images taken in unconstrained environment, and the reranking algorithm based on attribute learning significantly improves retrieval performance in combination with the proposed baseline.",
"title": ""
},
{
"docid": "da906b692787c40778edc44d310ef527",
"text": "From the beginning, a primary goal of the Cyc project has been to build a large knowledge base containing a store of formalized background knowledge suitable for supporting reasoning in a variety of domains. In this paper, we will discuss the portion of Cyc technology that has been released in open source form as OpenCyc, provide examples of the content available in ResearchCyc, and discuss their utility for the future development of fully formalized knowledge bases.",
"title": ""
},
{
"docid": "47d56fb4ec278497bce0d05caf61b09a",
"text": "As manufacturers continue to improve the energy efficiency of battery-powered wireless devices, WiFi has become one of—if not the—most significant power draws. Hence, modern devices fastidiously manage their radios, shifting into low-power listening or sleep states whenever possible. The fundamental limitation with this approach, however, is that the radio is incapable of transmitting or receiving unless it is fully powered. Unfortunately, applications found on today’s wireless devices often require frequent access to the channel. We observe, however, that many of these same applications have relatively low bandwidth requirements. Leveraging the inherent sparsity in Direct Sequence Spread Spectrum (DSSS) modulation, we propose a transceiver design based on compressive sensing that allows WiFi devices to operate their radios at lower clock rates when receiving and transmitting at low bit rates, thus consuming less power. We have implemented our 802.11b-based design in a software radio platform, and show that it seamlessly interacts with existing WiFi deployments. Our prototype remains fully functional when the clock rate is reduced by a factor of five, potentially reducing power consumption by over 30%.",
"title": ""
},
{
"docid": "a68cec6fd069499099c8bca264eb0982",
"text": "The anti-saccade task has emerged as an important task for investigating the flexible control that we have over behaviour. In this task, participants must suppress the reflexive urge to look at a visual target that appears suddenly in the peripheral visual field and must instead look away from the target in the opposite direction. A crucial step involved in performing this task is the top-down inhibition of a reflexive, automatic saccade. Here, we describe recent neurophysiological evidence demonstrating the presence of this inhibitory function in single-cell activity in the frontal eye fields and superior colliculus. Patients diagnosed with various neurological and/or psychiatric disorders that affect the frontal lobes or basal ganglia find it difficult to suppress the automatic pro-saccade, revealing a deficit in top-down inhibition.",
"title": ""
}
] |
scidocsrr
|
fe9da71b07b45bef1a7f551e5e0a1f17
|
Confidence and certainty: distinct probabilistic quantities for different goals
|
[
{
"docid": "50a89110795314b5610fabeaf41f0e40",
"text": "People are capable of robust evaluations of their decisions: they are often aware of their mistakes even without explicit feedback, and report levels of confidence in their decisions that correlate with objective performance. These metacognitive abilities help people to avoid making the same mistakes twice, and to avoid overcommitting time or resources to decisions that are based on unreliable evidence. In this review, we consider progress in characterizing the neural and mechanistic basis of these related aspects of metacognition-confidence judgements and error monitoring-and identify crucial points of convergence between methods and theories in the two fields. This convergence suggests that common principles govern metacognitive judgements of confidence and accuracy; in particular, a shared reliance on post-decisional processing within the systems responsible for the initial decision. However, research in both fields has focused rather narrowly on simple, discrete decisions-reflecting the correspondingly restricted focus of current models of the decision process itself-raising doubts about the degree to which discovered principles will scale up to explain metacognitive evaluation of real-world decisions and actions that are fluid, temporally extended, and embedded in the broader context of evolving behavioural goals.",
"title": ""
}
] |
[
{
"docid": "e3ccebbfb328e525c298816950d135a5",
"text": "It is important for robots to be able to decide whether they can go through a space or not, as they navigate through a dynamic environment. This capability can help them avoid injury or serious damage, e.g., as a result of running into people and obstacles, getting stuck, or falling off an edge. To this end, we propose an unsupervised and a near-unsupervised method based on Generative Adversarial Networks (GAN) to classify scenarios as traversable or not based on visual data. Our method is inspired by the recent success of data-driven approaches on computer vision problems and anomaly detection, and reduces the need for vast amounts of negative examples at training time. Collecting negative data indicating that a robot should not go through a space is typically hard and dangerous because of collisions; whereas collecting positive data can be automated and done safely based on the robot’s own traveling experience. We verify the generality and effectiveness of the proposed approach on a test dataset collected in a previously unseen environment with a mobile robot. Furthermore, we show that our method can be used to build costmaps (we call as ”GoNoGo” costmaps) for robot path planning using visual data only.",
"title": ""
},
{
"docid": "0a2810ea169fac476c0ffe1f3d163c95",
"text": "BACKGROUND\nAntidepressant treatment efficacy is low, but might be improved by matching patients to interventions. At present, clinicians have no empirically validated mechanisms to assess whether a patient with depression will respond to a specific antidepressant. We aimed to develop an algorithm to assess whether patients will achieve symptomatic remission from a 12-week course of citalopram.\n\n\nMETHODS\nWe used patient-reported data from patients with depression (n=4041, with 1949 completers) from level 1 of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D; ClinicalTrials.gov, number NCT00021528) to identify variables that were most predictive of treatment outcome, and used these variables to train a machine-learning model to predict clinical remission. We externally validated the model in the escitalopram treatment group (n=151) of an independent clinical trial (Combining Medications to Enhance Depression Outcomes [COMED]; ClinicalTrials.gov, number NCT00590863).\n\n\nFINDINGS\nWe identified 25 variables that were most predictive of treatment outcome from 164 patient-reportable variables, and used these to train the model. The model was internally cross-validated, and predicted outcomes in the STAR*D cohort with accuracy significantly above chance (64·6% [SD 3·2]; p<0·0001). The model was externally validated in the escitalopram treatment group (N=151) of COMED (accuracy 59·6%, p=0.043). The model also performed significantly above chance in a combined escitalopram-buproprion treatment group in COMED (n=134; accuracy 59·7%, p=0·023), but not in a combined venlafaxine-mirtazapine group (n=140; accuracy 51·4%, p=0·53), suggesting specificity of the model to underlying mechanisms.\n\n\nINTERPRETATION\nBuilding statistical models by mining existing clinical trial data can enable prospective identification of patients who are likely to respond to a specific antidepressant.\n\n\nFUNDING\nYale University.",
"title": ""
},
{
"docid": "4c2b13b00ce3c92762fa9bfbd34dd0a0",
"text": "Technology advances in the areas of Image processing IP and Information Retrieval IR have evolved separately for a long time However successful content based image retrieval systems require the integration of the two There is an urgent need to develop integration mechanisms to link the image retrieval model to text retrieval model such that the well established text retrieval techniques can be utilized Approaches of converting image feature vectors IP do main to weighted term vectors IR domain are proposed in this paper Furthermore the relevance feedback technique from the IR domain is used in content based image retrieval to demonstrate the e ectiveness of this conversion Exper imental results show that the image retrieval precision in creases considerably by using the proposed integration ap proach",
"title": ""
},
{
"docid": "84c2b96916ce68245cf81bdf8f4b435c",
"text": "INTRODUCTION\nComplete and accurate coding of injury causes is essential to the understanding of injury etiology and to the development and evaluation of injury-prevention strategies. While civilian hospitals use ICD-9-CM external cause-of-injury codes, military hospitals use codes derived from the NATO Standardization Agreement (STANAG) 2050.\n\n\nDISCUSSION\nThe STANAG uses two separate variables to code injury cause. The Trauma code uses a single digit with 10 possible values to identify the general class of injury as battle injury, intentionally inflicted nonbattle injury, or unintentional injury. The Injury code is used to identify cause or activity at the time of the injury. For a subset of the Injury codes, the last digit is modified to indicate place of occurrence. This simple system contains fewer than 300 basic codes, including many that are specific to battle- and sports-related injuries not coded well by either the ICD-9-CM or the draft ICD-10-CM. However, while falls, poisonings, and injuries due to machinery and tools are common causes of injury hospitalizations in the military, few STANAG codes correspond to these events. Intentional injuries in general and sexual assaults in particular are also not well represented in the STANAG. Because the STANAG does not map directly to the ICD-9-CM system, quantitative comparisons between military and civilian data are difficult.\n\n\nCONCLUSIONS\nThe ICD-10-CM, which will be implemented in the United States sometime after 2001, expands considerably on its predecessor, ICD-9-CM, and provides more specificity and detail than the STANAG. With slight modification, it might become a suitable replacement for the STANAG.",
"title": ""
},
{
"docid": "4c05d5add4bd2130787fd894ce74323a",
"text": "Although semi-supervised model can extract the event mentions matching frequent event patterns, it suffers much from those event mentions, which match infrequent patterns or have no matching pattern. To solve this issue, this paper introduces various kinds of linguistic knowledge-driven event inference mechanisms to semi-supervised Chinese event extraction. These event inference mechanisms can capture linguistic knowledge from four aspects, i.e. semantics of argument role, compositional semantics of trigger, consistency on coreference events and relevant events, to further recover missing event mentions from unlabeled texts. Evaluation on the ACE 2005 Chinese corpus shows that our event inference mechanisms significantly outperform the refined state-of-the-art semi-supervised Chinese event extraction system in F1-score by 8.5%.",
"title": ""
},
{
"docid": "5a99af400ea048d34ee961ad7f3e3bf6",
"text": "Breast cancer is becoming pervasive with each passing day. Hence, its early detection is a big step in saving life of any patient. Mammography is a common tool in breast cancer diagnosis. The most important step here is classification of mammogram patches as normal-abnormal and benign-malignant. Texture of a breast in a mammogram patch plays a big role in these classifications. We propose a new feature extraction descriptor called Histogram of Oriented Texture (HOT), which is a combination of Histogram of Gradients (HOG) and a Gabor filter, and exploits this fact. We also revisit the Pass Band Discrete Cosine Transform (PB-DCT) descriptor that captures texture information well. All features of a mammogram patch may not be useful. Hence, we apply a feature selection technique called Discrimination Potentiality (DP). Our resulting descriptors, DP-HOT and DP-PB-DCT, are compared with the standard descriptors. Density of a mammogram patch is important for classification, and has not been studied exhaustively. The Image Retrieval in Medical Application (IRMA) database from RWTH Aachen, Germany is a standard database that provides mammogram patches, and most researchers have tested their frameworks only on a subset of patches from this database. We apply our two new descriptors on all images of the IRMA database for density wise classification, and compare with the standard descriptors. We achieve higher accuracy than all of the existing standard descriptors (more than 92% ).",
"title": ""
},
{
"docid": "62f5640954e5b731f82599fb52ea816f",
"text": "This paper presents an energy-balance control strategy for a cascaded single-phase grid-connected H-bridge multilevel inverter linking n independent photovoltaic (PV) arrays to the grid. The control scheme is based on an energy-sampled data model of the PV system and enables the design of a voltage loop linear discrete controller for each array, ensuring the stability of the system for the whole range of PV array operating conditions. The control design is adapted to phase-shifted and level-shifted carrier pulsewidth modulations to share the control action among the cascade-connected bridges in order to concurrently synthesize a multilevel waveform and to keep each of the PV arrays at its maximum power operating point. Experimental results carried out on a seven-level inverter are included to validate the proposed approach.",
"title": ""
},
{
"docid": "1ff8d3270f4884ca9a9c3d875bdf1227",
"text": "This paper addresses the challenging problem of perceiving the hidden or occluded geometry of the scene depicted in any given RGBD image. Unlike other image labeling problems such as image segmentation where each pixel needs to be assigned a single label, layered decomposition requires us to assign multiple labels to pixels. We propose a novel \"Occlusion-CRF\" model that allows for the integration of sophisticated priors to regularize the solution space and enables the automatic inference of the layer decomposition. We use a generalization of the Fusion Move algorithm to perform Maximum a Posterior (MAP) inference on the model that can handle the large label sets needed to represent multiple surface assignments to each pixel. We have evaluated the proposed model and the inference algorithm on many RGBD images of cluttered indoor scenes. Our experiments show that not only is our model able to explain occlusions but it also enables automatic inpainting of occluded/ invisible surfaces.",
"title": ""
},
{
"docid": "0016ef3439b78a29c76a14e8db2a09be",
"text": "In tasks such as pursuit and evasion, multiple agents need to coordinate their behavior to achieve a common goal. An interesting question is, how can such behavior be best evolved? A powerful approach is to control the agents with neural networks, coevolve them in separate subpopulations, and test them together in the common task. In this paper, such a method, called multiagent enforced subpopulations (multiagent ESP), is proposed and demonstrated in a prey-capture task. First, the approach is shown to be more efficient than evolving a single central controller for all agents. Second, cooperation is found to be most efficient through stigmergy, i.e., through role-based responses to the environment, rather than communication between the agents. Together these results suggest that role-based cooperation is an effective strategy in certain multiagent tasks.",
"title": ""
},
{
"docid": "36fdd31b04f53f7aef27b9d4af5f479f",
"text": "Smart meters have been deployed in many countries across the world since early 2000s. The smart meter as a key element for the smart grid is expected to provide economic, social, and environmental benefits for multiple stakeholders. There has been much debate over the real values of smart meters. One of the key factors that will determine the success of smart meters is smart meter data analytics, which deals with data acquisition, transmission, processing, and interpretation that bring benefits to all stakeholders. This paper presents a comprehensive survey of smart electricity meters and their utilization focusing on key aspects of the metering process, different stakeholder interests, and the technologies used to satisfy stakeholder interests. Furthermore, the paper highlights challenges as well as opportunities arising due to the advent of big data and the increasing popularity of cloud environments.",
"title": ""
},
{
"docid": "a2f8cb66e02e87861a322ce50fef97af",
"text": "The conversion of biomass by gasification into a fuel suitable for use in a gas engine increases greatly the potential usefulness of biomass as a renewable resource. Gasification is a robust proven technology that can be operated either as a simple, low technology system based on a fixed-bed gasifier, or as a more sophisticated system using fluidized-bed technology. The properties of the biomass feedstock and its preparation are key design parameters when selecting the gasifier system. Electricity generation using a gas engine operating on gas produced by the gasification of biomass is applicable equally to both the developed world (as a means of reducing greenhouse gas emissions by replacing fossil fuel) and to the developing world (by providing electricity in rural areas derived from traditional biomass).",
"title": ""
},
{
"docid": "bc892fe2a369f701e0338085eaa0bdbd",
"text": "In his In the blink of an eye,Walter Murch, the Oscar-awarded editor of the English Patient, Apocalypse Now, and many other outstanding movies, devises the Rule of Six—six criteria for what makes a good cut. On top of his list is \"to be true to the emotion of the moment,\" a quality more important than advancing the story or being rhythmically interesting. The cut has to deliver a meaningful, compelling, and emotion-rich \"experience\" to the audience. Because, \"what they finally remember is not the editing, not the camerawork, not the performances, not even the story—it’s how they felt.\" Technology for all the right reasons applies this insight to the design of interactive products and technologies—the domain of Human-Computer Interaction,Usability Engineering,and Interaction Design. It takes an experiential approach, putting experience before functionality and leaving behind oversimplified calls for ease, efficiency, and automation or shallow beautification. Instead, it explores what really matters to humans and what it needs to make technology more meaningful. The book clarifies what experience is, and highlights five crucial aspects and their implications for the design of interactive products. It provides reasons why we should bother with an experiential approach, and presents a detailed working model of experience useful for practitioners and academics alike. It closes with the particular challenges of an experiential approach for design. The book presents its view as a comprehensive, yet entertaining blend of scientific findings, design examples, and personal anecdotes.",
"title": ""
},
{
"docid": "2c7bafac9d4c4fedc43982bd53c99228",
"text": "One of the uniqueness of business is for firm to be customer focus. Study have shown that this could be achieved through blockchain technology in enhancing customer loyalty programs (Michael J. Casey 2015; John Ream et al 2016; Sean Dennis 2016; James O'Brien and Dave Montali, 2016; Peiguss 2012; Singh, Khan, 2012; and among others). Recent advances in block chain technology have provided the tools for marketing managers to create a new generation of being able to assess the level of control companies want to have over customer data and activities as well as security/privacy issues that always arise with every additional participant of the network While block chain technology is still in the early stages of adoption, it could prove valuable for loyalty rewards program providers. Hundreds of blockchain initiatives are already underway in various industries, particularly airline services, even though standardization is far from a reality. One attractive feature of loyalty rewards is that they are not core to business revenue and operations and companies willing to implement blockchain for customer loyalty programs benefit lower administrative costs, improved customer experiences, and increased user engagement (Michael J. Casey, 2015; James O'Brien and Dave Montali 2016; Peiguss 2012; Singh, Abstract: In today business world, companies have accelerated the use of Blockchain technology to enhance the brand recognition of their products and services. Company believes that the integration of Blockchain into the current business marketing strategy will enhance the growth of their products, and thus acting as a customer loyalty solution. The goal of this study is to obtain a deep understanding of the impact of blockchain technology in enhancing customer loyalty programs of airline business. To achieve the goal of the study, a contextualized and literature based research instrument was used to measure the application of the investigated “constructs”, and a survey was conducted to collect data from the sample population. A convenience sample of total (450) Questionnaires were distributed to customers, and managers of the surveyed airlines who could be reached by the researcher. 274 to airline customers/passengers, and the remaining 176 to managers in the various airlines researched. Questionnaires with instructions were hand-delivered to respondents. Out of the 397 completed questionnaires returned, 359 copies were found usable for the present study, resulting in an effective response rate of 79.7%. The respondents had different social, educational, and occupational backgrounds. The research instrument showed encouraging evidence of reliability and validity. Data were analyzed using descriptive statistics, percentages and ttest analysis. The findings clearly show that there is significant evidence that blockchain technology enhance customer loyalty programs of airline business. It was discovered that Usage of blockchain technology is emphasized by the surveyed airlines operators in Nigeria., the extent of effective usage of customer loyalty programs is related to blockchain technology, and that he level or extent of effective usage of blockchain technology does affect the achievement of customer loyalty program goals and objectives. Feedback from the research will assist to expand knowledge as to the usefulness of blockchain technology being a customer loyalty solution.",
"title": ""
},
{
"docid": "8e00a3e7a07b69bce89a66fc6d4934aa",
"text": "This article is organised in five main sections. First, the sub-area of task-based instruction is introduced and contextualised. Its origins within communicative language teaching and second language acquisition research are sketched, and the notion of a task in language learning is defined. There is also brief coverage of the different and sometimes contrasting groups who are interested in the use of tasks. The second section surveys research into tasks, covering the different perspectives (interactional, cognitive) which have been influential. Then a third section explores how performance on tasks has been measured, generally in terms of how complex the language used is, how accurate it is, and how fluent. There is also discussion of approaches to measuring interaction. A fourth section explores the pedagogic and interventionist dimension of the use of tasks. The article concludes with a survey of the various critiques of tasks that have been made in recent years.",
"title": ""
},
{
"docid": "fe0fa94ce6f02626fca12f21b60bec46",
"text": "Solid waste management (SWM) is a major public health and environmental concern in urban areas of many developing countries. Nairobi’s solid waste situation, which could be taken to generally represent the status which is largely characterized by low coverage of solid waste collection, pollution from uncontrolled dumping of waste, inefficient public services, unregulated and uncoordinated private sector and lack of key solid waste management infrastructure. This paper recapitulates on the public-private partnership as the best system for developing countries; challenges, approaches, practices or systems of SWM, and outcomes or advantages to the approach; the literature review focuses on surveying information pertaining to existing waste management methodologies, policies, and research relevant to the SWM. Information was sourced from peer-reviewed academic literature, grey literature, publicly available waste management plans, and through consultation with waste management professionals. Literature pertaining to SWM and municipal solid waste minimization, auditing and management were searched for through online journal databases, particularly Web of Science, and Science Direct. Legislation pertaining to waste management was also researched using the different databases. Additional information was obtained from grey literature and textbooks pertaining to waste management topics. After conducting preliminary research, prevalent references of select sources were identified and scanned for additional relevant articles. Research was also expanded to include literature pertaining to recycling, composting, education, and case studies; the manuscript summarizes with future recommendationsin terms collaborations of public/ private patternships, sensitization of people, privatization is important in improving processes and modernizing urban waste management, contract private sector, integrated waste management should be encouraged, provisional government leaders need to alter their mind set, prepare a strategic, integrated SWM plan for the cities, enact strong and adequate legislation at city and national level, evaluate the real impacts of waste management systems, utilizing locally based solutions for SWM service delivery and design, location, management of the waste collection centersand recycling and compositing activities should be",
"title": ""
},
{
"docid": "ad4949d61aecf488fffcc4ca25ca0fb7",
"text": "Predicting the gender of users in social media has aroused great interests in recent years. Almost all existing studies rely on the the content features extracted from the main texts like tweets or reviews. It is sometimes difficult to extract content information since many users do not write any posts at all. In this paper, we present a novel framework which uses only the users' ids and their social contexts for gender prediction. The key idea is to represent users in the embedding connection space. A user often has the social context of family members, schoolmates, colleagues, and friends. This is similar to a word and its contexts in documents, which motivates our study. However, when modifying the word embedding technique for user embedding, there are two major challenges. First, unlike the syntax in language, no rule is responsible for the composition of the social contexts. Second, new users were not seen when learning the representations and thus they do not have embedding vectors. Two strategies circular ordering and incremental updating are proposed to solve these problems. We evaluate our methodology on two real data sets. Experimental results demonstrate that our proposed approach is significantly better than the traditional graph representation and the state-of-the-art graph embedding baselines. It also outperforms the content based approaches by a large margin.",
"title": ""
},
{
"docid": "245b313fa0a72707949f20c28ce7e284",
"text": "We consider the class of Iterative Shrinkage-Thresholding Algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods is attractive due to its simplicity, however, they are also known to converge quite slowly. In this paper we present a Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) which preserves the computational simplicity of ISTA, but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA.",
"title": ""
},
{
"docid": "f95f77f81f5a4838f9f3fa2538e9d132",
"text": "Learning analytics tools should be useful, i.e., they should be usable and provide the functionality for reaching the goals attributed to learning analytics. This paper seeks to unite learning analytics and action research. Based on this, we investigate how the multitude of questions that arise during technology-enhanced teaching and learning systematically can be mapped to sets of indicators. We examine, which questions are not yet supported and propose concepts of indicators that have a high potential of positively influencing teachers' didactical considerations. Our investigation shows that many questions of teachers cannot be answered with currently available research tools. Furthermore, few learning analytics studies report about measuring impact. We describe which effects learning analytics should have on teaching and discuss how this could be evaluated.",
"title": ""
},
{
"docid": "d9b2cb1a7abdadfad4caeb3598a58e68",
"text": "A highly efficient planar integrated magnetic (PIM) design approach for primary-parallel isolated boost converters is presented. All magnetic components in the converter, including two input inductors and two transformers with primary-parallel and secondary-series windings, are integrated into an E-I-E-core geometry, reducing the total ferrite volume and core loss. The transformer windings are symmetrically distributed into the outer legs of E-cores, and the inductor windings are wound on the center legs of E-cores with air gaps. Therefore, the inductor and the transformer can be operated independently. Due to the low-reluctance path provided by the shared I-core, the two input inductors can be integrated independently, and also, the two transformers can be partially coupled to each other. Detailed characteristics of the integrated structure have been studied in this paper. AC losses in the windings and the leakage inductance of the transformer are kept low by interleaving the primary and secondary turns of the transformers substantially. Because of the combination of inductors and transformers, the maximum output power capability of the fully integrated module needs to be investigated. Winding loss, core loss, and switching loss of MOSFETs are analyzed in-depth in this work as well. To verify the validity of the design approach, a 2-kW prototype converter with two primary power stages is implemented for fuel-cell-fed traction applications with 20-50-V input and 400-V output. An efficiency of 95.9% can be achieved during 1.5-kW nominal operating conditions. Experimental comparisons between the PIM module and three separated cases have illustrated that the PIM module has advantages of lower footprint and higher efficiencies.",
"title": ""
}
] |
scidocsrr
|
bdd1f4f9fac3620b0dae565d4b40d9d2
|
Scaling Nakamoto Consensus to Thousands of Transactions per Second
|
[
{
"docid": "4ac3affdf995c4bb527229da0feb411d",
"text": "Algorand is a new cryptocurrency that confirms transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is temporarily partitioned. In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence.\n Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand's BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed.\n We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 125x Bitcoin's throughput, and incurs almost no penalty for scaling to more users.",
"title": ""
},
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
}
] |
[
{
"docid": "e613ef418da545958c2094c5cce8f4f1",
"text": "This paper proposes a new visual SLAM technique that not only integrates 6 degrees of freedom (DOF) pose and dense structure but also simultaneously integrates the colour information contained in the images over time. This involves developing an inverse model for creating a super-resolution map from many low resolution images. Contrary to classic super-resolution techniques, this is achieved here by taking into account full 3D translation and rotation within a dense localisation and mapping framework. This not only allows to take into account the full range of image deformations but also allows to propose a novel criteria for combining the low resolution images together based on the difference in resolution between different images in 6D space. Another originality of the proposed approach with respect to the current state of the art lies in the minimisation of both colour (RGB) and depth (D) errors, whilst competing approaches only minimise geometry. Several results are given showing that this technique runs in real-time (30Hz) and is able to map large scale environments in high-resolution whilst simultaneously improving the accuracy and robustness of the tracking.",
"title": ""
},
{
"docid": "1450854a32ea6c18f4cc817f686aaf15",
"text": "This article reports on the development of two measures relating to historical trauma among American Indian people: The Historical Loss Scale and The Historical Loss Associated Symptoms Scale. Measurement characteristics including frequencies, internal reliability, and confirmatory factor analyses were calculated based on 143 American Indian adult parents of children aged 10 through 12 years who are part of an ongoing longitudinal study of American Indian families in the upper Midwest. Results indicate both scales have high internal reliability. Frequencies indicate that the current generation of American Indian adults have frequent thoughts pertaining to historical losses and that they associate these losses with negative feelings. Two factors of the Historical Loss Associated Symptoms Scale indicate one anxiety/depression component and one anger/avoidance component. The results are discussed in terms of future research and theory pertaining to historical trauma among American Indian people.",
"title": ""
},
{
"docid": "886e88c878bae3c56fc81e392cecd1c9",
"text": "This review summarizes data from the numerous investigations from the beginning of the last century to the present. The studies concerned the main issues of the morphology, the life cycle, hosts and localization of Hepatozoon canis (phylum Apicomplexa, suborder Adeleorina, family Hepatozoidae). The characteristic features of hepatozoonosis, caused by Hepatozoon canis in the dog, are evaluated. A survey of clinical signs, gross pathological changes, epidemiology, diagnosis and treatment of the disease was made. The measures for prevention of Hepatozoon canis infection in animals are listed. The importance of hepatozoonosis with regard to public health was evaluated. The studies on the subject, performed in Bulgaria, are discussed.",
"title": ""
},
{
"docid": "54776bdc9f7a9b18289d4901a8db5d7a",
"text": "The goal of this research was to determine the effect of different doses of galactooligosaccharide (GOS) on the fecal microbiota of healthy adults, with a focus on bifidobacteria. The study was designed as a single-blinded study, with eighteen subjects consuming GOS-containing chocolate chews at four increasing dosage levels; 0, 2.5, 5.0, and 10.0g. Subjects consumed each dose for 3 weeks, with a two-week baseline period preceding the study and a two-week washout period at the end. Fecal samples were collected weekly and analyzed by cultural and molecular methods. Cultural methods were used for bifidobacteria, Bacteroides, enterobacteria, enterococci, lactobacilli, and total anaerobes; culture-independent methods included denaturing gradient gel electrophoresis (DGGE) and quantitative real-time PCR (qRT-PCR) using Bifidobacterium-specific primers. All three methods revealed an increase in bifidobacteria populations, as the GOS dosage increased to 5 or 10g. Enumeration of bifidobacteria by qRT-PCR showed a high inter-subject variation in bifidogenic effect and indicated a subset of 9 GOS responders among the eighteen subjects. There were no differences, however, in the initial levels of bifidobacteria between the responding individuals and the non-responding individuals. Collectively, this study showed that a high purity GOS, administered in a confection product at doses of 5g or higher, was bifidogenic, while a dose of 2.5g showed no significant effect. However, the results also showed that even when GOS was administered for many weeks and at high doses, there were still some individuals for which a bifidogenic response did not occur.",
"title": ""
},
{
"docid": "bf180a4ed173ef81c91594a2ee651c8c",
"text": "Recent emergence of low-cost and easy-operating depth cameras has reinvigorated the research in skeleton-based human action recognition. However, most existing approaches overlook the intrinsic interdependencies between skeleton joints and action classes, thus suffering from unsatisfactory recognition performance. In this paper, a novel latent max-margin multitask learning model is proposed for 3-D action recognition. Specifically, we exploit skelets as the mid-level granularity of joints to describe actions. We then apply the learning model to capture the correlations between the latent skelets and action classes each of which accounts for a task. By leveraging structured sparsity inducing regularization, the common information belonging to the same class can be discovered from the latent skelets, while the private information across different classes can also be preserved. The proposed model is evaluated on three challenging action data sets captured by depth cameras. Experimental results show that our model consistently achieves superior performance over recent state-of-the-art approaches.",
"title": ""
},
{
"docid": "71da7722f6ce892261134bd60ca93ab7",
"text": "Semantically annotated data, using markup languages like RDFa and Microdata, has become more and more publicly available in the Web, especially in the area of e-commerce. Thus, a large amount of structured product descriptions are freely available and can be used for various applications, such as product search or recommendation. However, little efforts have been made to analyze the categories of the available product descriptions. Although some products have an explicit category assigned, the categorization schemes vary a lot, as the products originate from thousands of different sites. This heterogeneity makes the use of supervised methods, which have been proposed by most previous works, hard to apply. Therefore, in this paper, we explain how distantly supervised approaches can be used to exploit the heterogeneous category information in order to map the products to set of target categories from an existing product catalogue. Our results show that, even though this task is by far not trivial, we can reach almost 56% accuracy for classifying products into 37 categories.",
"title": ""
},
{
"docid": "2f48ab4d20f0928837bf10d2f638fed3",
"text": "Duchenne muscular dystrophy (DMD), a recessive sex-linked hereditary disorder, is characterized by degeneration, atrophy, and weakness of skeletal and cardiac muscle. The purpose of this study was to document the prevalence of abnormally low resting BP recordings in patients with DMD in our outpatient clinic. The charts of 31 patients with DMD attending the cardiology clinic at Rush University Medical Center were retrospectively reviewed. Demographic data, systolic, diastolic, and mean blood pressures along with current medications, echocardiograms, and documented clinical appreciation and management of low blood pressure were recorded in the form of 104 outpatient clinical visits. Blood pressure (BP) was classified as low if the systolic and/or mean BP was less than the fifth percentile for height for patients aged ≤17 years (n = 23). For patients ≥18 years (n = 8), systolic blood pressure (SBP) <90 mmHg or a mean arterial pressure (MAP) <60 mmHg was recorded as a low reading. Patients with other forms of myopathy or unclear diagnosis were excluded. Statistical analysis was done using PASW version 18. BP was documented at 103 (99.01 %) outpatient encounters. Low systolic and mean BP were recorded in 35 (33.7 %) encounters. This represented low recordings for 19 (61.3 %) out of a total 31 patients with two or more successive low recordings for 12 (38.7 %) patients. Thirty-one low BP encounters were in patients <18 years old. Hispanic patients accounted for 74 (71.2 %) visits and had low BP recorded in 32 (43.2 %) instances. The patients were non-ambulant in 71 (68.3 %) encounters. Out of 35 encounters with low BP, 17 patients (48.6 %) were taking heart failure medication. In instances when patients had low BP, 22 (66.7 %) out of 33 echocardiography encounters had normal left ventricular ejection fraction. Clinician comments on low BP reading were present in 11 (10.6 %) encounters, and treatment modification occurred in only 1 (1 %) patient. Age in years (p = .031) and ethnicity (p = .035) were independent predictors of low BP using stepwise multiple regression analysis. Low BP was recorded in a significant number of patient encounters in patients with DMD. Age 17 years or less and Hispanic ethnicity were significant predictors associated with low BP readings in our DMD cohort. Concomitant heart failure therapy was not a statistically significant association. There is a need for enhanced awareness of low BP in DMD patients among primary care and specialty physicians. The etiology and clinical impact of these findings are unclear but may impact escalation of heart failure therapy.",
"title": ""
},
{
"docid": "357576d56c379c2b5d4365ad1412ff92",
"text": "Malware authors employ a myriad of evasion techniques to impede automated reverse engineering and static analysis efforts. The most popular technologies include ‘code obfuscators’ that serve to rewrite the original binary code to an equivalent form that provides identical functionality while defeating signature-based detection systems. These systems significantly complicate static analysis, making it challenging to uncover the malware intent and the full spectrum of embedded capabilities. While code obfuscation techniques are commonly integrated into contemporary commodity packers, from the perspective of a reverse engineer, deobfuscation is often a necessary step that must be conducted independently after unpacking the malware binary. In this paper, we describe a set of techniques for automatically unrolling the impact of code obfuscators with the objective of completely recovering the original malware logic. We have implemented a set of generic debofuscation rules as a plug-in for the popular IDA Pro disassembler. We use sophisticated obfuscation strategies employed by two infamous malware instances from 2009, Conficker C and Hydraq (the binary associated with the Aurora attack) as case studies. In both instances our deobfuscator enabled a complete decompilation of the underlying code logic. This work was instrumental in the comprehensive reverse engineering of the heavily obfuscated P2P protocol embedded in the Conficker worm. The plug-in is integrated with the HexRays decompiler to provide a complete reverse engineering of malware binaries from binary form to C code and is available for free download on the SRI malware threat center website: http://www.mtc.sri.com/deobfuscation/.",
"title": ""
},
{
"docid": "048f237ad6cb844a79c63d7f6f3d6aa9",
"text": "Superpixel segmentation has emerged as an important research problem in the areas of image processing and computer vision. In this paper, we propose a framework, namely Iterative Spanning Forest (ISF), in which improved sets of connected superpixels (supervoxels in 3D) can be generated by a sequence of Image Foresting Transforms. In this framework, one can choose the most suitable combination of ISF components for a given application - i.e., i) a seed sampling strategy, ii) a connectivity function, iii) an adjacency relation, and iv) a seed pixel recomputation procedure. The superpixels in ISF structurally correspond to spanning trees rooted at those seeds. We present five ISF-based methods to illustrate different choices for those components. These methods are compared with a number of state-of-the-art approaches with respect to effectiveness and efficiency. Experiments are carried out on several datasets containing 2D and 3D objects with distinct texture and shape properties, including a high-level application, named sky image segmentation. The theoretical properties of ISF are demonstrated in the supplementary material and the results show ISF-based methods rank consistently among the best for all datasets.",
"title": ""
},
{
"docid": "ed509de8786ee7b4ba0febf32d0c87f7",
"text": "Threat detection and analysis are indispensable processes in today's cyberspace, but current state of the art threat detection is still limited to specific aspects of modern malicious activities due to the lack of information to analyze. By measuring and collecting various types of data, from traffic information to human behavior, at different vantage points for a long duration, the viewpoint seems to be helpful to deeply inspect threats, but faces scalability issues as the amount of collected data grows, since more computational resources are required for the analysis. In this paper, we report our experience from operating the Hadoop platform, called MATATABI, for threat detections, and present the micro-benchmarks with four different backends of data processing in typical use cases such as log data and packet trace analysis. The benchmarks demonstrate the advantages of distributed computation in terms of performance. Our extensive use cases of analysis modules showcase the potential benefit of deploying our threat analysis platform.",
"title": ""
},
{
"docid": "3d1093e183b4e9c656e5dd20efe5a311",
"text": "In the past, tactile displays were of one of two kinds: they were either shape displays, or relied on distributed vibrotactile stimulation. A tactile display device is described in this paper which is distinguished by the fact that it relies exclusively on lateral skin stretch stimulation. It is constructed from an array of 64 closely packed piezoelectric actuators connected to a membrane. The deformations of this membrane cause an array of 112 skin contactors to create programmable lateral stress fields in the skin of the finger pad. Some preliminary observations are reported with respect to the sensations that this kind of display can produce. INTRODUCTION Tactile displays are devices used to provide subjects with the sensation of touching objects directly with the skin. Previously reported tactile displays portray distributed tactile stimulation as a one of two possibilities [1]. One class of displays, termed “shape displays”, typically consists of devices having a dense array of skin contactors which can move orthogonally to the surface of the skin in an attempt to display the shape of objects via its spatially sampled approximation. There exist numerous examples of such displays, for recent designs see [2; 3; 4; 5]. In the interest of brevity, the distinction between “pressure displays” and shape displays is not made here. However, an important distinction with regard to the focus of this paper must be made between displays intended to cause no slip between the contactors and the skin and those intended for the opposite case.1 Displays which are intended to be used without slip can be mounted on a carrier device [6; 2]. 1Braille displays can be found in this later category. Another class of displays takes advantage of vibrotactile stimulation. With this technique, an array of tactilly active sites stimulates the skin using an array of contactors vibrating at a fixed frequency. This frequency is selected to maximize the loudness of the sensation (200–300 Hz). Tactile images are associated, not to the quasi-static depth of indentation, but the amplitude of the vibration [7].2 Figure 1. Typical Tactile Display. Shape displays control the rising movement of the contactors (resp. the force applied to). In a vibrotactile display, the contactors oscillate at a fixed frequency. Devices intended to be used as general purpose tactile displays cause stimulation by independently and simultaneously activated skin contactors according to patterns that depend both on space and on time. Such patterns may be thought of as “tactile images”, but because of the rapid adaptation of the skin mechanoreceptors, the images should more accurately be described as “tactile movies”. It is also accepted that the separation between these contactors needs to be of the order of one millimeter so that the resulting percept fuse into one single continuous image. In addition, when contactors apply vibratory signals to the skin at a frequency, which may range from a few Hertz to a few kiloHertz, a perception is derived which may be described 2The Optacon device is a well known example [8]. Proceedings of the Haptic Interfaces for Virtual Environment and Teleoperator Systems Symposium, ASME International Mechanical Engineering Congress & Exposition 2000, Orlando, Florida, USA . pp. 1309-1314",
"title": ""
},
{
"docid": "665952db8f68ccef12e1a032f50d840a",
"text": "Internet of Things (IoT) is one of the most buzzing and discussed topic in research field today. Some of the researchers are also looking future of the world in this technology. Since then significant research and development have taken place on IoT, however various vulnerabilities are observed which shall keep IoT as a technology in danger. As a result, there are so many attacks on IoT have been invented before actual commercial implementation of it. The present study discusses about various IoT attacks happening, classify them, its countermeasures and finding the most prominent attacks in IoT. A state of the art survey about the various attacks have been presented and compared including their efficiency and damage level in IoT.",
"title": ""
},
{
"docid": "aa8ae1fc471c46b5803bfa1303cb7001",
"text": "It is widely recognized that steganography with sideinformation in the form of a precover at the sender enjoys significantly higher empirical security than other embedding schemes. Despite the success of side-informed steganography, current designs are purely heuristic and little has been done to develop the embedding rule from first principles. Building upon the recently proposed MiPOD steganography, in this paper we impose multivariate Gaussian model on acquisition noise and estimate its parameters from the available precover. The embedding is then designed to minimize the KL divergence between cover and stego distributions. In contrast to existing heuristic algorithms that modulate the embedding costs by 1–2|e|, where e is the rounding error, in our model-based approach the sender should modulate the steganographic Fisher information, which is a loose equivalent of embedding costs, by (1–2|e|)^2. Experiments with uncompressed and JPEG images show promise of this theoretically well-founded approach. Introduction Steganography is a privacy tool in which messages are embedded in inconspicuous cover objects to hide the very presence of the communicated secret. Digital media, such as images, video, and audio are particularly suitable cover sources because of their ubiquity and the fact that they contain random components, the acquisition noise. On the other hand, digital media files are extremely complex objects that are notoriously hard to describe with sufficiently accurate and estimable statistical models. This is the main reason for why current steganography in such empirical sources [3] lacks perfect security and heavily relies on heuristics, such as embedding “costs” and intuitive modulation factors. Similarly, practical steganalysis resorts to increasingly more complex high-dimensional descriptors (rich models) and advanced machine learning paradigms, including ensemble classifiers and deep learning. Often, a digital media object is subjected to processing and/or format conversion prior to embedding the secret. The last step in the processing pipeline is typically quantization. In side-informed steganography with precover [21], the sender makes use of the unquantized cover values during embedding to hide data in a more secure manner. The first embedding scheme of this type described in the literature is the embedding-while-dithering [14] in which the secret message was embedded by perturbing the process of color quantization and dithering when converting a true-color image to a palette format. Perturbed quantization [15] started another direction in which rounding errors of DCT coefficients during JPEG compression were used to modify the embedding algorithm. This method has been advanced through a series of papers [23, 24, 29, 20], culminating with approaches based on advanced coding techniques with a high level of empirical security [19, 18, 6]. Side-information can have many other forms. Instead of one precover, the sender may have access to the acquisition oracle (a camera) and take multiple images of the same scene. These multiple exposures can be used to estimate the acquisition noise and also incorporated during embedding. This research direction has been developed to a lesser degree compared to steganography with precover most likely due to the difficulty of acquiring the required imagery and modeling the differences between acquisitions. In a series of papers [10, 12, 11], Franz et al. 
proposed a method in which multiple scans of the same printed image on a flat-bed scanner were used to estimate the model of the acquisition noise at every pixel. This requires acquiring a potentially large number of scans, which makes this approach rather labor intensive. Moreover, differences in the movement of the scanner head between individual scans lead to slight spatial misalignment that complicates using this type of side-information properly. Recently, the authors of [7] showed how multiple JPEG images of the same scene can be used to infer the preferred direction of embedding changes. By working with quantized DCT coefficients instead of pixels, the embedding is less sensitive to small differences between multiple acquisitions. Despite the success of side-informed schemes, there appears to be an alarming lack of theoretical analysis that would either justify the heuristics or suggest a well-founded (and hopefully more powerful) approach. In [13], the author has shown that the precover compensates for the lack of the cover model. In particular, for a Gaussian model of acquisition noise, precover-informed rounding is more secure than embedding designed to preserve the cover model estimated from the precover image assuming the cover is “sufficiently non-stationary.” Another direction worth mentioning in this context is the bottom-up model-based approach recently proposed by Bas [2]. The author showed that a high-capacity steganographic scheme with a rather low empirical detectability can be constructed when the process of digitally developing a RAW sensor capture is sufficiently simplified. The impact of embedding is masked as an increased level of photonic noise, e.g., due to a higher ISO setting. It will likely be rather difficult, however, to extend this approach to realistic processing pipelines. Inspired by the success of the multivariate Gaussian model in steganography for digital images [25, 17, 26], in this paper we adopt the same model for the precover and then derive the embedding rule to minimize the KL divergence between cover and stego distributions. The sideinformation is used to estimate the parameters of the acquisition noise and the noise-free scene. In the next section, we review current state of the art in heuristic side-informed steganography with precover. In the following section, we introduce a formal model of image acquisition. In Section “Side-informed steganography with MVG acquisition noise”, we describe the proposed model-based embedding method, which is related to heuristic approaches in Section “Connection to heuristic schemes.” The main bulk of results from experiments on images represented in the spatial and JPEG domain appear in Section “Experiments.” In the subsequent section, we investigate whether the public part of the selection channel, the content adaptivity, can be incorporated in selection-channel-aware variants of steganalysis features to improve detection of side-informed schemes. The paper is then closed with Conclusions. The following notation is adopted for technical arguments. Matrices and vectors will be typeset in boldface, while capital letters are reserved for random variables with the corresponding lower case symbols used for their realizations. In this paper, we only work with grayscale cover images. Precover values will be denoted with xij ∈ R, while cover and stego values will be integer arrays cij and sij , 1 ≤ i ≤ n1, 1 ≤ j ≤ n2, respectively. 
The symbols [x], dxe, and bxc are used for rounding and rounding up and down the value of x. By N (μ,σ2), we understand Gaussian distribution with mean μ and variance σ2. The complementary cumulative distribution function of a standard normal variable (the tail probability) will be denoted Q(x) = ∫∞ x (2π)−1/2 exp ( −z2/2 ) dz. Finally, we say that f(x)≈ g(x) when limx→∞ f(x)/g(x) = 1. Prior art in side-informed steganography with precover All modern steganographic schemes, including those that use side-information, are implemented within the paradigm of distortion minimization. First, each cover element cij is assigned a “cost” ρij that measures the impact on detectability should that element be modified during embedding. The payload is then embedded while minimizing the sum of costs of all changed cover elements, ∑ cij 6=sij ρij . A steganographic scheme that embeds with the minimal expected cost changes each cover element with probability βij = exp(−λρij) 1 +exp(−λρij) , (1) if the embedding operation is constrained to be binary, and βij = exp(−λρij) 1 +2exp(−λρij) , (2) for a ternary scheme with equal costs of changing cij to cij ± 1. Syndrome-trellis codes [8] can be used to build practical embedding schemes that operate near the rate–distortion bound. For steganography designed to minimize costs (embedding distortion), a popular heuristic to incorporate a precover value xij during embedding is to modulate the costs based on the rounding error eij = cij − xij , −1/2≤ eij ≤ 1/2 [23, 29, 20, 18, 19, 6, 24]. A binary embedding scheme modulates the cost of changing cij = [xij ] to [xij ] + sign(eij) by 1−2|eij |, while prohibiting the change to [xij ]− sign(eij): ρij(sign(eij)) = (1−2|eij |)ρij (3) ρij(−sign(eij)) = Ω, (4) where ρij(u) is the cost of modifying the cover value by u∈ {−1,1}, ρij are costs of some additive embedding scheme, and Ω is a large constant. This modulation can be justified heuristically because when |eij | ≈ 1/2, a small perturbation of xij could cause cij to be rounded to the other side. Such coefficients are thus assigned a proportionally smaller cost because 1− 2|eij | ≈ 0. On the other hand, the costs are unchanged when eij ≈ 0, as it takes a larger perturbation of the precover to change the rounded value. A ternary version of this embedding strategy [6] allows modifications both ways with costs: ρij(sign(eij)) = (1−2|eij |)ρij (5) ρij(−sign(eij)) = ρij . (6) Some embedding schemes do not use costs and, instead, minimize statistical detectability. In MiPOD [25], the embedding probabilities βij are derived from their impact on the cover multivariate Gaussian model by solving the following equation for each pixel ij: βijIij = λ ln 1−2βij βij , (7) where Iij = 2/σ̂4 ij is the Fisher information with σ̂ 2 ij an estimated variance of the acquisition noise at pixel ij, and λ is a Lagrange multiplier determined by the payload size. To incorporate the side-information, the sender first converts the embedding probabilities into costs and then modulates them as in (3) or (5). This can be done b",
"title": ""
},
{
"docid": "69f89f9cfeb87b251186ffd05788cc16",
"text": "Online social media allow users to interact with one another by sharing opinions, and these opinions have a critical impact on the way readers think and behave. Accordingly, an increasing number of <i>manipulators</i> deliberately spread messages to influence the public, often in an organized manner. In particular, political manipulation—manipulation of opponents to win political advantage—can result in serious consequences: antigovernment riots can break out, leading to candidates’ defeat in an election. A few approaches have been proposed to detect such manipulation based on the level of social interaction (i.e., manipulators actively post opinions but infrequently befriend and reply to other users). However, several studies have shown that the interactions can be forged at a low cost and thus may not be effective measures of manipulation.\n To go one step further, we collect a dataset for real, large-scale political manipulation, which consists of opinions found on Internet forums. These opinions are divided into manipulators and nonmanipulators. Using this collection, we demonstrate that manipulators inevitably work hard, in teams, to quickly influence a large audience. With this in mind, it could be said that a high level of collaborative efforts strongly indicates manipulation. For example, a group of manipulators may jointly post numerous opinions with a consistent theme and selectively recommend the same, well-organized opinion to promote its rank. We show that the effort measures, when combined with a supervised learning algorithm, successfully identify greater than 95% of the manipulators. We believe that the proposed method will help system administrators to accurately detect manipulators in disguise, significantly decreasing the intensity of manipulation.",
"title": ""
},
{
"docid": "3a92798e81a03e5ef7fb18110e5da043",
"text": "BACKGROUND\nRespiratory failure is a serious complication that can adversely affect the hospital course and survival of multiply injured patients. Some studies have suggested that delayed surgical stabilization of spine fractures may increase the incidence of respiratory complications. However, the authors of these studies analyzed small sets of patients and did not assess the independent effects of multiple risk factors.\n\n\nMETHODS\nA retrospective cohort study was conducted at a regional level-I trauma center to identify risk factors for respiratory failure in patients with surgically treated thoracic and lumbar spine fractures. Demographic, diagnostic, and procedural variables were identified. The incidence of respiratory failure was determined in an adult respiratory distress syndrome registry maintained concurrently at the same institution. Univariate and multivariate analyses were used to determine independent risk factors for respiratory failure. An algorithm was formulated to predict respiratory failure.\n\n\nRESULTS\nRespiratory failure developed in 140 of the 1032 patients in the study cohort. Patients with respiratory failure were older; had a higher mean Injury Severity Score (ISS) and Charlson Comorbidity Index Score; had greater incidences of pneumothorax, pulmonary contusion, and thoracic level injury; had a lower mean Glasgow Coma Score (GCS); were more likely to have had a posterior surgical approach; and had a longer mean time from admission to surgical stabilization than the patients without respiratory failure (p < 0.05). Multivariate analysis identified five independent risk factors for respiratory failure: an age of more than thirty-five years, an ISS of > 25 points, a GCS of < or = 12 points, blunt chest injury, and surgical stabilization performed more than two days after admission. An algorithm was created to determine, on the basis of the number of preoperative predictors present, the relative risk of respiratory failure when surgery was delayed for more than two days.\n\n\nCONCLUSIONS\nIndependent risk factors for respiratory failure were identified in an analysis of a large cohort of patients who had undergone operative stabilization of thoracic and lumbar spine fractures. Early operative stabilization of these fractures, the only risk factor that can be controlled by the physician, may decrease the risk of respiratory failure in multiply injured patients.",
"title": ""
},
{
"docid": "bd53dea475e4ddecf40ebf31a225f0c2",
"text": "Business process management is multidimensional tool which utilizes several methods to examine processes from a holistic perspective, transcending the narrow borders of specific functions. It undertakes fundamental reconsideration and radical redesign of organizational processes in order to achieve drastic improvement of current performance in terms of cost, service and speed. Business process management tries to encourage a radical change rather than an incremental change. An analytical approach has been applied for the current study. For this study, the case of Bank X, which is a leading public sector bank operating in the state, has been taken into consideration. A sample of 250 customers was selected randomly from Alwar, Dausa and Bharatpur districts. For policy framework, corporate headquarters were consulted. For the research a self-designed survey instrument, looking for information from the customers on several parameters like cost, quality, services and performance, was used. This article tries to take a critical account of existent business process management in Bank X and to study the relationship between business process management and organizational performance. The data has been tested by correlation analysis. The findings of the study show that business process management exists in the Bank X and there is a significant relationship between business process management and organizational performance. Keywords-Business Process Management; Business Process Reengineering; Organizational Performance",
"title": ""
},
{
"docid": "ad50525ba815295122d34f8008dea9ab",
"text": "Real-time scheduling algorithms like RMA or EDF and their corresponding schedulability test have proven to be powerful tools for developing predictable real-time systems. However, the traditional interrupt management model presents multiple inconsistencies that break the assumptions of many of the real-time scheduling tests, diminishing its utility. In this article, we analyze these inconsistencies and present a model that resolves them by integrating interrupts and tasks in a single scheduling model. We then use the RMA theory to calculate the cost of the model and analyze the circumstances under which it can provide the most value. This model was implemented in a kernel module. The portability of the design of our module is discussed in terms of its independence from both the hardware and the kernel. We also discuss the implementation issues of the model over conventional PC hardware, along with its cost and novel optimizations for reducing the overhead. Finally, we present our experimental evaluation to show evidence of its temporal determinism and overhead.",
"title": ""
},
{
"docid": "83ab7bbacc7b2a18faf580a2291b84ea",
"text": "When viewed from distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-Splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU fp7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing bathymetry of the smooth sea and ocean floor. A drastic reduction is demonstrated in the bulk of the data representation compared to the size of input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR Bsplines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection as new sensor data can efficiently be compared to the compact LR B-spline representation.",
"title": ""
},
{
"docid": "c0551510c63a42682abc4ea008f81683",
"text": "Deep generative models have recently yielded encouraging results in producing subjectively realistic samples of complex data. Far less attention has been paid to making these generative models interpretable. In many scenarios, ranging from scientific applications to finance, the observed variables have a natural grouping. It is often of interest to understand systems of interaction amongst these groups, and latent factor models (LFMs) are an attractive approach. However, traditional LFMs are limited by assuming a linear correlation structure. We present an output interpretable VAE (oi-VAE) for grouped data that models complex, nonlinear latent-to-observed relationships. We combine a structured VAE comprised of group-specific generators with a sparsity-inducing prior. We demonstrate that oi-VAE yields meaningful notions of interpretability in the analysis of motion capture and MEG data. We further show that in these situations, the regularization inherent to oi-VAE can actually lead to improved generalization and learned generative processes.",
"title": ""
}
] |
scidocsrr
|
bc3ee191f9ab7ec057fe941484f51c62
|
Convolutional Radio Modulation Recognition Networks
|
[
{
"docid": "a8d709ee5c0a9cd32b5e59c8d73394ca",
"text": "Spectrum awareness is currently one of the most challenging problems in cognitive radio (CR) design. Detection and classification of very low SNR signals with relaxed information on the signal parameters being detected is critical for proper CR functionality as it enables the CR to react and adapt to the changes in its radio environment. In this work, the cycle frequency domain profile (CDP) is used for signal detection and preprocessing for signal classification. Signal features are extracted from CDP using a threshold-test method. For classification, a Hidden Markov Model (HMM) has been used to process extracted signal features due to its robust pattern-matching capability. We also investigate the effects of varied observation length on signal detection and classification. It is found that the CDP-based detector and the HMM-based classifier can detect and classify incoming signals at a range of low SNRs.",
"title": ""
},
{
"docid": "b651dab78e39d59e3043cb091b7e4f1b",
"text": "Learning an acoustic model directly from the raw waveform has been an active area of research. However, waveformbased models have not yet matched the performance of logmel trained neural networks. We will show that raw waveform features match the performance of log-mel filterbank energies when used with a state-of-the-art CLDNN acoustic model trained on over 2,000 hours of speech. Specifically, we will show the benefit of the CLDNN, namely the time convolution layer in reducing temporal variations, the frequency convolution layer for preserving locality and reducing frequency variations, as well as the LSTM layers for temporal modeling. In addition, by stacking raw waveform features with log-mel features, we achieve a 3% relative reduction in word error rate.",
"title": ""
}
] |
[
{
"docid": "feb649029daef80f2ecf33221571a0b1",
"text": "The National Airspace System (NAS) is a large and complex system with thousands of interrelated components: administration, control centers, airports, airlines, aircraft, passengers, etc. The complexity of the NAS creates many difficulties in management and control. One of the most pressing problems is flight delay. Delay creates high cost to airlines, complaints from passengers, and difficulties for airport operations. As demand on the system increases, the delay problem becomes more and more prominent. For this reason, it is essential for the Federal Aviation Administration to understand the causes of delay and to find ways to reduce delay. Major contributing factors to delay are congestion at the origin airport, weather, increasing demand, and air traffic management (ATM) decisions such as the Ground Delay Programs (GDP). Delay is an inherently stochastic phenomenon. Even if all known causal factors could be accounted for, macro-level national airspace system (NAS) delays could not be predicted with certainty from micro-level aircraft information. This paper presents a stochastic model that uses Bayesian Networks (BNs) to model the relationships among different components of aircraft delay and the causal factors that affect delays. A case study on delays of departure flights from Chicago O’Hare international airport (ORD) to Hartsfield-Jackson Atlanta International Airport (ATL) reveals how local and system level environmental and human-caused factors combine to affect components of delay, and how these components contribute to the final arrival delay at the destination airport.",
"title": ""
},
{
"docid": "6cfee185a7438811aafd16a03fb75852",
"text": "The Internet-of-Things (IoT) envisions a world where billions of everyday objects and mobile devices communicate using a large number of interconnected wired and wireless networks. Maximizing the utilization of this paradigm requires fine-grained QoS support for differentiated application requirements, context-aware semantic information retrieval, and quick and easy deployment of resources, among many other objectives. These objectives can only be achieved if components of the IoT can be dynamically managed end-to-end across heterogeneous objects, transmission technologies, and networking architectures. Software-defined Networking (SDN) is a new paradigm that provides powerful tools for addressing some of these challenges. Using a software-based control plane, SDNs introduce significant flexibility for resource management and adaptation of network functions. In this article, we study some promising solutions for the IoT based on SDN architectures. Particularly, we analyze the application of SDN in managing resources of different types of networks such as Wireless Sensor Networks (WSN) and mobile networks, the utilization of SDN for information-centric networking, and how SDN can leverage Sensing-as-a-Service (SaaS) as a key cloud application in the IoT.",
"title": ""
},
{
"docid": "ca7e4eafed84f5dbe5f996ac7c795c91",
"text": "This paper examines the effects of review arousal on perceived helpfulness of online reviews, and on consumers’ emotional responses elicited by the reviews. Drawing on emotion theories in psychology and neuroscience, we focus on four emotions – anger, anxiety, excitement, and enjoyment that are common in the context of online reviews. The effects of the four emotions embedded in online reviews were examined using a controlled experiment. Our preliminary results show that reviews embedded with the four emotions (arousing reviews) are perceived to be more helpful than reviews without the emotions embedded (non-arousing reviews). However, reviews embedded with anxiety and enjoyment (low-arousal reviews) are perceived to be more helpfulness that reviews embedded with anger and excitement (high-arousal reviews). Furthermore, compared to reviews embedded with anger, reviews embedded with anxiety are associated with a higher EEG activity that is generally linked to negative emotions. The results suggest a non-linear relationship between review arousal and perceived helpfulness, which can be explained by the consumers’ emotional responses elicited by the reviews.",
"title": ""
},
{
"docid": "3a920687e57591c1abfaf10b691132a7",
"text": "BP3TKI Palembang is the government agencies that coordinate, execute and selection of prospective migrants registration and placement. To simplify the existing procedures and improve decision-making is necessary to build a decision support system (DSS) to determine eligibility for employment abroad by applying Fuzzy Multiple Attribute Decision Making (FMADM), using the linear sequential systems development methods. The system is built using Microsoft Visual Basic. Net 2010 and SQL Server 2008 database. The design of the system using use case diagrams and class diagrams to identify the needs of users and systems as well as systems implementation guidelines. Decision Support System which is capable of ranking the dihasialkan to prospective migrants, making it easier for parties to take keputusna BP3TKI the workers who will be flown out of the country.",
"title": ""
},
{
"docid": "323d633995296611c903874aefa5cdb7",
"text": "This paper investigates the possibility of communicating through vibrations. By modulating the vibration motors available in all mobile phones, and decoding them through accelerometers, we aim to communicate small packets of information. Of course, this will not match the bit rates available through RF modalities, such as NFC or Bluetooth, which utilize a much larger bandwidth. However, where security is vital, vibratory communication may offer advantages. We develop Ripple, a system that achieves up to 200 bits/s of secure transmission using off-the-shelf vibration motor chips, and 80 bits/s on Android smartphones. This is an outcome of designing and integrating a range of techniques, including multicarrier modulation, orthogonal vibration division, vibration braking, side-channel jamming, etc. Not all these techniques are novel; some are borrowed and suitably modified for our purposes, while others are unique to this relatively new platform of vibratory communication.",
"title": ""
},
{
"docid": "98788b45932c8564d29615f49407d179",
"text": "BACKGROUND\nAbnormal forms of grief, currently referred to as complicated grief or prolonged grief disorder, have been discussed extensively in recent years. While the diagnostic criteria are still debated, there is no doubt that prolonged grief is disabling and may require treatment. To date, few interventions have demonstrated efficacy.\n\n\nMETHODS\nWe investigated whether outpatients suffering from prolonged grief disorder (PGD) benefit from a newly developed integrative cognitive behavioural therapy for prolonged grief (PG-CBT). A total of 51 patients were randomized into two groups, stratified by the type of death and their relationship to the deceased; 24 patients composed the treatment group and 27 patients composed the wait list control group (WG). Treatment consisted of 20-25 sessions. Main outcome was change in grief severity; secondary outcomes were reductions in general psychological distress and in comorbidity.\n\n\nRESULTS\nPatients on average had 2.5 comorbid diagnoses in addition to PGD. Between group effect sizes were large for the improvement of grief symptoms in treatment completers (Cohen׳s d=1.61) and in the intent-to-treat analysis (d=1.32). Comorbid depressive symptoms also improved in PG-CBT compared to WG. The completion rate was 79% in PG-CBT and 89% in WG.\n\n\nLIMITATIONS\nThe major limitations of this study were a small sample size and that PG-CBT took longer than the waiting time.\n\n\nCONCLUSIONS\nPG-CBT was found to be effective with an acceptable dropout rate. Given the number of bereaved people who suffer from PGD, the results are of high clinical relevance.",
"title": ""
},
{
"docid": "6fdd0c7d239417234cfc4706a82b5a0f",
"text": "We propose a method of generating teaching policies for use in intelligent tutoring systems (ITS) for concept learning tasks <xref ref-type=\"bibr\" rid=\"ref1\">[1]</xref> , e.g., teaching students the meanings of words by showing images that exemplify their meanings à la Rosetta Stone <xref ref-type=\"bibr\" rid=\"ref2\">[2]</xref> and Duo Lingo <xref ref-type=\"bibr\" rid=\"ref3\">[3]</xref> . The approach is grounded in control theory and capitalizes on recent work by <xref ref-type=\"bibr\" rid=\"ref4\">[4] </xref> , <xref ref-type=\"bibr\" rid=\"ref5\">[5]</xref> that frames the “teaching” problem as that of finding approximately optimal teaching policies for approximately optimal learners (AOTAOL). Our work expands on <xref ref-type=\"bibr\" rid=\"ref4\">[4]</xref> , <xref ref-type=\"bibr\" rid=\"ref5\">[5]</xref> in several ways: (1) We develop a novel student model in which the teacher's actions can <italic>partially </italic> eliminate hypotheses about the curriculum. (2) With our student model, inference can be conducted <italic> analytically</italic> rather than numerically, thus allowing computationally efficient planning to optimize learning. (3) We develop a reinforcement learning-based hierarchical control technique that allows the teaching policy to search through <italic>deeper</italic> learning trajectories. We demonstrate our approach in a novel ITS for foreign language learning similar to Rosetta Stone and show that the automatically generated AOTAOL teaching policy performs favorably compared to two hand-crafted teaching policies.",
"title": ""
},
{
"docid": "152e8d51669b095dab15fa509d9ce9f8",
"text": "Virtualization technology plays a vital role in cloud computing. In particular, benefits of virtualization are widely employed in high performance computing (HPC) applications. Recently, virtual machines (VMs) and Docker containers known as two virtualization platforms need to be explored for developing applications efficiently. We target a model for deploying distributed applications on Docker containers, among using well-known benchmarks to evaluate performance between VMs and containers. Based on their architecture, we propose benchmark scenarios to analyze the computing performance and the ability of data access on HPC system. Remarkably, Docker container has more advantages than virtual machine in terms of data intensive application and computing ability, especially the overhead of Docker is trivial. However, Docker architecture has some drawbacks in resource management. Our experiment and evaluation show how to deploy efficiently high performance computing applications on Docker containers and VMs.",
"title": ""
},
{
"docid": "abde419c67119fa9d16f365262d39b34",
"text": "Silicon nitride is the most commonly used passivation layer in biosensor applications where electronic components must be interfaced with ionic solutions. Unfortunately, the predominant method for functionalizing silicon nitride surfaces, silane chemistry, suffers from a lack of reproducibility. As an alternative, we have developed a silane-free pathway that allows for the direct functionalization of silicon nitride through the creation of primary amines formed by exposure to a radio frequency glow discharge plasma fed with humidified air. The aminated surfaces can then be further functionalized by a variety of methods; here we demonstrate using glutaraldehyde as a bifunctional linker to attach a robust NeutrAvidin (NA) protein layer. Optimal amine formation, based on plasma exposure time, was determined by labeling treated surfaces with an amine-specific fluorinated probe and characterizing the coverage using X-ray photoelectron spectroscopy (XPS). XPS and radiolabeling studies also reveal that plasma-modified surfaces, as compared with silane-modified surfaces, result in similar NA surface coverage, but notably better reproducibility.",
"title": ""
},
{
"docid": "cd16afd19a0ac72cd3453a7b59aad42b",
"text": "BACKGROUND\nIncreased flexibility is often desirable immediately prior to sports performance. Static stretching (SS) has historically been the main method for increasing joint range-of-motion (ROM) acutely. However, SS is associated with acute reductions in performance. Foam rolling (FR) is a form of self-myofascial release (SMR) that also increases joint ROM acutely but does not seem to reduce force production. However, FR has never previously been studied in resistance-trained athletes, in adolescents, or in individuals accustomed to SMR.\n\n\nOBJECTIVE\nTo compare the effects of SS and FR and a combination of both (FR+SS) of the plantarflexors on passive ankle dorsiflexion ROM in resistance-trained, adolescent athletes with at least six months of FR experience.\n\n\nMETHODS\nEleven resistance-trained, adolescent athletes with at least six months of both resistance-training and FR experience were tested on three separate occasions in a randomized cross-over design. The subjects were assessed for passive ankle dorsiflexion ROM after a period of passive rest pre-intervention, immediately post-intervention and after 10, 15, and 20 minutes of passive rest. Following the pre-intervention test, the subjects randomly performed either SS, FR or FR+SS. SS and FR each comprised 3 sets of 30 seconds of the intervention with 10 seconds of inter-set rest. FR+SS comprised the protocol from the FR condition followed by the protocol from the SS condition in sequence.\n\n\nRESULTS\nA significant effect of time was found for SS, FR and FR+SS. Post hoc testing revealed increases in ROM between baseline and post-intervention by 6.2% for SS (p < 0.05) and 9.1% for FR+SS (p < 0.05) but not for FR alone. Post hoc testing did not reveal any other significant differences between baseline and any other time point for any condition. A significant effect of condition was observed immediately post-intervention. Post hoc testing revealed that FR+SS was superior to FR (p < 0.05) for increasing ROM.\n\n\nCONCLUSIONS\nFR, SS and FR+SS all lead to acute increases in flexibility and FR+SS appears to have an additive effect in comparison with FR alone. All three interventions (FR, SS and FR+SS) have time courses that lasted less than 10 minutes.\n\n\nLEVEL OF EVIDENCE\n2c.",
"title": ""
},
{
"docid": "ab2159730f00662ba29e25a0e27d1799",
"text": "This paper proposes a novel and efficient re-ranking technque to solve the person re-identification problem in the surveillance application. Previous methods treat person re-identification as a special object retrieval problem, and compute the retrieval result purely based on a unidirectional matching between the probe and all gallery images. However, the correct matching may be not included in the top-k ranking result due to appearance changes caused by variations in illumination, pose, viewpoint and occlusion. To obtain more accurate re-identification results, we propose to reversely query every gallery person image in a new gallery composed of the original probe person image and other gallery person images, and revise the initial query result according to bidirectional ranking lists. The behind philosophy of our method is that images of the same person should not only have similar visual content, refer to content similarity, but also possess similar k-nearest neighbors, refer to context similarity. Furthermore, the proposed bidirectional re-ranking method can be divided into offline and online parts, where the majority of computation load is accomplished by the offline part and the online computation complexity is only proportional to the size of the gallery data set, which is especially suited to the real-time required video investigation task. Extensive experiments conducted on a series of standard data sets have validated the effectiveness and efficiency of our proposed method.",
"title": ""
},
{
"docid": "9e95ce11f502478c11df990d3465360f",
"text": "This paper presents a ultra-wideband (UWB) micro-strip structure high-pass filter with multi-stubs. The proposed filter was designed using a combination of 4 short-circuited stubs and an open-circuited stub in the form of micro-strip lines. The short-circuited stubs are to realize a high-pass filter with a bad band rejection. In order to achieve a steep cutoff, a transmission zero can be added thus an open-circuited stub is used. The passband is 5-19 GHz. The insertion loss is greater than -2dB and the return loss is less than -10dB, while the suppression of the modified filter is better than 30 dB below 4.2GHz.",
"title": ""
},
{
"docid": "faf6eec39b17b21a265f97969616dbba",
"text": "Food is an indispensible part of our lives. In today’s globalized market, food from different geographical regions show remarkable variations in the choice of ingredients and the ways to prepare them. Some cuisines have ingredient choices unimaginable to customers habituated to other cuisines, but still present surprisingly tasty dishes. Besides unique ingredients, even the same ingredients, depending on the preparation process, may end up preserving different fractions of the nutritional value and very different calories. If restaurant networks provide a mobile app that can aim at a photograph of a plate of food and report its cuisine, the composing ingredients, and list other similar dishes, it could be a good marketing strategy. In addition, restaurants of a specific cuisine can also refer to other cuisines’ use of ingredients to sparkle ideas for more creative dishes that utilize similar ingredients in new ways. One core component of such an application is to recognize the cuisine and ingredients on a plate of food. Then the information may later be used to compare cuisines and find a network of common ingredients in the dishes.",
"title": ""
},
{
"docid": "369e5fb60d3afc993821159b64bc3560",
"text": "For five years, we collected annual snapshots of file-system metadata from over 60,000 Windows PC file systems in a large corporation. In this article, we use these snapshots to study temporal changes in file size, file age, file-type frequency, directory size, namespace structure, file-system population, storage capacity and consumption, and degree of file modification. We present a generative model that explains the namespace structure and the distribution of directory sizes. We find significant temporal trends relating to the popularity of certain file types, the origin of file content, the way the namespace is used, and the degree of variation among file systems, as well as more pedestrian changes in size and capacities. We give examples of consequent lessons for designers of file systems and related software.",
"title": ""
},
{
"docid": "febf797870da28d6492885095b92ef1f",
"text": "Most methods for learning object categories require large amounts of labeled training data. However, obtaining such data can be a difficult and time-consuming endeavor. We have developed a novel, entropy-based ldquoactive learningrdquo approach which makes significant progress towards this problem. The main idea is to sequentially acquire labeled data by presenting an oracle (the user) with unlabeled images that will be particularly informative when labeled. Active learning adaptively prioritizes the order in which the training examples are acquired, which, as shown by our experiments, can significantly reduce the overall number of training examples required to reach near-optimal performance. At first glance this may seem counter-intuitive: how can the algorithm know whether a group of unlabeled images will be informative, when, by definition, there is no label directly associated with any of the images? Our approach is based on choosing an image to label that maximizes the expected amount of information we gain about the set of unlabeled images. The technique is demonstrated in several contexts, including improving the efficiency of Web image-search queries and open-world visual learning by an autonomous agent. Experiments on a large set of 140 visual object categories taken directly from text-based Web image searches show that our technique can provide large improvements (up to 10 x reduction in the number of training examples needed) over baseline techniques.",
"title": ""
},
{
"docid": "fff9e38c618a6a644e3795bdefd74801",
"text": "Several code smell detection tools have been developed providing different results, because smells can be subjectively interpreted, and hence detected, in different ways. In this paper, we perform the largest experiment of applying machine learning algorithms to code smells to the best of our knowledge. We experiment 16 different machine-learning algorithms on four code smells (Data Class, Large Class, Feature Envy, Long Method) and 74 software systems, with 1986 manually validated code smell samples. We found that all algorithms achieved high performances in the cross-validation data set, yet the highest performances were obtained by J48 and Random Forest, while the worst performance were achieved by support vector machines. However, the lower prevalence of code smells, i.e., imbalanced data, in the entire data set caused varying performances that need to be addressed in the future studies. We conclude that the application of machine learning to the detection of these code smells can provide high accuracy (>96 %), and only a hundred training examples are needed to reach at least 95 % accuracy.",
"title": ""
},
{
"docid": "bb8ca605a714d71be903d46bf6e1fa40",
"text": "Several methods have been proposed for automatic and objective monitoring of food intake, but their performance suffers in the presence of speech and motion artifacts. This paper presents a novel sensor system and algorithms for detection and characterization of chewing bouts from a piezoelectric strain sensor placed on the temporalis muscle. The proposed data acquisition device was incorporated into the temple of eyeglasses. The system was tested by ten participants in two part experiments, one under controlled laboratory conditions and the other in unrestricted free-living. The proposed food intake recognition method first performed an energy-based segmentation to isolate candidate chewing segments (instead of using epochs of fixed duration commonly reported in research literature), with the subsequent classification of the segments by linear support vector machine models. On participant level (combining data from both laboratory and free-living experiments), with ten-fold leave-one-out cross-validation, chewing were recognized with average F-score of 96.28% and the resultant area under the curve was 0.97, which are higher than any of the previously reported results. A multivariate regression model was used to estimate chew counts from segments classified as chewing with an average mean absolute error of 3.83% on participant level. These results suggest that the proposed system is able to identify chewing segments in the presence of speech and motion artifacts, as well as automatically and accurately quantify chewing behavior, both under controlled laboratory conditions and unrestricted free-living.",
"title": ""
},
{
"docid": "c3318c1f2750c26fcc518638a6cb52ee",
"text": "The tremendous success of machine learning algorithms at image recognition tasks in recent years intersects with a time of dramatically increased use of electronic medical records and diagnostic imaging. This review introduces the machine learning algorithms as applied to medical image analysis, focusing on convolutional neural networks, and emphasizing clinical aspects of the field. The advantage of machine learning in an era of medical big data is that significant hierarchal relationships within the data can be discovered algorithmically without laborious hand-crafting of features. We cover key research areas and applications of medical image classification, localization, detection, segmentation, and registration. We conclude by discussing research obstacles, emerging trends, and possible future directions.",
"title": ""
},
{
"docid": "762376fb3a4c0b7fe596b76cc5b2dde2",
"text": "We describe our system (DT Team) submitted at SemEval-2017 Task 1, Semantic Textual Similarity (STS) challenge for English (Track 5). We developed three different models with various features including similarity scores calculated using word and chunk alignments, word/sentence embeddings, and Gaussian Mixture Model (GMM). The correlation between our system’s output and the human judgments were up to 0.8536, which is more than 10% above baseline, and almost as good as the best performing system which was at 0.8547 correlation (the difference is just about 0.1%). Also, our system produced leading results when evaluated with a separate STS benchmark dataset. The word alignment and sentence embeddings based features were found to be very effective.",
"title": ""
},
{
"docid": "2f0d6b9bee323a75eea3d15a3cabaeb6",
"text": "OBJECTIVE\nThis article reviews the mechanisms and pathophysiology of traumatic brain injury (TBI).\n\n\nMETHODS\nResearch on the pathophysiology of diffuse and focal TBI is reviewed with an emphasis on damage that occurs at the cellular level. The mechanisms of injury are discussed in detail including the factors and time course associated with mild to severe diffuse injury as well as the pathophysiology of focal injuries. Examples of electrophysiologic procedures consistent with recent theory and research evidence are presented.\n\n\nRESULTS\nAcceleration/deceleration (A/D) forces rarely cause shearing of nervous tissue, but instead, initiate a pathophysiologic process with a well defined temporal progression. The injury foci are considered to be diffuse trauma to white matter with damage occurring at the superficial layers of the brain, and extending inward as A/D forces increase. Focal injuries result in primary injuries to neurons and the surrounding cerebrovasculature, with secondary damage occurring due to ischemia and a cytotoxic cascade. A subset of electrophysiologic procedures consistent with current TBI research is briefly reviewed.\n\n\nCONCLUSIONS\nThe pathophysiology of TBI occurs over time, in a pattern consistent with the physics of injury. The development of electrophysiologic procedures designed to detect specific patterns of change related to TBI may be of most use to the neurophysiologist.\n\n\nSIGNIFICANCE\nThis article provides an up-to-date review of the mechanisms and pathophysiology of TBI and attempts to address misconceptions in the existing literature.",
"title": ""
}
] |
scidocsrr
|
53d9eca423f5e11fe018f66d04969e5b
|
An Optimization Framework for Online Ride-Sharing Markets
|
[
{
"docid": "15a079037d3dbb1b08591c0a3c8e0804",
"text": "The paper offers an introduction and a road map to the burgeoning literature on two-sided markets. In many industries, platforms court two (or more) sides that use the platform to interact with each other. The platforms’ usage or variable charges impact the two sides’ willingness to trade, and thereby their net surpluses from potential interactions; the platforms’ membership or fixed charges in turn determine the end-users’ presence on the platform. The platforms’ fine design of the structure of variable and fixed charges is relevant only if the two sides do not negotiate away the corresponding usage and membership externalities. The paper first focuses on usage charges and provides conditions for the allocation of the total usage charge (e.g., the price of a call or of a payment card transaction) between the two sides not to be neutral; the failure of the Coase theorem is necessary but not sufficient for two-sidedness. Second, the paper builds a canonical model integrating usage and membership externalities. This model allows us to unify and compare the results obtained in the two hitherto disparate strands of the literature emphasizing either form of externality; and to place existing membership (or indirect) externalities models on a stronger footing by identifying environments in which these models can accommodate usage pricing. We also obtain general results on usage pricing of independent interest. Finally, the paper reviews some key economic insights on platform price and non-price strategies.",
"title": ""
},
{
"docid": "4253afeaeb2f238339611e5737ed3e06",
"text": "Over the past decade there has been a growing public fascination with the complex connectedness of modern society. This connectedness is found in many incarnations: in the rapid growth of the Internet, in the ease with which global communication takes place, and in the ability of news and information as well as epidemics and financial crises to spread with surprising speed and intensity. These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which our decisions can have subtle consequences for others. This introductory undergraduate textbook takes an interdisciplinary look at economics, sociology, computing and information science, and applied mathematics to understand networks and behavior. It describes the emerging field of study that is growing at the interface of these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected.",
"title": ""
},
{
"docid": "8700e170ba9c3e6c35008e2ccff48ef9",
"text": "Recently, Uber has emerged as a leader in the \"sharing economy\". Uber is a \"ride sharing\" service that matches willing drivers with customers looking for rides. However, unlike other open marketplaces (e.g., AirBnB), Uber is a black-box: they do not provide data about supply or demand, and prices are set dynamically by an opaque \"surge pricing\" algorithm. The lack of transparency has led to concerns about whether Uber artificially manipulate prices, and whether dynamic prices are fair to customers and drivers. In order to understand the impact of surge pricing on passengers and drivers, we present the first in-depth investigation of Uber. We gathered four weeks of data from Uber by emulating 43 copies of the Uber smartphone app and distributing them throughout downtown San Francisco (SF) and midtown Manhattan. Using our dataset, we are able to characterize the dynamics of Uber in SF and Manhattan, as well as identify key implementation details of Uber's surge price algorithm. Our observations about Uber's surge price algorithm raise important questions about the fairness and transparency of this system.",
"title": ""
}
] |
[
{
"docid": "9a9fd442bc7353d9cd202e9ace6e6580",
"text": "The idea of developmental dyspraxia has been discussed in the research literature for almost 100 years. However, there continues to be a lack of consensus regarding both the definition and description of this disorder. This paper presents a neuropsychologically based operational definition of developmental dyspraxia that emphasizes that developmental dyspraxia is a disorder of gesture. Research that has investigated the development of praxis is discussed. Further, different types of gestural disorders displayed by children and different mechanisms that underlie developmental dyspraxia are compared to and contrasted with adult acquired apraxia. The impact of perceptual-motor, language, and cognitive impairments on children's gestural development and the possible associations between these developmental disorders and developmental dyspraxia are also examined. Also, the relationship among limb, orofacial, and verbal dyspraxia is discussed. Finally, problems that exist in the neuropsychological assessment of developmental dyspraxia are discussed and recommendations concerning what should be included in such an assessment are presented.",
"title": ""
},
{
"docid": "a162d5e622bb7fa8f281e7c9b5943346",
"text": "The Legionellae are Gram-negative bacteria able to survive and replicate in a wide range of protozoan hosts in natural environments, but they also occur in man-made aquatic systems, which are the major source of infection. After transmission to humans via aerosols, Legionella spp. can cause pneumonia (Legionnaires’ disease) or influenza-like respiratory infections (Pontiac fever). In children, Legionnaires’ disease is uncommon and is mainly diagnosed in children with immunosuppression. The clinical picture of Legionella pneumonia does not allow differentiation from pneumonia caused by others pathogens. The key to diagnosis is performing appropriate microbiological testing. The clinical presentation and the natural course of Legionnaires’ disease in children are not clear due to an insufficient number of samples, but morbidity and mortality caused by this infection are extremely high. The mortality rate for legionellosis depends on the promptness of an appropriate antibiotic therapy. Fluoroquinolones are the most efficacious drugs against Legionella. A combination of these drugs with macrolides seems to be promising in the treatment of immunosuppressed patients and individuals with severe legionellosis. Although all Legionella species are considered potentially pathogenic for humans, Legionella pneumophila is the etiological agent responsible for most reported cases of community-acquired and nosocomial legionellosis.",
"title": ""
},
{
"docid": "2bdfeabf15a4ca096c2fe5ffa95f3b17",
"text": "This paper studies how to incorporate the external word correlation knowledge to improve the coherence of topic modeling. Existing topic models assume words are generated independently and lack the mechanism to utilize the rich similarity relationships among words to learn coherent topics. To solve this problem, we build a Markov Random Field (MRF) regularized Latent Dirichlet Allocation (LDA) model, which defines a MRF on the latent topic layer of LDA to encourage words labeled as similar to share the same topic label. Under our model, the topic assignment of each word is not independent, but rather affected by the topic labels of its correlated words. Similar words have better chance to be put into the same topic due to the regularization of MRF, hence the coherence of topics can be boosted. In addition, our model can accommodate the subtlety that whether two words are similar depends on which topic they appear in, which allows word with multiple senses to be put into different topics properly. We derive a variational inference method to infer the posterior probabilities and learn model parameters and present techniques to deal with the hardto-compute partition function in MRF. Experiments on two datasets demonstrate the effectiveness of our model.",
"title": ""
},
{
"docid": "4ff1f12dc669a6dea895692b4e98857f",
"text": "Reliable prediction of sales can improve the quality of business strategy. In this research, fuzzy logic and artificial neural network are integrated into the fuzzy back-propagation network (FBPN) for sales forecasting in Printed Circuit Board (PCB) industry. The fuzzy back propagation network is constructed to incorporate production-control expert judgments in enhancing the model’s performance. Parameters chosen as inputs to the FBPN are no longer considered as of equal importance, but some sales managers and production control experts are requested to express their opinions about the importance of each input parameter in predicting the sales with linguistic terms, which can be converted into pre-specified fuzzy numbers. The proposed system is evaluated through the real world data provided by a printed circuit board company and experimental results indicate that the Fuzzy back-propagation approach outperforms other three different forecasting models in MAPE measures. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c711224d4fa687b4c81fc276b66df857",
"text": "We introduce DeepGRU, a deep learning-based gesture and action recognizer. Our method is intuitive and easy to implement, yet versatile and powerful for various application scenarios. Using only raw pose and vector data, DeepGRU can achieve high recognition accuracy regardless of the dataset size, the number of training samples or the choice of the input device. At the heart of our method lies a set of stacked gated recurrent units (GRU), two fully connected layers and a global attention model. We demonstrate that even in the absence of powerful hardware, and using only the CPU, our method can still be trained in a short period of time, making it suitable for rapid prototyping and development scenarios. We evaluate our proposed method on 7 publicly available datasets, spanning over a broad range of interactions as well as dataset sizes. In many cases we outperform the state-of-the-art pose-based methods. For instance, we achieve a recognition accuracy of 84.9% and 92.3% on cross-subject and cross-view tests of the NTU RGB+D dataset respectively, and also 100% recognition accuracy on the UT-Kinect dataset.",
"title": ""
},
{
"docid": "28439c317c1b7f94527db6c2e0edcbd0",
"text": "AnswerBus1 is an open-domain question answering system based on sentence level Web information retrieval. It accepts users’ natural-language questions in English, German, French, Spanish, Italian and Portuguese and provides answers in English. Five search engines and directories are used to retrieve Web pages that are relevant to user questions. From the Web pages, AnswerBus extracts sentences that are determined to contain answers. Its current rate of correct answers to TREC-8’s 200 questions is 70.5% with the average response time to the questions being seven seconds. The performance of AnswerBus in terms of accuracy and response time is better than other similar systems.",
"title": ""
},
{
"docid": "dbdc0a429784aa085c571b7c01e3399f",
"text": "A large number of deaths are caused by Traffic accidents worldwide. The global crisis of road safety can be seen by observing the significant number of deaths and injuries that are caused by road traffic accidents. In many situations the family members or emergency services are not informed in time. This results in delayed emergency service response time, which can lead to an individual’s death or cause severe injury. The purpose of this work is to reduce the response time of emergency services in situations like traffic accidents or other emergencies such as fire, theft/robberies and medical emergencies. By utilizing onboard sensors of a smartphone to detect vehicular accidents and report it to the nearest emergency responder available and provide real time location tracking for responders and emergency victims, will drastically increase the chances of survival for emergency victims, and also help save emergency services time and resources. Keywords—Traffic accidents; accident detection; on-board sensor; accelerometer; android smartphones; real-time tracking; emergency services; emergency responder; emergency victim; SOSafe; SOSafe Go; firebase",
"title": ""
},
{
"docid": "d612aeb7f7572345bab8609571f4030d",
"text": "In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function.1 When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.",
"title": ""
},
{
"docid": "eabd54407a2f1de0126795e98cdcb194",
"text": "This paper reports our submissions to the four subtasks of Aspect Based Sentiment Analysis (ABSA) task (i.e., task 4) in SemEval 2014 including aspect term extraction and aspect sentiment polarity classification (Aspect-level tasks), aspect category detection and aspect category sentiment polarity classification (Categorylevel tasks). For aspect term extraction, we present three methods, i.e., noun phrase (NP) extraction, Named Entity Recognition (NER) and a combination of NP and NER method. For aspect sentiment classification, we extracted several features, i.e., topic features, sentiment lexicon features, and adopted a Maximum Entropy classifier. Our submissions rank above average.",
"title": ""
},
{
"docid": "e04bc357c145c38ed555b3c1fa85c7da",
"text": "This paper presents Hybrid (RSA & AES) encryption algorithm to safeguard data security in Cloud. Security being the most important factor in cloud computing has to be dealt with great precautions. This paper mainly focuses on the following key tasks: 1. Secure Upload of data on cloud such that even the administrator is unaware of the contents. 2. Secure Download of data in such a way that the integrity of data is maintained. 3. Proper usage and sharing of the public, private and secret keys involved for encryption and decryption. The use of a single key for both encryption and decryption is very prone to malicious attacks. But in hybrid algorithm, this problem is solved by the use of three separate keys each for encryption as well as decryption. Out of the three keys one is the public key, which is made available to all, the second one is the private key which lies only with the user. In this way, both the secure upload as well as secure download of the data is facilitated using the two respective keys. Also, the key generation technique used in this paper is unique in its own way. This has helped in avoiding any chances of repeated or redundant key.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "db1b3a472b9d002cf8b901f96d20196b",
"text": "Recent studies in NER use the supervised machine learning. This study used CRF as a learning algorithm, and applied word embedding to feature for NER training. Word embedding is helpful in many learning algorithms of NLP, indicating that words in a sentence are mapped by a real vector in a lowdimension space. As a result of comparing the performance of multiple techniques for word embedding to NER, it was found that CCA (85.96%) in Test A and Word2Vec (80.72%) in Test B exhibited the best performance.",
"title": ""
},
{
"docid": "0a55717b9efe122c8559f34ac858c282",
"text": "Semantic role labeling (SRL) is dedicated to recognizing the predicate-argument structure of a sentence. Previous studies have shown syntactic information has a remarkable contribution to SRL performance. However, such perception was challenged by a few recent neural SRL models which give impressive performance without a syntactic backbone. This paper intends to quantify the importance of syntactic information to dependency SRL in deep learning framework. We propose an enhanced argument labeling model companying with an extended korder argument pruning algorithm for effectively exploiting syntactic information. Our model achieves state-of-the-art results on the CoNLL-2008, 2009 benchmarks for both English and Chinese, showing the quantitative significance of syntax to neural SRL together with a thorough empirical survey over existing models.",
"title": ""
},
{
"docid": "a08de35fa7c245830a2175be09f88a49",
"text": "In this chapter, we introduce the readers to the field of big educational data and how big educational data can be analysed to provide insights into different stakeholders and thereby foster data driven actions concerning quality improvement in education. For the analysis and exploitation of big educational data, we present different techniques and popular applied scientific methods for data analysis and manipulation such as analyt‐ ics and different analytical approaches such as learning, academic and visual analytics, providing examples of how these techniques and methods could be used. The concept of quality improvement in education is presented in relation to two factors: (a) to improve‐ ment science and its impact on different processes in education such as the learning, educational and academic processes and (b) as a result of the practical application and realization of the presented analytical concepts. The context of health professions education is used to exemplify the different concepts.",
"title": ""
},
{
"docid": "863fbc4e33b1af53dd89e237d4c00ccd",
"text": "BACKGROUND\nRhesus macaques are widely used in biomedical research. Automated behavior monitoring can be useful in various fields (including neuroscience), as well as having applications to animal welfare but current technology lags behind that developed for other species. One difficulty facing developers is the reliable identification of individual macaques within a group especially as pair- and group-housing of macaques becomes standard. Current published methods require either implantation or wearing of a tracking device.\n\n\nNEW METHOD\nI present face recognition, in combination with face detection, as a method to non-invasively identify individual rhesus macaques in videos. The face recognition method utilizes local-binary patterns in combination with a local discriminant classification algorithm.\n\n\nRESULTS\nA classification accuracy of between 90 and 96% was achieved for four different groups. Group size, number of training images and challenging image conditions such as high contrast all had an impact on classification accuracy. I demonstrate that these methods can be applied in real time using standard affordable hardware and a potential application to studies of social structure.\n\n\nCOMPARISON WITH EXISTING METHOD(S)\nFace recognition methods have been reported for humans and other primate species such as chimpanzees but not rhesus macaques. The classification accuracy with this method is comparable to that for chimpanzees. Face recognition has the advantage over other methods for identifying rhesus macaques such as tags and collars of being non-invasive.\n\n\nCONCLUSIONS\nThis is the first reported method for face recognition of rhesus macaques, has high classification accuracy and can be implemented in real time.",
"title": ""
},
{
"docid": "83696ddab94d293e0d28172c200709e0",
"text": "Traffic sign detection plays an important role in driving assistance systems and traffic safety. But the existing detection methods are usually limited to a predefined set of traffic signs. Therefore we propose a traffic sign detection algorithm based on deep Convolutional Neural Network (CNN) using Region Proposal Network(RPN) to detect all Chinese traffic sign. Firstly, a Chinese traffic sign dataset is obtained by collecting seven main categories of traffic signs and their subclasses. Then a traffic sign detection CNN model is trained and evaluated by fine-tuning technology using the collected dataset. Finally, the model is tested by 33 video sequences with the size of 640×480. The result shows that the proposed method has towards real-time detection speed and above 99% detection precision. The trained model can be used to capture the traffic sign from videos by on-board camera or driving recorder and construct a complete traffic sign dataset.",
"title": ""
},
{
"docid": "86cb943d46574ee94a4e1ceaf36a9759",
"text": "Yearly there's an influx of over three million Muslims to Makkah., Saudi Arabia to perform Hajj. As this large group of pilgrims move between the different religious sites safety and security becomes an issue of main concern. This research looks into the integration of different mobile technologies to serve the purpose of crowd management., people tracking and location based services. It explores the solution to track the movement of pilgrims via RFID technology. A location aware mobile solution will also be integrated into this. This will be made available to pilgrims with smartphones to enhance the accuracy and tracking time of the pilgrims and provide them with location based services for Hajj.",
"title": ""
},
{
"docid": "78b8da26d1ca148b8c261c6cfdc9b2b6",
"text": "Collaborative filtering (CF) aims to build a model from users' past behaviors and/or similar decisions made by other users, and use the model to recommend items for users. Despite of the success of previous collaborative filtering approaches, they are all based on the assumption that there are sufficient rating scores available for building high-quality recommendation models. In real world applications, however, it is often difficult to collect sufficient rating scores, especially when new items are introduced into the system, which makes the recommendation task challenging. We find that there are often \" short \" texts describing features of items, based on which we can approximate the similarity of items and make recommendation together with rating scores. In this paper we \" borrow \" the idea of vector representation of words to capture the information of short texts and embed it into a matrix factorization framework. We empirically show that our approach is effective by comparing it with state-of-the-art approaches.",
"title": ""
},
{
"docid": "0a842427c2c03d08f9950765ee0fb625",
"text": "For centuries, several hundred pesticides have been used to control insects. These pesticides differ greatly in their mode of action, uptake by the body, metabolism, elimination from the body, and toxicity to humans. Potential exposure from the environment can be estimated by environmental monitoring. Actual exposure (uptake) is measured by the biological monitoring of human tissues and body fluids. Biomarkers are used to detect the effects of pesticides before adverse clinical health effects occur. Pesticides and their metabolites are measured in biological samples, serum, fat, urine, blood, or breast milk by the usual analytical techniques. Biochemical responses to environmental chemicals provide a measure of toxic effect. A widely used biochemical biomarker, cholinesterase depression, measures exposure to organophosphorus insecticides. Techniques that measure DNA damage (e.g., detection of DNA adducts) provide a powerful tool in measuring environmental effects. Adducts to hemoglobin have been detected with several pesticides. Determination of chromosomal aberration rates in cultured lymphocytes is an established method of monitoring populations occupationally or environmentally exposed to known or suspected mutagenic-carcinogenic agents. There are several studies on the cytogenetic effects of work with pesticide formulations. The majority of these studies report increases in the frequency of chromosomal aberrations and/or sister chromatid exchanges among the exposed workers. Biomarkers will have a major impact on the study of environmental risk factors. The basic aim of scientists exploring these issues is to determine the nature and consequences of genetic change or variation, with the ultimate purpose of predicting or preventing disease.",
"title": ""
},
{
"docid": "9eacc5f0724ff8fe2152930980dded4b",
"text": "A computer-controlled adjustable nanosecond pulse generator based on high-voltage MOSFET is designed in this paper, which owns stable performance and miniaturization profile of 32×30×7 cm3. The experiment results show that the pulser can generate electrical pulse with Gaussian rising time of 20 nanosecond, section-adjustable index falling time of 40–200 nanosecond, continuously adjustable repitition frequency of 0–5 kHz, quasi-continuously adjustable amplitude of 0–1 kV at 50 Ω load. And the pulser could meet the requiremen.",
"title": ""
}
] |
scidocsrr
|
3c5bbae9d08b579af73c14f6ecd274da
|
An Augmented Lagrangian Approach to the Constrained Optimization Formulation of Imaging Inverse Problems
|
[
{
"docid": "2871de581ee0efe242438567ca3a57dd",
"text": "The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain-for example, in terms of spatial finite-differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed-sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the l(1) norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast enhanced angiography.",
"title": ""
}
] |
[
{
"docid": "8a3b72d495b7352f6690a7323ab29286",
"text": "Security Enhanced Linux (SELinux) is a widely used Mandatory Access Control system which is integrated in the Linux kernel. It is an added layer of security mechanism on top of the standard Discretionary Access Control system that Unix/Linux and other major operating systems have. SELinux does not nullify DAC but in fact supports DAC and its checks are performed after DAC's. If DAC allows an operation then SELinux checks that operation by comparing it with the set of specified rules that it has and decides based on those rules only. If DAC denies some access then SELinux checks are not performed. Because DAC allows users to have full control over files that they own, they could unwantedly set any permission on the files that they own, at their own discretion, which could prove dangerous so for this reason SELinux brings the Mandatory Access Controls (MAC) mechanism which enforces rules based on a specified policy and denies access operations if policy in use do not allow it, even if the file permissions were world-accessible using DAC In this paper we discuss various SELinux policies and provide a statistical comparison using standard Delphi method.",
"title": ""
},
{
"docid": "72e1c5690f20c47a63ebbb1dd3fc7f2c",
"text": "In edge-cloud computing, a set of edge servers are deployed near the mobile devices such that these devices can offload jobs to the servers with low latency. One fundamental and critical problem in edge-cloud systems is how to dispatch and schedule the jobs so that the job response time (defined as the interval between the release of a job and the arrival of the computation result at its device) is minimized. In this paper, we propose a general model for this problem, where the jobs are generated in arbitrary order and times at the mobile devices and offloaded to servers with both upload and download delays. Our goal is to minimize the total weighted response time over all the jobs. The weight is set based on how latency sensitive the job is. We derive the first online job dispatching and scheduling algorithm in edge-clouds, called OnDisc, which is scalable in the speed augmentation model; that is, OnDisc is (1 + ε)-speed O(1/ε)-competitive for any constant ε ∊ (0,1). Moreover, OnDisc can be easily implemented in distributed systems. Extensive simulations on a real-world data-trace from Google show that OnDisc can reduce the total weighted response time dramatically compared with heuristic algorithms.",
"title": ""
},
{
"docid": "3e8bffdcf0df0a34b95ecc5432984777",
"text": "We focus on grounding (i.e., localizing or linking) referring expressions in images, e.g., \"largest elephant standing behind baby elephant\". This is a general yet challenging vision-language task since it does not only require the localization of objects, but also the multimodal comprehension of context - visual attributes (e.g., \"largest\", \"baby\") and relationships (e.g., \"behind\") that help to distinguish the referent from other objects, especially those of the same category. Due to the exponential complexity involved in modeling the context associated with multiple image regions, existing work oversimplifies this task to pairwise region modeling by multiple instance learning. In this paper, we propose a variational Bayesian method, called Variational Context, to solve the problem of complex context modeling in referring expression grounding. Our model exploits the reciprocal relation between the referent and context, i.e., either of them influences estimation of the posterior distribution of the other, and thereby the search space of context can be greatly reduced. We also extend the model to unsupervised setting where no annotation for the referent is available. Extensive experiments on various benchmarks show consistent improvement over state-of-the-art methods in both supervised and unsupervised settings. The code is available at https://github.com/yuleiniu/vc/.",
"title": ""
},
{
"docid": "741a897b87cc76d68f5400974eee6b32",
"text": "Numerous techniques exist to augment the security functionality of Commercial O -The-Shelf (COTS) applications and operating systems, making them more suitable for use in mission-critical systems. Although individually useful, as a group these techniques present di culties to system developers because they are not based on a common framework which might simplify integration and promote portability and reuse. This paper presents techniques for developing Generic Software Wrappers { protected, non-bypassable kernel-resident software extensions for augmenting security without modi cation of COTS source. We describe the key elements of our work: our high-level Wrapper De nition Language (WDL), and our framework for con guring, activating, and managing wrappers. We also discuss code reuse, automatic management of extensions, a framework for system-building through composition, platform-independence, and our experiences with our Solaris and FreeBSD prototypes.",
"title": ""
},
{
"docid": "443191f41aba37614c895ba3533f80ed",
"text": "De novo engineering of gene circuits inside cells is extremely difficult, and efforts to realize predictable and robust performance must deal with noise in gene expression and variation in phenotypes between cells. Here we demonstrate that by coupling gene expression to cell survival and death using cell–cell communication, we can programme the dynamics of a population despite variability in the behaviour of individual cells. Specifically, we have built and characterized a ‘population control’ circuit that autonomously regulates the density of an Escherichia coli population. The cell density is broadcasted and detected by elements from a bacterial quorum-sensing system, which in turn regulate the death rate. As predicted by a simple mathematical model, the circuit can set a stable steady state in terms of cell density and gene expression that is easily tunable by varying the stability of the cell–cell communication signal. This circuit incorporates a mechanism for programmed death in response to changes in the environment, and allows us to probe the design principles of its more complex natural counterparts.",
"title": ""
},
{
"docid": "1b7fb04cd80a016ddd53d8481f6da8bd",
"text": "The classification of retinal vessels into artery/vein (A/V) is an important phase for automating the detection of vascular changes, and for the calculation of characteristic signs associated with several systemic diseases such as diabetes, hypertension, and other cardiovascular conditions. This paper presents an automatic approach for A/V classification based on the analysis of a graph extracted from the retinal vasculature. The proposed method classifies the entire vascular tree deciding on the type of each intersection point (graph nodes) and assigning one of two labels to each vessel segment (graph links). Final classification of a vessel segment as A/V is performed through the combination of the graph-based labeling results with a set of intensity features. The results of this proposed method are compared with manual labeling for three public databases. Accuracy values of 88.3%, 87.4%, and 89.8% are obtained for the images of the INSPIRE-AVR, DRIVE, and VICAVR databases, respectively. These results demonstrate that our method outperforms recent approaches for A/V classification.",
"title": ""
},
{
"docid": "c89b903e497ebe8e8d89e8d1d931fae1",
"text": "Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all advantages cited for artificial neural networks, their performance for some real time series is not satisfactory. Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting task, especially when higher forecasting accuracy is needed. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c913313524862f21df94651f78616e09",
"text": "The solidity is one of the most important factors which greatly affects the performance of the straight-bladed vertical axis wind turbine (SB-VAWT). In this study, numerical computations were carried out on a small model of the SB-VAWT with different solidities to invest its performance effects. Two kinds of solidity were decided, and for each one, three patterns were selected by changing the blade chord and number. Numerical computations based on the 2 dimensions incompressible steady flow were made. Flow fields around the SB-VAWT were obtained, and the torque and power coefficients were also calculated. According to the computation results under the conditions of this study, the effects of solidity on both the static and dynamic performance of the SB-VAWT were discussed. Keywords-vertical axis wind turbine;straight-bladed; numerical computation; solidity; stactic torque;power",
"title": ""
},
{
"docid": "b0741999659724f8fa5dc1117ec86f0d",
"text": "With the rapidly growing scales of statistical problems, subset based communicationfree parallel MCMC methods are a promising future for large scale Bayesian analysis. In this article, we propose a new Weierstrass sampler for parallel MCMC based on independent subsets. The new sampler approximates the full data posterior samples via combining the posterior draws from independent subset MCMC chains, and thus enjoys a higher computational efficiency. We show that the approximation error for the Weierstrass sampler is bounded by some tuning parameters and provide suggestions for choice of the values. Simulation study shows the Weierstrass sampler is very competitive compared to other methods for combining MCMC chains generated for subsets, including averaging and kernel smoothing.",
"title": ""
},
{
"docid": "574e0006bffb310bf64417a607adccdf",
"text": "We design differentially private learning algorithms that are agnostic to the learning model assuming access to a limited amount of unlabeled public data. First, we provide a new differentially private algorithm for answering a sequence of m online classification queries (given by a sequence of m unlabeled public feature vectors) based on a private training set. Our algorithm follows the paradigm of subsample-and-aggregate, in which any generic non-private learner is trained on disjoint subsets of the private training set, and then for each classification query, the votes of the resulting classifiers ensemble are aggregated in a differentially private fashion. Our private aggregation is based on a novel combination of the distance-to-instability framework [26], and the sparse-vector technique [15, 18]. We show that our algorithm makes a conservative use of the privacy budget. In particular, if the underlying non-private learner yields a classification error of at most α ∈ (0, 1), then our construction answers more queries, by at least a factor of 1/α in some cases, than what is implied by a straightforward application of the advanced composition theorem for differential privacy. Next, we apply the knowledge transfer technique to construct a private learner that outputs a classifier, which can be used to answer an unlimited number of queries. In the PAC model, we analyze our construction and prove upper bounds on the sample complexity for both the realizable and the non-realizable cases. Similar to non-private sample complexity, our bounds are completely characterized by the VC dimension of the concept class.",
"title": ""
},
{
"docid": "e6640dc272e4142a2ddad8291cfaead7",
"text": "We give a summary of R. Borcherds’ solution (with some modifications) to the following part of the Conway-Norton conjectures: Given the Monster M and Frenkel-Lepowsky-Meurman’s moonshine module V ♮, prove the equality between the graded characters of the elements of M acting on V ♮ (i.e., the McKay-Thompson series for V ♮) and the modular functions provided by Conway and Norton. The equality is established using the homology of a certain subalgebra of the monster Lie algebra, and the Euler-Poincaré identity.",
"title": ""
},
{
"docid": "001d2da1fbdaf2c49311f6e68b245076",
"text": "Lack of physical activity is a serious health concern for individuals who are visually impaired as they have fewer opportunities and incentives to engage in physical activities that provide the amounts and kinds of stimulation sufficient to maintain adequate fitness and to support a healthy standard of living. Exergames are video games that use physical activity as input and which have the potential to change sedentary lifestyles and associated health problems such as obesity. We identify that exergames have a number properties that could overcome the barriers to physical activity that individuals with visual impairments face. However, exergames rely upon being able to perceive visual cues that indicate to the player what input to provide. This paper presents VI Tennis, a modified version of a popular motion sensing exergame that explores the use of vibrotactile and audio cues. The effectiveness of providing multimodal (tactile/audio) versus unimodal (audio) cues was evaluated with a user study with 13 children who are blind. Children achieved moderate to vigorous levels of physical activity- the amount required to yield health benefits. No significant difference in active energy expenditure was found between both versions, though children scored significantly better with the tactile/audio version and also enjoyed playing this version more, which emphasizes the potential of tactile/audio feedback for engaging players for longer periods of time.",
"title": ""
},
{
"docid": "3b8f2694d8b6f7177efa8716d72b9129",
"text": "Behara, B and Jacobson, BH. Acute effects of deep tissue foam rolling and dynamic stretching on muscular strength, power, and flexibility in Division I linemen. J Strength Cond Res 31(4): 888-892, 2017-A recent strategy to increase sports performance is a self-massage technique called myofascial release using foam rollers. Myofascial restrictions are believed to be brought on by injuries, muscle imbalances, overrecruitment, and/or inflammation, all of which can decrease sports performance. The purpose of this study was to compare the acute effects of a single-bout of lower extremity self-myofascial release using a custom deep tissue roller (DTR) and a dynamic stretch protocol. Subjects consisted of NCAA Division 1 offensive linemen (n = 14) at a Midwestern university. All players were briefed on the objectives of the study and subsequently signed an approved IRB consent document. A randomized crossover design was used to assess each dependent variable (vertical jump [VJ] power and velocity, knee isometric torque, and hip range of motion was assessed before and after: [a] no treatment, [b] deep tissue foam rolling, and [c] dynamic stretching). Results of repeated-measures analysis of variance yielded no pretest to posttest significant differences (p > 0.05) among the groups for VJ peak power (p = 0.45), VJ average power (p = 0.16), VJ peak velocity (p = 0.25), VJ average velocity (p = 0.23), peak knee extension torque (p = 0.63), average knee extension torque (p = 0.11), peak knee flexion torque (p = 0.63), or average knee flexion torque (p = 0.22). However, hip flexibility was statistically significant when tested after both dynamic stretching and foam rolling (p = 0.0001). Although no changes in strength or power was evident, increased flexibility after DTR may be used interchangeably with traditional stretching exercises.",
"title": ""
},
{
"docid": "bf0531b03cc36a69aca1956b21243dc6",
"text": "Sound of their breath fades with the light. I think about the loveless fascination, Under the milky way tonight. Lower the curtain down in memphis, Lower the curtain down all right. I got no time for private consultation, Under the milky way tonight. Wish I knew what you were looking for. Might have known what you would find. And it's something quite peculiar, Something thats shimmering and white. It leads you here despite your destination, Under the milky way tonight (chorus) Preface This Master's Thesis concludes my studies in Human Aspects of Information Technology (HAIT) at Tilburg University. It describes the development, implementation, and analysis of an automatic mood classifier for music. I would like to thank those who have contributed to and supported the contents of the thesis. Special thanks goes to my supervisor Menno van Zaanen for his dedication and support during the entire process of getting started up to the final results. Moreover, I would like to express my appreciation to Fredrik Mjelle for providing the user-tagged instances exported out of the MOODY database, which was used as the dataset for the experiments. Furthermore, I would like to thank Toine Bogers for pointing me out useful website links regarding music mood classification and sending me papers with citations and references. I would also like to thank Michael Voong for sending me his papers on music mood classification research, Jaap van den Herik for his support and structuring of my writing and thinking. I would like to recognise Eric Postma and Marieke van Erp for their time assessing the thesis as members of the examination committee. Finally, I would like to express my gratitude to my family for their enduring support. Abstract This research presents the outcomes of research into using the lingual part of music for building an automatic mood classification system. Using a database consisting of extracted lyrics and user-tagged mood attachments, we built a classifier based on machine learning techniques. By testing the classification system on various mood frameworks (or dimensions) we examined to what extent it is possible to attach mood tags automatically to songs based on lyrics only. Furthermore, we examined to what extent the linguistic part of music revealed adequate information for assigning a mood category and which aspects of mood can be classified best. Our results show that the use of term frequencies and tf*idf values provide a valuable source of …",
"title": ""
},
{
"docid": "e201c682e1e048b92a60ade663aa7112",
"text": "In this paper, we study the problem of landmark recognition and propose to leverage 3D visual phrases to improve the performance. A 3D visual phrase is a triangular facet on the surface of a reconstructed 3D landmark model. In contrast to existing 2D visual phrases which are mainly based on co-occurrence statistics in 2D image planes, such 3D visual phrases explicitly characterize the spatial structure of a 3D object (landmark), and are highly robust to projective transformations due to viewpoint changes. We present an effective solution to discover, describe, and detect 3D visual phrases. The experiments on 10 landmarks have achieved promising results, which demonstrate that our approach provides a good balance between precision and recall of landmark recognition while reducing the dependence on post-verification to reject false positives.",
"title": ""
},
{
"docid": "1d53b01ee1a721895a17b7d0f3535a28",
"text": "We present a suite of algorithms for self-organization of wireless sensor networks, in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. † This research is supported by DARPA contract number F04701-97-C-0010, and was presented in part at the 37 Allerton Conference on Communication, Computing and Control, September 1999. ‡ Corresponding author.",
"title": ""
},
{
"docid": "ad059332e36849857c9bf1a52d5b0255",
"text": "Interaction Design Beyond Human Computer Interaction instructions guide, service manual guide and maintenance manual guide for the products. Before employing this manual, service or maintenance guide you should know detail regarding your products cause this manual for expert only. We hope ford alternator wiring diagram internal regulator and yet another manual of these lists a good choice for your to repair, fix and solve your product or service or device problems don't try an oversight.",
"title": ""
},
{
"docid": "3673e0f738cf6fd1cc7c94650e827273",
"text": "An important question when eliciting opinions from experts is how to aggregate the reported opinions. In this paper, we propose a pooling method to aggregate expert opinions. Intuitively, it works as if the experts were continuously updating their opinions in order to accommodate the expertise of others. Each updated opinion takes the form of a linear opinion pool, where the weight that an expert assigns to a peer’s opinion is inversely related to the distance between their opinions. In other words, experts are assumed to prefer opinions that are close to their own opinions. We prove that such an updating process leads to consensus, i.e., the experts all converge towards the same opinion. Further, we show that if rational experts are rewarded using the quadratic scoring rule, then the assumption that they prefer opinions that are close to their own opinions follows naturally. We empirically demonstrate the efficacy of the proposed method using real-world data.",
"title": ""
},
{
"docid": "2f4325291ec4d705ed2fe19e57d4db36",
"text": "Reliable precision grasping for unknown objects is a prerequisite for robots that work in the field of logistics, manufacturing and household tasks. The nature of this task requires a simultaneous solution of a mixture of sub-problems. These include estimating object properties, finding viable grasps and executing grasps without displacement. We propose to explicitly take perceptual uncertainty into account during grasp execution. The underlying object representation is a probabilistic signed distance field, which includes both signed distances to the surface and spatially interpretable variances. Based on this representation, we propose a two-stage grasp generation method, which is specifically designed for generating precision grasps. In order to evaluate the whole approach, we perform extensive real world grasping experiments on a set of hard-to-grasp objects. Our approach achieves 78% success rate and shows robustness to the placement orientation.",
"title": ""
},
{
"docid": "12363d704fcfe9fef767c5e27140c214",
"text": "The application range of UAVs (unmanned aerial vehicles) is expanding along with performance upgrades. Vertical take-off and landing (VTOL) aircraft has the merits of both fixed-wing and rotary-wing aircraft. Tail-sitting is the simplest way for the VTOL maneuver since it does not need extra actuators. However, conventional hovering control for a tail-sitter UAV is not robust enough against large disturbance such as a blast of wind, a bird strike, and so on. It is experimentally observed that the conventional quaternion feedback hovering control often fails to keep stability when the control compensates large attitude errors. This paper proposes a novel hovering control strategy for a tail-sitter VTOL UAV that increases stability against large disturbance. In order to verify the proposed hovering control strategy, simulations and experiments on hovering of the UAV are performed giving large attitude errors. The results show that the proposed control strategy successfully compensates initial large attitude errors keeping stability, while the conventional quaternion feedback controller fails.",
"title": ""
}
] |
scidocsrr
|
fa975122fc9c4e260dc405373b72ff08
|
A Performance Study on Different Cost Aggregation Approaches Used in Real-Time Stereo Matching
|
[
{
"docid": "d603806f579a937a24ad996543fe9093",
"text": "Early vision relies heavily on rectangular windows for tasks such as smoothing and computing correspondence. While rectangular windows are efficient, they yield poor results near object boundaries. We describe an efficient method for choosing an arbitrarily shaped connected window, in a manner which varies at each pixel. Our approach can be applied to many problems, including image restoration and visual correspondence. It runs in linear time, and takes a few seconds on traditional benchmark images. Performance on both synthetic and real imagery with ground truth appears promising.",
"title": ""
}
] |
[
{
"docid": "177c86301b4dec3a8d86119520a0cb70",
"text": "This paper considers city-wide air quality estimation with limited available monitoring stations which are geographically sparse. Since air pollution is highly spatio-temporal (S-T) dependent and considerably influenced by urban dynamics (e.g., meteorology and traffic), we can infer the air quality not covered by monitoring stations with S-T heterogeneous urban big data. However, estimating air quality using S-T heterogeneous big data poses two challenges. The first challenge is due to with the data diversity, i.e., there are different categories of urban dynamics and some may be useless and even detrimental for the estimation. To overcome this, we first propose an S-T extended Granger causality model to analyze all the causalities among urban dynamics in a consistent manner. Then by implementing non-causality test, we rule out the urban dynamics that do not “Granger” cause air pollution. The second challenge is due to the time complexity when processing the massive volume of data. We propose to discover the region of influence (ROI) by selecting data with the highest causality levels spatially and temporally. Results show that we achieve higher accuracy using “part” of the data than “all” of the data. This may be explained by the most influential data eliminating errors induced by redundant or noisy data. The causality model observation and the city-wide air quality map are illustrated and visualized using data from Shenzhen, China.",
"title": ""
},
{
"docid": "5233286436f0ecfde8e0e647e89b288f",
"text": "Each employee’s performance is important in an organization. A way to motivate it is through the application of reinforcement theory which is developed by B. F. Skinner. One of the most commonly used methods is positive reinforcement in which one’s behavior is strengthened or increased based on consequences. This paper aims to review the impact of positive reinforcement on the performances of employees in organizations. It can be applied by utilizing extrinsic reward or intrinsic reward. Extrinsic rewards include salary, bonus and fringe benefit while intrinsic rewards are praise, encouragement and empowerment. By applying positive reinforcement in these factors, desired positive behaviors are encouraged and negative behaviors are eliminated. Financial and non-financial incentives have a positive relationship with the efficiency and effectiveness of staffs.",
"title": ""
},
{
"docid": "e7155ddcd4b47466b97fd2967501ccd3",
"text": "We demonstrate a use of deep neural networks (DNN) for OSNR monitoring with minimum prior knowledge. By using 5-layers DNN trained with 400,000 samples, the DNN successfully estimates OSNR in a 16-GBd DP-QPSK system.",
"title": ""
},
{
"docid": "7ecfea8abc9ba29719cdd4bf02e99d5d",
"text": "The literature shows an increase in blended learning implementations (N = 74) at faculties of education in Turkey whereas pre-service and in-service teachers’ ICT competencies have been identified as one of the areas where they are in need of professional development. This systematic review was conducted to find out the impact of blended learning on academic achievement and attitudes at teacher education programs in Turkey. 21 articles and 10 theses complying with all pre-determined criteria (i.e., studies having quantitative research design or at least a quantitative aspect conducted at pre-service teacher education programs) included within the scope of this review. With regard to academic achievement, it was synthesized that majority of the studies confirmed its positive impact on attaining course outcomes. Likewise, blended learning environment was revealed to contribute pre-service teachers to develop positive attitudes towards the courses. It was also concluded that face-to-face aspect of the courses was favoured considerably as it enhanced social interaction between peers and teachers. Other benefits of blended learning were listed as providing various materials, receiving prompt feedback, and tracking progress. Slow internet access, connection failure and anxiety in some pre-service teachers on using ICT were reported as obstacles. Regarding the positive results of blended learning and the significance of ICT integration, pre-service teacher education curricula are suggested to be reconstructed by infusing ICT into entire program through blended learning rather than delivering isolated ICT courses which may thus serve for prospective teachers as catalysts to integrate the use of ICT in their own teaching.",
"title": ""
},
{
"docid": "acce5017b1138c67e24e661c1eabc185",
"text": "The main goal of the paper is to continuously enlarge the set of software building blocks that can be reused in the search and rescue domain.",
"title": ""
},
{
"docid": "a600a19440b8e6799e0e603cf56ff141",
"text": "In this work, we address the problem of distributed expert finding using chains of social referrals and profile matching with only local information in online social networks. By assuming that users are selfish, rational, and have privately known cost of participating in the referrals, we design a novel truthful efficient mechanism in which an expert-finding query will be relayed by intermediate users. When receiving a referral request, a participant will locally choose among her neighbors some user to relay the request. In our mechanism, several closely coupled methods are carefully designed to improve the performance of distributed search, including, profile matching, social acquaintance prediction, score function for locally choosing relay neighbors, and budget estimation. We conduct extensive experiments on several data sets of online social networks. The extensive study of our mechanism shows that the success rate of our mechanism is about 90 percent in finding closely matched experts using only local search and limited budget, which significantly improves the previously best rate 20 percent. The overall cost of finding an expert by our truthful mechanism is about 20 percent of the untruthful methods, e.g., the method that always selects high-degree neighbors. The median length of social referral chains is 6 using our localized search decision, which surprisingly matches the well-known small-world phenomenon of global social structures.",
"title": ""
},
{
"docid": "effd314d69f6775b80dbe5570e3f37d8",
"text": "New paradigms in networking industry, such as Software Defined Networking (SDN) and Network Functions Virtualization (NFV), require the hypervisors to enable the execution of Virtual Network Functions in virtual machines (VMs). In this context, the virtual switch function is critical to achieve carrier grade performance, hardware independence, advanced features and programmability. SnabbSwitch is a virtual switch designed to run in user space with carrier grade performance targets, based on an efficient architecture which has driven the development of vhost-user (now also adopted by OVS-DPDK, the user space implementation of OVS based on Intel DPDK), easy to deploy and to program through its Lua scripting layer. This paper presents the SnabbSwitch virtual switch implementation along with its novelties (the vhost-user implementation and the usage of a trace compiler) and code optimizations, which have been merged in the mainline project repository. Extensive benchmarking activities, whose results are included in this paper, have been carried on to compare SnabbSwitch with other virtual switching solutions (i.e., OVS, OVS-DPDK, Linux Bridge, VFIO and SR-IOV). These results show that SnabbSwitch performs as well as hardware based solutions, such as SR-IOV and VFIO, while allowing for additional functional and flexible operation; they show also that SnabbSwitch is faster than the vhost-user based version (user space) of OVS-DPDK.",
"title": ""
},
{
"docid": "21e17ad2d2a441940309b7eacd4dec6e",
"text": "ÐWith a huge amount of data stored in spatial databases and the introduction of spatial components to many relational or object-relational databases, it is important to study the methods for spatial data warehousing and OLAP of spatial data. In this paper, we study methods for spatial OLAP, by integration of nonspatial OLAP methods with spatial database implementation techniques. A spatial data warehouse model, which consists of both spatial and nonspatial dimensions and measures, is proposed. Methods for computation of spatial data cubes and analytical processing on such spatial data cubes are studied, with several strategies proposed, including approximation and selective materialization of the spatial objects resulted from spatial OLAP operations. The focus of our study is on a method for spatial cube construction, called object-based selective materialization, which is different from cuboid-based selective materialization proposed in previous studies of nonspatial data cube construction. Rather than using a cuboid as an atomic structure during the selective materialization, we explore granularity on a much finer level, that of a single cell of a cuboid. Several algorithms are proposed for object-based selective materialization of spatial data cubes and the performance study has demonstrated the effectiveness of these techniques. Index TermsÐData warehouse, data mining, online analytical processing (OLAP), spatial databases, spatial data analysis, spatial",
"title": ""
},
{
"docid": "4a9ad387ad16727d9ac15ac667d2b1c3",
"text": "In recent years face recognition has received substantial attention from both research communities and the market, but still remained very challenging in real applications. A lot of face recognition algorithms, along with their modifications, have been developed during the past decades. A number of typical algorithms are presented, being categorized into appearancebased and model-based schemes. For appearance-based methods, three linear subspace analysis schemes are presented, and several non-linear manifold analysis approaches for face recognition are briefly described. The model-based approaches are introduced, including Elastic Bunch Graph matching, Active Appearance Model and 3D Morphable Model methods. A number of face databases available in the public domain and several published performance evaluation results are digested. Future research directions based on the current recognition results are pointed out.",
"title": ""
},
{
"docid": "231f27d4cb32a5687a05dd26e775fbb8",
"text": "There are currently more objects connected to the Internet than there are people in the world. This gap will continue to grow, as more objects gain the ability to directly interface with the Internet or become physical representations of data accessible via Internet systems. This trend toward greater independent object interaction in the Internet is collectively described as the Internet of Things (IoT). As with previous global technology trends, such as widespread mobile adoption and datacentre consolidation, the changing operating environment associated with the Internet of Things represents considerable impact to the attack surface and threat environment of the Internet and Internet-connected systems. The increase in Internet-connected systems and the accompanying, non-linear increase in Internet attack surface can be represented by several tiers of increased surface complexity. Users, or groups of users, are linked to a non-linear number of connected entities, which in turn are linked to a non-linear number of indirectly connected, trackable entities. At each tier of this model, the increasing population, complexity, heterogeneity, interoperability, mobility, and distribution of entities represents an expanding attack surface, measurable by additional channels, methods, and data items. Further, this expansion will necessarily increase the field of security stakeholders and introduce new manageability challenges. This document provides a framework for measurement and analysis of the security implications inherent in an Internet that is dominated by non-user endpoints, content in the form of objects, and content that is generated by objects without direct user involvement.",
"title": ""
},
{
"docid": "a306ea0a425a00819b81ea7f52544cfb",
"text": "Early research in electronic markets seemed to suggest that E-Commerce transactions would result in decreased costs for buyers and sellers alike, and would therefore ultimately lead to the elimination of intermediaries from electronic value chains. However, a careful analysis of the structure and functions of electronic marketplaces reveals a different picture. Intermediaries provide many value-adding functions that cannot be easily substituted or ‘internalised’ through direct supplier-buyer dealings, and hence mediating parties may continue to play a significant role in the E-Commerce world. In this paper we provide an analysis of the potential roles of intermediaries in electronic markets and we articulate a number of hypotheses for the future of intermediation in such markets. Three main scenarios are discussed: the disintermediation scenario where market dynamics will favour direct buyer-seller transactions, the reintermediation scenario where traditional intermediaries will be forced to differentiate themselves and reemerge in the electronic marketplace, and the cybermediation scenario where wholly new markets for intermediaries will be created. The analysis suggests that the likelihood of each scenario dominating a given market is primarily dependent on the exact functions that intermediaries play in each case. A detailed discussion of such functions is presented in the paper, together with an analysis of likely outcomes in the form of a contingency model for intermediation in electronic markets.",
"title": ""
},
{
"docid": "16fec520bf539ab23a5164ffef5561b4",
"text": "This article traces the major trends in TESOL methods in the past 15 years. It focuses on the TESOL profession’s evolving perspectives on language teaching methods in terms of three perceptible shifts: (a) from communicative language teaching to task-based language teaching, (b) from method-based pedagogy to postmethod pedagogy, and (c) from systemic discovery to critical discourse. It is evident that during this transitional period, the profession has witnessed a heightened awareness about communicative and task-based language teaching, about the limitations of the concept of method, about possible postmethod pedagogies that seek to address some of the limitations of method, about the complexity of teacher beliefs that inform the practice of everyday teaching, and about the vitality of the macrostructures—social, cultural, political, and historical—that shape the microstructures of the language classroom. This article deals briefly with the changes and challenges the trend-setting transition seems to be bringing about in the profession’s collective thought and action.",
"title": ""
},
{
"docid": "bfe762fc6e174778458b005be75d8285",
"text": "The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed istribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a randomeffects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.",
"title": ""
},
{
"docid": "3e70a22831b064bff3ff784a932d068b",
"text": "An ultrawideband (UWB) antenna that rejects extremely sharply the two narrow and closely-spaced U.S. WLAN 802.11a bands is presented. The antenna is designed on a single surface (it is uniplanar) and uses only linear sections for easy scaling and fine-tuning. Distributed-element and lumped-element equivalent circuit models of this dual band-reject UWB antenna are presented and used to support the explanation of the physical principles of operation of the dual band-rejection mechanism thoroughly. The circuits are evaluated by comparing with the response of the presented UWB antenna that has very high selectivity and achieves dual-frequency rejection of the WLAN 5.25 GHz and 5.775 GHz bands, while it receives signal from the intermediate band between 5.35-5.725 GHz. The rejection is achieved using double open-circuited stubs, which is uncommon and are chosen based on their narrowband performance. The antenna was fabricated on a single side of a thin, flexible, LCP substrate. The measured achieved rejection is the best reported for a dual-band reject antenna with so closely-spaced rejected bands. The measured group delay of the antenna validates its suitability for UWB links. Such antennas improve both UWB and WLAN communication links at practically zero cost.",
"title": ""
},
{
"docid": "65060deb3fafc21de3db4b9946c6df06",
"text": "In this paper we describe the Wireless Power-Controlled Outlet Module (WPCOM) with a scalable mechanism for home power management which we have developed. The WPCOM integrates the multiple AC power sockets and a simple low-power microcontroller into a power outlet to switch the power of the sockets ON/OFF and to measure the power consumption of plugged electric home appliances. Our WPCOM consists of six scalable modules, that is, the Essential Control Module, the Bluetooth Module, the GSM Module, the Ethernet Module, the SD Card Module and the Power Measuring Module, which together provide an indoor wireless, and an outdoor remote control and monitor of electric home appliances. We have designed a PDA control software and remote control software which support the Graphic User Interface, thus allowing the user to easily monitor the electric home appliances through the PDA and the Internet individually. In addition, we use a Short Message Service to achieve control and monitoring through a GSM cellular mobile phone for remote use anytime and anywhere.",
"title": ""
},
{
"docid": "fba48672e859a7606707406267dd0957",
"text": "We suggest a spectral histogram, defined as the marginal distribution of filter responses, as a quantitative definition for a texton pattern. By matching spectral histograms, an arbitrary image can be transformed to an image with similar textons to the observed. We use the chi(2)-statistic to measure the difference between two spectral histograms, which leads to a texture discrimination model. The performance of the model well matches psychophysical results on a systematic set of texture discrimination data and it exhibits the nonlinearity and asymmetry phenomena in human texture discrimination. A quantitative comparison with the Malik-Perona model is given, and a number of issues regarding the model are discussed.",
"title": ""
},
{
"docid": "c95c9d1e9b427bf8ed0ce14c0af985e1",
"text": "This chapter reviews the recent developments in Markov chain Monte Carlo simulation methods These methods, which are concerned with the simulation of high dimensional probability distributions, have gained enormous prominence and revolutionized Bayesian statistics The chapter provides background on the relevant Markov chain theory and provides detailed information on the theory and practice of Markov chain sampling based on the Metropolis-Hastings and Gibbs sampling algorithms Convergence diagnostics and strategies for implementation are also discussed A number of examples drawn from Bayesian statistics are used to illustrate the ideas The chapter also covers in detail the application of MCMC methods to the problems of prediction and model choice.",
"title": ""
},
{
"docid": "51ec3dee7a91b7e9afcb26694ded0c11",
"text": "[1] PRIMELT2.XLS software is introduced for calculating primary magma composition and mantle potential temperature (TP) from an observed lava composition. It is an upgrade over a previous version in that it includes garnet peridotite melting and it detects complexities that can lead to overestimates in TP by >100 C. These are variations in source lithology, source volatile content, source oxidation state, and clinopyroxene fractionation. Nevertheless, application of PRIMELT2.XLS to lavas from a wide range of oceanic islands reveals no evidence that volatile-enrichment and source fertility are sufficient to produce them. All are associated with thermal anomalies, and this appears to be a prerequisite for their formation. For the ocean islands considered in this work, TP maxima are typically 1450–1500 C in the Atlantic and 1500–1600 C in the Pacific, substantially greater than 1350 C for ambient mantle. Lavas from the Galápagos Islands and Hawaii record in their geochemistry high TP maxima and large ranges in both TP and melt fraction over short horizontal distances, a result that is predicted by the mantle plume model.",
"title": ""
},
{
"docid": "072f3152a93eb2a75f716dd1aec131c4",
"text": "Research has not verified the theoretical or practical value of the brand attachment construct in relation to alternative constructs, particularly brand attitude strength. The authors make conceptual, measurement, and managerial contributions to this research issue. Conceptually, they define brand attachment, articulate its defining properties, and differentiate it from brand attitude strength. From a measurement perspective, they develop and validate a parsimonious measure of brand attachment, test the assumptions that underlie it, and demonstrate that it indicates the concept of attachment. They also demonstrate the convergent and discriminant validity of this measure in relation to brand attitude strength. Managerially, they demonstrate that brand attachment offers value over brand attitude strength in predicting (1) consumers’ intentions to perform difficult behaviors (those they regard as using consumer resources), (2) actual purchase behaviors, (3) brand purchase share (the share of a brand among directly competing brands), and (4) need share (the extent to which consumers rely on a brand to address relevant needs, including those brands in substitutable product categories).",
"title": ""
},
{
"docid": "2a2b2332e949372c6bba650725e9a9a2",
"text": "This study aimed to investigate the effect of academic procrastination on e-learning course achievement. Because all of the interactions among students, instructors, and contents in an e-learning environment were automatically recorded in a learning management system (LMS), procrastination such as the delays in weekly scheduled learning and late submission of assignments could be identified from log data. Among 569 college students who enrolled in an e-learning course in Korea, the absence and late submission of assignments were chosen to measure academic procrastination in e-learning. Multiple regression analysis was conducted to examine the relationship between academic procrastination and course achievement. The results showed that the absence and late submission of assignments were negatively significant in predicting course achievement. Furthermore, the study explored the predictability of academic procrastination on course achievement at four points of the 15-week course to test its potential for early prediction. The results showed that the regression model at each time point significantly predicted course achievement, and the predictability increased as time passed. Based on the findings, practical implications for facilitating a successful e-learning environment were suggested, and the potential of analyzing LMS data was discussed.",
"title": ""
}
] |
scidocsrr
|
0638c797fa39a75eb278cc756e9aecb7
|
PUBLIC KEY ENCRYPTION WITH CONJUNCTIVE FIELD FREE KEYWORD SEARCH SCHEME
|
[
{
"docid": "1f629796e9180c14668e28b83dc30675",
"text": "In this article we tackle the issue of searchable encryption with a generalized query model. Departing from many previous works that focused on queries consisting of a single keyword, we consider the the case of queries consisting of arbitrary boolean expressions on keywords, that is to say conjunctions and disjunctions of keywords and their complement. Our construction of boolean symmetric searchable encryption BSSE is mainly based on the orthogonalization of the keyword field according to the Gram-Schmidt process. Each document stored in an outsourced server is associated with a label which contains all the keywords corresponding to the document, and searches are performed by way of a simple inner product. Furthermore, the queries in the BSSE scheme are randomized. This randomization hides the search pattern of the user since the search results cannot be associated deterministically to queries. We formally define an adaptive security model for the BSSE scheme. In addition, the search complexity is in $O(n)$ where $n$ is the number of documents stored in the outsourced server.",
"title": ""
}
] |
[
{
"docid": "4487f3713062ef734ceab5c7f9ccc6e3",
"text": "In the analysis of machine learning models, it is often convenient to assume that the parameters are IID. This assumption is not satisfied when the parameters are updated through training processes such as SGD. A relaxation of the IID condition is a probabilistic symmetry known as exchangeability. We show the sense in which the weights in MLPs are exchangeable. This yields the result that in certain instances, the layer-wise kernel of fully-connected layers remains approximately constant during training. We identify a sharp change in the macroscopic behavior of networks as the covariance between weights changes from zero.",
"title": ""
},
{
"docid": "a84143b7aa2d42f3297d81a036dc0f5e",
"text": "Vehicular Ad hoc Networks (VANETs) have emerged recently as one of the most attractive topics for researchers and automotive industries due to their tremendous potential to improve traffic safety, efficiency and other added services. However, VANETs are themselves vulnerable against attacks that can directly lead to the corruption of networks and then possibly provoke big losses of time, money, and even lives. This paper presents a survey of VANETs attacks and solutions in carefully considering other similar works as well as updating new attacks and categorizing them into different classes.",
"title": ""
},
{
"docid": "5ad26d4135cc2ce1638046ead24351df",
"text": "A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described,",
"title": ""
},
{
"docid": "c155ce2743c59f4ce49fdffe74d94443",
"text": "The theta oscillation (5-10Hz) is a prominent behavior-specific brain rhythm. This review summarizes studies showing the multifaceted role of theta rhythm in cognitive functions, including spatial coding, time coding and memory, exploratory locomotion and anxiety-related behaviors. We describe how activity of hippocampal theta rhythm generators - medial septum, nucleus incertus and entorhinal cortex, links theta with specific behaviors. We review evidence for functions of the theta-rhythmic signaling to subcortical targets, including lateral septum. Further, we describe functional associations of theta oscillation properties - phase, frequency and amplitude - with memory, locomotion and anxiety, and outline how manipulations of these features, using optogenetics or pharmacology, affect associative and innate behaviors. We discuss work linking cognition to the slope of the theta frequency to running speed regression, and emotion-sensitivity (anxiolysis) to its y-intercept. Finally, we describe parallel emergence of theta oscillations, theta-mediated neuronal activity and behaviors during development. This review highlights a complex interplay of neuronal circuits and synchronization features, which enables an adaptive regulation of multiple behaviors by theta-rhythmic signaling.",
"title": ""
},
{
"docid": "71129f40a4eda0639df82ecf790b2f4c",
"text": "Driving aggressively increases the risk of accidents. Assessing a person's driving style is a useful way to guide aggressive drivers toward having safer driving behaviors. A number of studies have investigated driving style, but they often rely on the use of self-reports or simulators, which are not suitable for the real-time, continuous, automated assessment and feedback on the road. In order to understand and model aggressive driving style, we construct an in-vehicle sensing platform that uses a smartphone instead of using heavyweight, expensive systems. Utilizing additional cheap sensors, our sensing platform can collect useful information about vehicle movement, maneuvering and steering wheel movement. We use this data and apply machine learning to build a driver model that evaluates drivers' driving styles based on a number of driving-related features. From a naturalistic data collection from 22 drivers for 3 weeks, we analyzed the characteristics of drivers who have an aggressive driving style. Our model classified those drivers with an accuracy of 90.5% (violation-class) and 81% (questionnaire-class). We describe how, in future work, our model can be used to provide real-time feedback to drivers using only their current smartphone.",
"title": ""
},
{
"docid": "d93bc6fa3822dac43949d72a82e5c047",
"text": "In breast cancer, gene expression analyses have defined five tumor subtypes (luminal A, luminal B, HER2-enriched, basal-like and claudin-low), each of which has unique biologic and prognostic features. Here, we comprehensively characterize the recently identified claudin-low tumor subtype. The clinical, pathological and biological features of claudin-low tumors were compared to the other tumor subtypes using an updated human tumor database and multiple independent data sets. These main features of claudin-low tumors were also evaluated in a panel of breast cancer cell lines and genetically engineered mouse models. Claudin-low tumors are characterized by the low to absent expression of luminal differentiation markers, high enrichment for epithelial-to-mesenchymal transition markers, immune response genes and cancer stem cell-like features. Clinically, the majority of claudin-low tumors are poor prognosis estrogen receptor (ER)-negative, progesterone receptor (PR)-negative, and epidermal growth factor receptor 2 (HER2)-negative (triple negative) invasive ductal carcinomas with a high frequency of metaplastic and medullary differentiation. They also have a response rate to standard preoperative chemotherapy that is intermediate between that of basal-like and luminal tumors. Interestingly, we show that a group of highly utilized breast cancer cell lines, and several genetically engineered mouse models, express the claudin-low phenotype. Finally, we confirm that a prognostically relevant differentiation hierarchy exists across all breast cancers in which the claudin-low subtype most closely resembles the mammary epithelial stem cell. These results should help to improve our understanding of the biologic heterogeneity of breast cancer and provide tools for the further evaluation of the unique biology of claudin-low tumors and cell lines.",
"title": ""
},
{
"docid": "f629f426943b995a304f3d35b7090cda",
"text": "We describe an LSTM-based model which we call Byte-to-Span (BTS) that reads text as bytes and outputs span annotations of the form [start, length, label] where start positions, lengths, and labels are separate entries in our vocabulary. Because we operate directly on unicode bytes rather than languagespecific words or characters, we can analyze text in many languages with a single model. Due to the small vocabulary size, these multilingual models are very compact, but produce results similar to or better than the state-ofthe-art in Part-of-Speech tagging and Named Entity Recognition that use only the provided training datasets (no external data sources). Our models are learning “from scratch” in that they do not rely on any elements of the standard pipeline in Natural Language Processing (including tokenization), and thus can run in standalone fashion on raw text.",
"title": ""
},
{
"docid": "2b4d85ad7ec9bbb3b2b964d1552b3006",
"text": "The transmission of pain from peripheral tissues through the spinal cord to the higher centres of the brain is clearly not a passive simple process using exclusive pathways. Rather, circuitry within the spinal cord has the potential to alter, dramatically, the relation between the stimulus and the response to pain in an individual. Thus an interplay between spinal neuronal systems, both excitatory and inhibitory, will determine the messages delivered to higher levels of the central nervous system. The incoming messages may be attenuated or enhanced, changes which may be determined by the particular circumstances. The latter state, termed central hypersensitivity [61], whereby low levels of afferent activity are amplified by spinal pharmacological mechanisms has attracted much attention [13, 15]. However, additionally, inhibitory controls are subject to alteration so that opioid sensitivity in different pain states is not fixed [14]. This plasticity, the capacity for transmission in nociceptive systems to change, can be induced over very short time courses. Recent research on the pharmacology of nociception has started to shed some well-needed light on this rapid plasticity which could have profound consequences for the pharmacological treatment of pain [8, 13, 15, 23, 24, 35, 36, 41, 62]. The pharmacology of the sensory neurones in the dorsal horn of the spinal cord is complex, so much so that most of the candidate neurotransmitters and their receptors found in the CNS are also found here [4, 32]. The transmitters are derived from either the afferent fibres, intrinsic neurones or descending fibres. The majority of the transmitters and receptors are concentrated in the substantia gelatinosa, one of the densest neuronal areas in the CNS and crucial for the reception and modulation of nociceptive messages transmitted via the peripheral fibres [4]. Nociceptive C-fibres terminate in the outer lamina 1 and the underlying substantia gelatinosa, whereas the large tactile fibres terminate in deeper laminae. However, in addition to the lamina 1 cells which send long ascending axons to the brain, deep dorsal horn cells also give rise to ascending axons and respond to C-fibre stimulation. In the case of these deep cells the C-fibre input may be relayed via",
"title": ""
},
{
"docid": "f32c21b557c9835cce1cb62cdaa1ef39",
"text": "The multi-bit first stage of a 12 b 75 MS/s pipelined ADC uses an open-loop gain stage to achieve more than 60% residue amplifier power savings over a conventional implementation. Statistical background calibration removes linear and nonlinear residue errors in the digital domain. The prototype IC achieves 68.2 dB SNR, -76 dB THD, occupies 7.9 mm/sup 2/ in 0.35 /spl mu/m CMOS and consumes 290 mW at 3 V.",
"title": ""
},
{
"docid": "46ef5b489f02a1b62b0fb78a28bfc32c",
"text": "Biobanks have been heralded as essential tools for translating biomedical research into practice, driving precision medicine to improve pathways for global healthcare treatment and services. Many nations have established specific governance systems to facilitate research and to address the complex ethical, legal and social challenges that they present, but this has not lead to uniformity across the world. Despite significant progress in responding to the ethical, legal and social implications of biobanking, operational, sustainability and funding challenges continue to emerge. No coherent strategy has yet been identified for addressing them. This has brought into question the overall viability and usefulness of biobanks in light of the significant resources required to keep them running. This review sets out the challenges that the biobanking community has had to overcome since their inception in the early 2000s. The first section provides a brief outline of the diversity in biobank and regulatory architecture in seven countries: Australia, Germany, Japan, Singapore, Taiwan, the UK, and the USA. The article then discusses four waves of responses to biobanking challenges. This article had its genesis in a discussion on biobanks during the Centre for Health, Law and Emerging Technologies (HeLEX) conference in Oxford UK, co-sponsored by the Centre for Law and Genetics (University of Tasmania). This article aims to provide a review of the issues associated with biobank practices and governance, with a view to informing the future course of both large-scale and smaller scale biobanks.",
"title": ""
},
{
"docid": "df5ce1a194802b0f6dac28d1a05bb08e",
"text": "This paper presents a 77-GHz CMOS frequency-modulated continuous-wave (FMCW) frequency synthesizer with the capability of reconfigurable chirps. The frequency-sweep range and sweep time of the chirp signals can be reconfigured every cycle such that the frequency-hopping random chirp signal can be realized for an FMCW radar transceiver. The frequency synthesizer adopts the fractional-N phase-locked-loop technique and is fully integrated in TSMC 65-nm digital CMOS technology. The silicon area of the synthesizer is 0.65 mm × 0.45 mm and it consumes 51.3 mW of power. The measured output phase noise of the synthesizer is -85.1 dBc/Hz at 1-MHz offset and the root-mean-square modulation frequency error is smaller than 73 kHz.",
"title": ""
},
{
"docid": "97bf94b65caf7f4cfaf19699a69d856c",
"text": "Customer churn, i.e., losing a customer to the competition, is a major problem in mobile telecommunications. This paper investigates the added value of combining regular tabular data mining with social network mining, leveraging the graph formed by communications between customers. We extend classical tabular churn datasets with predictors derived from social network neighborhoods. We also extend traditional social network spreading activation models with information from classical tabular churn models. Experiments show that in the second approach the combination of tabular and social network mining improves results, but overall the traditional tabular churn models score best.",
"title": ""
},
{
"docid": "9f4ed0a381bec3c334ec15dec27a8a24",
"text": "Software code review, i.e., the practice of having other team members critique changes to a software system, is a well-established best practice in both open source and proprietary software domains. Prior work has shown that formal code inspections tend to improve the quality of delivered software. However, the formal code inspection process mandates strict review criteria (e.g., in-person meetings and reviewer checklists) to ensure a base level of review quality, while the modern, lightweight code reviewing process does not. Although recent work explores the modern code review process, little is known about the relationship between modern code review practices and long-term software quality. Hence, in this paper, we study the relationship between post-release defects (a popular proxy for long-term software quality) and: (1) code review coverage, i.e., the proportion of changes that have been code reviewed, (2) code review participation, i.e., the degree of reviewer involvement in the code review process, and (3) code reviewer expertise, i.e., the level of domain-specific expertise of the code reviewers. Through a case study of the Qt, VTK, and ITK projects, we find that code review coverage, participation, and expertise share a significant link with software quality. Hence, our results empirically confirm the intuition that poorly-reviewed code has a negative impact on software quality in large systems using modern reviewing tools.",
"title": ""
},
{
"docid": "6e2ecc13dc0a1151c8e921dc6a2b2b97",
"text": "A Continuous Integration system is often considered one of the key elements involved in supporting an agile software development and testing environment. As a traditional software tester transitioning to an agile development environment it became clear to me that I would need to put this essential infrastructure in place and promote improved development practices in order to make the transition to agile testing possible. This experience report discusses a continuous integration implementation I led last year. The initial motivations for implementing continuous integration are discussed and a pre and post-assessment using Martin Fowler's\" Practices of Continuous Integration\" is provided along with the technical specifics of the implementation. The report concludes with a retrospective of my experiences implementing and promoting continuous integration within the context of agile testing.",
"title": ""
},
{
"docid": "f1eebe848e14bef48c876047bfb517b6",
"text": "BACKGROUND\nThe goal of total phallic construction is the creation of a sensate and cosmetically acceptable phallus. An incorporated neourethra allows the patient to void while standing, and the insertion of a penile implant allows the patient to resume sexual activities, thus improving quality of life.\n\n\nOBJECTIVE\nTo report our experience of total phallic construction with the use of the radial artery free flap in female-to-male transsexuals.\n\n\nDESIGN, SETTINGS, AND PARTICIPANTS\nThe notes of the 115 patients who underwent total phallic construction with the use of the radial artery-based forearm free flap between January 1998 and December 2008 were reviewed retrospectively.\n\n\nMEASUREMENTS\nThe surgical outcome, cosmesis of the phallus, complications, eventual need for revision surgery, and patient satisfaction were recorded during the follow-up.\n\n\nRESULTS AND LIMITATIONS\nThis technique allowed the reconstruction of a cosmetically acceptable phallus in 112 patients; 3 patients lost the phallus due to venous thrombosis in the immediate postoperative period. After a median follow-up of 26 mo (range: 1-270 mo), 97% of patients are fully satisfied with cosmesis and size of the phallus. Sensation of the phallus was reported by 86% of patients. Urethral strictures and fistulae in the phallus and join-up site were the most common complications, occurring respectively in 9 and 20 patients; however, after revision surgery, 99% of patients were able to void from the tip of the phallus while standing.\n\n\nCONCLUSIONS\nThe radial artery-based forearm free flap technique is excellent for total phallic construction, providing excellent cosmetic and functional results.",
"title": ""
},
{
"docid": "d38f389809b9ed973e3b92216496909c",
"text": "Bullwhip effect in the supply chain distribution network is a phenomenon that is highly avoided because it can lead to high operational costs. It drew the attention of researchers to examine ways to minimize the bullwhip effect. Bullwhip effect occurs because of incorrect company planning in pursuit of customer demand. Bullwhip effect occurs due to increased amplitude of demand variance towards upper supply chain level. If the product handled is a perishable product it will make the bullwhip effect more sensitive. The purpose of this systematic literature review is to map out some of the variables used in constructing mathematical models to minimize the bullwhip effect on food supply chains that have perishable product characteristics. The result of this systematic literature review is that the authors propose an appropriate optimization model that will be applied in the food supply chain sales on the train in Indonesian railways in the next research.",
"title": ""
},
{
"docid": "20d534fb4ab89e77945366c24c066b06",
"text": "In this paper, we present a new collection of open-source software libraries that provides command line binary utilities and library classes and functions for compiling regular expression and context-sensitive rewrite rules into finite-state transducers, and for n-gram language modeling. The OpenGrm libraries use the OpenFst library to provide an efficient encoding of grammars and general algorithms for building, modifying and applying models.",
"title": ""
},
{
"docid": "a3add1c3190decbc773e0d45a0563cab",
"text": "Despite the relatively recent emergence of the Unified Theory of Acceptance and Use of Technology (UTAUT), the originating article has already been cited by a large number of studies, and hence it appears to have become a popular theoretical choice within the field of information system (IS)/information technology (IT) adoption and diffusion. However, as yet there have been no attempts to analyse the reasons for citing the originating article. Such a systematic review of citations may inform researchers and guide appropriate future use of the theory. This paper therefore presents the results of a systematic review of 450 citations of the originating article in an attempt to better understand the reasons for citation, use and adaptations of the theory. Findings revealed that although a large number of studies have cited the originating article since its appearance, only 43 actually utilised the theory or its constructs in their empirical research for examining IS/IT related issues. This chapter also classifies and discusses these citations and explores the limitations of UTAUT use in existing research.",
"title": ""
},
{
"docid": "d03390ba2dacef4b657e724c019b2b66",
"text": "Recent efforts to add new services to the Internet have increased interest in software-based routers that are easy to extend and evolve. This paper describes our experiences using emerging network processors---in particular, the Intel IXP1200---to implement a router. We show it is possible to combine an IXP1200 development board and a PC to build an inexpensive router that forwards minimum-sized packets at a rate of 3.47Mpps. This is nearly an order of magnitude faster than existing pure PC-based routers, and sufficient to support 1.77Gbps of aggregate link bandwidth. At lesser aggregate line speeds, our design also allows the excess resources available on the IXP1200 to be used robustly for extra packet processing. For example, with 8 × 100Mbps links, 240 register operations and 96 bytes of state storage are available for each 64-byte packet. Using a hierarchical architecture we can guarantee line-speed forwarding rates for simple packets with the IXP1200, and still have extra capacity to process exceptional packets with the Pentium. Up to 310Kpps of the traffic can be routed through the Pentium to receive 1510 cycles of extra per-packet processing.",
"title": ""
},
{
"docid": "5a777c011d7dbd82653b1b2d0f007607",
"text": "The Factored Language Model (FLM) is a flexible framework for incorporating various information sources, such as morphology and part-of-speech, into language modeling. FLMs have so far been successfully applied to tasks such as speech recognition and machine translation; it has the potential to be used in a wide variety of problems in estimating probability tables from sparse data. This tutorial serves as a comprehensive description of FLMs and related algorithms. We document the FLM functionalities as implemented in the SRI Language Modeling toolkit and provide an introductory walk-through using FLMs on an actual dataset. Our goal is to provide an easy-to-understand tutorial and reference for researchers interested in applying FLMs to their problems. Overview of the Tutorial We first describe the factored language model (Section 1) and generalized backoff (Section 2), two complementary techniques that attempt to improve statistical estimation (i.e., reduce parameter variance) in language models, and that also attempt to better describe the way in which language (and sequences of words) might be produced. Researchers familar with the algorithms behind FLMs may skip to Section 3, which describes the FLM programs and file formats in the publicly-available SRI Language Modeling (SRILM) toolkit.1 Section 4 is a step-by-step walkthrough with several FLM examples on a real language modeling dataset. This may be useful for beginning users of the FLMs. Finally, Section 5 discusses the problem of automatically tuning FLM parameters on real datasets and refers to existing software. This may be of interest to advanced users of FLMs.",
"title": ""
}
] |
scidocsrr
|
6cb019ffe68815ac169895230f0908cc
|
Automatic inference of code transforms for patch generation
|
[
{
"docid": "b93446bab637abd4394338615a5ef6e9",
"text": "Genetic programming is a methodology inspired by biological evolution. By using computational analogs to biological crossover and mutation new versions of a program are generated automatically. This population of new programs is then evaluated by an user defined fittness function to only select the programs that show an improved behavior as compared to the original program. In this case the desired behavior is to retain all original functionality and additionally fixing bugs found in the program code.",
"title": ""
}
] |
[
{
"docid": "cc6b9165f395e832a396d59c85f482cc",
"text": "Vision-based automatic counting of people has widespread applications in intelligent transportation systems, security, and logistics. However, there is currently no large-scale public dataset for benchmarking approaches on this problem. This work fills this gap by introducing the first real-world RGBD People Counting DataSet (PCDS) containing over 4, 500 videos recorded at the entrance doors of buses in normal and cluttered conditions. It also proposes an efficient method for counting people in real-world cluttered scenes related to public transportations using depth videos. The proposed method computes a point cloud from the depth video frame and re-projects it onto the ground plane to normalize the depth information. The resulting depth image is analyzed for identifying potential human heads. The human head proposals are meticulously refined using a 3D human model. The proposals in each frame of the continuous video stream are tracked to trace their trajectories. The trajectories are again refined to ascertain reliable counting. People are eventually counted by accumulating the head trajectories leaving the scene. To enable effective head and trajectory identification, we also propose two different compound features. A thorough evaluation on PCDS demonstrates that our technique is able to count people in cluttered scenes with high accuracy at 45 fps on a 1.7 GHz processor, and hence it can be deployed for effective real-time people counting for intelligent transportation systems.",
"title": ""
},
{
"docid": "44d96985132b956f809d4f03fbb07415",
"text": "We propose a method for extracting very accurate masks of hands in egocentric views. Our method is based on a novel Deep Learning architecture: In contrast with current Deep Learning methods, we do not use upscaling layers applied to a low-dimensional representation of the input image. Instead, we extract features with convolutional layers and map them directly to a segmentation mask with a fully connected layer. We show that this approach, when applied in a multi-scale fashion, is both accurate and efficient enough for real-time. We demonstrate it on a new dataset made of images captured in various environments, from the outdoors to offices.",
"title": ""
},
{
"docid": "16c9b857bbe8d9f13f078ddb193d7483",
"text": "We present TweetMotif, an exploratory search application for Twitter. Unlike traditional approaches to information retrieval, which present a simple list of messages, TweetMotif groups messages by frequent significant terms — a result set’s subtopics — which facilitate navigation and drilldown through a faceted search interface. The topic extraction system is based on syntactic filtering, language modeling, near-duplicate detection, and set cover heuristics. We have used TweetMotif to deflate rumors, uncover scams, summarize sentiment, and track political protests in real-time. A demo of TweetMotif, plus its source code, is available at http://tweetmotif.com. Introduction and Description On the microblogging service Twitter, users post millions of very short messages every day. Organizing and searching through this large corpus is an exciting research problem. Since messages are so small, we believe microblog search requires summarization across many messages at once. Our system, TweetMotif, responds to user queries, first retrieving several hundred recent matching messages from a simple index; we use the Twitter Search API. Instead of simply showing this result set as a list, TweetMotif extracts a set of themes (topics) to group and summarize these messages. A topic is simultaneously characterized by (1) a 1to 3-word textual label, and (2) a set of messages, whose texts must all contain the label. TweetMotif’s user interface is inspired by faceted search, which has been shown to aid Web search tasks (Hearst et al. 2002). The main screen is a two-column layout. The left column is a list of themes that are related to the current search term, while the right column presents actual tweets, grouped by theme. As themes are selected on the left column, a sample of tweets for that theme appears at the top of the right column, pushing down (but not removing) tweet results for any previously selected related themes. This allows users to explore and compare multiple related themes at once. The set of topics is chosen to try to satisfy several criteria, which often conflict: Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Screenshot of TweetMotif. 1. Frequency contrast: Topic label phrases should be frequent in the query subcorpus, but infrequent among general Twitter messages. This ensures relevance to the query while eliminating overly generic terms. 2. Topic diversity: Topics should be chosen such that their messages and label phrases minimally overlap. Overlapping topics repetitively fill the same information niche; only one should be used. 3. Topic size: A topic that includes too few messages is bad; it is overly specific. 4. Small number of topics: Screen real-estate and concomitant user cognitive load are limited resources. The goal is to provide the user a concise summary of themes and variation in the query subcorpus, then allow the user to navigate to individual topics to see their associated messages, and allow recursive drilldown. The approach is related to document clustering (though a message can belong to multiple topics) and text summarization (topic labels are a high-relevance subset of text across messages). We heuristically proceed through several stages of analysis.",
"title": ""
},
{
"docid": "e075b1870628a92c3d96e6a7a05c7037",
"text": "The two major intracellular protein degradation systems, the ubiquitin-proteasome system (UPS) and autophagy, work collaboratively in many biological processes including development, apoptosis, aging, and countering oxidative injuries. We report here that, in human retinal pigment epithelial cells (RPE), ARPE-19 cells, proteasome inhibitors, clasto-lactacystinβ-lactone (LA) or epoxomicin (Epo), at non-lethal doses, increased the protein levels of autophagy-specific genes Atg5 and Atg7 and enhanced the conversion of microtubule-associated protein light chain (LC3) from LC3-I to its lipidative form, LC3-II, which was enhanced by co-addition of the saturated concentration of Bafilomycin A1 (Baf). Detection of co-localization for LC3 staining and labeled-lysosome further confirmed autophagic flux induced by LA or Epo. LA or Epo reduced the phosphorylation of the protein kinase B (Akt), a downstream target of phosphatidylinositol-3-kinases (PI3K), and mammalian target of rapamycin (mTOR) in ARPE-19 cells; by contrast, the induced changes of autophagy substrate, p62, showed biphasic pattern. The autophagy inhibitor, Baf, attenuated the reduction in oxidative injury conferred by treatment with low doses of LA and Epo in ARPE-19 cells exposed to menadione (VK3) or 4-hydroxynonenal (4-HNE). Knockdown of Atg7 with siRNA in ARPE-19 cells reduced the protective effects of LA or Epo against VK3. Overall, our results suggest that treatment with low levels of proteasome inhibitors confers resistance to oxidative injury by a pathway involving inhibition of the PI3K-Akt-mTOR pathway and activation of autophagy.",
"title": ""
},
{
"docid": "61d29b80bcea073665f454444a3b0262",
"text": "Nitric oxide (NO) is the principal mediator of penile erection. NO is synthesized by nitric oxide synthase (NOS). It has been well documented that the major causative factor contributing to erectile dysfunction in diabetic patients is the reduction in the amount of NO synthesis in the corpora cavernosa of the penis resulting in alterations of normal penile homeostasis. Arginase is an enzyme that shares a common substrate with NOS, thus arginase may downregulate NO production by competing with NOS for this substrate, l-arginine. The purpose of the present study was to compare arginase gene expression, protein levels, and enzyme activity in diabetic human cavernosal tissue. When compared to normal human cavernosal tissue, diabetic corpus cavernosum from humans with erectile dysfunction had higher levels of arginase II protein, gene expression, and enzyme activity. In contrast, gene expression and protein levels of arginase I were not significantly different in diabetic cavernosal tissue when compared to control tissue. The reduced ability of diabetic tissue to convert l-arginine to l-citrulline via nitric oxide synthase was reversed by the selective inhibition of arginase by 2(S)-amino-6-boronohexanoic acid (ABH). These data suggest that the increased expression of arginase II in diabetic cavernosal tissue may contribute to the erectile dysfunction associated with this common disease process and may play a role in other manifestations of diabetic disease in which nitric oxide production is decreased.",
"title": ""
},
{
"docid": "1e7f14531caad40797594f9e4c188697",
"text": "The Drosophila melanogaster germ plasm has become the paradigm for understanding both the assembly of a specific cytoplasmic localization during oogenesis and its function. The posterior ooplasm is necessary and sufficient for the induction of germ cells. For its assembly, localization of gurken mRNA and its translation at the posterior pole of early oogenic stages is essential for establishing the posterior pole of the oocyte. Subsequently, oskar mRNA becomes localized to the posterior pole where its translation leads to the assembly of a functional germ plasm. Many gene products are required for producing the posterior polar plasm, but only oskar, tudor, valois, germcell-less and some noncoding RNAs are required for germ cell formation. A key feature of germ cell formation is the precocious segregation of germ cells, which isolates the primordial germ cells from mRNA turnover, new transcription, and continued cell division. nanos is critical for maintaining the transcription quiescent state and it is required to prevent transcription of Sex-lethal in pole cells. In spite of the large body of information about the formation and function of the Drosophila germ plasm, we still do not know what specifically is required to cause the pole cells to be germ cells. A series of unanswered problems is discussed in this chapter.",
"title": ""
},
{
"docid": "709902b59dd0623d6e97520c52db7608",
"text": "In this paper, we propose a scale-invariant framework based on Convolutional Neural Networks (CNNs). The network exhibits robustness to scale and resolution variations in data. Previous efforts in achieving scale invariance were made on either integrating several variant-specific CNNs or data augmentation. However, these methods did not solve the fundamental problem that CNNs develop different feature representations for the variants of the same image. The topology proposed by this paper develops a uniform representation for each of the variants of the same image. The uniformity is acquired by concatenating scale-variant and scale-invariant features to enlarge the feature space so that the case when input images are of diverse variations but from the same class can be distinguished from another case when images are of different classes. Higher-order decision boundaries lead to the success of the framework. Experimental results on a challenging dataset substantiates that our framework performs better than traditional frameworks with the same number of free parameters. Our proposed framework can also achieve a higher training efficiency.",
"title": ""
},
{
"docid": "b50b912cb79368db51825e7cbea2df5d",
"text": "Effectively solving the problem of sketch generation, which aims to produce human-drawing-like sketches from real photographs, opens the door for many vision applications such as sketch-based image retrieval and nonphotorealistic rendering. In this paper, we approach automatic sketch generation from a human visual perception perspective. Instead of gathering insights from photographs, for the first time, we extract information from a large pool of human sketches. In particular, we study how multiple Gestalt rules can be encapsulated into a unified perceptual grouping framework for sketch generation. We further show that by solving the problem of Gestalt confliction, i.e., encoding the relative importance of each rule, more similar to human-made sketches can be generated. For that, we release a manually labeled sketch dataset of 96 object categories and 7,680 sketches. A novel evaluation framework is proposed to quantify human likeness of machinegenerated sketches by examining how well they can be classified using models trained from human data. Finally, we demonstrate the superiority of our sketches under the practical application of sketch-based image retrieval.",
"title": ""
},
{
"docid": "037a61984469ddaa9cf2f17eb304e781",
"text": "The Dynamic Time Warping (DTW) distance measure is a technique that has long been known in speech recognition community. It allows a non-linear mapping of one signal to another by minimizing the distance between the two. A decade ago, DTW was introduced into Data Mining community as a utility for various tasks for time series problems including classification, clustering, and anomaly detection. The technique has flourished, particularly in the last three years, and has been applied to a variety of problems in various disciplines. In spite of DTW’s great success, there are still several persistent “myths” about it. These myths have caused confusion and led to much wasted research effort. In this work, we will dispel these myths with the most comprehensive set of time series experiments ever conducted.",
"title": ""
},
{
"docid": "e4c4a0f2bf476892794aebd79c0f05cc",
"text": "Switched reluctance motors (SRMs) have been gaining increasing popularity and emerging as an attractive alternative to traditional electrical motors in hybrid vehicle applications due to their numerous advantages. However, large torque ripple and acoustic noise are its major disadvantages. This paper presents a novel five-phase 15/12 SRM which features higher power density, very low level of vibration with flexibility in controlling the torque ripple profile. This design is classified as an axial field SRM and hence it needs three-dimensional finite-element analysis model. However, an alternative two-dimensional model is presented and some design features and result are discussed in this paper.",
"title": ""
},
{
"docid": "ecd54b6fad0a1d79440204df72b977fa",
"text": "The rapid development of Web technology has resulted in an increasing number of hotel customers sharing their opinions on the hotel services. Effective visual analysis of online customer opinions is needed, as it has a significant impact on building a successful business. In this paper, we present OpinionSeer, an interactive visualization system that could visually analyze a large collection of online hotel customer reviews. The system is built on a new visualization-centric opinion mining technique that considers uncertainty for faithfully modeling and analyzing customer opinions. A new visual representation is developed to convey customer opinions by augmenting well-established scatterplots and radial visualization. To provide multiple-level exploration, we introduce subjective logic to handle and organize subjective opinions with degrees of uncertainty. Several case studies illustrate the effectiveness and usefulness of OpinionSeer on analyzing relationships among multiple data dimensions and comparing opinions of different groups. Aside from data on hotel customer feedback, OpinionSeer could also be applied to visually analyze customer opinions on other products or services.",
"title": ""
},
{
"docid": "d2eefcb0a03f769c5265a66be89c5ca3",
"text": "The computational treatment of subjectivity and sentiment in natural language is usually significantly improved by applying features exploiting lexical resources where entries are tagged with semantic orientation (e.g., positive, negative values). In spite of the fair amount of work on Arabic sentiment analysis over the past few years, e.g., (Abbasi et al., 2008; Abdul-Mageed et al., 2014; Abdul-Mageed et al., 2012; Abdul-Mageed and Diab, 2012a; Abdul-Mageed and Diab, 2012b; Abdul-Mageed et al., 2011a; Abdul-Mageed and Diab, 2011), the language remains under-resourced as to these polarity repositories compared to the English language. In this paper, we report efforts to build and present SANA, a large-scale, multi-genre, multi-dialect multi-lingual lexicon for the subjectivity and sentiment analysis of the Arabic language and dialects.",
"title": ""
},
{
"docid": "aad262b19db8dd6c6caf34e7966c433a",
"text": "Cloud computing is now a well-consolidated paradigm for on-demand services provisioning on a pay-as-you-go model. Elasticity, one of the major benefits required for this computing model, is the ability to add and remove resources “on the fly” to handle the load variation. Although many works in literature have surveyed cloud computing and its features, there is a lack of a detailed analysis about elasticity for the cloud. As an attempt to fill this gap, we propose this survey on cloud computing elasticity based on an adaptation of a classic systematic review. We address different aspects of elasticity, such as definitions, metrics and tools for measuring, evaluation of the elasticity, and existing solutions. Finally, we present some open issues and future direcEmanuel Ferreira Coutinho Master and Doctorate in Computer Science (MDCC) Virtual UFC Institute Federal University of Ceara (UFC) Brazil Tel.: +55-85-8875-1977 E-mail: emanuel@virtual.ufc.br Flávio R. C. Sousa Teleinformatics Engineering Department (DETI) Federal University of Ceara (UFC) Brazil E-mail: flaviosousa@ufc.br Paulo A. L. Rego Master and Doctorate in Computer Science (MDCC) Federal University of Ceara (UFC) Brazil E-mail: pauloalr@ufc.br Danielo G. Gomes Teleinformatics Engineering Department (DETI) Federal University of Ceara (UFC) Brazil E-mail: danielo@ufc.br José N. de Souza Master and Doctorate in Computer Science (MDCC) Federal University of Ceara (UFC) Brazil E-mail: neuman@ufc.br 2 Emanuel Ferreira Coutinho et al. tions. To the best of our knowledge, this is the first study on cloud computing elasticity using a systematic review approach.",
"title": ""
},
{
"docid": "a2dfa8007b3a13da31a768fe07393d15",
"text": "Predicting the time and effort for a software problem has long been a difficult task. We present an approach that automatically predicts the fixing effort, i.e., the person-hours spent on fixing an issue. Our technique leverages existing issue tracking systems: given a new issue report, we use the Lucene framework to search for similar, earlier reports and use their average time as a prediction. Our approach thus allows for early effort estimation, helping in assigning issues and scheduling stable releases. We evaluated our approach using effort data from the JBoss project. Given a sufficient number of issues reports, our automatic predictions are close to the actual effort; for issues that are bugs, we are off by only one hour, beating na¨ýve predictions by a factor of four.",
"title": ""
},
{
"docid": "2c7b61aaca38051230122bef872002cc",
"text": "Signal-based Surveillance systems such as Closed Circuits Televisions (CCTV) have been widely installed in public places. Those systems are normally used to find the events with security interest, and play a significant role in public safety. Though such systems are still heavily reliant on human labour to monitor the captured information, there have been a number of automatic techniques proposed to analysing the data. This article provides an overview of automatic surveillance event detection techniques . Despite it’s popularity in research, it is still too challenging a problem to be realised in a real world deployment. The challenges come from not only the detection techniques such as signal processing and machine learning, but also the experimental design with factors such as data collection, evaluation protocols, and ground-truth annotation. Finally, this article propose that multi-disciplinary research is the path towards a solution to this problem.",
"title": ""
},
{
"docid": "a0c126480f0bce527a893853f6f3bec9",
"text": "Word problems are an established technique for teaching mathematical modeling skills in K-12 education. However, many students find word problems unconnected to their lives, artificial, and uninteresting. Most students find them much more difficult than the corresponding symbolic representations. To account for this phenomenon, an ideal pedagogy might involve an individually crafted progression of unique word problems that form a personalized plot. We propose a novel technique for automatic generation of personalized word problems. In our system, word problems are generated from general specifications using answer-set programming (ASP). The specifications include tutor requirements (properties of a mathematical model), and student requirements (personalization, characters, setting). Our system takes a logical encoding of the specification, synthesizes a word problem narrative and its mathematical model as a labeled logical plot graph, and realizes the problem in natural language. Human judges found our problems as solvable as the textbook problems, with a slightly more artificial language.",
"title": ""
},
{
"docid": "af25bc1266003202d3448c098628aee8",
"text": "Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR10, CIFAR-100, and SVHN datasets, yielding new state-ofthe-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code available at https://github.com/ uoguelph-mlrg/Cutout.",
"title": ""
},
{
"docid": "92b20ec581fc5609da2908f9f0f74a33",
"text": "We address the problem of using external rotation information with uncalibrated video sequences. The main problem addressed is, what is the benefit of the orientation information for camera calibration? It is shown that in case of a rotating camera the camera calibration problem is linear even in the case that all intrinsic parameters vary. For arbitrarily moving cameras the calibration problem is also linear but underdetermined for the general case of varying all intrinsic parameters. However, if certain constraints are applied to the intrinsic parameters the camera calibration can be computed linearily. It is analyzed which constraints are needed for camera calibration of freely moving cameras. Furthermore we address the problem of aligning the camera data with the rotation sensor data in time. We give an approach to align these data in case of a rotating camera.",
"title": ""
},
{
"docid": "5a99af400ea048d34ee961ad7f3e3bf6",
"text": "Breast cancer is becoming pervasive with each passing day. Hence, its early detection is a big step in saving life of any patient. Mammography is a common tool in breast cancer diagnosis. The most important step here is classification of mammogram patches as normal-abnormal and benign-malignant. Texture of a breast in a mammogram patch plays a big role in these classifications. We propose a new feature extraction descriptor called Histogram of Oriented Texture (HOT), which is a combination of Histogram of Gradients (HOG) and a Gabor filter, and exploits this fact. We also revisit the Pass Band Discrete Cosine Transform (PB-DCT) descriptor that captures texture information well. All features of a mammogram patch may not be useful. Hence, we apply a feature selection technique called Discrimination Potentiality (DP). Our resulting descriptors, DP-HOT and DP-PB-DCT, are compared with the standard descriptors. Density of a mammogram patch is important for classification, and has not been studied exhaustively. The Image Retrieval in Medical Application (IRMA) database from RWTH Aachen, Germany is a standard database that provides mammogram patches, and most researchers have tested their frameworks only on a subset of patches from this database. We apply our two new descriptors on all images of the IRMA database for density wise classification, and compare with the standard descriptors. We achieve higher accuracy than all of the existing standard descriptors (more than 92% ).",
"title": ""
}
] |
scidocsrr
|
372185c392e68e1b8d506702e95cafc9
|
Intelligent machinery, a heretical theory; reprinted in (Copeland 2004) Universal Turing Machine
|
[
{
"docid": "de1f5c84419787885e6c9d4c3dbd5f78",
"text": "Signed algorithms and scatter/gather I/O have garnered tremendous interest from both system administrators and biologists in the last several years. After years of confirmed research into the producer-consumer problem, we demonstrate the refinement of IPv4. In this work, we concentrate our efforts on proving that online algorithms and XML can interfere to accomplish this aim.",
"title": ""
}
] |
[
{
"docid": "fb11b937a3c07fd4b76cda1ed1eadc07",
"text": "Depth information plays an important role in a variety of applications, including manufacturing, medical imaging, computer vision, graphics, and virtual/augmented reality (VR/AR). Depth sensing has thus attracted sustained attention from both academia and industry communities for decades. Mainstream depth cameras can be divided into three categories: stereo, time of flight (ToF), and structured light. Stereo cameras require no active illumination and can be used outdoors, but they are fragile for homogeneous surfaces. Recently, off-the-shelf light field cameras have demonstrated improved depth estimation capability with a multiview stereo configuration. ToF cameras operate at a high frame rate and fit time-critical scenarios well, but they are susceptible to noise and limited to low resolution [3]. Structured light cameras can produce high-resolution, high-accuracy depth, provided that a number of patterns are sequentially used. Due to its promising and reliable performance, the structured light approach has been widely adopted for three-dimensional (3-D) scanning purposes. However, achieving real-time depth with structured light either requires highspeed (and thus expensive) hardware or sacrifices depth resolution and accuracy by using a single pattern instead.",
"title": ""
},
{
"docid": "b629ae23b7351c59c55ee9e9f1a33117",
"text": "75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 Tthe treatment of chronic hepatitis C virus (HCV) infection has been nothing short of remarkable with the prospect of elimination never more within reach. Attention has shifted to the safety and efficacy of DAAs in special populations, such as hepatitis B virus (HBV)/HCV coinfected individuals. Although the true prevalence of coinfection is unknown, studies from the United States report that 1.4% to 5.8% of HCV-infected individuals are hepatitis B surface antigen (HBsAg) positive compared with 1.4% to 4.1% in China. Coinfection is associated with higher rates of cirrhosis, decompensation, and hepatocellular carcinoma compared with monoinfected individuals. Because HBsAgpositive individuals were excluded from clinical trials of DAAs, HBV reactivation after HCV clearance was only reported after DAAs entered clinical use. Reports of severe and even fatal cases led the US Food and Drug Administration (FDA) to issue a strong directive regarding the risk of HBV reactivation with DAA treatment. The FDA boxed warning was based on 29 cases of HBV reactivation, including 2 fatal events and one that led to liver transplantation. However, owing to the nature of postapproval reporting, critical data were often missing, including baseline HBV serology, making it difficult to truly assess the risk. To err on the safe side, the FDA recommended screening all individuals scheduled to receive DAAs for evidence of current or past HBV infection with follow-up HBV DNA testing for any positive serology. Differing recommendations from international guidelines left clinicians unsure of how to proceed. The study by Liu et al in this issue of Gastroenterology provides much-needed data regarding the risk of HBV reactivation in coinfected individuals treated with DAAs. This prospective study enrolled 111 patients with HBV/ HCV coinfection who received sofosbuvir/ledipasvir for 12 weeks. Notably, although 61% were infected with HCV genotype 1, 39% had genotype 2 infection, a group for whom sofosbuvir/ledipasvir is not currently recommended. All patients achieved sustained virologic response (SVR). More important, the authors carefully evaluated what happened to HBV during and after HCV therapy. Patients were divided into 2 groups: those with undetectable HBV DNA and those with an HBV DNA of >20 IU/mL at baseline. Increases in HBV DNA levels were common in both groups. DNA increased to quantifiable levels in 31 of 37 initially",
"title": ""
},
{
"docid": "86ededf9b452bbc51117f5a117247b51",
"text": "An approach to high field control, particularly in the areas near the high voltage (HV) and ground terminals of an outdoor insulator, is proposed using a nonlinear grading material; Zinc Oxide (ZnO) microvaristors compounded with other polymeric materials to obtain the required properties and allow easy application. The electrical properties of the microvaristor compounds are characterised by a nonlinear field-dependent conductivity. This paper describes the principles of the proposed field-control solution and demonstrates the effectiveness of the proposed approach in controlling the electric field along insulator profiles. A case study is carried out for a typical 11 kV polymeric insulator design to highlight the merits of the grading approach. Analysis of electric potential and field distributions on the insulator surface is described under dry clean and uniformly contaminated surface conditions for both standard and microvaristor-graded insulators. The grading and optimisation principles to allow better performance are investigated to improve the performance of the insulator both under steady state operation and under surge conditions. Furthermore, the dissipated power and associated heat are derived to examine surface heating and losses in the grading regions and for the complete insulator. Preliminary tests on inhouse prototype insulators have confirmed better flashover performance of the proposed graded insulator with a 21 % increase in flashover voltage.",
"title": ""
},
{
"docid": "c1a44605e8e9b76a76bf5a2dd3539310",
"text": "This paper presents a stereo matching approach for a novel multi-perspective panoramic stereo vision system, making use of asynchronous and non-simultaneous stereo imaging towards real-time 3D 360° vision. The method is designed for events representing the scenes visual contrast as a sparse visual code allowing the stereo reconstruction of high resolution panoramic views. We propose a novel cost measure for the stereo matching, which makes use of a similarity measure based on event distributions. Thus, the robustness to variations in event occurrences was increased. An evaluation of the proposed stereo method is presented using distance estimation of panoramic stereo views and ground truth data. Furthermore, our approach is compared to standard stereo methods applied on event-data. Results show that we obtain 3D reconstructions of 1024 × 3600 round views and outperform depth reconstruction accuracy of state-of-the-art methods on event data.",
"title": ""
},
{
"docid": "0533a5382c58c8714f442784b5596258",
"text": "Using 2 phase-change memory (PCM) devices per synapse, a 3-layer perceptron network with 164,885 synapses is trained on a subset (5000 examples) of the MNIST database of handwritten digits using a backpropagation variant suitable for NVM+selector crossbar arrays, obtaining a training (generalization) accuracy of 82.2% (82.9%). Using a neural network (NN) simulator matched to the experimental demonstrator, extensive tolerancing is performed with respect to NVM variability, yield, and the stochasticity, linearity and asymmetry of NVM-conductance response.",
"title": ""
},
{
"docid": "0e30297bf0ab30413e97e7478a9916a3",
"text": "Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural network (CNN) and recurrent neural network (RNN) are two mainstream architectures for such modeling tasks, which adopt totally different ways of understanding natural languages. In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification. C-LSTM utilizes CNN to extract a sequence of higher-level phrase representations, and are fed into a long short-term memory recurrent neural network (LSTM) to obtain the sentence representation. C-LSTM is able to capture both local features of phrases as well as global and temporal sentence semantics. We evaluate the proposed architecture on sentiment classification and question classification tasks. The experimental results show that the C-LSTM outperforms both CNN and LSTM and can achieve excellent performance on these tasks.",
"title": ""
},
{
"docid": "5a898d79de6cedebae4ff7acc4fabc34",
"text": "Education-job mismatches are reported to have serious effects on wages and other labour market outcomes. Such results are often cited in support of assignment theory, but can also be explained by institutional and human capital models. To test the assignment explanation, we examine the relation between educational mismatches and skill mismatches. In line with earlier research, educational mismatches affect wages strongly. Contrary to the assumptions of assignment theory, this effect is not explained by skill mismatches. Conversely, skill mismatches are much better predictors of job satisfaction and on-the-job search than are educational mismatches.",
"title": ""
},
{
"docid": "05cf044dcb3621a0190403a7961ecb00",
"text": "This paper describes a real-time beat tracking system that recognizes a hierarchical beat structure comprising the quarter-note, half-note, and measure levels in real-world audio signals sampled from popular-music compact discs. Most previous beat-tracking systems dealt with MIDI signals and had difficulty in processing, in real time, audio signals containing sounds of various instruments and in tracking beats above the quarter-note level. The system described here can process music with drums and music without drums and can recognize the hierarchical beat structure by using three kinds of musical knowledge: of onset times, of chord changes, and of drum patterns. This paper also describes several applications of beat tracking, such as beat-driven real-time computer graphics and lighting control.",
"title": ""
},
{
"docid": "2956ef98f020e0f17c36a69a890e21dc",
"text": "Complete coverage path planning requires the robot path to cover every part of the workspace, which is an essential issue in cleaning robots and many other robotic applications such as vacuum robots, painter robots, land mine detectors, lawn mowers, automated harvesters, and window cleaners. In this paper, a novel neural network approach is proposed for complete coverage path planning with obstacle avoidance of cleaning robots in nonstationary environments. The dynamics of each neuron in the topologically organized neural network is characterized by a shunting equation derived from Hodgkin and Huxley's (1952) membrane equation. There are only local lateral connections among neurons. The robot path is autonomously generated from the dynamic activity landscape of the neural network and the previous robot location. The proposed model algorithm is computationally simple. Simulation results show that the proposed model is capable of planning collision-free complete coverage robot paths.",
"title": ""
},
{
"docid": "246a0d759f0dbc8a050c225a3977b898",
"text": "Role mining recently has attracted much attention from the role-based access control (RBAC) research community as it provides a machine-operated means of discovering roles from existing permission assignments. While there is a rich body of literature on role mining, we find that user experience/perception one ultimate goal for any information system is surprisingly ignored by the existing works. This work is the first to study role mining from the end-user perspective. Specifically, based on the observation that end-users prefer simple role assignments, we propose to incorporate to the role mining process a user-role assignment sparseness constraint that mandates the maximum number of roles each user can have. Under this rationale, we formulate user-oriented role mining as two specific problems: one is user-oriented exact role mining problem (RMP), which is obliged to completely reconstruct the given permission assignments, and the other is user-oriented approximate RMP, which tolerates a certain amount of deviation from the complete reconstruction. The extra sparseness constraint poses a great challenge to role mining, which in general is already a hard problem. We examine some typical existing role mining methods to see their applicability to our problems. In light of their insufficiency, we present a new algorithm, which is based on a novel dynamic candidate role generation strategy, tailored to our problems. Experiments on benchmark datasets demonstrate the effectiveness of our proposed algorithm.",
"title": ""
},
{
"docid": "0185d09853600b950f5a1af27e0cdd91",
"text": "In this paper, the problem of matching pairs of correlated random graphs with multi-valued edge attributes is considered. Graph matching problems of this nature arise in several settings of practical interest including social network de-anonymization, study of biological data, and web graphs. An achievable region of graph parameters for successful matching is derived by analyzing a new matching algorithm that we refer to as typicality matching. The algorithm operates by investigating the joint typicality of the adjacency matrices of the two correlated graphs. Our main result shows that the achievable region depends on the mutual information between the variables corresponding to the edge probabilities of the two graphs. The result is based on bounds on the typicality of permutations of sequences of random variables that might be of independent interest.",
"title": ""
},
{
"docid": "4fea6fb309d496f9b4fd281c80a8eed7",
"text": "Network alignment is the problem of matching the nodes of two graphs, maximizing the similarity of the matched nodes and the edges between them. This problem is encountered in a wide array of applications---from biological networks to social networks to ontologies---where multiple networked data sources need to be integrated. Due to the difficulty of the task, an accurate alignment can rarely be found without human assistance. Thus, it is of great practical importance to develop network alignment algorithms that can optimally leverage experts who are able to provide the correct alignment for a small number of nodes. Yet, only a handful of existing works address this active network alignment setting.\n The majority of the existing active methods focus on absolute queries (\"are nodes a and b the same or not?\"), whereas we argue that it is generally easier for a human expert to answer relative queries (\"which node in the set b1,...,bn is the most similar to node a?\"). This paper introduces two novel relative-query strategies, TopMatchings and GibbsMatchings, which can be applied on top of any network alignment method that constructs and solves a bipartite matching problem. Our methods identify the most informative nodes to query by sampling the matchings of the bipartite graph associated to the network-alignment instance.\n We compare the proposed approaches to several commonly-used query strategies and perform experiments on both synthetic and real-world datasets. Our sampling-based strategies yield the highest overall performance, outperforming all the baseline methods by more than 15 percentage points in some cases. In terms of accuracy, TopMatchings and GibbsMatchings perform comparably. However, GibbsMatchings is significantly more scalable, but it also requires hyperparameter tuning for a temperature parameter.",
"title": ""
},
{
"docid": "a58b23fa78f7df8c36db139029459686",
"text": "We report on the algorithm of trajectory planning and four leg coordination for quasi-static stair climbing in a quadruped robot. The development is based on the geometrical interactions between the robot legs and the stair, starting from single-leg analysis, followed by two-leg collaboration, and then four-leg coordination. In addition, a brief study on stability of the robot is also reported. Finally, simulation and experimental test are also executed to evaluate the performance of the algorithm.",
"title": ""
},
{
"docid": "a1292045684debec0e6e56f7f5e85fad",
"text": "BACKGROUND\nLncRNA and microRNA play an important role in the development of human cancers; they can act as a tumor suppressor gene or an oncogene. LncRNA GAS5, originating from the separation from tumor suppressor gene cDNA subtractive library, is considered as an oncogene in several kinds of cancers. The expression of miR-221 affects tumorigenesis, invasion and metastasis in multiple types of human cancers. However, there's very little information on the role LncRNA GAS5 and miR-221 play in CRC. Therefore, we conducted this study in order to analyze the association of GAS5 and miR-221 with the prognosis of CRC and preliminary study was done on proliferation, metastasis and invasion of CRC cells. In the present study, we demonstrate the predictive value of long non-coding RNA GAS5 (lncRNA GAS5) and mircoRNA-221 (miR-221) in the prognosis of colorectal cancer (CRC) and their effects on CRC cell proliferation, migration and invasion.\n\n\nMETHODS\nOne hundred and fifty-eight cases with CRC patients and 173 cases of healthy subjects that with no abnormalities, who've been diagnosed through colonoscopy between January 2012 and January 2014 were selected for the study. After the clinicopathological data of the subjects, tissue, plasma and exosomes were collected, lncRNA GAS5 and miR-221 expressions in tissues, plasma and exosomes were measured by reverse transcription quantitative polymerase chain reaction (RT-qPCR). The diagnostic values of lncRNA GAS5 and miR-221 expression in tissues, plasma and exosomes in patients with CRC were analyzed using receiver operating characteristic curve (ROC). Lentiviral vector was constructed for the overexpression of lncRNA GAS5, and SW480 cell line was used for the transfection of the experiment and assigned into an empty vector and GAS5 groups. The cell proliferation, migration and invasion were tested using a cell counting kit-8 assay and Transwell assay respectively.\n\n\nRESULTS\nThe results revealed that LncRNA GAS5 was upregulated while the miR-221 was downregulated in the tissues, plasma and exosomes of patients with CRC. The results of ROC showed that the expressions of both lncRNA GAS5 and miR-221 in the tissues, plasma and exosomes had diagnostic value in CRC. While the LncRNA GAS5 expression in tissues, plasma and exosomes were associated with the tumor node metastasis (TNM) stage, Dukes stage, lymph node metastasis (LNM), local recurrence rate and distant metastasis rate, the MiR-221 expression in tissues, plasma and exosomes were associated with tumor size, TNM stage, Dukes stage, LNM, local recurrence rate and distant metastasis rate. LncRNA GAS5 and miR-221 expression in tissues, plasma and exosomes were found to be independent prognostic factors for CRC. Following the overexpression of GAS5, the GAS5 expressions was up-regulated and miR-221 expression was down-regulated; the rate of cell proliferation, migration and invasion were decreased.",
"title": ""
},
{
"docid": "e4c1342b2405cc7401e1f929c6c41011",
"text": "This paper introduces a protocol for the measuremen t of shoulder movement that uses a motion analysis ba sed technique and the proposed standards of the Interna tional Society of Biomechanics. The protocol demonstrates e ff ctive dynamic tracking of shoulder movements in 3D, inclu ding the movement of the thorax relative to the global coord inate system, the humerus relative to the thorax, the scapula rel ative to the thorax, the sternoclavicular joint, the acromioclavicular joint and the glenohumeral joint. This measurement protocol mu st be further tested for accuracy and repeatability using motion and imaging data from existing methods developed prior to the ISB recommendations. It is proposed to apply the valid ated model to assess pathological shoulder movement and function with the aim of developing a valuable clinical diagnostic tool t o aid surgeons in identifying optimum treatment strategies. Keywords-shoulder complex; measurement technique; motion analysis; ISB recommendations.",
"title": ""
},
{
"docid": "45ada94d2c16a697a2781a193c7c0e5a",
"text": "Currently, road vehicles are more and more equipped by PM synchronous motors. Of particular interest are brushless DC (BLDC) motors with their low resolution encoder make it feasible their integration in a large scale industry such as the automotive one. Within this trend, the paper deals with the synthesis and implementation of a new DTC strategy dedicated to the control of BLDC motor drives. Compared to the most recent and high-performance DTC strategy, the proposed one offers an improved reliability thanks to the achievement of a balanced switching frequencies of the inverter upper and lower IGBTs, on one hand, and the reduction of the average value of the motor common mode voltage, on the other hand. Furthermore, the torque ripple is significantly damped during sequence-to-sequence commutations using a three-level hysteresis torque controller. An experimentally-based comparative study between the most recent and high-performance DTC strategy and the introduced one clearly highlights the potentialities exhibited by the latter.",
"title": ""
},
{
"docid": "060101cf53a576336e27512431c4c4fc",
"text": "The aim of this chapter is to give an overview of domain adaptation and transfer learning with a specific view to visual applications. After a general motivation, we first position domain adaptation in the more general transfer learning problem. Second, we try to address and analyze briefly the state-of-the-art methods for different types of scenarios, first describing the historical shallow methods, addressing both the homogeneous and heterogeneous domain adaptation methods. Third, we discuss the effect of the success of deep convolutional architectures which led to the new type of domain adaptation methods that integrate the adaptation within the deep architecture. Fourth, we review DA methods that go beyond image categorization, such as object detection, image segmentation, video analyses or learning visual attributes. We conclude the chapter with a section where we relate domain adaptation to other machine learning solutions.",
"title": ""
},
{
"docid": "b55d2448633f70da4830565268a2b590",
"text": "This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution on multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration policies in unknown environments. The flexibility and computational simplicity of the model render it suitable for many reinforcement learning problems in continuous state spaces. We demonstrate this in an experimental comparison with a Gaussian process model, a linear model and simple least squares policy iteration.",
"title": ""
},
{
"docid": "2802db74e062103d45143e8e9ad71890",
"text": "Maritime traffic monitoring is an important aspect of safety and security, particularly in close to port operations. While there is a large amount of data with variable quality, decision makers need reliable information about possible situations or threats. To address this requirement, we propose extraction of normal ship trajectory patterns that builds clusters using, besides ship tracing data, the publicly available International Maritime Organization (IMO) rules. The main result of clustering is a set of generated lanes that can be mapped to those defined in the IMO directives. Since the model also takes non-spatial attributes (speed and direction) into account, the results allow decision makers to detect abnormal patterns - vessels that do not obey the normal lanes or sail with higher or lower speeds.",
"title": ""
}
] |
scidocsrr
|
78cbc33673f79fb2d27cdd17125660f7
|
On security and privacy issues of fog computing supported Internet of Things environment
|
[
{
"docid": "55a6353fa46146d89c7acd65bee237b5",
"text": "The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion. In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93\\% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2\\%) and an acceptable false positive rate (5.15\\%) for a vetting purpose.",
"title": ""
},
{
"docid": "1e5956b0d9d053cd20aad8b53730c969",
"text": "The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as \"the fog\". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.",
"title": ""
},
{
"docid": "ac2f02b46a885cf662c41a16f976819e",
"text": "This paper presents a conceptual framework for security engineering, with a strong focus on security requirements elicitation and analysis. This conceptual framework establishes a clear-cut vocabulary and makes explicit the interrelations between the different concepts and notions used in security engineering. Further, we apply our conceptual framework to compare and evaluate current security requirements engineering approaches, such as the Common Criteria, Secure Tropos, SREP, MSRA, as well as methods based on UML and problem frames. We review these methods and assess them according to different criteria, such as the general approach and scope of the method, its validation, and quality assurance capabilities. Finally, we discuss how these methods are related to the conceptual framework and to one another.",
"title": ""
}
] |
[
{
"docid": "d679e7cbef9ac3cfbea38b92891fc1a0",
"text": "Personal health records (PHR) have enormous potential to improve both documentation of health information and patient care. The adoption of these systems, however, has been relatively slow. In this work, we used a multi-method approach to evaluate PHR systems. We interviewed potential end users---clinicians and patients---and conducted evaluations with patients and caregivers as well as a heuristic evaluation with HCI experts. In these studies, we focused on three PHR systems: Google Health, Microsoft HealthVault, and WorldMedCard. Our results demonstrate that both usability concerns and socio-cultural influences are barriers to PHR adoption and use. In this paper, we present those results as well as reflect on how both PHR designers and developers might address these issues now and throughout the design cycle.",
"title": ""
},
{
"docid": "4d56abf003caaa11e5bef74a14bd44e0",
"text": "The increasing importance of search engines to commercial web sites has given rise to a phenomenon we call \"web spam\", that is, web pages that exist only to mislead search engines into (mis)leading users to certain web sites. Web spam is a nuisance to users as well as search engines: users have a harder time finding the information they need, and search engines have to cope with an inflated corpus, which in turn causes their cost per query to increase. Therefore, search engines have a strong incentive to weed out spam web pages from their index.We propose that some spam web pages can be identified through statistical analysis: Certain classes of spam pages, in particular those that are machine-generated, diverge in some of their properties from the properties of web pages at large. We have examined a variety of such properties, including linkage structure, page content, and page evolution, and have found that outliers in the statistical distribution of these properties are highly likely to be caused by web spam.This paper describes the properties we have examined, gives the statistical distributions we have observed, and shows which kinds of outliers are highly correlated with web spam.",
"title": ""
},
{
"docid": "5dddbb144a947892fd7bfcc041263e3c",
"text": "The ability of deep convolutional neural networks (CNNs) to learn discriminative spectro-temporal patterns makes them well suited to environmental sound classification. However, the relative scarcity of labeled data has impeded the exploitation of this family of high-capacity models. This study has two primary contributions: first, we propose a deep CNN architecture for environmental sound classification. Second, we propose the use of audio data augmentation for overcoming the problem of data scarcity and explore the influence of different augmentations on the performance of the proposed CNN architecture. Combined with data augmentation, the proposed model produces state-of-the-art results for environmental sound classification. We show that the improved performance stems from the combination of a deep, high-capacity model and an augmented training set: this combination outperforms both the proposed CNN without augmentation and a “shallow” dictionary learning model with augmentation. Finally, we examine the influence of each augmentation on the model's classification accuracy for each class, and observe that the accuracy for each class is influenced differently by each augmentation, suggesting that the performance of the model could be improved further by applying class-conditional data augmentation.",
"title": ""
},
{
"docid": "27e10b0ba009a8b86431a808e712d761",
"text": "In this work, we propose using camera arrays coupled with coherent illumination as an effective method of improving spatial resolution in long distance images by a factor often and beyond. Recent advances in ptychography have demonstrated that one can image beyond the diffraction limit of the objective lens in a microscope. We demonstrate a similar imaging system to image beyond the diffraction limit in long range imaging. We emulate a camera array with a single camera attached to an XY translation stage. We show that an appropriate phase retrieval based reconstruction algorithm can be used to effectively recover the lost high resolution details from the multiple low resolution acquired images. We analyze the effects of noise, required degree of image overlap, and the effect of increasing synthetic aperture size on the reconstructed image quality. We show that coherent camera arrays have the potential to greatly improve imaging performance. Our simulations show resolution gains of 10× and more are achievable. Furthermore, experimental results from our proof-of-concept systems show resolution gains of 4 × -7× for real scenes. All experimental data and code is made publicly available on the project webpage. Finally, we introduce and analyze in simulation a new strategy to capture macroscopic Fourier Ptychography images in a single snapshot, albeit using a camera array.",
"title": ""
},
{
"docid": "86f5c3e7b238656ae5f680db6ce0b7f5",
"text": "It is important to study and analyse educational data especially students’ performance. Educational Data Mining (EDM) is the field of study concerned with mining educational data to find out interesting patterns and knowledge in educational organizations. This study is equally concerned with this subject, specifically, the students’ performance. This study explores multiple factors theoretically assumed to affect students’ performance in higher education, and finds a qualitative model which best classifies and predicts the students’ performance based on related personal and social factors. Keywords—Data Mining; Education; Students; Performance; Patterns",
"title": ""
},
{
"docid": "d01e73d5437f1c1de3c0b3c2fb502bf4",
"text": "The present study investigated the effects of loneliness, depression and perceived social support on problematic Internet use among university students. The participants were 459 students at two universities in Turkey. The study data were collected with a Questionnaire Form, Problematic Internet Use Scale (PIUS), University of California at Los Angeles (UCLA) Loneliness Scale (Version 3), Multidimensional Scale of Perceived Social Support (MSPSS) and Beck Depression Inventory (BDI). The Mann-Whitney U Test and Kruskal-Wallis one-way analysis of variance were conducted to examine the differences; and correlation and regression analyses were used to examine the relationships between variables. There was a positive significant correlation between the PIUS and MSPSS and the UCLA Loneliness Scale and a negative significant correlation between the PIUS and Beck Depression Scale (BDS). The female students had higher total PIUS scores. The results also illustrated that there was a statistically significant difference in total PIUS scores according to having a social network account. Address for correspondence : Dr. Murat Ozsaker Celal Bayar University, School of Physical Education and Sport, 45040 Manisa, Turkey Telephone: +90 236 231 30 02 Fax: +90 236 231 30 01 E-mail: muratozsaker@yahoo.com INTRODUCTION The Internet has become the leading tool of communication in the 21st century. With a gradual increase in the public use of the Internet and widening differences in user profiles, it has become inevitable to study both the negative effects of the Internet and its positive contributions, such as sharing knowledge and facilitating communication between people (Odaci and Kalkan 2010). Internet use may be beneficial or benign when kept to ‘normal’ levels, however, high levels of internet use which interfere with daily life have been linked to a range of problems, including decreased psychosocial wellbeing, relationship breakdown and neglect of domestic, academic and work responsibilities (Gonul 2002; Hardie and Yi-Tee 2007). The concept of “problematic internet use” revealed when individual cannot control internet use. “Problematic Internet use” (Beard and Wolf 2001; Davis et al. 2002) which is also called as “pathological Internet use” (Davis 2001; Morahan-Martin and Schumacher 2000) revealed itself as spending time on the Internet more and more, not being able to stop the desire to access to the Internet and continuing to use it despite the deterioration of mental preoccupation and functioning in various areas regarding Internet use. The studies have shown that Internet use is comparatively more common among university students (Morahan-Martin and Schumacher 2000; Nalwa and Anand 2003; Niemz et al. 2006; SPO 2008). A study carried out by the Turkish State Planning Organization (SPO) with a larger sample (2008) suggested that 16-24 year-old young people compose the leading group of Internet users (65.6%), that Internet use increases with educational status (87.7%) and students are the top users of the Internet (82.2%) (State Planning Organization Information Society Statistics 534 MURAT OZSAKER, GONCA KARAYAGIZ MUSLU, AYSE KAHRAMAN ET AL. 2008). As a result, young Internet users are more likely to develop Internet addiction (Chou et al. 2005). The higher levels of Internet addiction among university students may result from a variety of reasons. They may encounter many challenges (gaining independence, seeking a better career, adapting to peer groups) with their new life at university. 
Some university students may not successfully cope with such novelties and difficulties and they may potentially develop depression or stress, which may lead to an escape into the online world (Celik and Odaci 2012). Thus, it proved to be essential to investigate the correlation between Internet use and students' mental problems in developing preventive guidance programs against Internet addiction. Easier and faster Internet access at universities may also increase the risk of university students getting involved with negative effects. Ceyhan (2010) argued that the findings of different studies on problematic Internet use would enable us to make generalizations and to understand the nature of this behavior better. In Turkey, there is a great need for studies on problematic Internet use (PIU) of university students. Socio-demographic Features and Problematic Internet Use Studies in the literature delve into the relationship between problematic Internet use and variables like gender (Serin 2011; Ceyhan and Ceyhan 2007; Celik and Odaci 2012; Morahan-Martin and Schumacher 2000; Odaci and Kalkan 2010; Tekinarslan and Gurer 2011; Weiser 2000) and age/class level (Ceyhan and Ceyhan 2007; Johanson and Götestam 2004). However, studies with different sampling characteristics revealed different implications regarding some predictor variables, including gender. In Turkey, studies of university students similarly mentioned that boys use computers pathologically more than girls (Serin 2011; Ceyhan and Ceyhan 2007; Celik and Odaci 2012; Odaci and Kalkan 2010; Tekinarslan and Gurer 2011). However, some of the studies point out that there are no gender differences in the PIU levels of the students (Ceyhan et al. 2009; Davis et al. 2002; Hardie and Yi-Tee 2007; Odaci and Celik 2011). Similarly, studies with different sampling characteristics revealed different implications regarding age (Hardie and Yi-Tee 2007; Niemz et al. 2005). Further, there is still some controversy, particularly about the age issue, in the PIU literature. Time Spent Online and Problematic Internet Use Time spent on the Internet is one of the most important criteria of diagnosis for problematic Internet use. The more time spent using the Internet, the higher the possibility of problematic Internet use. Researchers have investigated the relationship between PIU and time spent online (Morahan-Martin and Schumacher 2000; Odaci and Kalkan 2010) and the purpose of Internet usage (Caplan 2002; Chak and Leung 2004). People who are addicted to the Internet obviously make intense and frequent use of the Internet, as measured per week. Especially due to purposes of Internet use such as gambling, gaming, chatting and so forth, individuals may spend more time when online, and this may result in PIU (Morahan-Martin and Schumacher 2000; Tekinarslan and Gurer 2011). The studies also show that the more time spent on the Internet, the more likely students were to have problematic Internet use and unhealthier lifestyles. Internet use changed with regard to several lifestyle-related factors including decreases in physical activity, increases in time spent on the Internet, shorter durations or lack of sleep, and increasingly irregular dietary habits and poor eating patterns (Kim and Chun 2005; Lam et al. 2009).
Loneliness, Depression, Social Support and Problematic Internet Use Recent studies on the Internet mainly focus on psychosocial wellness and Internet use, and particularly emphasize the correlation between PIU and depression (Shapira et al. 2000), loneliness (Serin 2011; Caplan 2007; Ceyhan and Ceyhan 2008; Davis 2001; Davis et al. 2002; Durak-Batigun and Hasta 2010; Gross et al. 2002; Hardie and Yi-Tee 2007; Kim et al. 2009; Morahan-Martin and Schumacher 2003; Odaci and Kalkan 2010), social support (Hardie and Yi-Tee 2007; Keser-Ozcan and Buzlu 2005; Swickert et al. 2002) and interpersonal distortion (Kalkan 2012) among university students. Davis (2001) suggested that psychosocial problems, such as loneliness and depression, are the precursors of PIU and that lonely and depressed people are more prone to prefer online interaction. It was further acknowledged that individuals with lower levels of communication skills prefer online communication to face-to-face communication and reportedly experience difficulties in controlling the time spent online (Davis 2001). Shaw and Gant (2002) stated that more Internet use was associated with an increase in perceived social support and also a decrease in loneliness. In a study it was found that lonely individuals can develop a preference for online social interaction and that this can cause problematic Internet use (Caplan 2003). In Turkey, Odaci and Kalkan (2010) additionally noted that PIU among university students increases with higher levels of loneliness. Ceyhan and Ceyhan (2008) stated that individuals experiencing the feeling of loneliness tend to have more PIU behavior. Based on these theoretical frameworks, this analytical study aims to conduct a thorough analysis of the effects of loneliness, depression and perceived social support on problematic Internet use among university students. The hypotheses of the study are as follows: 1. There is a significant difference between students’ gender and levels of problematic Internet use. 2. There is a significant difference between students’ age and levels of problematic Internet use. 3. There is a significant difference between levels of problematic Internet use and students’ length of Internet use. 4. There is a significant difference between levels of problematic Internet use and having a social network account. 5. There is a significant correlation between students’ problematic Internet use and loneliness, depression and social support levels. MATERIAL AND METHODS",
"title": ""
},
{
"docid": "0b22284d575fb5674f61529c367bb724",
"text": "The scapula fulfils many roles to facilitate optimal function of the shoulder. Normal function of the shoulder joint requires a scapula that can be properly aligned in multiple planes of motion of the upper extremity. Scapular dyskinesis, meaning abnormal motion of the scapula during shoulder movement, is a clinical finding commonly encountered by shoulder surgeons. It is best considered an impairment of optimal shoulder function. As such, it may be the underlying cause or the accompanying result of many forms of shoulder pain and dysfunction. The present review looks at the causes and treatment options for this indicator of shoulder pathology and aims to provide an overview of the management of disorders of the scapula.",
"title": ""
},
{
"docid": "7543281174d7dc63e180249d94ad6c07",
"text": "Enriching speech recognition output with sentence boundaries improves its human readability and enables further processing by downstream language processing modules. We have constructed a hidden Markov model (HMM) system to detect sentence boundaries that uses both prosodic and textual information. Since there are more nonsentence boundaries than sentence boundaries in the data, the prosody model, which is implemented as a decision tree classifier, must be constructed to effectively learn from the imbalanced data distribution. To address this problem, we investigate a variety of sampling approaches and a bagging scheme. A pilot study was carried out to select methods to apply to the full NIST sentence boundary evaluation task across two corpora (conversational telephone speech and broadcast news speech), using both human transcriptions and recognition output. In the pilot study, when classification error rate is the performance measure, using the original training set achieves the best performance among the sampling methods, and an ensemble of multiple classifiers from different downsampled training sets achieves slightly poorer performance, but has the potential to reduce computational effort. However, when performance is measured using receiver operating characteristics (ROC) or area under the curve (AUC), then the sampling approaches outperform the original training set. This observation is important if the 0885-2308/$ see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.csl.2005.06.002 * Corresponding author. Tel.: +1 510 666 2993; fax: +510 666 2956. E-mail addresses: yangl@icsi.berkeley.edu (Y. Liu), nchawla@cse.nd.edu (N.V. Chawla), harper@ecn.purdue.edu (M.P. Harper), ees@speech.sri.com (E. Shriberg), stolcke@speech.sri.com (A. Stolcke). Y. Liu et al. / Computer Speech and Language 20 (2006) 468–494 469 sentence boundary detection output is used by downstream language processing modules. Bagging was found to significantly improve system performance for each of the sampling methods. The gain from these methods may be diminished when the prosody model is combined with the language model, which is a strong knowledge source for the sentence detection task. The patterns found in the pilot study were replicated in the full NIST evaluation task. The conclusions may be dependent on the task, the classifiers, and the knowledge combination approach. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4b84b6936669a2496e5172de0023c965",
"text": "We present a patient with partial monosomy of the short arm of chromosome 18 caused by de novo translocation t(Y;18) and a generalized form of keratosis pilaris (keratosis pilaris affecting the skin follicles of the trunk, limbs and face-ulerythema ophryogenes). Two-color FISH with centromere-specific Y and 18 DNA probes identified the derivative chromosome 18 as a dicentric with breakpoints in p11.2 on both involved chromosomes. The patient had another normal Y chromosome. This is a third report the presence of a chromosome 18p deletion (and first case of a translocation involving 18p and a sex chromosome) with this genodermatosis. Our data suggest that the short arm of chromosome 18 is a candidate region for a gene causing keratosis pilaris. Unmasking of a recessive mutation at the disease locus by deletion of the wild type allele could be the cause of the recessive genodermatosis.",
"title": ""
},
{
"docid": "91b924c8dbb22ca4593150c5fadfd38b",
"text": "This paper investigates the power allocation problem of full-duplex cooperative non-orthogonal multiple access (FD-CNOMA) systems, in which the strong users relay data for the weak users via a full duplex relaying mode. For the purpose of fairness, our goal is to maximize the minimum achievable user rate in a NOMA user pair. More specifically, we consider the power optimization problem for two different relaying schemes, i.e., the fixed relaying power scheme and the adaptive relaying power scheme. For the fixed relaying scheme, we demonstrate that the power allocation problem is quasi-concave and a closed-form optimal solution is obtained. Then, based on the derived results of the fixed relaying scheme, the optimal power allocation policy for the adaptive relaying scheme is also obtained by transforming the optimization objective function as a univariate function of the relay transmit power $P_R$. Simulation results show that the proposed FD- CNOMA scheme with adaptive relaying can always achieve better or at least the same performance as the conventional NOMA scheme. In addition, there exists a switching point between FD-CNOMA and half- duplex cooperative NOMA.",
"title": ""
},
{
"docid": "ca599d7b637d25835d881c6803a9e064",
"text": "Accumulating research shows that prenatal exposure to maternal stress increases the risk for behavioral and mental health problems later in life. This review systematically analyzes the available human studies to identify harmful stressors, vulnerable periods during pregnancy, specificities in the outcome and biological correlates of the relation between maternal stress and offspring outcome. Effects of maternal stress on offspring neurodevelopment, cognitive development, negative affectivity, difficult temperament and psychiatric disorders are shown in numerous epidemiological and case-control studies. Offspring of both sexes are susceptible to prenatal stress but effects differ. There is not any specific vulnerable period of gestation; prenatal stress effects vary for different gestational ages possibly depending on the developmental stage of specific brain areas and circuits, stress system and immune system. Biological correlates in the prenatally stressed offspring are: aberrations in neurodevelopment, neurocognitive function, cerebral processing, functional and structural brain connectivity involving amygdalae and (pre)frontal cortex, changes in hypothalamo-pituitary-adrenal (HPA)-axis and autonomous nervous system.",
"title": ""
},
{
"docid": "a0279756831dcba1dc1dee634e1d7e8b",
"text": "Join order selection plays a significant role in query performance. Many modern database engine query optimizers use join order enumerators, cost models, and cardinality estimators to choose join orderings, each of which is based on painstakingly hand-tuned heuristics and formulae. Additionally, these systems typically employ static algorithms that ignore the end result (they do not “learn from their mistakes”). In this paper, we argue that existing deep reinforcement learning techniques can be applied to query planning. These techniques can automatically tune themselves, alleviating a massive human effort. Further, deep reinforcement learning techniques naturally take advantage of feedback, learning from their successes and failures. Towards this goal, we present ReJOIN, a proof-of-concept join enumerator. We show preliminary results indicating that ReJOIN can match or outperform the Postgres optimizer.",
"title": ""
},
{
"docid": "3fb840309fcd22533cf86f57dbae22b5",
"text": "Non-volatile RAM (NVRAM) makes it possible for data structures to tolerate transient failures, assuming however that programmers have designed these structures such that their consistency is preserved upon recovery. Previous approaches are typically transactional and inherently make heavy use of logging, resulting in implementations that are significantly slower than their DRAM counterparts. In this paper, we introduce a set of techniques aimed at lock-free data structures that, in the large majority of cases, remove the need for logging (and costly durable store instructions) both in the data structure algorithm and in the associated memory management scheme. Together, these generic techniques enable us to design what we call log-free concurrent data structures, which, as we illustrate on linked lists, hash tables, skip lists, and BSTs, can provide several-fold performance improvements over previous transaction-based implementations, with overheads of the order of milliseconds for recovery after a failure. We also highlight how our techniques can be integrated into practical systems, by presenting a durable version of Memcached that maintains the performance of its volatile counterpart.",
"title": ""
},
{
"docid": "2fe0e5b0b49e886c9f99132f50beeea6",
"text": "Practical wearable gesture tracking requires that sensors align with existing ergonomic device forms. We show that combining EMG and pressure data sensed only at the wrist can support accurate classification of hand gestures. A pilot study with unintended EMG electrode pressure variability led to exploration of the approach in greater depth. The EMPress technique senses both finger movements and rotations around the wrist and forearm, covering a wide range of gestures, with an overall 10-fold cross validation classification accuracy of 96%. We show that EMG is especially suited to sensing finger movements, that pressure is suited to sensing wrist and forearm rotations, and their combination is significantly more accurate for a range of gestures than either technique alone. The technique is well suited to existing wearable device forms such as smart watches that are already mounted on the wrist.",
"title": ""
},
{
"docid": "78d879c810c64413825d7a243c9de78c",
"text": "Algebra greatly broadened the very notion of algebra in two ways. First, the traditional numerical domains such as Z, Q R, and C, were now seen as instances of more general concepts of equationally-defined algebraic structure, which did not depend on any particular representation for their elements, but only on abstract sets of elements, operations on such elements, and equational properties satisfied by such operations. In this way, the integers Z were seen as an instance of the ring algebraic structure, that is, a set R with constants 0 and 1, and with addition + and mutiplication ∗ operations satisfying the equational axioms of the theory of rings, along with other rings such as the ring Zk of the residue classes of integers modulo k, the ring Z[x1, . . . , xn] of polynomials on n variables, and so on. Likewise, Q, R, and C were viewed as instances of the field structure, that is, a ring F together with a division operator / , so that each nonzero element x has an inverse 1/x with x ∗ (1/x) = 1, along with other fields such as the fields Zp, with p prime, the fields of rational functions Q(x1, . . . , xn), R(x1, . . . , xn), and C(x1, . . . , xn) (whose elements are quotients p/q with p, q polynomials and q , 0), and so on. A second way in which Abstract Algebra broadened the notion of algebra was by considering other equationally-defined structures besides rings and fields, such as monoids, groups, modules, vector spaces, and so on. This intimately connected algebra with other areas of mathematics such as geometry, analysis and topology in new ways, besides the already well-known connections with geometic figures defined as solutions of polynomal equations (the so-called algebraic varieties, such as algebraic curves or surfaces). Universal Algebra (the seminal paper is the one by Garett Birkhoff [4]), takes one more step in this line of generalization: why considering only the usual suspects: monoids, groups, rings, fields, modules, and vector spaces? Why not considering any algebraic structure defined by an arbitrary collection Σ of function symbols (called a signature), and obeying an arbitrary set E of equational axioms? And why not developing algebra in this much more general setting? That is, Universal Algebra is just Abstract Algebra brought to its full generality. Of course, generalization never stops, so that Universal Algebra itself has been further generalized in various directions. One of them, which we will fully pursue in this Part II and which, as we shall see, has many applications to Computer Science, is from considering a single set of data elements (unsorted algebras) to considering a family of such sets (many-sorted algebras), or a family of such sets but allowing subtype inclusions (order-sorted algebras). Three other, are: (i) replacing the underlying sets by richer structures such as posets, topological spaces, sheaves, or algebraic varieties, leading to notions such as those of an ordered algebra, a topological algebra, or an algebraic structure on a sheaf or on an algebraic variety; for example, an elliptic curve is a cubic curve having a commutative group structure; (ii) allowing not only finitary operations but also infinitary ones (we have already seen examples of such algebras with infinitary operations —namely, complete lattices and complete semi-lattices— in §7.5); and (iii) allowing operations to be partial functions, leading to the notion of a partial algebra. 
Order-sorted algebras already provide quite useful support for certain forms of partiality; and their generalization to algebras in membership equational logic provides full support for partiality (see [36, 39]).",
"title": ""
},
{
"docid": "5d63c5820cc8035822b86ef5fdaebefd",
"text": "As the third most popular social network among millennials, Snapchat is well known for its picture and video messaging system that deletes content after it is viewed. However, the Stories feature of Snapchat offers a different perspective of ephemeral content sharing, with pictures and videos that are available for friends to watch an unlimited number of times for 24 hours. We conduct-ed an in-depth qualitative investigation by interviewing 18 participants and reviewing 14 days of their Stories posts. We identify five themes focused on how participants perceive and use the Stories feature, and apply a Goffmanesque metaphor to our analysis. We relate the Stories medium to other research on self-presentation and identity curation in social media.",
"title": ""
},
{
"docid": "cc5ef7b506f0532e7ee2c89957846d5b",
"text": "In this paper, we present recent contributions for the battle against one of the main problems faced by search engines: the spamdexing or web spamming. They are malicious techniques used in web pages with the purpose of circumvent the search engines in order to achieve good visibility in search results. To better understand the problem and finding the best setup and methods to avoid such virtual plague, in this paper we present a comprehensive performance evaluation of several established machine learning techniques. In our experiments, we employed two real, public and large datasets: the WEBSPAM-UK2006 and the WEBSPAM-UK2007 collections. The samples are represented by content-based, link-based, transformed link-based features and their combinations. The found results indicate that bagging of decision trees, multilayer perceptron neural networks, random forest and adaptive boosting of decision trees are promising in the task of web spam classification. Keywords—Spamdexing; web spam; spam host; classification, WEBSPAM-UK2006, WEBSPAM-UK2007.",
"title": ""
},
{
"docid": "a31652c0236fb5da569ffbf326eb29e5",
"text": "Since 2012, citizens in Alaska, Colorado, Oregon, and Washington have voted to legalize the recreational use of marijuana by adults. Advocates of legalization have argued that prohibition wastes scarce law enforcement resources by selectively arresting minority users of a drug that has fewer adverse health effects than alcohol.1,2 It would be better, they argue, to legalize, regulate, and tax marijuana, like alcohol.3 Opponents of legalization argue that it will increase marijuana use among youth because it will make marijuana more available at a cheaper price and reduce the perceived risks of its use.4 Cerdá et al5 have assessed these concerns by examining the effects of marijuana legalization in Colorado and Washington on attitudes toward marijuana and reported marijuana use among young people. They used surveys from Monitoring the Future between 2010 and 2015 to examine changes in the perceived risks of occasional marijuana use and self-reported marijuana use in the last 30 days among students in eighth, 10th, and 12th grades in Colorado and Washington before and after legalization. They compared these changes with changes among students in states in the contiguous United States that had not legalized marijuana (excluding Oregon, which legalized in 2014). The perceived risks of using marijuana declined in all states, but there was a larger decline in perceived risks and a larger increase in marijuana use in the past 30 days among eighth and 10th graders from Washington than among students from other states. They did not find any such differences between students in Colorado and students in other US states that had not legalized, nor did they find any of these changes in 12th graders in Colorado or Washington. If the changes observed in Washington are attributable to legalization, why were there no changes found in Colorado? The authors suggest that this may have been because Colorado’s medical marijuana laws were much more liberal before legalization than those in Washington. After 2009, Colorado permitted medical marijuana to be supplied through for-profit dispensaries and allowed advertising of medical marijuana products. This hypothesisissupportedbyotherevidencethattheperceivedrisks of marijuana use decreased and marijuana use increased among young people in Colorado after these changes in 2009.6",
"title": ""
}
] |
scidocsrr
|
6af3a535b200167897b35341b12a84a7
|
Analysis of security attacks in a smart home networks
|
[
{
"docid": "73b76fa13443a4c285dc9a97cfaa22dd",
"text": "As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a general mechanism, called packet leashes, for detecting and, thus defending against wormhole attacks, and we present a specific protocol, called TIK, that implements leashes. We also discuss topology-based wormhole detection, and show that it is impossible for these approaches to detect some wormhole topologies.",
"title": ""
}
] |
[
{
"docid": "e85431739e31d749ce1af97a7a1ad769",
"text": ".................................................................................................................",
"title": ""
},
{
"docid": "f4d4e87dd292377115ff815cc56c001c",
"text": "We present the design and implementation of a real-time, distributed light field camera. Our system allows multiple viewers to navigate virtual cameras in a dynamically changing light field that is captured in real-time. Our light field camera consists of 64 commodity video cameras that are connected to off-the-shelf computers. We employ a distributed rendering algorithm that allows us to overcome the data bandwidth problems inherent in dynamic light fields. Our algorithm works by selectively transmitting only those portions of the video streams that contribute to the desired virtual views. This technique not only reduces the total bandwidth, but it also allows us to scale the number of cameras in our system without increasing network bandwidth. We demonstrate our system with a number of examples.",
"title": ""
},
{
"docid": "fa4653a3d762bae45cd17488ea4c286e",
"text": "Now-a-days many researchers work on mining a content posted in natural language at different forums, blogs or social networking sites. Sentiment analysis is rapidly expanding topic with various applications. Previously a person collect response from any relatives previous to procuring an object, but today look is different, now person get reviews of many people on all sides of world. Blogs, e-commerce sites data consists number of implications, that expressing user opinions about specific object. Such data is pre-processed then classified into classes as positive, negative and irrelevant. Sentiment analysis allows us to determine view of public or general users feeling about any object. Two global techniques are used: Supervised Machine-Learning and Unsupervised machine-learning methods. In unsupervised learning use a lexicon with words scored for polarity values such as neutral, positive or negative. Whereas supervised methods require a training set of texts with manually assigned polarity values. This suggest one direction is make use of Fuzzy logic for sentiment analysis which may improve analysis results.",
"title": ""
},
{
"docid": "7afe5c6affbaf30b4af03f87a018a5b3",
"text": "Sentiment analysis deals with identifying polarity orientation embedded in users' comments and reviews. It aims at discriminating positive reviews from negative ones. Sentiment is related to culture and language morphology. In this paper, we investigate the effects of language morphology on sentiment analysis in reviews written in the Arabic language. In particular, we investigate, in details, how negation affects sentiments. We also define a set of rules that capture the morphology of negations in Arabic. These rules are then used to detect sentiment taking care of negated words. Experimentations prove that our suggested approach is superior to several existing methods that deal with sentiment detection in Arabic reviews.",
"title": ""
},
{
"docid": "d6c34d138692851efdbb807a89d0fcca",
"text": "Vaccine hesitancy reflects concerns about the decision to vaccinate oneself or one's children. There is a broad range of factors contributing to vaccine hesitancy, including the compulsory nature of vaccines, their coincidental temporal relationships to adverse health outcomes, unfamiliarity with vaccine-preventable diseases, and lack of trust in corporations and public health agencies. Although vaccination is a norm in the U.S. and the majority of parents vaccinate their children, many do so amid concerns. The proportion of parents claiming non-medical exemptions to school immunization requirements has been increasing over the past decade. Vaccine refusal has been associated with outbreaks of invasive Haemophilus influenzae type b disease, varicella, pneumococcal disease, measles, and pertussis, resulting in the unnecessary suffering of young children and waste of limited public health resources. Vaccine hesitancy is an extremely important issue that needs to be addressed because effective control of vaccine-preventable diseases generally requires indefinite maintenance of extremely high rates of timely vaccination. The multifactorial and complex causes of vaccine hesitancy require a broad range of approaches on the individual, provider, health system, and national levels. These include standardized measurement tools to quantify and locate clustering of vaccine hesitancy and better understand issues of trust; rapid, independent, and transparent review of an enhanced and appropriately funded vaccine safety system; adequate reimbursement for vaccine risk communication in doctors' offices; and individually tailored messages for parents who have vaccine concerns, especially first-time pregnant women. The potential of vaccines to prevent illness and save lives has never been greater. Yet, that potential is directly dependent on parental acceptance of vaccines, which requires confidence in vaccines, healthcare providers who recommend and administer vaccines, and the systems to make sure vaccines are safe.",
"title": ""
},
{
"docid": "e88ad42145c63dd2aeff6c1f64f4b4c7",
"text": "Recommender systems are in the center of network science, and they are becoming increasingly important in individual businesses for providing efficient, personalized services and products to users. Previous research in the field of recommendation systems focused on improving the precision of the system through designing more accurate recommendation lists. Recently, the community has been paying attention to diversity and novelty of recommendation lists as key characteristics of modern recommender systems. In many cases, novelty and precision do not go hand in hand, and the accuracy--novelty dilemma is one of the challenging problems in recommender systems, which needs efforts in making a trade-off between them.\n In this work, we propose an algorithm for providing novel and accurate recommendation to users. We consider the standard definition of accuracy and an effective self-information--based measure to assess novelty of the recommendation list. The proposed algorithm is based on item popularity, which is defined as the number of votes received in a certain time interval. Wavelet transform is used for analyzing popularity time series and forecasting their trend in future timesteps. We introduce two filtering algorithms based on the information extracted from analyzing popularity time series of the items. The popularity-based filtering algorithm gives a higher chance to items that are predicted to be popular in future timesteps. The other algorithm, denoted as a novelty and population-based filtering algorithm, is to move toward items with low popularity in past timesteps that are predicted to become popular in the future. The introduced filters can be applied as adds-on to any recommendation algorithm. In this article, we use the proposed algorithms to improve the performance of classic recommenders, including item-based collaborative filtering and Markov-based recommender systems. The experiments show that the algorithms could significantly improve both the accuracy and effective novelty of the classic recommenders.",
"title": ""
},
{
"docid": "4dd0d34f6b67edee60f2e6fae5bd8dd9",
"text": "Virtual learning environments facilitate online learning, generating and storing large amounts of data during the learning/teaching process. This stored data enables extraction of valuable information using data mining. In this article, we present a systematic mapping, containing 42 papers, where data mining techniques are applied to predict students performance using Moodle data. Results show that decision trees are the most used classification approach. Furthermore, students interactions in forums are the main Moodle attribute analyzed by researchers.",
"title": ""
},
{
"docid": "8384e50dad9ed96ff8610dc007c89e97",
"text": "Opinion Mining is a process of automatic extraction of knowledge from the opinion of others about some particular topic or problem. The idea of Opinion mining and Sentiment Analysis tool is to “process a set of search results for a given item, generating a list of product attributes (quality, features etc.) and aggregating opinion”. But with the passage of time more interesting applications and developments came into existence in this area and now its main goal is to make computer able to recognize and generate emotions like human. This paper will try to focus on the basic definitions of Opinion Mining, analysis of linguistic resources required for Opinion Mining, few machine learning techniques on the basis of their usage and importance for the analysis, evaluation of Sentiment classifications and its various applications. KeywordsSentiment Mining, Opinion Mining, Text Classification.",
"title": ""
},
{
"docid": "9018c146d532071e7953cdc79d8ba2c0",
"text": "The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found. It also provides an effective tool of study and analysis of intrusion detection in large networks.",
"title": ""
},
{
"docid": "7760a3074983f36e385299706ed9a927",
"text": "A reflectarray antenna monolithically integrated with 90 RF MEMS switches has been designed and fabricated to achieve switching of the main beam. Aperture coupled microstrip patch antenna (ACMPA) elements are used to form a 10 × 10 element reconfigurable reflectarray antenna operating at 26.5 GHz. The change in the progressive phase shift between the elements is obtained by adjusting the length of the open ended transmission lines in the elements with the RF MEMS switches. The reconfigurable reflectarray is monolithically fabricated with the RF MEMS switches in an area of 42.46 cm2 using an in-house surface micromachining and wafer bonding process. The measurement results show that the main beam can be switched between broadside and 40° in the H-plane at 26.5 GHz.",
"title": ""
},
{
"docid": "25305e33949beff196ff6c0946d1807b",
"text": "Clinical and preclinical studies have gathered substantial evidence that stress response alterations play a major role in the development of major depression, panic disorder and posttraumatic stress disorder. The stress response, the hypothalamic pituitary adrenocortical (HPA) system and its modulation by CRH, corticosteroids and their receptors as well as the role of natriuretic peptides and neuroactive steroids are described. Examplarily, we review the role of the HPA system in major depression, panic disorder and posttraumatic stress disorder as well as its possible relevance for treatment. Impaired glucocorticoid receptor function in major depression is associated with an excessive release of neurohormones, like CRH to which a number of signs and symptoms characteristic of depression can be ascribed. In panic disorder, a role of central CRH in panic attacks has been suggested. Atrial natriuretic peptide (ANP) is causally involved in sodium lactate-induced panic attacks. Furthermore, preclinical and clinical data on its anxiolytic activity suggest that non-peptidergic ANP receptor ligands may be of potential use in the treatment of anxiety disorders. Recent data further suggest a role of 3alpha-reduced neuroactive steroids in major depression, panic attacks and panic disorder. Posttraumatic stress disorder is characterized by a peripheral hyporesponsive HPA-system and elevated CRH concentrations in CSF. This dissociation is probably related to an increased risk for this disorder. Antidepressants are effective both in depression and anxiety disorders and have major effects on the HPA-system, especially on glucocorticoid and mineralocorticoid receptors. Normalization of HPA-system abnormalities is a strong predictor of the clinical course, at least in major depression and panic disorder. CRH-R1 or glucorticoid receptor antagonists and ANP receptor agonists are currently being studied and may provide future treatment options more closely related to the pathophysiology of the disorders.",
"title": ""
},
{
"docid": "a981db3aa149caec10b1824c82840782",
"text": "It has been suggested that the performance of a team is determined by the team members’ roles. An analysis of the performance of 342 individuals organised into 33 teams indicates that team roles characterised by creativity, co-ordination and cooperation are positively correlated with team performance. Members of developed teams exhibit certain performance enhancing characteristics and behaviours. Amongst the more developed teams there is a positive relationship between Specialist Role characteristics and team performance. While the characteristics associated with the Coordinator Role are also positively correlated with performance, these can impede the performance of less developed teams.",
"title": ""
},
{
"docid": "20f98a15433514dc5aa76110f68a71ba",
"text": "We describe a case of secondary syphilis of the tongue in which the main clinical presentation of the disease was similar to oral hairy leukoplakia. In a man who was HIV seronegative, the first symptom was a dryness of the throat followed by a feeling of foreign body in the tongue. Lesions were painful without cutaneous manifestations of secondary syphilis. IgM-fluorescent treponemal antibody test and typical serologic parameters promptly led to the diagnosis of secondary syphilis. We initiated an appropriate antibiotic therapy using benzathine penicillin, which induced healing of the tongue lesions. The differential diagnosis of this lesion may include oral squamous carcinoma, leukoplakia, candidosis, lichen planus, and, especially, hairy oral leukoplakia. This case report emphasizes the importance of considering secondary syphilis in the differential diagnosis of hairy oral leukoplakia. Depending on the clinical picture, the possibility of syphilis should not be overlooked in the differential diagnosis of many diseases of the oral mucosa.",
"title": ""
},
{
"docid": "9065a59c349b0bcf36c47b3d51f87461",
"text": "The goal of this work is to compare different versions of three-dimensional product-presentations with two-dimensional ones. Basically the usability of these technologies will be compared and other user related factors will be integrated into the test as well. These factors were determined via a literature research. In order to achieve a generalizable conclusion about 3D-web-applications in e-commerce sample products from miscellaneous product categories are chosen for the study. This paper starts with the summary of the literature research about the factors for the study. It continues by shortly introducing research methods and strategies and explaining why certain methods are selected for this kind of study. The conception of the study is described in detail in the following paragraph. With the help of the generalized results of the study, recommendations for the usage of 3D-product-presentations in practical e-commerce environments are given.",
"title": ""
},
{
"docid": "b9a5cedbec1b6cd5091fb617c0513a13",
"text": "The cerebellum undergoes a protracted development, making it particularly vulnerable to a broad spectrum of developmental events. Acquired destructive and hemorrhagic insults may also occur. The main steps of cerebellar development are reviewed. The normal imaging patterns of the cerebellum in prenatal ultrasound and magnetic resonance imaging (MRI) are described with emphasis on the limitations of these modalities. Because of confusion in the literature regarding the terminology used for cerebellar malformations, some terms (agenesis, hypoplasia, dysplasia, and atrophy) are clarified. Three main pathologic settings are considered and the main diagnoses that can be suggested are described: retrocerebellar fluid enlargement with normal or abnormal biometry (Dandy-Walker malformation, Blake pouch cyst, vermian agenesis), partially or globally decreased cerebellar biometry (cerebellar hypoplasia, agenesis, rhombencephalosynapsis, ischemic and/or hemorrhagic damage), partially or globally abnormal cerebellar echogenicity (ischemic and/or hemorrhagic damage, cerebellar dysplasia, capillary telangiectasia). The appropriate timing for performing MRI is also discussed.",
"title": ""
},
{
"docid": "443637fcc9f9efcf1026bb64aa0a9c97",
"text": "Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modeling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, exemplifying applications to communication networks are discussed by distinguishing tasks carried out at the edge and at the cloud segments of the network at different layers of the protocol stack, with an emphasis on the physical layer.",
"title": ""
},
{
"docid": "ee820c65fd029b5ba1c4afdfe0126800",
"text": "In this paper, a new centrality called local Fiedler vector centrality (LFVC) is proposed to analyze the connectivity structure of a graph. It is associated with the sensitivity of algebraic connectivity to node or edge removals and features distributed computations via the associated graph Laplacian matrix. We prove that LFVC can be related to a monotonic submodular set function that guarantees that greedy node or edge removals come within a factor 1-1/e of the optimal non-greedy batch removal strategy. Due to the close relationship between graph topology and community structure, we use LFVC to detect deep and overlapping communities on real-world social network datasets. The results offer new insights on community detection by discovering new significant communities and key members in the network. Notably, LFVC is also shown to significantly outperform other well-known centralities for community detection.",
"title": ""
},
{
"docid": "13c79ec2455730f5a493b6dd6053f5ba",
"text": "A wealth of computationally efficient approximation methods for Gaussian process regression have been recently proposed. We give a unifying overview of sparse approximations, following Quiñonero-Candela and Rasmussen (2005), and a brief review of approximate matrix-vector multiplication methods.",
"title": ""
},
{
"docid": "2cc1383f98adb6f9e522fe2b933d35e5",
"text": "This paper presents the innovative design of an air cooled permanent magnet assisted synchronous reluctance machine (PMaSyRM) for automotive traction application. Key design features include low cost ferrite magnets in an optimized rotor geometry with high saliency ratio, low weight and sufficient mechanical strength as well as a tailored hairpin stator winding in order to meet the demands of an A-segment battery electric vehicle (BEV). Effective torque ripple reduction techniques are analyzed and a suitable combination is chosen to keep additional manufacturing measures as low as possible. Although the ferrite magnets exhibit low remanence, it is shown that their contribution to the electrical machine's performance is essential in the field weakening region. Efficiency optimized torque-speed-characteristics are identified, including additional losses of the inverter, showing an overall system efficiency of more than 94 %. Lastly, the results of no load measurements of a prototype are compared to the FEM simulation results, indicating the proposed design of a PMaSyRM as a cost-effective alternative to state-of-the-art permanent magnet synchronous machines (PMSM) for vehicle traction purposes.",
"title": ""
},
{
"docid": "2eb157031961417e69e8abe55cf2ac14",
"text": "Research on human induced pluripotent stem cells (hiPSCs) is one of the fastest growing fields in biomedicine. Generated from patient's own somatic cells, hiPSCs can be differentiated towards all functional cell types and returned to the patient without immunological concerns. 3D printing of hiPSCs could enable the generation of functional organs for replacement therapies or realization of organ-on-chip systems for individualized medicine. Printing of living cells was demonstrated with immortalized cell lines, primary cells, and adult stem cells with different printing technologies and biomaterials. However, hiPSCs are more sensitive to handling procedures, in particular, when dissociated into single cells. Both pluripotency and directed differentiation are influenced by numerous environmental factors including culture media, biomaterials, and cell density. Notably, existing literature on the effect of applied biomaterials on pluripotency is rather ambiguous. In this study, laser bioprinting of undifferentiated hiPSCs in combination with different biomaterials was performed and the impact on cells' behavior, pluripotency, and differentiation was investigated. Our findings suggest that hiPSCs are indeed more sensitive to the applied biomaterials, but not to laser printing itself. With appropriate biomaterials, such as the hyaluronic acid based solutions applied in this study, hiPSCs can be successfully laser printed without losing their pluripotency.",
"title": ""
}
] |
scidocsrr
|
e2f6434cf7acfa6bd722f893c9bd1851
|
Image Synthesis for Self-Supervised Visual Representation Learning
|
[
{
"docid": "976199e51443fe7ee8bcb5267ac55975",
"text": "We aim to color greyscale images automatically, without any manual intervention. The color proposition could then be interactively corrected by user-provided color landmarks if necessary. Automatic colorization is nontrivial since there is usually no one-to-one correspondence between color and local texture. The contribution of our framework is that we deal directly with multimodality and estimate, for each pixel of the image to be colored, the probability distribution of all possible colors, instead of choosing the most probable color at the local level. We also predict the expected variation of color at each pixel, thus defining a nonuniform spatial coherency criterion. We then use graph cuts to maximize the probability of the whole colored image at the global level. We work in the L-a-b color space in order to approximate the human perception of distances between colors, and we use machine learning tools to extract as much information as possible from a dataset of colored examples. The resulting algorithm is fast, designed to be more robust to texture noise, and is above all able to deal with ambiguity, in contrary to previous approaches.",
"title": ""
},
{
"docid": "f2fc77ae984b27bc90a24454d5a7c762",
"text": "We develop a method for comparing hierarchical image representations in terms of their ability to explain perceptual sensitivity in humans. Specifically, we utilize Fisher information to establish a model-derived prediction of sensitivity to local perturbations of an image. For a given image, we compute the eigenvectors of the Fisher information matrix with largest and smallest eigenvalues, corresponding to the model-predicted mostand least-noticeable image distortions, respectively. For human subjects, we then measure the amount of each distortion that can be reliably detected when added to the image. We use this method to test the ability of a variety of representations to mimic human perceptual sensitivity. We find that the early layers of VGG16, a deep neural network optimized for object recognition, provide a better match to human perception than later layers, and a better match than a 4-stage convolutional neural network (CNN) trained on a database of human ratings of distorted image quality. On the other hand, we find that simple models of early visual processing, incorporating one or more stages of local gain control, trained on the same database of distortion ratings, provide substantially better predictions of human sensitivity than either the CNN, or any combination of layers of VGG16. Human capabilities for recognizing complex visual patterns are believed to arise through a cascade of transformations, implemented by neurons in successive stages in the visual system. Several recent studies have suggested that representations of deep convolutional neural networks trained for object recognition can predict activity in areas of the primate ventral visual stream better than models constructed explicitly for that purpose (Yamins et al. [2014], Khaligh-Razavi and Kriegeskorte [2014]). These results have inspired exploration of deep networks trained on object recognition as models of human perception, explicitly employing their representations as perceptual distortion metrics or loss functions (Hénaff and Simoncelli [2016], Johnson et al. [2016], Dosovitskiy and Brox [2016]). On the other hand, several other studies have used synthesis techniques to generate images that indicate a profound mismatch between the sensitivity of these networks and that of human observers. Specifically, Szegedy et al. [2013] constructed image distortions, imperceptible to humans, that cause their networks to grossly misclassify objects. Similarly, Nguyen and Clune [2015] optimized randomly initialized images to achieve reliable recognition by a network, but found that the resulting ∗Currently at Google, Inc. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. ‘fooling images’ were uninterpretable by human viewers. Simpler networks, designed for texture classification and constrained to mimic the early visual system, do not exhibit such failures (Portilla and Simoncelli [2000]). These results have prompted efforts to understand why generalization failures of this type are so consistent across deep network architectures, and to develop more robust training methods to defend networks against attacks designed to exploit these weaknesses (Goodfellow et al. [2014]). 
From the perspective of modeling human perception, these synthesis failures suggest that representational spaces within deep neural networks deviate significantly from those of humans, and that methods for comparing representational similarity, based on fixed object classes and discrete sampling of the representational space, are insufficient to expose these deviations. If we are going to use such networks as models for human perception, we need better methods of comparing model representations to human vision. Recent work has taken the first step in this direction, by analyzing deep networks' robustness to visual distortions on classification tasks, as well as the similarity of classification errors that humans and deep networks make in the presence of the same kind of distortion (Dodge and Karam [2017]). Here, we aim to accomplish something in the same spirit, but rather than testing on a set of hand-selected examples, we develop a model-constrained synthesis method for generating targeted test stimuli that can be used to compare the layer-wise representational sensitivity of a model to human perceptual sensitivity. Utilizing Fisher information, we isolate the model-predicted most and least noticeable changes to an image. We test these predictions by determining how well human observers can discriminate these same changes. We apply this method to six layers of VGG16 (Simonyan and Zisserman [2015]), a deep convolutional neural network (CNN) trained to classify objects. We also apply the method to several models explicitly trained to predict human sensitivity to image distortions, including both a 4-stage generic CNN, an optimally-weighted version of VGG16, and a family of highly-structured models explicitly constructed to mimic the physiology of the early human visual system. Example images from the paper, as well as additional examples, are available at http://www.cns.nyu.edu/~lcv/eigendistortions/. 1 Predicting discrimination thresholds Suppose we have a model for human visual representation, defined by conditional density $p(\vec{r}|\vec{x})$, where $\vec{x}$ is an $N$-dimensional vector containing the image pixels, and $\vec{r}$ is an $M$-dimensional random vector representing responses internal to the visual system (e.g., firing rates of a population of neurons). If the image is modified by the addition of a distortion vector, $\vec{x} + \alpha\hat{u}$, where $\hat{u}$ is a unit vector, and scalar $\alpha$ controls the amplitude of distortion, the model can be used to predict the threshold at which the distorted image can be reliably distinguished from the original image. Specifically, one can express a lower bound on the discrimination threshold in direction $\hat{u}$ for any observer or model that bases its judgments on $\vec{r}$ (Seriès et al. [2009]): $T(\hat{u}; \vec{x}) \geq \beta \sqrt{\hat{u}^{T} J^{-1}[\vec{x}]\, \hat{u}}$ (1), where $\beta$ is a scale factor that depends on the noise amplitude of the internal representation (as well as experimental conditions, when measuring discrimination thresholds of human observers), and $J[\vec{x}]$ is the Fisher information matrix (FIM; Fisher [1925]), a second-order expansion of the log likelihood: $J[\vec{x}] = E_{\vec{r}|\vec{x}}\left[\left(\frac{\partial}{\partial \vec{x}} \log p(\vec{r}|\vec{x})\right)\left(\frac{\partial}{\partial \vec{x}} \log p(\vec{r}|\vec{x})\right)^{T}\right]$ (2). Here, we restrict ourselves to models that can be expressed as a deterministic (and differentiable) mapping from the input pixels to mean output response vector, $f(\vec{x})$, with additive white Gaussian noise in the response space. The log likelihood in this case reduces to a quadratic form: $\log p(\vec{r}|\vec{x}) = -\tfrac{1}{2}\,[\vec{r} - f(\vec{x})]^{T}[\vec{r} - f(\vec{x})] + \text{const}$. Substituting this into Eq. (2) gives: $J[\vec{x}] = \frac{\partial f}{\partial \vec{x}}^{T} \frac{\partial f}{\partial \vec{x}}$. Thus, for these models, the Fisher information matrix induces a locally adaptive Euclidean metric on the space of images, as specified by the Jacobian matrix, $\partial f / \partial \vec{x}$.",
"title": ""
}
] |
[
{
"docid": "f829820706687c186e998bfed5be9c42",
"text": "As deep learning systems are widely adopted in safetyand securitycritical applications, such as autonomous vehicles, banking systems, etc., malicious faults and attacks become a tremendous concern, which potentially could lead to catastrophic consequences. In this paper, we initiate the first study of leveraging physical fault injection attacks on Deep Neural Networks (DNNs), by using laser injection technique on embedded systems. In particular, our exploratory study targets four widely used activation functions in DNNs development, that are the general main building block of DNNs that creates non-linear behaviors – ReLu, softmax, sigmoid, and tanh. Our results show that by targeting these functions, it is possible to achieve a misclassification by injecting faults into the hidden layer of the network. Such result can have practical implications for realworld applications, where faults can be introduced by simpler means (such as altering the supply voltage).",
"title": ""
},
{
"docid": "85df53c5fc62e8e66e6b0ba6409116e2",
"text": "No aspect of adolescent development ha& received more attention from the public and from researchers than parent-child relation ships. Much of the research indicates that despite altered patterns of interaction, relation ships with parents remain important social and emotional resources well beyond the child hood years (for recent reviews, see Collins & Steinberg, 2006; Smetana, Campione-Barr, & Metzger, 2006). Yet it is a challenge to rec oncile this conclusion with the widespread per ception that parent-child relationships decline in quality and influence over the course of the adolescent years. The aim of this chapter is to specify the characteristics and processes of parent-child relationships that sustain the cen trality of the family amid the extensive changes of adolescence. We will argue that it is the con tent and the quality of these relationships, rather than the actions of either parent or adolescent alone, that determine the nature and extent of family influences on adolescent development. We will also argue that divergence between academic prescriptions and public perceptions about parent-adolescent relationships can be traced to the relative emphasis that each places on potential individual differences. The chapter reflects three premises that have emerged from the sizable literature on parent-child relationships during adolescence. First, relationships with parents undergo trans formations across the adolescent years that set the stage for less hierarchical interactions dur ing adulthood. Second, family relationships have far-reaching implications for concurrent and long-term relationships with friends. romantic partners. teacher~. and other adults, as well as for individual mental health, psy chosocial adjustment school performance. and eventual occupational choice and suc cess. Third, contextual and cultural variations significantly shape family relationships and experiences that, in turn, affect the course and outcomes of development both during and beyond adolescence. The chapter is divided into four main sec tions. The first section outlines theoretical views of parent-adolescent relationships and their developmental significance. The second section focuses on the behavior of parents and children and on interpersonal processes between them. with particular attention given to the distinctive characteristics of parent child relationships and how these relationships change during adolescence. The third sec tion considers whether and how parent-child relationships and their transformations are significant for adolescent development. The fourth section focuses on variability in parent child relationships during adolescence as' a function of structural. economic. and demo graphic distinctions among families.",
"title": ""
},
{
"docid": "ddeb76fa4315ee274bf1aa7ac014b6a2",
"text": "Linked Data offers new opportunities for Semantic Web-based application development by connecting structured information from various domains. These technologies allow machines and software agents to automatically interpret and consume Linked Data and provide users with intelligent query answering services. In order to enable advanced and innovative semantic applications of Linked Data such as recommendation, social network analysis, and information clustering, a fundamental requirement is systematic metrics that allow comparison between resources. In this research, we develop a hybrid similarity metric based on the characteristics of Linked Data. In particular, we develop and demonstrate metrics for providing recommendations of closely related resources. The results of our preliminary experiments and future directions are also presented.",
"title": ""
},
{
"docid": "615d2f03b2ff975242e90103e98d70d3",
"text": "The insurance industries consist of more than thousand companies in worldwide. And collect more than one trillions of dollars premiums in each year. When a person or entity make false insurance claims in order to obtain compensation or benefits to which they are not entitled is known as an insurance fraud. The total cost of an insurance fraud is estimated to be more than forty billions of dollars. So detection of an insurance fraud is a challenging problem for the insurance industry. The traditional approach for fraud detection is based on developing heuristics around fraud indicator. The auto\\vehicle insurance fraud is the most prominent type of insurance fraud, which can be done by fake accident claim. In this paper, focusing on detecting the auto\\vehicle fraud by using, machine learning technique. Also, the performance will be compared by calculation of confusion matrix. This can help to calculate accuracy, precision, and recall.",
"title": ""
},
{
"docid": "17812cae7547ba46d7170b99f6be1efc",
"text": "Developing supernumerary limbs is a rare congenital condition that only a few cases have been documented. Depending on the cause and developmental conditions, they may be single, multiple or complicated, and occur as a syndrome or associated with other anomalies. Polymelia is defined as the presence of extra limb(s) which have been reported in human, mouse, chicken, calf and lamb. It seems that the precise mechanism regulating this type of congenital malformations is not yet clearly understood. While hereditary trait of some limb anomalies was proven in human and the responsible genetic impairments were found, this has not been confirmed in the other animals especially the birds. Regarding the different susceptibilities of various vertebrate species to the environmental and genetic factors in embryonic period, the probable cause of an embryonic defect in one species cannot be generalized to the all other species class. The present study reports a case of polymelia in an Iranian indigenous young fowl and discusses its possible causes.",
"title": ""
},
{
"docid": "f811a281efec4eb6b9f703ebb420407b",
"text": "Hospital workers are highly mobile; they are constantly changing location to perform their daily work, which includes visiting patients, locating resources, such as medical records, or consulting with other specialists. The information required by these specialists is highly dependent on their location. Access to a patient's laboratory results might be more relevant when the physician is near the patient's bed and not elsewhere. We describe a location-aware medical information system that was developed to provide access to resources such as patient's records or the location of a medical specialist, based on the user's location. The system is based on a handheld computer which includes a trained backpropagation neural-network used to estimate the user's location and a client to access information from the hospital information system that is relevant to the user's current location.",
"title": ""
},
{
"docid": "3d238cc92a56e64f32f08e0833d117b3",
"text": "The efficiency of two biomass pretreatment technologies, dilute acid hydrolysis and dissolution in an ionic liquid, are compared in terms of delignification, saccharification efficiency and saccharide yields with switchgrass serving as a model bioenergy crop. When subject to ionic liquid pretreatment (dissolution and precipitation of cellulose by anti-solvent) switchgrass exhibited reduced cellulose crystallinity, increased surface area, and decreased lignin content compared to dilute acid pretreatment. Pretreated material was characterized by powder X-ray diffraction, scanning electron microscopy, Fourier transform infrared spectroscopy, Raman spectroscopy and chemistry methods. Ionic liquid pretreatment enabled a significant enhancement in the rate of enzyme hydrolysis of the cellulose component of switchgrass, with a rate increase of 16.7-fold, and a glucan yield of 96.0% obtained in 24h. These results indicate that ionic liquid pretreatment may offer unique advantages when compared to the dilute acid pretreatment process for switchgrass. However, the cost of the ionic liquid process must also be taken into consideration.",
"title": ""
},
{
"docid": "8da6cc5c6a8a5d45fadbab8b7ca8b71f",
"text": "Feature detection and description is a pivotal step in many computer vision pipelines. Traditionally, human engineered features have been the main workhorse in this domain. In this paper, we present a novel approach for learning to detect and describe keypoints from images leveraging deep architectures. To allow for a learning based approach, we collect a large-scale dataset of patches with matching multiscale keypoints. The proposed model learns from this vast dataset to identify and describe meaningful keypoints. We evaluate our model for the effectiveness of its learned representations for detecting multiscale keypoints and describing their respective support regions.",
"title": ""
},
{
"docid": "f925550d3830944b8649266292eae3fd",
"text": "In the recent years antenna design appears as a mature field of research. It really is not the fact because as the technology grows with new ideas, fitting expectations in the antenna design are always coming up. A Ku-band patch antenna loaded with notches and slit has been designed and simulated using Ansoft HFSS 3D electromagnetic simulation tool. Multi-frequency band operation is obtained from the proposed microstrip antenna. The design was carried out using Glass PTFE as the substrate and copper as antenna material. The designed antennas resonate at 15GHz with return loss over 50dB & VSWR less than 1, on implementing different slots in the radiating patch multiple frequencies resonate at 12.2GHz & 15.00GHz (Return Loss -27.5, -37.73 respectively & VSWR 0.89, 0.24 respectively) and another resonate at 11.16 GHz, 15.64GHz & 17.73 GHz with return loss -18.99, -23.026, -18.156 dB respectively and VSWR 1.95, 1.22 & 2.1 respectively. All the above designed band are used in the satellite application for non-geostationary orbit (NGSO) and fixed-satellite services (FSS) providers to operate in various segments of the Ku-band.",
"title": ""
},
{
"docid": "6ca8dffc616d38bc528bf830a970d97f",
"text": "The Cart-Inverted Pendulum System (CIPS) is a classical benchmark control problem. Its dynamics resembles with that of many real world systems of interest like missile launchers, pendubots, human walking and segways and many more. The control of this system is challenging as it is highly unstable, highly non-linear, non-minimum phase system and underactuated. Further, the physical constraints on the track position control voltage etc. also pose complexity in its control design. The thesis begins with the description of the CIPS together with hardware setup used for research, its dynamics in state space and transfer function models. In the past, a lot of research work has been directed to develop control strategies for CIPS. But, very little work has been done to validate the developed design through experiments. Also robustness margins of the developed methods have not been analysed. Thus, there lies an ample opportunity to develop controllers and study the cart-inverted pendulum controlled system in real-time. The objective of this present work is to stabilize the unstable CIPS within the different physical constraints such as in track length and control voltage. Also, simultaneously ensure good robustness. A systematic iterative method for the state feedback design by choosing weighting matrices key to the Linear Quadratic Regulator (LQR) design is presented. But, this yields oscillations in cart position. The Two-Loop-PID controller yields good robustness, and superior cart responses. A sub-optimal LQR based state feedback subjected to H∞ constraints through Linear Matrix Inequalities (LMIs) is solved and it is observed from the obtained results that a good stabilization result is achieved. Non-linear cart friction is identified using an exponential cart friction and is modeled as a plant matrix uncertainty. It has been observed that modeling the cart friction as above has led to improved cart response. Subsequently an integral sliding mode controller has been designed for the CIPS. From the obtained simulation and experiments it is seen that the ISM yields good robustness towards the output channel gain perturbations. The efficacies of the developed techniques are tested both in simulation and experimentation. It has been also observed that the Two-Loop PID Controller yields overall satisfactory response in terms of superior cart position and robustness. In the event of sensor fault the ISM yields best performance out of all the techniques.",
"title": ""
},
{
"docid": "b4f47ddd8529fe3859869b9e7c85bb2f",
"text": "This paper studies the problem of building text classifiers using positive and unlabeled examples. The key feature of this problem is that there is no negative example for learning. Recently, a few techniques for solving this problem were proposed in the literature. These techniques are based on the same idea, which builds a classifier in two steps. Each existing technique uses a different method for each step. In this paper, we first introduce some new methods for the two steps, and perform a comprehensive evaluation of all possible combinations of methods of the two steps. We then propose a more principled approach to solving the problem based on a biased formulation of SVM, and show experimentally that it is more accurate than the existing techniques.",
"title": ""
},
{
"docid": "732eb96d39d250e6b1355f7f4d53feed",
"text": "Determine blood type is essential before administering a blood transfusion, including in emergency situation. Currently, these tests are performed manually by technicians, which can lead to human errors. Various systems have been developed to automate these tests, but none is able to perform the analysis in time for emergency situations. This work aims to develop an automatic system to perform these tests in a short period of time, adapting to emergency situations. To do so, it uses the slide test and image processing techniques using the IMAQ Vision from National Instruments. The image captured after the slide test is processed and detects the occurrence of agglutination. Next the classification algorithm determines the blood type in analysis. Finally, all the information is stored in a database. Thus, the system allows determining the blood type in an emergency, eliminating transfusions based on the principle of universal donor and reducing transfusion reactions risks.",
"title": ""
},
{
"docid": "fd0ed39ee4a5e8dcfce49228cf246d5f",
"text": "Minimization with orthogonality constraints (e.g., X>X = I) and/or spherical constraints (e.g., ‖x‖2 = 1) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, p-harmonic flows, 1-bit compressive sensing, matrix rank minimization, etc. These problems are difficult because the constraints are not only non-convex but numerically expensive to preserve during iterations. To deal with these difficulties, we apply the Cayley transform — a Crank-Nicolson-like update scheme — to preserve the constraints and based on it, develop curvilinear search algorithms with lower flops compared to those based on projections and geodesics. The efficiency of the proposed algorithms is demonstrated on a variety of test problems. In particular, for the maxcut problem, it exactly solves a decomposition formulation for the SDP relaxation. For polynomial optimization, nearest correlation matrix estimation and extreme eigenvalue problems, the proposed algorithms run very fast and return solutions no worse than those from their stateof-the-art algorithms. For the quadratic assignment problem, a gap 0.842% to the best known solution on the largest problem “tai256c” in QAPLIB can be reached in 5 minutes on a typical laptop.",
"title": ""
},
{
"docid": "4cc9083bd050969933367166c2245b05",
"text": "Emotion regulation involves the pursuit of desired emotional states (i.e., emotion goals) in the service of superordinate motives. The nature and consequences of emotion regulation, therefore, are likely to depend on the motives it is intended to serve. Nonetheless, limited attention has been devoted to studying what motivates emotion regulation. By mapping the potential benefits of emotion to key human motives, this review identifies key classes of motives in emotion regulation. The proposed taxonomy distinguishes between hedonic motives that target the immediate phenomenology of emotions, and instrumental motives that target other potential benefits of emotions. Instrumental motives include behavioral, epistemic, social, and eudaimonic motives. The proposed taxonomy offers important implications for understanding the mechanism of emotion regulation, variation across individuals and contexts, and psychological function and dysfunction, and points to novel research directions.",
"title": ""
},
{
"docid": "b2de917d74765e39562c60c74a88d7f3",
"text": "Computer-phobic university students are easy to find today especially when it come to taking online courses. Affect has been shown to influence users’ perceptions of computers. Although self-reported computer anxiety has declined in the past decade, it continues to be a significant issue in higher education and online courses. More importantly, anxiety seems to be a critical variable in relation to student perceptions of online courses. A substantial amount of work has been done on computer anxiety and affect. In fact, the technology acceptance model (TAM) has been extensively used for such studies where affect and anxiety were considered as antecedents to perceived ease of use. However, few, if any, have investigated the interplay between the two constructs as they influence perceived ease of use and perceived usefulness towards using online systems for learning. In this study, the effects of affect and anxiety (together and alone) on perceptions of an online learning system are investigated. Results demonstrate the interplay that exists between affect and anxiety and their moderating roles on perceived ease of use and perceived usefulness. Interestingly, the results seem to suggest that affect and anxiety may exist simultaneously as two weights on each side of the TAM scale.",
"title": ""
},
{
"docid": "a3f5d2fb8bfa71b6f974a871a4ae2e5f",
"text": "Recent years have witnessed the popularity of using recurrent neural network (RNN) for action recognition in videos. However, videos are of high dimensionality and contain rich human dynamics with various motion scales, which makes the traditional RNNs difficult to capture complex action information. In this paper, we propose a novel recurrent spatial-temporal attention network (RSTAN) to address this challenge, where we introduce a spatial-temporal attention mechanism to adaptively identify key features from the global video context for every time-step prediction of RNN. More specifically, we make three main contributions from the following aspects. First, we reinforce the classical long short-term memory (LSTM) with a novel spatial-temporal attention module. At each time step, our module can automatically learn a spatial-temporal action representation from all sampled video frames, which is compact and highly relevant to the prediction at the current step. Second, we design an attention-driven appearance-motion fusion strategy to integrate appearance and motion LSTMs into a unified framework, where LSTMs with their spatial-temporal attention modules in two streams can be jointly trained in an end-to-end fashion. Third, we develop actor-attention regularization for RSTAN, which can guide our attention mechanism to focus on the important action regions around actors. We evaluate the proposed RSTAN on the benchmark UCF101, HMDB51 and JHMDB data sets. The experimental results show that, our RSTAN outperforms other recent RNN-based approaches on UCF101 and HMDB51 as well as achieves the state-of-the-art on JHMDB.",
"title": ""
},
{
"docid": "fdb0009b962254761541eb08f556fa0e",
"text": "Nonionic surfactants are widely used in the development of protein pharmaceuticals. However, the low level of residual peroxides in surfactants can potentially affect the stability of oxidation-sensitive proteins. In this report, we examined the peroxide formation in polysorbate 80 under a variety of storage conditions and tested the potential of peroxides in polysorbate 80 to oxidize a model protein, IL-2 mutein. For the first time, we demonstrated that peroxides can be easily generated in neat polysorbate 80 in the presence of air during incubation at elevated temperatures. Polysorbate 80 in aqueous solution exhibited a faster rate of peroxide formation and a greater amount of peroxides during incubation, which is further promoted/catalyzed by light. Peroxide formation can be greatly inhibited by preventing any contact with air/oxygen during storage. IL-2 mutein can be easily oxidized both in liquid and solid states. A lower level of peroxides in polysorbate 80 did not change the rate of IL-2 mutein oxidation in liquid state but significantly accelerated its oxidation in solid state under air. A higher level of peroxides in polysorbate 80 caused a significant increase in IL-2 mutein oxidation both in liquid and solid states, and glutathione can significantly inhibit the peroxide-induced oxidation of IL-2 mutein in a lyophilized formulation. In addition, a higher level of peroxides in polysorbate 80 caused immediate IL-2 mutein oxidation during annealing in lyophilization, suggesting that implementation of an annealing step needs to be carefully evaluated in the development of a lyophilization process for oxidation-sensitive proteins in the presence of polysorbate.",
"title": ""
},
{
"docid": "49d50ed96ff7bfa5246561b0c51876af",
"text": "Nutch is an open-source Web search engine that can be used at global, local, and even personal scale. Its initial design goal was to enable a transparent alternative for global Web search in the public interest — one of its signature features is the ability to “explain” its result rankings. Recent work has emphasized how it can also be used for intranets; by local communities with richer data models, such as the Creative Commons metadata-enabled search for licensed content; on a personal scale to index a user's files, email, and web-surfing history; and we also report on several other research projects built on Nutch. In this paper, we present how the architecture of the Nutch system enables it to be more flexible and scalable than other comparable systems today.",
"title": ""
},
{
"docid": "f967ad72daeb84e2fce38aec69997c8a",
"text": "While HCI has focused on multitasking with information workers, we report on multitasking among Millennials who grew up with digital media - focusing on college students. We logged computer activity and used biosensors to measure stress of 48 students for 7 days for all waking hours, in their in situ environments. We found a significant positive relationship with stress and daily time spent on computers. Stress is positively associated with the amount of multitasking. Conversely, stress is negatively associated with Facebook and social media use. Heavy multitaskers use significantly more social media and report lower positive affect than light multitaskers. Night habits affect multitasking the following day: late-nighters show longer duration of computer use and those ending their activities earlier in the day multitask less. Our study shows that college students multitask at double the frequency compared to studies of information workers. These results can inform designs for stress management of college students.",
"title": ""
},
{
"docid": "4d71e585675eb2cec41ca20f1b97045b",
"text": "Weed scouting is an important part of modern integrated weed management but can be time consuming and sparse when performed manually. Automated weed scouting and weed destruction has typically been performed using classification systems able to classify a set group of species known a priori. This greatly limits deployability as classification systems must be retrained for any field with a different set of weed species present within them. In order to overcome this limitation, this paper works towards developing a clustering approach to weed scouting which can be utilized in any field without the need for prior species knowledge. We demonstrate our system using challenging data collected in the field from an agricultural robotics platform. We show that considerable improvements can be made by (i) learning low-dimensional (bottleneck) features using a deep convolutional neural network to represent plants in general and (ii) tying views of the same area (plant) together. Deploying this algorithm on in-field data collected by AgBotII, we are able to successfully cluster cotton plants from grasses without prior knowledge or training for the specific plants in the field.",
"title": ""
}
] |
scidocsrr
|
557decad349ae344cdab796f8112003b
|
Development methodology of the HISMM Maturity Model
|
[
{
"docid": "8cb6a2a3014bd3a7f945abd4cb2ffe88",
"text": "In order to identify and explore the strength and weaknesses of particular organizational designs, a wide range of maturity models have been developed by both, practitioners and academics over the past years. However, a systematization and generalization of the procedure on how to design maturity models as well as a synthesis of design science research with the rather behavioural field of organization theory is still lacking. Trying to combine the best of both fields, a first design proposition of a situational maturity model is presented in this paper. The proposed maturity model design is illustrated with the help of an instantiation for the healthcare domain.",
"title": ""
}
] |
[
{
"docid": "e8ef1683247fddbd844437c7b27b978f",
"text": "Inductive Power Transfer (IPT) is well-established for applications with biomedical implants and radio-frequency identification systems. Recently, also systems for the charging of the batteries of consumer electronic devices and of electric and hybrid electric vehicles have been developed. The efficiency η of the power transfer of IPT systems is given by the inductor quality factor Q and the magnetic coupling k of the transmission coils. In this paper, the influence of the transmission frequency on the inductor quality factor and the efficiency is analyzed taking also the admissible field emissions as limited by standards into account. Aspects of an optimization of the magnetic design with respect to a high magnetic coupling and a high quality factor are discussed for IPT at any power level. It is shown that the magnetic coupling mainly depends on the area enclosed by the coils and that their exact shape has only a minor influence. The results are verified with an experimental prototype.",
"title": ""
},
{
"docid": "dcf8fc03b228c9d7f715605f06d55ed7",
"text": "This paper presents an exploratory study in which a humanoid robot (MecWilly) acted as a partner to preschool children, helping them to learn English words. In order to use the Socio-Cognitive Conflict paradigm to induce the knowledge acquisition process, we designed a playful activity in which children worked in pairs with another child or with the humanoid robot on a word-picture association task involving fruit and vegetables. The analysis of the two experimental conditions (child-child and child-robot) demonstrates the effectiveness of Socio-Cognitive Conflict in improving the children’s learning of English. Furthermore, the analysis of children's performances as reported in this study appears to highlight the potential use of humanoid robots in the acquisition of English by young children.",
"title": ""
},
{
"docid": "eab5044761dabda84529fc41fb6022ba",
"text": "Fundamental frequency (f0) estimation from polyphonic music includes the tasks of multiple-f0, melody, vocal, and bass line estimation. Historically these problems have been approached separately, and only recently, using learning-based approaches. We present a multitask deep learning architecture that jointly estimates outputs for various tasks including multiplef0, melody, vocal and bass line estimation, and is trained using a large, semi-automatically annotated dataset. We show that the multitask model outperforms its single-task counterparts, and explore the effect of various design decisions in our approach, and show that it performs better or at least competitively when compared against strong baseline methods.",
"title": ""
},
{
"docid": "8d32a0c8bcf1c197d6c312e84d395f49",
"text": "In previous work, we advanced a new technique for direct visual matching of images for the purposes of face recognition and image retrieval, using a probabilistic measure of similarity based primarily on a Bayesian (MAP) analysis of image differences, leading to a d̈ual̈basis similar to eigenfaces. The performance advantage of this probabilistic matching technique over standard Euclidean nearest-neighbor eigenface matching was recently demonstrated using results from DARPAś 1996F̈ERET̈face recognition competition, in which this probabilistic matching algorithm was found to be the top performer. We have further developed a simple method of replacing the costly compution of nonlinear (online) Bayesian similarity measures by the relatively inexpensive computation of linear (offline) subspace projections and simple (online) Euclidean norms, thus resulting in a significant computational speed-up for implementation with very large image databases as typically encountered in real-world applications. Advances in Neural Information Processing Systems 11, M. S. Kearns, S. A. Solla, D. A. Cohn, (Eds.), MIT Press, 1999. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 1999 201 Broadway, Cambridge, Massachusetts 02139",
"title": ""
},
{
"docid": "fdaf5546d430226721aa1840f92ba5af",
"text": "The recent development of regulatory policies that permit the use of TV bands spectrum on a secondary basis has motivated discussion about coexistence of primary (e.g. TV broadcasts) and secondary users (e.g. WiFi users in TV spectrum). However, much less attention has been given to coexistence of different secondary wireless technologies in the TV white spaces. Lack of coordination between secondary networks may create severe interference situations, resulting in less efficient usage of the spectrum. In this paper, we consider two of the most prominent wireless technologies available today, namely Long Term Evolution (LTE), and WiFi, and address some problems that arise from their coexistence in the same band. We perform exhaustive system simulations and observe that WiFi is hampered much more significantly than LTE in coexistence scenarios. A simple coexistence scheme that reuses the concept of almost blank subframes in LTE is proposed, and it is observed that it can improve the WiFi throughput per user up to 50 times in the studied scenarios.",
"title": ""
},
{
"docid": "1e3e1e272f1b8da8f28cc9d3a338bfc6",
"text": "In an attempt to preserve the structural information in malware binaries during feature extraction, function call graph-based features have been used in various research works in malware classification. However, the approach usually employed when performing classification on these graphs, is based on computing graph similarity using computationally intensive techniques. Due to this, much of the previous work in this area incurred large performance overhead and does not scale well. In this paper, we propose a linear time function call graph (FCG) vector representation based on function clustering that has significant performance gains in addition to improved classification accuracy. We also show how this representation can enable using graph features together with other non-graph features.",
"title": ""
},
{
"docid": "152cb1714c9ce1edd90f0856e7080283",
"text": "Relation classification is an important semantic processing task in the field of natural language processing (NLP). In this paper, we present a novel model, Structure Regularized Bidirectional Recurrent Convolutional Neural Network(SRBRCNN), to classify the relation of two entities in a sentence, and the new dataset of Chinese Sanwen for named entity recognition and relation classification. Some state-of-theart systems concentrate on modeling the shortest dependency path (SDP) between two entities leveraging convolutional or recurrent neural networks. We further explore how to make full use of the dependency relations information in the SDP and how to improve the model by the method of structure regularization. We propose a structure regularized model to learn relation representations along the SDP extracted from the forest formed by the structure regularized dependency tree, which benefits reducing the complexity of the whole model and helps improve the F1 score by 10.3. Experimental results show that our method outperforms the state-of-the-art approaches on the Chinese Sanwen task and performs as well on the SemEval-2010 Task 8 dataset.",
"title": ""
},
{
"docid": "375e1ca80c48f8535794588fac2284b9",
"text": "Huntington's disease (HD) is a progressive neurodegenerative disorder caused by an expanding CAG repeat coding for polyglutamine in the huntingtin protein. Recent data have suggested the possibility that an N-terminal fragment of huntingtin may aggregate in neurons of patients with HD, both in the cytoplasm, forming dystrophic neurites, and in the nucleus, forming intranuclear neuronal inclusion bodies. An animal model of HD using the short N-terminal fragment of huntingtin has also been found to have intranuclear inclusions and this same fragment can aggregate in vitro . We have now developed a cell culture model demonstrating that N-terminal fragments of huntingtin with expanded glutamine repeats aggregate both in the cytoplasm and in the nucleus. Neuroblastoma cells transiently transfected with full-length huntingtin constructs with either a normal or expanded repeat had diffuse cytoplasmic localization of the protein. In contrast, cells transfected with truncated N-terminal fragments showed aggregation only if the glutamine repeat was expanded. The aggregates were often ubiquitinated. The shorter truncated product appeared to form more aggregates in the nucleus. Cells transfected with the expanded repeat construct but not the normal repeat construct showed enhanced toxicity to the apoptosis-inducing agent staurosporine. These data indicate that N-terminal truncated fragments of huntingtin with expanded glutamine repeats can aggregate in cells in culture and that this aggregation can be toxic to cells. This model will be useful for future experiments to test mechanisms of aggregation and toxicity and potentially for testing experimental therapeutic interventions.",
"title": ""
},
{
"docid": "159cd44503cb9def6276cb2b9d33c40e",
"text": "In the airline industry, data analysis and data mining are a prerequisite to push customer relationship management (CRM) ahead. Knowledge about data mining methods, marketing strategies and airline business processes has to be combined to successfully implement CRM. This paper is a case study and gives an overview about distinct issues, which have to be taken into account in order to provide a first solution to run CRM processes. We do not focus on each individual task of the project; rather we give a sketch about important steps like data preparation, customer valuation and segmentation and also explain the limitation of the solutions.",
"title": ""
},
{
"docid": "6622922fb28cce3df8c68c21ac55e20e",
"text": "Semantic-based approaches are relatively new technologies. Some of these technologies are supported by specifications of W3 Consortium, i.e. RDF, SPARQL and so on. There are many areas where semantic data can be utilized, e.g. social networks, annotation of protein sequences etc. From the physical database design point of view, several index data structures are utilized to handle this data. In many cases, the well-known B-tree is used as a basic index supporting some operations. Since the semantic data are multidimensional, a common way is to use a number of B-trees to index the data. In this article, we review other index data structures; we show that we can create only one index when we utilize a multidimensional data structure like the R-tree. We compare a performance of the B-tree indices with the R-tree and some its variants. Our experiments are performed over a huge semantic database, we show advantages and disadvantages of these data structures.",
"title": ""
},
{
"docid": "68b2e5a2f82435c2a007c806e060e301",
"text": "Self-forming barriers and advanced liner materials are studied extensively for their Cu gapfill performance and interconnect scaling. In this paper, 22nm1/2 pitch Cu low-k interconnects with barrier (Mn-based, TaN) /liner (Co, Ru) combinations are compared and benchmarked for their resistivity, resistance scaling, and electromigration (EM) performance. Extendibility to 16nm copper width was explored experimentally and a projection towards 12nm width is performed. It is found that the Ru-liner based systems show a higher overall Cu-resistivity. We show that this increase can be compensated by combining Ru with a thinner Mn-based barrier, which increases the effective Cu-area at a particular trench width. The EM performance reveals that the Ru-liner systems have a better EM lifetime compared to the Co-liner based systems. More interestingly, in a comparison of the maximum current density Jmax a significant improvement is found for the scaled Mn-based/Ru system, making it therefore a serious candidate to extend the Cu metallization.",
"title": ""
},
{
"docid": "d7f4e1a083875c2d732be4e11a396b1f",
"text": "We present a new technique for extracting local features from images of architectural scenes, based on detecting and representing local symmetries. These new features are motivated by the fact that local symmetries, at different scales, are a fundamental characteristic of many urban images, and are potentially more invariant to large appearance changes than lower-level features such as SIFT. Hence, we apply these features to the problem of matching challenging pairs of photos of urban scenes. Our features are based on simple measures of local bilateral and rotational symmetries computed using local image operations. These measures are used both for feature detection and for computing descriptors. We demonstrate our method on a challenging new dataset containing image pairs exhibiting a range of dramatic variations in lighting, age, and rendering style, and show that our features can improve matching performance for this difficult task.",
"title": ""
},
{
"docid": "4f20763c1f25a2d6074376f8ec4f0c35",
"text": "Lateral flow (immuno)assays are currently used for qualitative, semiquantitative and to some extent quantitative monitoring in resource-poor or non-laboratory environments. Applications include tests on pathogens, drugs, hormones and metabolites in biomedical, phytosanitary, veterinary, feed/food and environmental settings. We describe principles of current formats, applications, limitations and perspectives for quantitative monitoring. We illustrate the potentials and limitations of analysis with lateral flow (immuno)assays using a literature survey and a SWOT analysis (acronym for \"strengths, weaknesses, opportunities, threats\"). Articles referred to in this survey were searched for on MEDLINE, Scopus and in references of reviewed papers. Search terms included \"immunochromatography\", \"sol particle immunoassay\", \"lateral flow immunoassay\" and \"dipstick assay\".",
"title": ""
},
{
"docid": "cb60cfcc91949d73db10ba60b3d5b9bb",
"text": "Knowledge distillation is an effective technique that transfers knowledge from a large teacher model to a shallow student. However, just like massive classification, large scale knowledge distillation also imposes heavy computational costs on training models of deep neural networks, as the softmax activations at the last layer involve computing probabilities over numerous classes. In this work, we apply the idea of importance sampling which is often used in Neural Machine Translation on large scale knowledge distillation. We present a method called dynamic importance sampling, where ranked classes are sampled from a dynamic distribution derived from the interaction between the teacher and student in full distillation. We highlight the utility of our proposal prior which helps the student capture the main information in the loss function. Our approach manages to reduce the computational cost at training time while maintaining the competitive performance on CIFAR100 and Market-1501 person re-identification datasets.",
"title": ""
},
{
"docid": "86889526d71a853cb2055040c4f987d4",
"text": "Traceability underlies many important software and systems engineering activities, such as change impact analysis and regression testing. Despite important research advances, as in the automated creation and maintenance of trace links, traceability implementation and use is still not pervasive in industry. A community of traceability researchers and practitioners has been collaborating to understand the hurdles to making traceability ubiquitous. Over a series of years, workshops have been held to elicit and enhance research challenges and related tasks to address these shortcomings. A continuing discussion of the community has resulted in the research roadmap of this paper. We present a brief view of the state of the art in traceability, the grand challenge for traceability and future directions for the field.",
"title": ""
},
{
"docid": "3227e141d4572b58214585c5047a9b8b",
"text": "Post-natal ontogenetic variation of the marmot mandible and ventral cranium is investigated in two species of the subgenus Petromarmota (M. caligata, M. flaviventris) and four species of the subgenus Marmota (M. caudata, M. himalayana, M. marmota, M. monax). Relationships between size and shape are analysed using geometric morphometric techniques. Sexual dimorphism is negligible, allometry explains the main changes in shape during growth, and males and females manifest similar allometric trajectories. Anatomical regions affected by size-related shape variation are similar in different species, but allometric trajectories are divergent. The largest modifications of the mandible and ventral cranium occur in regions directly involved in the mechanics of mastication. Relative to other anatomical regions, the size of areas of muscle insertion increases, while the size of sense organs, nerves and teeth generally decreases. Epigenetic factors, developmental constraints and size variation were found to be the major contributors in producing the observed allometric patterns. A phylogenetic signal was not evident in the comparison of allometric trajectories, but traits that allow discrimination of the Palaearctic marmots from the Nearctic species of Petromarmota are present early in development and are conserved during post-natal ontogeny.",
"title": ""
},
{
"docid": "a9b0d197e41fc328502c71c0ddf7b91e",
"text": "We propose a new full-rate space-time block code (STBC) for two transmit antennas which can be designed to achieve maximum diversity or maximum capacity while enjoying optimized coding gain and reduced-complexity maximum-likelihood (ML) decoding. The maximum transmit diversity (MTD) construction provides a diversity order of 2Nr for any number of receive antennas Nr at the cost of channel capacity loss. The maximum channel capacity (MCC) construction preserves the mutual information between the transmit and the received vectors while sacrificing diversity. The system designer can switch between the two constructions through a simple parameter change based on the operating signal-to-noise ratio (SNR), signal constellation size and number of receive antennas. Thanks to their special algebraic structure, both constructions enjoy low-complexity ML decoding proportional to the square of the signal constellation size making them attractive alternatives to existing full-diversity full-rate STBCs in [6], [3] which have high ML decoding complexity proportional to the fourth order of the signal constellation size. Furthermore, we design a differential transmission scheme for our proposed STBC, derive the exact ML differential decoding rule, and compare its performance with competitive schemes. Finally, we investigate transceiver design and performance of our proposed STBC in spatial multiple-access scenarios and over frequency-selective channels.",
"title": ""
},
{
"docid": "5bdd417eb1f2bbcd9b839c2566e8cca9",
"text": "There are major trends to advance the functionality of search engines to a more expressive semantic level. This is enabled by the advent of knowledge-sharing communities such as Wikipedia and the progress in automatically extracting entities and relationships from semistructured as well as natural-language Web sources. Recent endeavors of this kind include DBpedia, EntityCube, KnowItAll, ReadTheWeb, and our own YAGO-NAGA project (and others). The goal is to automatically construct and maintain a comprehensive knowledge base of facts about named entities, their semantic classes, and their mutual relations as well as temporal contexts, with high precision and high recall. This tutorial discusses state-of-the-art methods, research opportunities, and open challenges along this avenue of knowledge harvesting.",
"title": ""
},
{
"docid": "32f1417c75ae4406c6fe4e9ad71610de",
"text": "The recently developed digital coherent receiver enables us to employ a variety of spectrally efficient modulation formats such as M-ary phase-shift keying and quadrature-amplitude modulation. Moreover, in the digital domain, we can equalize all linear transmission impairments such as group-velocity dispersion and polarization-mode dispersion of transmission fibers, because coherent detection preserves the phase information of the optical signal. This paper reviews the history of research and development related to coherent optical communications and describes the principle of coherent detection, including its quantum-noise characteristics. In addition, it discusses the role of digital signal processing in mitigating linear transmission impairments, estimating the carrier phase, and tracking the state of polarization of the signal in coherent receivers.",
"title": ""
},
{
"docid": "21c7cbcf02141c60443f912ae5f1208b",
"text": "A novel driving scheme based on simultaneous emission is reported for 2D/3D AMOLED TVs. The new method reduces leftright crosstalk without sacrificing luminance. The new scheme greatly simplifies the pixel circuit as the number of transistors for Vth compensation is reduced from 6 to 3. The capacitive load of scan lines is reduced by 48%, enabling very high refresh rate (240 Hz).",
"title": ""
}
] |
scidocsrr
|
80be253c6f3f2578e7b8c291ebf98f4b
|
Recent developments in human gait research: parameters, approaches, applications, machine learning techniques, datasets and challenges
|
[
{
"docid": "c6e0843498747096ebdafd51d4b5cca6",
"text": "The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems; here, some quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis devoted to the computational algorithms employed for this purpose. In particular, we motivate our current interest for classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series.",
"title": ""
}
] |
[
{
"docid": "59dfaac9730e526604193f06b48a9dd5",
"text": "We evaluated the functional and oncological outcome of ultralow anterior resection and coloanal anastomosis (CAA), which is a popular technique for preserving anal sphincter in patients with distal rectal cancer. Forty-eight patients were followed up for 6–100 months regarding fecal or gas incontinence, frequency of bowel movement, and local or systemic recurrence. The main operative techniques were total mesorectal excision with autonomic nerve preservation; the type of anastomosis was straight CAA, performed by the perianal hand sewn method in 38 cases and by the double-stapled method in 10. Postoperative complications included transient urinary retention (n=7), anastomotic stenosis (n=3), anastomotic leakage (n=3), rectovaginal fistula (n=2), and cancer positive margin (n=1; patient refused reoperation). Overall there were recurrences in seven patients (14.5%): one local and one systemic recurrence in stage B2; and one local, two systemic, and two combined local and systemic in C2. The mean frequency of bowel movements was 6.1 per day after 3 months, 4.4 after 1 year, and 3.1 after 2 years. The Kirwan grade for fecal incontinence was 2.7 after 3 months, 1.8 after 1 year, and 1.5 after 2 years. With careful selection of patients and good operative technique, CAA can be performed safely in distal rectal cancer. Normal continence and acceptable frequency of bowel movements can be obtained within 1 year after operation without compromising the rate of local recurrence.",
"title": ""
},
{
"docid": "82a40130bc83a2456c8368fa9275c708",
"text": "This paper presents a novel strategy for using ant colony optimization (ACO) to evolve the structure of deep recurrent neural networks. While versions of ACO for continuous parameter optimization have been previously used to train the weights of neural networks, to the authors’ knowledge they have not been used to actually design neural networks. The strategy presented is used to evolve deep neural networks with up to 5 hidden and 5 recurrent layers for the challenging task of predicting general aviation flight data, and is shown to provide improvements of 63 % for airspeed, a 97 % for altitude and 120 % for pitch over previously best published results, while at the same time not requiring additional input neurons for residual values. The strategy presented also has many benefits for neuro evolution, including the fact that it is easily parallizable and scalable, and can operate using any method for training neural networks. Further, the networks it evolves can typically be trained in fewer iterations than fully connected networks.",
"title": ""
},
{
"docid": "f9f1cf949093c41a84f3af854a2c4a8b",
"text": "Modern TCP implementations are capable of very high point-to-point bandwidths. Delivered performance on the fastest networks is often limited by the sending and receiving hosts, rather than by the network hardware or the TCP protocol implementation itself. In this case, systems can achieve higher bandwidth by reducing host overheads through a variety of optimizations above and below the TCP protocol stack, given support from the network interface. This paper surveys the most important of these optimizations, and illustrates their effects quantitatively with empirical results from a an experimental network delivering up to two gigabits per second of point-to-point TCP bandwidth.",
"title": ""
},
{
"docid": "153f452486e2eacb9dc1cf95275dd015",
"text": "This paper presents a Fuzzy Neural Network (FNN) control system for a traveling-wave ultrasonic motor (TWUSM) driven by a dual mode modulation non-resonant driving circuit. First, the motor configuration and the proposed driving circuit of a TWUSM are introduced. To drive a TWUSM effectively, a novel driving circuit, that simultaneously employs both the driving frequency and phase modulation control scheme, is proposed to provide two-phase balance voltage for a TWUSM. Since the dynamic characteristics and motor parameters of the TWUSM are highly nonlinear and time-varying, a FNN control system is therefore investigated to achieve high-precision speed control. The proposed FNN control system incorporates neuro-fuzzy control and the driving frequency and phase modulation to solve the problem of nonlinearities and variations. The proposed control system is digitally implemented by a low-cost digital signal processor based microcontroller, hence reducing the system hardware size and cost. The effectiveness of the proposed driving circuit and control system is verified with hardware experiments under the occurrence of uncertainties. In addition, the advantages of the proposed control scheme are indicated in comparison with a conventional proportional-integral control system.",
"title": ""
},
{
"docid": "f31ec6460f0e938f8e43f5b9be055aaf",
"text": "Many people have turned to technological tools to help them be physically active. To better understand how goal-setting, rewards, self-monitoring, and sharing can encourage physical activity, we designed a mobile phone application and deployed it in a four-week field study (n=23). Participants found it beneficial to have secondary and primary weekly goals and to receive non-judgmental reminders. However, participants had problems with some features that are commonly used in practice and suggested in the literature. For example, trophies and ribbons failed to motivate most participants, which raises questions about how such rewards should be designed. A feature to post updates to a subset of their Facebook NewsFeed created some benefits, but barriers remained for most participants.",
"title": ""
},
{
"docid": "1169d70de6d0c67f52ecac4d942d2224",
"text": "All drivers have habits behind the wheel. Different drivers vary in how they hit the gas and brake pedals, how they turn the steering wheel, and how much following distance they keep to follow a vehicle safely and comfortably. In this paper, we model such driving behaviors as car-following and pedal operation patterns. The relationship between following distance and velocity mapped into a two-dimensional space is modeled for each driver with an optimal velocity model approximated by a nonlinear function or with a statistical method of a Gaussian mixture model (GMM). Pedal operation patterns are also modeled with GMMs that represent the distributions of raw pedal operation signals or spectral features extracted through spectral analysis of the raw pedal operation signals. The driver models are evaluated in driver identification experiments using driving signals collected in a driving simulator and in a real vehicle. Experimental results show that the driver model based on the spectral features of pedal operation signals efficiently models driver individual differences and achieves an identification rate of 76.8% for a field test with 276 drivers, resulting in a relative error reduction of 55% over driver models that use raw pedal operation signals without spectral analysis",
"title": ""
},
{
"docid": "cdee51ab9562e56aee3fff58cd2143ba",
"text": "Stochastic gradient descent (SGD) still is the workhorse for many practical problems. However, it converges slow, and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably. But many attempts in this direction either aim at solving specialized problems, or result in significantly more complicated methods than SGD. This paper proposes a new method to adaptively estimate a preconditioner, such that the amplitudes of perturbations of preconditioned stochastic gradient match that of the perturbations of parameters to be optimized in a way comparable to Newton method for deterministic optimization. Unlike the preconditioners based on secant equation fitting as done in deterministic quasi-Newton methods, which assume positive definite Hessian and approximate its inverse, the new preconditioner works equally well for both convex and nonconvex optimizations with exact or noisy gradients. When stochastic gradient is used, it can naturally damp the gradient noise to stabilize SGD. Efficient preconditioner estimation methods are developed, and with reasonable simplifications, they are applicable to large-scale problems. Experimental results demonstrate that equipped with the new preconditioner, without any tuning effort, preconditioned SGD can efficiently solve many challenging problems like the training of a deep neural network or a recurrent neural network requiring extremely long-term memories.",
"title": ""
},
{
"docid": "3baec781f7b5aaab8598c3628ea0af3b",
"text": "Article history: Received 15 November 2010 Received in revised form 9 February 2012 Accepted 15 February 2012 Information professionals performing business activity related investigative analysis must routinely associate data from a diverse range of Web based general-interest business and financial information sources. XBRL has become an integral part of the financial data landscape. At the same time, Open Data initiatives have contributed relevant financial, economic, and business data to the pool of publicly available information on the Web but the use of XBRL in combination with Open Data remains at an early state of realisation. In this paper we argue that Linked Data technology, created for Web scale information integration, can accommodate XBRL data and make it easier to combine it with open datasets. This can provide the foundations for a global data ecosystem of interlinked and interoperable financial and business information with the potential to leverage XBRL beyond its current regulatory and disclosure role. We outline the uses of Linked Data technologies to facilitate XBRL consumption in conjunction with non-XBRL Open Data, report on current activities and highlight remaining challenges in terms of information consolidation faced by both XBRL and Web technologies. © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "d4ed4cad670b1e11cfb3c869e34cf9fd",
"text": "BACKGROUND\nDespite the many antihypertensive medications available, two-thirds of patients with hypertension do not achieve blood pressure control. This is thought to be due to a combination of poor patient education, poor medication adherence, and \"clinical inertia.\" The present trial evaluates an intervention consisting of health coaching, home blood pressure monitoring, and home medication titration as a method to address these three causes of poor hypertension control.\n\n\nMETHODS/DESIGN\nThe randomized controlled trial will include 300 patients with poorly controlled hypertension. Participants will be recruited from a primary care clinic in a teaching hospital that primarily serves low-income populations.An intervention group of 150 participants will receive health coaching, home blood pressure monitoring, and home-titration of antihypertensive medications during 6 months. The control group (n=150) will receive health coaching plus home blood pressure monitoring for the same duration. A passive control group will receive usual care. Blood pressure measurements will take place at baseline, and after 6 and 12 months. The primary outcome will be change in systolic blood pressure after 6 and 12 months. Secondary outcomes measured will be change in diastolic blood pressure, adverse events, and patient and provider satisfaction.\n\n\nDISCUSSION\nThe present study is designed to assess whether the 3-pronged approach of health coaching, home blood pressure monitoring, and home medication titration can successfully improve blood pressure, and if so, whether this effect persists beyond the period of the intervention.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov identifier: NCT01013857.",
"title": ""
},
{
"docid": "c61b210036484009cf8077a803824695",
"text": "Synthetic Aperture Radar (SAR) image is disturbed by multiplicative noise known as speckle. In this paper, based on the power of deep fully convolutional network, an encoding-decoding framework is introduced for multisource SAR image despeckling. The network contains a series of convolution and deconvolution layers, forming an end-to-end non-linear mapping between noise and clean SAR images. With addition of skip connection, the network can keep image details and accomplish the strategy for residual learning which solves the notorious problem of vanishing gradients and accelerates convergence. The experimental results on simulated and real SAR images show that the introduced approach achieves improvements in both despeckling performance and time efficiency over the state-of-the-art despeckling methods.",
"title": ""
},
{
"docid": "8fb598f1f55f7a20bfc05865fc0a5efa",
"text": "The detection of anomalous executions is valuable for reducing potential hazards in assistive manipulation. Multimodal sensory signals can be helpful for detecting a wide range of anomalies. However, the fusion of high-dimensional and heterogeneous modalities is a challenging problem for model-based anomaly detection. We introduce a long short-term memory-based variational autoencoder (LSTM-VAE) that fuses signals and reconstructs their expected distribution by introducing a progress-based varying prior. Our LSTM-VAE-based detector reports an anomaly when a reconstruction-based anomaly score is higher than a state-based threshold. For evaluations with 1555 robot-assisted feeding executions, including 12 representative types of anomalies, our detector had a higher area under the receiver operating characteristic curve of 0.8710 than 5 other baseline detectors from the literature. We also show the variational autoencoding and state-based thresholding are effective in detecting anomalies from 17 raw sensory signals without significant feature engineering effort.",
"title": ""
},
{
"docid": "577c557bc6fcddcb51e962e68ed034ed",
"text": "Text categorization is used to assign each text document to predefined categories. This paper presents a new text classification method for classifying Chinese text based on Rocchio algorithm. We firstly use the TFIDF to extract document vectors from the training documents which have been correctly categorized, and then use those document vectors to generate codebooks as classification models using the LBG and Rocchio algorithm. The codebook is then used to categorize the target documents using vector scores. We tested this method in the experiment and the result shows that this method can achieve better performance.",
"title": ""
},
{
"docid": "d72652b6ad54422e6864baccc88786a8",
"text": "Neisseria meningitidis is a major global pathogen that continues to cause endemic and epidemic human disease. Initial exposure typically occurs within the nasopharynx, where the bacteria can invade the mucosal epithelium, cause fulminant sepsis, and disseminate to the central nervous system, causing bacterial meningitis. Recently, Chamot-Rooke and colleagues1 described a unique virulence property of N. meningitidis in which the bacterial surface pili, after contact with host cells, undergo a modification that facilitates both systemic invasion and the spread of colonization to close contacts. Person-to-person spread of N. meningitidis can result in community epidemics of bacterial meningitis, with major consequences for public health. In resource-poor nations, cyclical outbreaks continue to result in high mortality and long-term disability, particularly in sub-Saharan Africa, where access to early diagnosis, antibiotic therapy, and vaccination is limited.2,3 An exclusively human pathogen, N. meningitidis uses several virulence factors to cause disease. Highly charged and hydrophilic capsular polysaccharides protect N. meningitidis from phagocytosis and complement-mediated bactericidal activity of the innate immune system. A family of proteins (called opacity proteins) on the bacterial outer membrane facilitate interactions with both epithelial and endothelial cells. These proteins are phase-variable — that is, the genome of the bacterium encodes related opacity proteins that are variably expressed, depending on environment, allowing the bacterium to adjust to rapidly changing environmental conditions. Lipooligosaccharide, analogous to the lipopolysaccharide of enteric gram-negative bacteria, contains a lipid A moiety with endotoxin activity that promotes the systemic sepsis encountered clinically. However, initial attachment to host cells is primarily mediated by filamentous organelles referred to as type IV pili, which are common to many bacterial pathogens and unique in their ability to undergo both antigenic and phase variation. Within hours of attachment to the host endothelial cell, N. meningitidis induces the formation of protrusions in the plasma membrane of host cells that aggregate the bacteria into microcolonies and facilitate pili-mediated contacts between bacteria and between bacteria and host cells. After attachment and aggregation, N. meningitidis detaches from the aggregates to systemically invade the host, by means of a transcellular pathway that crosses the respiratory epithelium,4 or becomes aerosolized and spreads the colonization of new hosts (Fig. 1). Chamot-Rooke et al. dissected the molecular mechanism underlying this critical step of systemic invasion and person-to-person spread and reported that pathogenesis depends on a unique post-translational modification of the type IV pili. Using whole-protein mass spectroscopy, electron microscopy, and molecular modeling, they showed that the major component of N. meningitidis type IV pili (called PilE or pilin) undergoes an unusual post-translational modification by phosphoglycerol. Expression of pilin phosphotransferase, the enzyme that transfers phosphoglycerol onto pilin, is increased within 4 hours of meningococcus contact with host cells and modifies the serine residue at amino acid position 93 of pilin, altering the charge of the pilin structure and thereby destabilizing the pili bundles, reducing bacterial aggregation, and promoting detachment from the cell surface. Strains of N. 
meningitidis in which phosphoglycerol modification of pilin occurred had a greatly enhanced ability to cross epithelial monolayers, a finding that supports the view that this virulence property, which causes deaggregation, promotes both transmission to new hosts and systemic invasion. Although this new molecular understanding of N. meningitidis virulence in humans is provoc-",
"title": ""
},
{
"docid": "83f970bc22a2ada558aaf8f6a7b5a387",
"text": "The imputeTS package specializes on univariate time series imputation. It offers multiple state-of-the-art imputation algorithm implementations along with plotting functions for time series missing data statistics. While imputation in general is a well-known problem and widely covered by R packages, finding packages able to fill missing values in univariate time series is more complicated. The reason for this lies in the fact, that most imputation algorithms rely on inter-attribute correlations, while univariate time series imputation instead needs to employ time dependencies. This paper provides an introduction to the imputeTS package and its provided algorithms and tools. Furthermore, it gives a short overview about univariate time series imputation in R. Introduction In almost every domain from industry (Billinton et al., 1996) to biology (Bar-Joseph et al., 2003), finance (Taylor, 2007) up to social science (Gottman, 1981) different time series data are measured. While the recorded datasets itself may be different, one common problem are missing values. Many analysis methods require missing values to be replaced with reasonable values up-front. In statistics this process of replacing missing values is called imputation. Time series imputation thereby is a special sub-field in the imputation research area. Most popular techniques like Multiple Imputation (Rubin, 1987), Expectation-Maximization (Dempster et al., 1977), Nearest Neighbor (Vacek and Ashikaga, 1980) and Hot Deck (Ford, 1983) rely on interattribute correlations to estimate values for the missing data. Since univariate time series do not possess more than one attribute, these algorithms cannot be applied directly. Effective univariate time series imputation algorithms instead need to employ the inter-time correlations. On CRAN there are several packages solving the problem of imputation of multivariate data. Most popular and mature (among others) are AMELIA (Honaker et al., 2011), mice (van Buuren and Groothuis-Oudshoorn, 2011), VIM (Kowarik and Templ, 2016) and missMDA (Josse and Husson, 2016). However, since these packages are designed for multivariate data imputation only they do not work for univariate time series. At the moment imputeTS (Moritz, 2016a) is the only package on CRAN that is solely dedicated to univariate time series imputation and includes multiple algorithms. Nevertheless, there are some other packages that include imputation functions as addition to their core package functionality. Most noteworthy being zoo (Zeileis and Grothendieck, 2005) and forecast (Hyndman, 2016). Both packages offer also some advanced time series imputation functions. The packages spacetime (Pebesma, 2012), timeSeries (Rmetrics Core Team et al., 2015) and xts (Ryan and Ulrich, 2014) should also be mentioned, since they contain some very simple but quick time series imputation methods. For a broader overview about available time series imputation packages in R see also (Moritz et al., 2015). In this technical report we evaluate the performance of several univariate imputation functions in R on different time series. This paper is structured as follows: Section Overview imputeTS package gives an overview, about all features and functions included in the imputeTS package. This is followed by Usage examples of the different provided functions. The paper ends with a Conclusions section. 
Overview imputeTS package The imputeTS package can be found on CRAN and is an easy to use package that offers several utilities for ’univariate, equi-spaced, numeric time series’. Univariate means there is just one attribute that is observed over time. Which leads to a sequence of single observations o1, o2, o3, ... on at successive points t1, t2, t3, ... tn in time. Equi-spaced means, that time increments between successive data points are equal |t1 − t2| = |t2 − t3| = ... = |tn−1 − tn|. Numeric means that the observations are measurable quantities that can be described as a number. In the first part of this section, a general overview about all available functions and datasets is given. The R Journal Vol. XX/YY, AAAA 20ZZ ISSN 2073-4859 Contributed research article 2 This is followed by more detailed overviews about the three areas covered by the package: ’Plots & Statistics’, ’Imputation’ and ’Datasets’. Information about how to apply these functions and tools can be found later in the Usage examples section. General overview As can be seen in Table 1, beyond several imputation algorithm implementations the package also includes plotting functions and datasets. The imputation algorithms can be divided into rather simple but fast approaches like mean imputation and more advanced algorithms that need more computation time like kalman smoothing on a structural model. Simple Imputation Imputation Plots & Statistics Datasets na.locf na.interpolation plotNA.distribution tsAirgap na.mean na.kalman plotNA.distributionBar tsAirgapComplete na.random na.ma plotNA.gapsize tsHeating na.replace na.seadec plotNA.imputations tsHeatingComplete na.remove na.seasplit statsNA tsNH4 tsNH4Complete Table 1: General Overview imputeTS package As a whole, the package aims to support the user in the complete process of replacing missing values in time series. This process starts with analyzing the distribution of the missing values using the statsNA function and the plots of plotNA.distribution, plotNA.distributionBar, plotNA.gapsize. In the next step the actual imputation can take place with one of the several algorithm options. Finally, the imputation results can be visualized with the plotNA.imputations function. Additionally, the package contains three datasets, each in a version with and without missing values, that can be used to test imputation algorithms. Plots & Statistics functions An overview about the available plots and statistics functions can be found in Table 2. To get a good impression what the plots look like section Usage examples is recommended. Function Description plotNA.distribution Visualize Distribution of Missing Values plotNA.distributionBar Visualize Distribution of Missing Values (Barplot) plotNA.gapsize Visualize Distribution of NA gap sizes plotNA.imputations Visualize Imputed Values statsNA Print Statistics about the Missing Data Table 2: Overview Plots & Statistics The statsNA function calculates several missing data statistics of the input data. This includes overall percentage of missing values, absolute amount of missing values, amount of missing value in different sections of the data, longest series of consecutive NAs and occurrence of consecutive NAs. The plotNA.distribution function visualizes the distribution of NAs in a time series. This is done using a standard time series plot, in which areas with missing data are colored red. This enables the user to see at first sight where in the series most of the missing values are located. 
The plotNA.distributionBar function provides the same insights to users, but is designed for very large time series. This is necessary for time series with 1000 and more observations, where it is not possible to plot each observation as a single point. The plotNA.gapsize function provides information about consecutive NAs by showing the most common NA gap sizes in the time series. The plotNA.imputations function is designated for visual inspection of the results after applying an imputation algorithm. Therefore, newly imputed observations are shown in a different color than the rest of the series. The R Journal Vol. XX/YY, AAAA 20ZZ ISSN 2073-4859 Contributed research article 3 Imputation functions An overview about all available imputation algorithms can be found in Table 3. Even if these functions are really easy applicable, some examples can be found later in section Usage examples. More detailed information about the theoretical background of the algorithms can be found in the imputeTS manual (Moritz, 2016b). Function Option Description na.interpolation linear Imputation by Linear Interpolation spline Imputation by Spline Interpolation stine Imputation by Stineman Interpolation na.kalman StructTS Imputation by Structural Model & Kalman Smoothing auto.arima Imputation by ARIMA State Space Representation & Kalman Sm. na.locf locf Imputation by Last Observation Carried Forward nocb Imputation by Next Observation Carried Backward na.ma simple Missing Value Imputation by Simple Moving Average linear Missing Value Imputation by Linear Weighted Moving Average exponential Missing Value Imputation by Exponential Weighted Moving Average na.mean mean MissingValue Imputation by Mean Value median Missing Value Imputation by Median Value mode Missing Value Imputation by Mode Value na.random Missing Value Imputation by Random Sample na.replace Replace Missing Values by a Defined Value na.seadec Seasonally Decomposed Missing Value Imputation na.seasplit Seasonally Splitted Missing Value Imputation na.remove Remove Missing Values Table 3: Overview Imputation Algorithms For convenience similar algorithms are available under one function name as parameter option. For example linear, spline and stineman interpolation are all included in the na.interpolation function. The na.mean, na.locf, na.replace, na.random functions are all simple and fast. In comparison, na.interpolation, na.kalman, na.ma, na.seasplit, na.seadec are more advanced algorithms that need more computation time. The na.remove function is a special case, since it only deletes all missing values. Thus, it is not really an imputation function. It should be handled with care since removing observations may corrupt the time information of the series. The na.seasplit and na.seadec functions are as well exceptions. These perform seasonal split / decomposition operations as a preprocessing step. For the imputation itself, one out of the other imputation algorithms can be used (which one can be set as option). Looking at all available imputation methods, no single overall best method can b",
"title": ""
},
{
"docid": "ab44369792f03c9d1a171789fca24001",
"text": "High-speed actions are known to impact soccer performance and can be categorized into actions requiring maximal speed, acceleration, or agility. Contradictory findings have been reported as to the extent of the relationship between the different speed components. This study comprised 106 professional soccer players who were assessed for 10-m sprint (acceleration), flying 20-m sprint (maximum speed), and zigzag agility performance. Although performances in the three tests were all significantly correlated (p < 0.0005), coefficients of determination (r(2)) between the tests were just 39, 12, and 21% for acceleration and maximum speed, acceleration and agility, and maximum speed and agility, respectively. Based on the low coefficients of determination, it was concluded that acceleration, maximum speed, and agility are specific qualities and relatively unrelated to one another. The findings suggest that specific testing and training procedures for each speed component should be utilized when working with elite players.",
"title": ""
},
{
"docid": "6d5429ddf4050724432da73af60274d6",
"text": "We present an Integer Linear Program for exact inference under a maximum coverage model for automatic summarization. We compare our model, which operates at the subsentence or “concept”-level, to a sentencelevel model, previously solved with an ILP. Our model scales more efficiently to larger problems because it does not require a quadratic number of variables to address redundancy in pairs of selected sentences. We also show how to include sentence compression in the ILP formulation, which has the desirable property of performing compression and sentence selection simultaneously. The resulting system performs at least as well as the best systems participating in the recent Text Analysis Conference, as judged by a variety of automatic and manual content-based metrics.",
"title": ""
},
{
"docid": "055cb9aca6b16308793944154dc7866a",
"text": "Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion and in neural network literature, this is the least addressed component, yet it has a decisive influence in generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural network and other classifier topologies to approximate correct labels. The main motivation to utilize least squares in regression simply comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications – which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error combined with the maximum likelihood principle could be emphasized to promote this criterion. In nonparametric regression, least squares principle leads to the conditional expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. This argument extends to other linear-second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. This begs the question: what other alternative cost functions could be used to train adaptive systems and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies?",
"title": ""
},
{
"docid": "2d259ed5d3a1823da7cf54302d8ad1a6",
"text": "We present Lynx-robot, a quadruped, modular, compliant machine. It alternately features a directly actuated, single-joint spine design, or an actively supported, passive compliant, multi-joint spine configuration. Both spine configurations bend in the sagittal plane. This study aims at characterizing these two, largely different spine concepts, for a bounding gait of a robot with a three segmented, pantograph leg design. An earlier, similar-sized, bounding, quadruped robot named Bobcat with a two-segment leg design and a directly actuated, single-joint spine design serves as a comparison robot, to study and compare the effect of the leg design on speed, while keeping the spine design fixed. Both proposed spine designs (single rotatory and active and multi-joint compliant) reach moderate, self-stable speeds.",
"title": ""
},
{
"docid": "03966c28d31e1c45896eab46a1dcce57",
"text": "For many applications it is useful to sample from a nite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain whose stationary distribution is the desired distribution on this set; after the Markov chain has run for M steps, with M suuciently large, the distribution governing the state of the chain approximates the desired distribution. Unfortunately it can be diicult to determine how large M needs to be. We describe a simple variant of this method that determines on its own when to stop, and that outputs samples in exact accordance with the desired distribution. The method uses couplings, which have also played a role in other sampling schemes; however, rather than running the coupled chains from the present into the future, one runs from a distant point in the past up until the present, where the distance into the past that one needs to go is determined during the running of the algorithm itself. If the state space has a partial order that is preserved under the moves of the Markov chain, then the coupling is often particularly eecient. Using our approach one can sample from the Gibbs distributions associated with various statistical mechanics models (including Ising, random-cluster, ice, and dimer) or choose uniformly at random from the elements of a nite distributive lattice.",
"title": ""
}
] |
scidocsrr
|
4566ba3e4c29766f52b998292d4fa63c
|
Machine learning and time series: Real world applications
|
[
{
"docid": "02d11f4663277bb55a289d03403b5eb2",
"text": "Financial markets play an important role on the economical and social organization of modern society. In these kinds of markets, information is an invaluable asset. However, with the modernization of the financial transactions and the information systems, the large amount of information available for a trader can make prohibitive the analysis of a financial asset. In the last decades, many researchers have attempted to develop computational intelligent methods and algorithms to support the decision-making in different financial market segments. In the literature, there is a huge number of scientific papers that investigate the use of computational intelligence techniques to solve financial market problems. However, only few studies have focused on review the literature of this topic. Most of the existing review articles have a limited scope, either by focusing on a specific financial market application or by focusing on a family of machine learning algorithms. This paper presents a review of the application of several computational intelligent methods in several financial applications. This paper gives an overview of the most important primary studies published from 2009 to 2015, which cover techniques for preprocessing and clustering of financial data, for forecasting future market movements, for mining financial text information, among others. The main contributions of this paper are: (i) a comprehensive review of the literature of this field, (ii) the definition of a systematic procedure for guiding the task of building an intelligent trading system and (iii) a discussion about the main challenges and open problems in this scientific field. © 2016 Published by Elsevier Ltd.",
"title": ""
}
] |
[
{
"docid": "e45144bf1d377cd910f6f6bd18939a24",
"text": "The Body Esteem Scale (BES; Franzoi and Shields 1984) has been a primary research tool for over 30 years, yet its factor structure has not been fully assessed since its creation, so a two-study design examined whether the BES needed revision. In Study 1, a series of principal components analyses (PCAs) was conducted using the BES responses of 798 undergraduate students, with results indicating that changes were necessary to improve the scale’s accuracy. In Study 2, 1237 undergraduate students evaluated each BES item, along with a select set of new body items, while also rating each item’s importance to their own body esteem. Body items meeting minimum importance criteria were then utilized in a series of PCAs to develop a revised scale that has strong internal consistency and good convergent and discriminant validity. As with the original BES, the revised BES (BES-R) conceives of body esteem as both gender-specific and multidimensional. Given that the accurate assessment of body esteem is essential in better understanding the link between this construct and mental health, the BES-R can now be used in research to illuminate this link, as well as in prevention and treatment programs for body-image issues. Further implications are discussed.",
"title": ""
},
{
"docid": "a92803c4615f467662fb6e7a32c77fa4",
"text": "Large quantities of mucilage are synthesized in seed coat epidermis cells during seed coat differentiation. This process is an ideal model system for the study of plant cell wall biosynthesis and modifications. In this study, we show that mutation in Irregular Xylem 7 (IRX7) results in a defect in mucilage adherence due to reduced xylan biosynthesis. IRX7 was expressed in the seeds from 4 days post-anthesis (DPA) to 13 DPA, with the peak of expression at 13 DPA. The seed coat epidermis cells of irx7 displayed no aberrant morphology during differentiation, and these cells synthesized and deposited the same amount of mucilage as did wild type (WT) cells. However, the distribution of the water-soluble vs. adherent mucilage layers was significantly altered in irx7 compared to the WT. Both the amount of xylose and the extent of glycosyl linkages of xylan was dramatically decreased in irx7 water-soluble and adherent mucilage compared to the WT. The polymeric structure of water-soluble mucilage was altered in irx7, with a total loss of the higher molecular weight polymer components present in the WT. Correspondingly, whole-seed immunolabeling assays and dot-immunoassays of extracted mucilage indicated dramatic changes in rhamnogalacturonan I (RG I) and xylan epitopes in irx7 mucilage. Furthermore, the crystalline cellulose content was significantly reduced in irx7 mucilage. Taken together, these results indicate that xylan synthesized by IRX7 plays an essential role in maintaining the adhesive property of seed coat mucilage, and its structural role is potentially implemented through its interaction with cellulose.",
"title": ""
},
{
"docid": "703696ca3af2a485ac34f88494210007",
"text": "Cells navigate environments, communicate and build complex patterns by initiating gene expression in response to specific signals. Engineers seek to harness this capability to program cells to perform tasks or create chemicals and materials that match the complexity seen in nature. This Review describes new tools that aid the construction of genetic circuits. Circuit dynamics can be influenced by the choice of regulators and changed with expression 'tuning knobs'. We collate the failure modes encountered when assembling circuits, quantify their impact on performance and review mitigation efforts. Finally, we discuss the constraints that arise from circuits having to operate within a living cell. Collectively, better tools, well-characterized parts and a comprehensive understanding of how to compose circuits are leading to a breakthrough in the ability to program living cells for advanced applications, from living therapeutics to the atomic manufacturing of functional materials.",
"title": ""
},
{
"docid": "cc752e1e36e689a0a78be8d5bd74a61a",
"text": "Classification is paramount for an optimal processing of tweets, albeit performance of classifiers is hindered by the need of large sets of training data to encompass the diversity of contents one can find on Twitter. In this paper, we introduce an inexpensive way of labeling large sets of tweets, which can be easily regenerated or updated when needed. We use human-edited web page directories to infer categories from URLs contained in tweets. By experimenting with a large set of more than 5 million tweets categorized accordingly, we show that our proposed model for tweet classification can achieve 82% in accuracy, performing only 12.2% worse than for web page classification.",
"title": ""
},
{
"docid": "44ea81d223e3c60c7b4fd1192ca3c4ba",
"text": "Existing classification and rule learning algorithms in machine learning mainly use heuristic/greedy search to find a subset of regularities (e.g., a decision tree or a set of rules) in data for classification. In the past few years, extensive research was done in the database community on learning rules using exhaustive search under the name of association rule mining. The objective there is to find all rules in data that satisfy the user-specified minimum support and minimum confidence. Although the whole set of rules may not be used directly for accurate classification, effective and efficient classifiers have been built using the rules. This paper aims to improve such an exhaustive search based classification system CBA (Classification Based on Associations). The main strength of this system is that it is able to use the most accurate rules for classification. However, it also has weaknesses. This paper proposes two new techniques to deal with these weaknesses. This results in remarkably accurate classifiers. Experiments on a set of 34 benchmark datasets show that on average the new techniques reduce the error of CBA by 17% and is superior to CBA on 26 of the 34 datasets. They reduce the error of the decision tree classifier C4.5 by 19%, and improve performance on 29 datasets. Similar good results are also achieved against the existing classification systems, RIPPER, LB and a Naïve-Bayes",
"title": ""
},
{
"docid": "2426ed457b8f8d4ecb99c95cb7109507",
"text": "Voice cloning is a highly desired feature for personalized speech interfaces. We introduce a neural voice cloning system that learns to synthesize a person’s voice from only a few audio samples. We study two approaches: speaker adaptation and speaker encoding. Speaker adaptation is based on fine-tuning a multi-speaker generative model. Speaker encoding is based on training a separate model to directly infer a new speaker embedding, which will be applied to a multi-speaker generative model. In terms of naturalness of the speech and similarity to the original speaker, both approaches can achieve good performance, even with a few cloning audios. 2 While speaker adaptation can achieve slightly better naturalness and similarity, cloning time and required memory for the speaker encoding approach are significantly less, making it more favorable for low-resource deployment.",
"title": ""
},
{
"docid": "f3e56a991e197428110afbd0fd8ac63e",
"text": "PURPOSE\nThe development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.\n\n\nMETHODS\nSeven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories (\"nodule > or =3 mm,\" \"nodule <3 mm,\" and \"non-nodule > or =3 mm\"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus.\n\n\nRESULTS\nThe Database contains 7371 lesions marked \"nodule\" by at least one radiologist. 2669 of these lesions were marked \"nodule > or =3 mm\" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings.\n\n\nCONCLUSIONS\nThe LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.",
"title": ""
},
{
"docid": "888a58ccee0297f2c6f8eb9e31383cc0",
"text": "A Business Intelligence (BI) system is a technology that provides significant business value by improving the effectiveness of managerial decision-making. In an uncertain and highly competitive business environment, the value of strategic information systems such as these is easily recognised. High adoption rates and investment in BI software and services suggest that these systems are a principal provider of decision support in the current marketplace. Most business investments are screened using some form of evaluation process or technique. The benefits of BI are such that traditional evaluation techniques have difficulty in identifying the soft, intangible benefits often provided by BI. This paper, forming the first part of a larger research project, aims to review current evaluation techniques that address intangible benefits, presents issues relating to the evaluation of BI in industry, and suggests a research agenda to advance what is presently a limited body of knowledge relating to the evaluation of BI intangible benefits.",
"title": ""
},
{
"docid": "44aa302a4fcb1793666b6aedc9aa5798",
"text": "Unite neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain's core algorithms.",
"title": ""
},
{
"docid": "e016c72bf2c3173d5c9f4973d03ab380",
"text": "SDN controllers demand tight performance guarantees over the control plane actions performed by switches. For example, traffic engineering techniques that frequently reconfigure the network require guarantees on the speed of reconfiguring the network. Initial experiments show that poor performance of Ternary Content-Addressable Memory (TCAM) control actions (e.g., rule insertion) can inflate application performance by a factor of 2x! Yet, modern switches provide no guarantees for these important control plane actions -- inserting, modifying, or deleting rules.\n In this paper, we present the design and evaluation of Hermes, a practical and immediately deployable framework that offers a novel method for partitioning and optimizing switch TCAM to enable performance guarantees. Hermes builds on recent studies on switch performance and provides guarantees by trading-off a nominal amount of TCAM space for assured performance. We evaluated Hermes using large-scale simulations. Our evaluations show that with less than 5% overheads, Hermes provides 5ms insertion guarantees that translates into an improvement of application level metrics by up to 80%. Hermes is more than 50% better than existing state of the art techniques and provides significant improvement for traditional networks running BGP.",
"title": ""
},
{
"docid": "5bb15e64e7e32f3a0b1b99be8b8ab2bf",
"text": "Breast cancer is one of the major causes of death in women when compared to all other cancers. Breast cancer has become the most hazardous types of cancer among women in the world. Early detection of breast cancer is essential in reducing life losses. This paper presents a comparison among the different Data mining classifiers on the database of breast cancer Wisconsin Breast Cancer (WBC), by using classification accuracy. This paper aims to establish an accurate classification model for Breast cancer prediction, in order to make full use of the invaluable information in clinical data, especially which is usually ignored by most of the existing methods when they aim for high prediction accuracies. We have done experiments on WBC data. The dataset is divided into training set with 499 and test set with 200 patients. In this experiment, we compare six classification techniques in Weka software and comparison results show that Support Vector Machine (SVM) has higher prediction accuracy than those methods. Different methods for breast cancer detection are explored and their accuracies are compared. With these results, we infer that the SVM are more suitable in handling the classification problem of breast cancer prediction, and we recommend the use of these approaches in similar classification problems. Keywords—breast cancer; classification; Decision tree, Naïve Bayes, MLP, Logistic Regression SVM, KNN and weka;",
"title": ""
},
{
"docid": "e2308b435dddebc422ff49a7534bbf83",
"text": "Memory encryption has yet to be used at the core of operating system designs to provide confidentiality of code and data. As a result, numerous vulnerabilities exist at every level of the software stack. Three general approaches have evolved to rectify this problem. The most popular approach is based on complex hardware enhancements; this allows all encryption and decryption to be conducted within a well-defined trusted boundary. Unfortunately, these designs have not been integrated within commodity processors and have primarily been explored through simulation with very few prototypes. An alternative approach has been to augment existing hardware with operating system enhancements for manipulating keys, providing improved trust. This approach has provided insights into the use of encryption but has involved unacceptable overheads and has not been adopted in commercial operating systems. Finally, specialized industrial devices have evolved, potentially adding coprocessors, to increase security of particular operations in specific operating environments. However, this approach lacks generality and has introduced unexpected vulnerabilities of its own. Recently, memory encryption primitives have been integrated within commodity processors such as the Intel i7, AMD bulldozer, and multiple ARM variants. This opens the door for new operating system designs that provide confidentiality across the entire software stack outside the CPU. To date, little practical experimentation has been conducted, and the improvements in security and associated performance degradation has yet to be quantified. This article surveys the current memory encryption literature from the viewpoint of these central issues.",
"title": ""
},
{
"docid": "b719b861a5bb6cc349ccbcd260f45054",
"text": "Road accident analysis is very challenging task and investigating the dependencies between the attributes become complex because of many environmental and road related factors. In this research work we applied data mining classification techniques to carry out gender based classification of which RndTree and C4.5 using AdaBoost Meta classifier gives high accurate results. The training dataset used for the research work is obtained from Fatality Analysis Reporting System (FARS) which is provided by the University of Alabama's Critical Analysis Reporting Environment (CARE) system. The results reveal that AdaBoost used with RndTree improvised the classifier's accuracy.",
"title": ""
},
{
"docid": "b7f1af8c7850ee68c19cf5a4588aeb57",
"text": "The ‘ellipsoidal distribution’, in which angles are assumed to be distributed parallel to the surface of an oblate or prolate ellipsoid, has been widely used to describe the leaf angle distribution (LAD) of plant canopies. This ellipsoidal function is constrained to show a probability density of zero at an inclination angle of zero; however, actual LADs commonly show a peak probability density at zero, a pattern consistent with functional models of plant leaf display. A ‘rotated ellipsoidal distribution’ is described here, which geometrically corresponds to an ellipsoid in which small surface elements are rotated normal to the surface. Empirical LADs from canopy and understory species in an old-growth coniferous forest were used to compare the two models. In every case the rotated ellipsoidal function provided a better description of empirical data than did the non-rotated function, while retaining only a single parameter. The ratio of G-statistics for goodness of fit for the two functions ranged from 1.03 to 3.88. The improved fit is due to the fact that the rotated function always shows a probability density greater than zero at inclination angles of zero, can show a mode at zero, and more accurately characterizes the overall shape of empirical distributions. ©2000 Published by Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "4edb705f4e60421327a77e9d7624f708",
"text": "We introduce a new neural architecture and an unsupervised a lgorithm for learning invariant representations from temporal sequence of images. The system uses two groups of complex cells whose outputs are combined multiplicative ly: one that represents the content of the image, constrained to be constant over severa l consecutive frames, and one that represents the precise location of features, which is allowed to vary over time but constrained to be sparse. The architecture uses an encod er to extract features, and a decoder to reconstruct the input from the features. The meth od was applied to patches extracted from consecutive movie frames and produces orien tat o and frequency selective units analogous to the complex cells in V1. An extension of the method is proposed to train a network composed of units with local receptive fiel d spread over a large image of arbitrary size. A layer of complex cells, subject to spars ity constraints, pool feature units over overlapping local neighborhoods, which causes t h feature units to organize themselves into pinwheel patterns of orientation-selecti v receptive fields, similar to those observed in the mammalian visual cortex. A feed-forwa rd encoder efficiently computes the feature representation of full images.",
"title": ""
},
{
"docid": "5b984d57ad0940838b703eadd7c733b3",
"text": "Neural sequence generation is commonly approached by using maximumlikelihood (ML) estimation or reinforcement learning (RL). However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency. We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL. In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits better parts of ML and RL. We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases (ML corresponds to α→ 0 and RL to α→ 1). We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α. Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods.",
"title": ""
},
{
"docid": "34c47fc822f728104f861abb8b44bcf3",
"text": "In recent years, the demand for high purity spinning processes has been growing in certain industry branches, such as the semiconductor, biotechnological, pharmaceutical, and chemical industry. Therefore, the cleanness specifications have been tightened, and hermetically sealed process chambers are preferred. This paper presents an advantageous solution for such an application featuring a large scale, wide air gap, and a high accelerating bearingless segment motor. Bearingless slice motors allow complete magnetic levitation in combination with a very compact and economic design. The disc-shaped rotor holds permanent magnets generating magnetic flux in the air gap. Hence, three degrees of freedom are passively stabilized by reluctance forces. Thus, only the radial rotor position and the rotor angle have to be controlled actively. The announced bearingless segment motor is a subtype of the bearingless slice motor, featuring separate independent stator elements. This leads to a reduction of stator iron, cost, and weight and, in addition, leaves space for sensors and electronics enabling a very compact system design.",
"title": ""
},
{
"docid": "adfe33d77ff2432904c78d45122659d5",
"text": "Two important plant pathogenic bacteria Acidovorax oryzae and Acidovorax citrulli are closely related and often not easy to be differentiated from each other, which often resulted in a false identification between them based on traditional methods such as carbon source utilization profile, fatty acid methyl esters, and ELISA detection tests. MALDI-TOF MS and Fourier transform infrared (FTIR) spectra have recently been successfully applied in bacterial identification and classification, which provide an alternate method for differentiating the two species. Characterization and comparison of the 10 A. oryzae strains and 10 A. citrulli strains were performed based on traditional bacteriological methods, MALDI-TOF MS, and FTIR spectroscopy. Our results showed that the identity of the two closely related plant pathogenic bacteria A. oryzae and A. citrulli was able to be confirmed by both pathogenicity tests and species-specific PCR, but the two species were difficult to be differentiated based on Biolog and FAME profile as well as 16 S rRNA sequence analysis. However, there were significant differences in MALDI-TOF MS and FTIR spectra between the two species of Acidovorax. MALDI-TOF MS revealed that 22 and 18 peaks were specific to A. oryzae and A. citrulli, respectively, while FTIR spectra of the two species of Acidovorax have the specific peaks at 1738, 1311, 1128, 1078, 989 cm-1 and at 1337, 968, 933, 916, 786 cm-1, respectively. This study indicated that MALDI-TOF MS and FTIR spectra may give a new strategy for rapid bacterial identification and differentiation of the two closely related species of Acidovorax.",
"title": ""
},
{
"docid": "7e0d65fee19baefe31a4e14bf25f42ee",
"text": "This paper describes the process for documenting programs using Aspect-Oriented PHP through AOPHPdoc. We discuss some of the problems involved in documenting Aspect-Oriented programs, solutions to these problems, and the creation of documentation with AOPHPdoc. A survey of programmers found no preference for Javadoc-styled documentation over the colored-coded AOPHP documentation.",
"title": ""
},
{
"docid": "c1798df137166540f58bd6f02bd7ec64",
"text": "We are interested in the problem of automatically tracking and identifying players in sports video. While there are many automatic multi-target tracking methods, in sports video, it is difficult to track multiple players due to frequent occlusions, quick motion of players and camera, and camera position. We propose tracking method that associates tracklets of a same player using results of player number recognition. To deal with frequent occlusions, we detect human region by level set method and then estimates if it is occluded group region or unoccluded individual one. Moreover, we associate tracklets using the results of player number recognition at each frame by keypoints-based matching with templates from multiple viewpoints, so that final tracklets include occluded region.",
"title": ""
}
] |
scidocsrr
|
1280c28733d7b491a9e2a3178e19bce3
|
Force Generation by Parallel Combinations of Fiber-Reinforced Fluid-Driven Actuators
|
[
{
"docid": "03a8635fcb64117d5a2a6f890c2b03b5",
"text": "This work provides approaches to designing and fabricating soft fluidic elastomer robots. That is, three viable actuator morphologies composed entirely from soft silicone rubber are explored, and these morphologies are differentiated by their internal channel structure, namely, ribbed, cylindrical, and pleated. Additionally, three distinct casting-based fabrication processes are explored: lamination-based casting, retractable-pin-based casting, and lost-wax-based casting. Furthermore, two ways of fabricating a multiple DOF robot are explored: casting the complete robot as a whole and casting single degree of freedom (DOF) segments with subsequent concatenation. We experimentally validate each soft actuator morphology and fabrication process by creating multiple physical soft robot prototypes.",
"title": ""
},
{
"docid": "e259e255f9acf3fa1e1429082e1bf1de",
"text": "In this work we describe an autonomous soft-bodied robot that is both self-contained and capable of rapid, continuum-body motion. We detail the design, modeling, fabrication, and control of the soft fish, focusing on enabling the robot to perform rapid escape responses. The robot employs a compliant body with embedded actuators emulating the slender anatomical form of a fish. In addition, the robot has a novel fluidic actuation system that drives body motion and has all the subsystems of a traditional robot onboard: power, actuation, processing, and control. At the core of the fish's soft body is an array of fluidic elastomer actuators. We design the fish to emulate escape responses in addition to forward swimming because such maneuvers require rapid body accelerations and continuum-body motion. These maneuvers showcase the performance capabilities of this self-contained robot. The kinematics and controllability of the robot during simulated escape response maneuvers are analyzed and compared with studies on biological fish. We show that during escape responses, the soft-bodied robot has similar input-output relationships to those observed in biological fish. The major implication of this work is that we show soft robots can be both self-contained and capable of rapid body motion.",
"title": ""
},
{
"docid": "b6ceacf3ad3773acddc3452933b57a0f",
"text": "The growing interest in robots that interact safely with humans and surroundings have prompted the need for soft structural embodiments including soft actuators. This paper explores a class of soft actuators inspired in design and construction by Pneumatic Artificial Muscles (PAMs) or McKibben Actuators. These bio-inspired actuators consist of fluid-filled elastomeric enclosures that are reinforced with fibers along a specified orientation and are in general referred to as Fiber-Reinforced Elastomeric Enclosures (FREEs). Several recent efforts have mapped the fiber configurations to instantaneous deformation, forces, and moments generated by these actuators upon pressurization with fluid. However most of the actuators, when deployed undergo large deformations and large overall motions thus necessitating the study of their large-deformation kinematics. This paper analyzes the large deformation kinematics of FREEs. A concept called configuration memory effect is proposed to explain the smart nature of these actuators. This behavior is tested with experiments and finite element modeling for a small sample of actuators. The paper also describes different possibilities and design implications of the large deformation behavior of FREEs in successful creation of soft robots.",
"title": ""
}
] |
[
{
"docid": "e28b8c08275947f0908f64d117f5dc8e",
"text": "We propose a method for using synthetic data to help learning classifiers. Synthetic data, even is generated based on real data, normally results in a shift from the distribution of real data in feature space. To bridge the gap between the real and synthetic data, and jointly learn from synthetic and real data, this paper proposes a Multichannel Autoencoder(MCAE). We show that by suing MCAE, it is possible to learn a better feature representation for classification. To evaluate the proposed approach, we conduct experiments on two types of datasets. Experimental results on two datasets validate the efficiency of our MCAE model and our methodology of generating synthetic data.",
"title": ""
},
{
"docid": "f59fd6af9dea570b49c453de02297f4c",
"text": "OBJECTIVES\nThe role of social media as a source of timely and massive information has become more apparent since the era of Web 2.0.Multiple studies illustrated the use of information in social media to discover biomedical and health-related knowledge.Most methods proposed in the literature employ traditional document classification techniques that represent a document as a bag of words.These techniques work well when documents are rich in text and conform to standard English; however, they are not optimal for social media data where sparsity and noise are norms.This paper aims to address the limitations posed by the traditional bag-of-word based methods and propose to use heterogeneous features in combination with ensemble machine learning techniques to discover health-related information, which could prove to be useful to multiple biomedical applications, especially those needing to discover health-related knowledge in large scale social media data.Furthermore, the proposed methodology could be generalized to discover different types of information in various kinds of textual data.\n\n\nMETHODOLOGY\nSocial media data is characterized by an abundance of short social-oriented messages that do not conform to standard languages, both grammatically and syntactically.The problem of discovering health-related knowledge in social media data streams is then transformed into a text classification problem, where a text is identified as positive if it is health-related and negative otherwise.We first identify the limitations of the traditional methods which train machines with N-gram word features, then propose to overcome such limitations by utilizing the collaboration of machine learning based classifiers, each of which is trained to learn a semantically different aspect of the data.The parameter analysis for tuning each classifier is also reported.\n\n\nDATA SETS\nThree data sets are used in this research.The first data set comprises of approximately 5000 hand-labeled tweets, and is used for cross validation of the classification models in the small scale experiment, and for training the classifiers in the real-world large scale experiment.The second data set is a random sample of real-world Twitter data in the US.The third data set is a random sample of real-world Facebook Timeline posts.\n\n\nEVALUATIONS\nTwo sets of evaluations are conducted to investigate the proposed model's ability to discover health-related information in the social media domain: small scale and large scale evaluations.The small scale evaluation employs 10-fold cross validation on the labeled data, and aims to tune parameters of the proposed models, and to compare with the stage-of-the-art method.The large scale evaluation tests the trained classification models on the native, real-world data sets, and is needed to verify the ability of the proposed model to handle the massive heterogeneity in real-world social media.\n\n\nFINDINGS\nThe small scale experiment reveals that the proposed method is able to mitigate the limitations in the well established techniques existing in the literature, resulting in performance improvement of 18.61% (F-measure).The large scale experiment further reveals that the baseline fails to perform well on larger data with higher degrees of heterogeneity, while the proposed method is able to yield reasonably good performance and outperform the baseline by 46.62% (F-Measure) on average.",
"title": ""
},
{
"docid": "4e006cd320506a5ef244eedd3f761756",
"text": "Document classification is a growing interest in the research of text mining. Correctly identifying the documents into particular category is still presenting challenge because of large and vast amount of features in the dataset. In regards to the existing classifying approaches, Naïve Bayes is potentially good at serving as a document classification model due to its simplicity. The aim of this paper is to highlight the performance of employing Naïve Bayes in document classification. Results show that Naïve Bayes is the best classifiers against several common classifiers (such as decision tree, neural network, and support vector machines) in term of accuracy and computational efficiency.",
"title": ""
},
{
"docid": "5aebd19c78b6b24c612e20970c27044f",
"text": "The concept of alignment or fit between information technology (IT) and business strategy has been discussed for many years, and strategic alignment is deemed crucial in increasing firm performance. Yet few attempts have been made to investigate the factors that influence alignment, especially in the context of small and medium sized firms (SMEs). This issue is important because results from previous studies suggest that many firms struggle to achieve alignment. Therefore, this study sought to identify different levels of alignment and then investigated the factors that influence alignment. In particular, it focused on the alignment between the requirements for accounting information (AIS requirements) and the capacity of accounting systems (AIS capacity) to generate the information, in the specific context of manufacturing SMEs in Malaysia. Using a mail questionnaire, data from 214 firms was collected on nineteen accounting information characteristics for both requirements and capacity. The fit between these two sets was explored using the moderation approach and evidence was gained that AIS alignment in some firms was high. Cluster analysis was used to find two sets of groups which could be considered more aligned and less aligned. The study then investigated some factors that might be associated with a small firm’s level of AIS alignment. Findings from the study suggest that AIS alignment was related to the firm’s: level of IT maturity; level of owner/manager’s accounting and IT knowledge; use of expertise from government agencies and accounting firms; and existence of internal IT staff.",
"title": ""
},
{
"docid": "ccbb7e753b974951bb658b63e91431bb",
"text": "In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.",
"title": ""
},
{
"docid": "1c6a14765f2fefd517b174fdc4f9e45b",
"text": "Epilepsy affects 65 million people worldwide and entails a major burden in seizure-related disability, mortality, comorbidities, stigma, and costs. In the past decade, important advances have been made in the understanding of the pathophysiological mechanisms of the disease and factors affecting its prognosis. These advances have translated into new conceptual and operational definitions of epilepsy in addition to revised criteria and terminology for its diagnosis and classification. Although the number of available antiepileptic drugs has increased substantially during the past 20 years, about a third of patients remain resistant to medical treatment. Despite improved effectiveness of surgical procedures, with more than half of operated patients achieving long-term freedom from seizures, epilepsy surgery is still done in a small subset of drug-resistant patients. The lives of most people with epilepsy continue to be adversely affected by gaps in knowledge, diagnosis, treatment, advocacy, education, legislation, and research. Concerted actions to address these challenges are urgently needed.",
"title": ""
},
{
"docid": "c6e1c8aa6633ec4f05240de1a3793912",
"text": "Medial prefrontal cortex (MPFC) is among those brain regions having the highest baseline metabolic activity at rest and one that exhibits decreases from this baseline across a wide variety of goal-directed behaviors in functional imaging studies. This high metabolic rate and this behavior suggest the existence of an organized mode of default brain function, elements of which may be either attenuated or enhanced. Extant data suggest that these MPFC regions may contribute to the neural instantiation of aspects of the multifaceted \"self.\" We explore this important concept by targeting and manipulating elements of MPFC default state activity. In this functional magnetic resonance imaging (fMRI) study, subjects made two judgments, one self-referential, the other not, in response to affectively normed pictures: pleasant vs. unpleasant (an internally cued condition, ICC) and indoors vs. outdoors (an externally cued condition, ECC). The ICC was preferentially associated with activity increases along the dorsal MPFC. These increases were accompanied by decreases in both active task conditions in ventral MPFC. These results support the view that dorsal and ventral MPFC are differentially influenced by attentiondemanding tasks and explicitly self-referential tasks. The presence of self-referential mental activity appears to be associated with increases from the baseline in dorsal MPFC. Reductions in ventral MPFC occurred consistent with the fact that attention-demanding tasks attenuate emotional processing. We posit that both self-referential mental activity and emotional processing represent elements of the default state as represented by activity in MPFC. We suggest that a useful way to explore the neurobiology of the self is to explore the nature of default state activity.",
"title": ""
},
{
"docid": "7200c6c09c38e2fb363360ae8bb473ff",
"text": "This work describes autofluorescence of the mycelium of the dry rot fungus Serpula lacrymans grown on spruce wood blocks impregnated with various metals. Live mycelium, as opposed to dead mycelium, exhibited yellow autofluorescence upon blue excitation, blue fluorescence with ultraviolet (UV) excitation, orange-red and light-blue fluorescence with violet excitation, and red fluorescence with green excitation. Distinctive autofluorescence was observed in the fungal cell wall and in granula localized in the cytoplasm. In dead mycelium, the intensity of autofluorescence decreased and the signal was diffused throughout the cytoplasm. Metal treatment affected both the color and intensity of autofluorescence and also the morphology of the mycelium. The strongest yellow signal was observed with blue excitation in Cd-treated samples, in conjunction with increased branching and the formation of mycelial loops and protrusions. For the first time, we describe pink autofluorescence that was observed in Mn-, Zn-, and Cu-treated samples with UV, violet or. blue excitation. The lowest signals were obtained in Cu- and Fe-treated samples. Chitin, an important part of the fungal cell wall exhibited intensive primary fluorescence with UV, violet, blue, and green excitation.",
"title": ""
},
{
"docid": "e0632c86f648a36f083b56d534746c02",
"text": "At present, the brain is viewed primarily as a biological computer. But, crucially, the plasticity of the brain’s structure leads it to vary in functionally significant ways across individuals. Understanding the brain necessitates an understanding of the range of such variation. For example, the number of neurons in the brain and its finer structures impose inherent limitations on the functionality it can realize. The relationship between such quantitative limits on the resources available and the computations that are feasible with such resources is the subject of study in computational complexity theory. Computational complexity is a potentially useful conceptual framework because it enables the meaningful study of the family of possible structures as a whole—the study of “the brain,” as opposed to some particular brain. The language of computational complexity also provides a means of formally capturing capabilities of the brain, which may otherwise be philosophically thorny.",
"title": ""
},
{
"docid": "a972153f00c01f918f335d0877029184",
"text": "Direct volume rendering offers the opportunity to visualize all of a three-dimensional sample volume in one image. However, processing such images can be very expensive and good quality high-resolution images are far from interactive. Projection approaches to direct volume rendering process the volume region by region as opposed to ray-casting methods that process it ray by ray. Projection approaches have generated interest because they use coherence to provide greater speed than ray casting and generate the image in a layered, informative fashion. This paper discusses two topics: First, it introduces a projection approach for directly rendering rectilinear, parallel-projected sample volumes that takes advantage of coherence across cells and the identical shape of their projection. Second, it considers the repercussions of various methods of integration in depth and interpolation across the scan plane. Some of these methods take advantage of Gouraud-shading hardware, with advantages in speed but potential disadvantages in image quality.",
"title": ""
},
{
"docid": "1e6310e8b16625e8f8319c7386723e55",
"text": "Exploiting memory disclosure vulnerabilities like the HeartBleed bug may cause arbitrary reading of a victim's memory, leading to leakage of critical secrets such as crypto keys, personal identity and financial information. While isolating code that manipulates critical secrets into an isolated execution environment is a promising countermeasure, existing approaches are either too coarse-grained to prevent intra-domain attacks, or require excessive intervention from low-level software (e.g., hypervisor or OS), or both. Further, few of them are applicable to large-scale software with millions of lines of code. This paper describes a new approach, namely SeCage, which retrofits commodity hardware virtualization extensions to support efficient isolation of sensitive code manipulating critical secrets from the remaining code. SeCage is designed to work under a strong adversary model where a victim application or even the OS may be controlled by the adversary, while supporting large-scale software with small deployment cost. SeCage combines static and dynamic analysis to decompose monolithic software into several compart- ments, each of which may contain different secrets and their corresponding code. Following the idea of separating control and data plane, SeCage retrofits the VMFUNC mechanism and nested paging in Intel processors to transparently provide different memory views for different compartments, while allowing low-cost and transparent invocation across domains without hypervisor intervention.\n We have implemented SeCage in KVM on a commodity Intel machine. To demonstrate the effectiveness of SeCage, we deploy it to the Nginx and OpenSSH server with the OpenSSL library as well as CryptoLoop with small efforts. Security evaluation shows that SeCage can prevent the disclosure of private keys from HeartBleed attacks and memory scanning from rootkits. The evaluation shows that SeCage only incurs small performance and space overhead.",
"title": ""
},
{
"docid": "613057956e5c40e1257ece734bbe5246",
"text": "In this paper, we prove some convergence properties for a class of ant colony optimization algorithms. In particular, we prove that for any small constant 0 and for a sufficiently large number of algorithm iterations , the probability of finding an optimal solution at least once is ( ) 1 and that this probability tends to 1 for . We also prove that, after an optimal solution has been found, it takes a finite number of iterations for the pheromone trails associated to the found optimal solution to grow higher than any other pheromone trail and that, for , any fixed ant will produce the optimal solution during the th iteration with probability 1 (̂ min max), where min and max are the minimum and maximum values that can be taken by pheromone trails.",
"title": ""
},
{
"docid": "dc783054dac29af7d08cee0a13259a8d",
"text": "This paper develops a novel flexible capacitive tactile sensor array for prosthetic hand gripping force measurement. The sensor array has 8 × 8 (= 64) sensing units, each sensing unit has a four-layered structure: two thick PET layers with embedded copper electrodes generates a capacitor, a PDMS film with line-structure used as an insulation layer, and a top PDMS bump layer to concentrate external force. The structural design, working principle, and fabrication process of this sensor array are presented. The fabricated tactile sensor array features high flexibility has a spatial resolution of 2 mm. This is followed by the characterization of the sensing unit for normal force measurement and found that the sensing unit has two sensitivities: 4.82 0/00/mN for small contact force and 0.23 0/00/mN for large gripping force measurements. Finally, the tactile sensor array is integrated into a prosthetic hand for gripping force measurement. Results showed that the developed flexible capacitive tactile sensor array could be utilized for tactile sensing and real-time contact force visualization for prosthetic hand gripping applications.",
"title": ""
},
{
"docid": "1facd226c134b22f62613073deffce60",
"text": "We present two experiments examining the impact of navigation techniques on users' navigation performance and spatial memory in a zoomable user interface (ZUI). The first experiment with 24 participants compared the effect of egocentric body movements with traditional multi-touch navigation. The results indicate a 47% decrease in path lengths and a 34% decrease in task time in favor of egocentric navigation, but no significant effect on users' spatial memory immediately after a navigation task. However, an additional second experiment with 8 participants revealed such a significant increase in performance of long-term spatial memory: The results of a recall task administered after a 15-minute distractor task indicate a significant advantage of 27% for egocentric body movements in spatial memory. Furthermore, a questionnaire about the subjects' workload revealed that the physical demand of the egocentric navigation was significantly higher but there was less mental demand.",
"title": ""
},
{
"docid": "c7d80cd2f45eeea465c22c9d17c3af36",
"text": "In this article, a shifted Legendre tau method is introduced to get a direct solution technique for solving multi-order fractional differential equations (FDEs) with constant coefficients subject to multi-point boundary conditions. The fractional derivative is described in the Caputo sense. Also, this article reports a systematic quadrature tau method for numerically solving multi-point boundary value problems of fractional-order with variable coefficients. Here the approximation is based on shifted Legendre polynomials and the quadrature rule is treated on shifted Legendre Gauss-Lobatto points. We also present a Gauss-Lobatto shifted Legendre collocation method for solving nonlinear multi-order FDEs with multi-point boundary conditions. The main characteristic behind this approach is that it reduces such problem to those of solving a system of algebraic equations. Thus we can find directly the spectral solution of the proposed problem. Through several numerical examples, we evaluate the accuracy and performance of the proposed algorithms.",
"title": ""
},
{
"docid": "905d760630c3c020bcac0174885afd72",
"text": "Component containers are a key part of mainstream component technologies, and play an important role in separating non-functional concerns from the core component logic. This paper addresses two different aspects of containers. First, it shows how generative programming techniques, using AspectC++ and meta-programming, can be used to generate stubs and skeletons without the need for special compilers or interface description languages. Second, the paper describes an approach to create custom containers by composing different non-functional features. Unlike component technologies such as EJB, which only support a predefined set of container types, this approach allows different combinations of non-functional features to be composed in a container to meet the application needs.",
"title": ""
},
{
"docid": "75c2b1565c61136bf014d5e67eb52daf",
"text": "This paper describes a system for dense depth estimation for multiple images in real-time. The algorithm runs almost entirely on standard graphics hardware, leaving the main CPU free for other tasks as image capture, compression and storage during scene capture. We follow a plain-sweep approach extended by truncated SSD scores, shiftable windows and best camera selection. We do not need specialized hardware and exploit the computational power of freely programmable PC graphics hardware. Dense depth maps are computed with up to 20 fps.",
"title": ""
},
{
"docid": "c5bc51e3e2ad5aedccfa17095ec1d7ed",
"text": "CONTEXT\nLittle is known about the extent or severity of untreated mental disorders, especially in less-developed countries.\n\n\nOBJECTIVE\nTo estimate prevalence, severity, and treatment of Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) mental disorders in 14 countries (6 less developed, 8 developed) in the World Health Organization (WHO) World Mental Health (WMH) Survey Initiative.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nFace-to-face household surveys of 60 463 community adults conducted from 2001-2003 in 14 countries in the Americas, Europe, the Middle East, Africa, and Asia.\n\n\nMAIN OUTCOME MEASURES\nThe DSM-IV disorders, severity, and treatment were assessed with the WMH version of the WHO Composite International Diagnostic Interview (WMH-CIDI), a fully structured, lay-administered psychiatric diagnostic interview.\n\n\nRESULTS\nThe prevalence of having any WMH-CIDI/DSM-IV disorder in the prior year varied widely, from 4.3% in Shanghai to 26.4% in the United States, with an interquartile range (IQR) of 9.1%-16.9%. Between 33.1% (Colombia) and 80.9% (Nigeria) of 12-month cases were mild (IQR, 40.2%-53.3%). Serious disorders were associated with substantial role disability. Although disorder severity was correlated with probability of treatment in almost all countries, 35.5% to 50.3% of serious cases in developed countries and 76.3% to 85.4% in less-developed countries received no treatment in the 12 months before the interview. Due to the high prevalence of mild and subthreshold cases, the number of those who received treatment far exceeds the number of untreated serious cases in every country.\n\n\nCONCLUSIONS\nReallocation of treatment resources could substantially decrease the problem of unmet need for treatment of mental disorders among serious cases. Structural barriers exist to this reallocation. Careful consideration needs to be given to the value of treating some mild cases, especially those at risk for progressing to more serious disorders.",
"title": ""
},
{
"docid": "1b1953e3dd28c67e7a8648392422df88",
"text": "We examined Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) General Ability Index (GAI) and Full Scale Intelligence Quotient (FSIQ) discrepancies in 100 epilepsy patients; 44% had a significant GAI > FSIQ discrepancy. GAI-FSIQ discrepancies were correlated with the number of antiepileptic drugs taken and duration of epilepsy. Individual antiepileptic drugs differentially interfere with the expression of underlying intellectual ability in this group. FSIQ may significantly underestimate levels of general intellectual ability in people with epilepsy. Inaccurate representations of FSIQ due to selective impairments in working memory and reduced processing speed obscure the contextual interpretation of performance on other neuropsychological tests, and subtle localizing and lateralizing signs may be missed as a result.",
"title": ""
},
{
"docid": "30aaf753d3ec72f07d4838de391524ca",
"text": "The present study was aimed to determine the effect on liver, associated oxidative stress, trace element and vitamin alteration in dogs with sarcoptic mange. A total of 24 dogs with clinically established diagnosis of sarcoptic mange, divided into two groups, severely infested group (n=9) and mild/moderately infested group (n=15), according to the extent of skin lesions caused by sarcoptic mange and 6 dogs as control group were included in the present study. In comparison to healthy control hemoglobin, PCV, and TEC were significantly (P<0.05) decreased in dogs with sarcoptic mange however, significant increase in TLC along with neutrophilia and lymphopenia was observed only in severely infested dogs. The albumin, glucose and cholesterol were significantly (P<0.05) decreased and globulin, ALT, AST and bilirubin were significantly (P<0.05) increased in severely infested dogs when compared to other two groups. Malondialdehyde (MDA) levels were significantly (P<0.01) higher in dogs with sarcoptic mange, with levels highest in severely infested groups. Activity of superoxide dismutase (SOD) (P<0.05) and catalase were significantly (P<0.01) lower in sarcoptic infested dogs when compared with the healthy control group. Zinc and copper levels in dogs with sarcoptic mange were significantly (P<0.05) lower when compared with healthy control group with the levels lowest in severely infested group. Vitamin A and vitamin C levels were significantly (P<0.05) lower in sarcoptic infested dogs when compared to healthy control. From the present study, it was concluded that sarcoptic mange in dogs affects the liver and the infestation is associated with oxidant/anti-oxidant imbalance, significant alteration in trace elements and vitamins.",
"title": ""
}
] |
scidocsrr
|
cdc910c6db7d5f70950336e8e5858ee9
|
A 3.6mW 2.4-GHz multi-channel super-regenerative receiver in 130nm CMOS
|
[
{
"docid": "78e4395a6bd6b4424813e20633d140b8",
"text": "This paper introduces a high-speed CMOS comparator. The comparator consists of a differential input stage, two regenerative flip-flops, and an S-R latch. No offset cancellation is exploited, which reduces the power consumption as well as the die area and increases the comparison speed. An experimental version of the comparator has been integrated in a standard double-poly double-metal 1.5-pm n-well process with a die area of only 140 x 100 pmz. This circuit, operating under a +2.5/– 2.5-V power supply, performs comparison to a precision of 8 b with a symmetrical input dynamic range of 2.5 V (therefore ~0.5 LSB resolution is equal to ~ 4.9 mV). input stage flip-flops S-R Iat",
"title": ""
},
{
"docid": "b936c3cd8c64a7b7254e003918fb91d5",
"text": "On-chip DC-DC converters have the potential to offer fine-grain power management in modern chip-multiprocessors. This paper presents a fully integrated 3-level DC-DC converter, a hybrid of buck and switched-capacitor converters, implemented in 130 nm CMOS technology. The 3-level converter enables smaller inductors (1 nH) than a buck, while generating a wide range of output voltages compared to a 1/2 mode switched-capacitor converter. The test-chip prototype delivers up to 0.85 A load current while generating output voltages from 0.4 to 1.4 V from a 2.4 V input supply. It achieves 77% peak efficiency at power density of 0.1 W/mm2 and 63% efficiency at maximum power density of 0.3 W/mm2. The converter scales output voltage from 0.4 V to 1.4 V (or vice-versa) within 20 ns at a constant 450 mA load current. A shunt regulator reduces peak-to-peak voltage noise from 0.27 V to 0.19 V under pseudo-randomly fluctuating load currents. Using simulations across a wide range of design parameters, the paper compares conversion efficiencies of the 3-level, buck and switched-capacitor converters.",
"title": ""
}
] |
[
{
"docid": "f0d76f2795747fdf9abd006dd2f59043",
"text": "Over the past century, a major shift in North American food practices has been taking place. However, the literature on this topic is lacking in several areas. Some available research on food and cooking practices in the current context is presented, with a focus on how these are affecting health and how they might be contributing to health inequalities within the population. First, cooking and cooking skills are examined, along with the ambiguities related to terms associated with cooking in the research literature. Food choice, cooking, and health are described, particularly in relation to economic factors that may lead to health inequalities within the population. The importance of developing an understanding of factors within the wider food system as part of food choice and cooking skills is presented, and gaps in the research literature are examined and areas for future research are presented. Cooking practices are not well studied but are important to an understanding of human nutritional health as it relates to cultural, environmental, and economic factors.",
"title": ""
},
{
"docid": "0315f0355168a78bdead8d06d5f571b4",
"text": "Machine learning techniques are increasingly being applied to clinical text that is already captured in the Electronic Health Record for the sake of delivering quality care. Applications for example include predicting patient outcomes, assessing risks, or performing diagnosis. In the past, good results have been obtained using classical techniques, such as bag-of-words features, in combination with statistical models. Recently however Deep Learning techniques, such as Word Embeddings and Recurrent Neural Networks, have shown to possibly have even greater potential. In this work, we apply several Deep Learning and classical machine learning techniques to the task of predicting violence incidents during psychiatric admission using clinical text that is already registered at the start of admission. For this purpose, we use a novel and previously unexplored dataset from the Psychiatry Department of the University Medical Center Utrecht in The Netherlands. Results show that predicting violence incidents with state-of-the-art performance is possible, and that using Deep Learning techniques provides a relatively small but consistent improvement in performance. We finally discuss the potential implication of our findings for the psychiatric practice.",
"title": ""
},
{
"docid": "7ffbc12161510aa8ef01d804df9c5648",
"text": "Networks represent relationships between entities in many complex systems, spanning from online social interactions to biological cell development and brain connectivity. In many cases, relationships between entities are unambiguously known: are two users “friends” in a social network? Do two researchers collaborate on a published article? Do two road segments in a transportation system intersect? These are directly observable in the system in question. In most cases, relationships between nodes are not directly observable and must be inferred: Does one gene regulate the expression of another? Do two animals who physically co-locate have a social bond? Who infected whom in a disease outbreak in a population?\n Existing approaches for inferring networks from data are found across many application domains and use specialized knowledge to infer and measure the quality of inferred network for a specific task or hypothesis. However, current research lacks a rigorous methodology that employs standard statistical validation on inferred models. In this survey, we examine (1) how network representations are constructed from underlying data, (2) the variety of questions and tasks on these representations over several domains, and (3) validation strategies for measuring the inferred network’s capability of answering questions on the system of interest.",
"title": ""
},
{
"docid": "ded76ce302b603c5718c410f375ec20c",
"text": "This paper presents two different long range UHF (Ultra High Frequency) RFID (Radio Frequency Identification) tag antennas. They are operating at 915MHz and 920MHz, and small-sized passive tags, Cavity Structured Tag Antenna (CSTA) and Bottom Metal Structured Tag Antenna (BMTA). They are designed to apply on metal cart or pallet for auto-parts logistics. The sizes of both tag antennas are the same as 140 × 60 × 10 mm3.",
"title": ""
},
{
"docid": "6073601ab6d6e1dbba7a42c346a29436",
"text": "We present a new focus+Context (fisheye) technique for visualizing and manipulating large hierarchies. Our technique assigns more display space to a portion of the hierarchy while still embedding it in the context of the entire hierarchy. The essence of this scheme is to layout the hierarchy in a uniform way on a hyperbolic plane and map this plane onto a circular display region. This supports a smooth blending between focus and context, as well as continuous redirection of the focus. We have developed effective procedures for manipulating the focus using pointer clicks as well as interactive dragging, and for smoothly animating transitions across such manipulation. A laboratory experiment comparing the hyperbolic browser with a conventional hierarchy browser was conducted.",
"title": ""
},
{
"docid": "bb853c369f37d2d960d6b312f80cfa98",
"text": "The purpose of this platform is to support research and education goals in human-robot interaction and mobile manipulation with applications that require the integration of these abilities. In particular, our research aims to develop personal robots that work with people as capable teammates to assist in eldercare, healthcare, domestic chores, and other physical tasks that require robots to serve as competent members of human-robot teams. The robot’s small, agile design is particularly well suited to human-robot interaction and coordination in human living spaces. Our collaborators include the Laboratory for Perceptual Robotics at the University of Massachusetts at Amherst, Xitome Design, Meka Robotics, and digitROBOTICS.",
"title": ""
},
{
"docid": "8d8951321bc210e41b49bd43ce2ae192",
"text": "Software architects are responsible for designing an architectural solution that satisfies the functional and non-functional requirements of the system to the fullest extent possible. However, the details they need to make informed architectural decisions are often missing from the requirements specification. An earlier study we conducted indicated that architects intuitively recognize architecturally significant requirements in a project, and often seek out relevant stakeholders in order to ask Probing Questions (PQs) that help them acquire the information they need. This paper presents results from a qualitative interview study aimed at identifying architecturally significant functional requirements' categories from various business domains, exploring relevant PQs for each category, and then grouping PQs by type. Using interview data from 14 software architects in three countries, we identified 15 categories of architecturally significant functional requirements and 6 types of PQs. We found that the domain knowledge of the architect and her experience influence the choice of PQs significantly. A preliminary quantitative evaluation of the results against real-life software requirements specification documents indicated that software specifications in our sample largely do not contain the crucial architectural differentiators that may impact architectural choices and that PQs are a necessary mechanism to unearth them. Further, our findings provide the initial list of PQs which could be used to prompt business analysts to elicit architecturally significant functional requirements that the architects need.",
"title": ""
},
{
"docid": "735fe41fe73d527b3cbeb03926530344",
"text": "Premalignant lesions of the lower female genital tract encompassing the cervix, vagina and vulva are variably common and many, but by no means all, are related to infection by human papillomavirus (HPV). In this review, pathological aspects of the various premalignant lesions are discussed, mainly concentrating on new developments. The value of ancillary studies, mainly immunohistochemical, is discussed at the appropriate points. In the cervix, the terminology and morphological features of premalignant glandular lesions is covered, as is the distinction between adenocarcinoma in situ (AIS) and early invasive adenocarcinoma, which may be very problematic. A spectrum of benign, premalignant and malignant cervical glandular lesions exhibiting gastric differentiation is emerging with lobular endocervical glandular hyperplasia (LEGH), including so-called atypical LEGH, representing a possible precursor of non HPV-related cervical adenocarcinomas exhibiting gastric differentiation; these include the cytologically bland adenoma malignum and the morphologically malignant gastric type adenocarcinoma. Stratified mucin producing intraepithelial lesion (SMILE) is a premalignant cervical lesion with morphological overlap between cervical intraepithelial neoplasia (CIN) and AIS and which is variably regarded as a form of reserve cell dysplasia or stratified AIS. It is now firmly established that there are two distinct types of vulval intraepithelial neoplasia (VIN) with a different pathogenesis, molecular events, morphological features and risk of progression to squamous carcinoma. These comprise a more common HPV-related usual type VIN (also referred to as classic, undifferentiated, basaloid, warty, Bowenoid type) and a more uncommon differentiated (simplex) type which is non-HPV related and which is sometimes associated with lichen sclerosus. The former has a relatively low risk of progression to HPV-related vulval squamous carcinoma and the latter a high risk of progression to non-HPV related vulval squamous carcinoma. Various aspects of vulval Paget's disease are also discussed.",
"title": ""
},
{
"docid": "48a3c9d1f41f9b7ed28f8ef46b5c4533",
"text": "We introduce two new methods of deriving the classical PCA in the framework of minimizing the mean square error upon performing a lower-dimensional approximation of the data. These methods are based on two forms of the mean square error function. One of the novelties of the presented methods is that the commonly employed process of subtraction of the mean of the data becomes part of the solution of the optimization problem and not a pre-analysis heuristic. We also derive the optimal basis and the minimum error of approximation in this framework and demonstrate the elegance of our solution in comparison with a recent solution in the framework.",
"title": ""
},
{
"docid": "db3f317940f308407d217bbedf14aaf0",
"text": "Imagine your daily activities. Perhaps you will be at home today, relaxing and completing chores. Maybe you are a scientist, and plan to conduct a long series of experiments in a laboratory. You might work in an office building: you walk about your floor, greeting others, getting coffee, preparing documents, etc. There are many activities you perform regularly in large environments. If a system understood your intentions it could help you achieve your goals, or automate aspects of your environment. More generally, an understanding of human intentions would benefit, and is perhaps prerequisite for, AI systems that assist and augment human capabilities. We present a framework that continuously forecasts long-term spatial and semantic intentions (what you will do and where you will go) of a first-person camera wearer. We term our algorithm “Demonstrating Agent Rewards for K-futures Online” (DARKO). We use a first-person camera to meet the challenge of observing the wearer’s behavior everywhere. In Figure 1, DARKO forecasts multiple quantities: (1) the user intends to go to the shower (out of all possible destinations in their house), (2) their trajectory through Figure 1: Forecasting future behavior from first-person video. The overhead map shows where the person is likely to go, predicted from the first frame. Each s",
"title": ""
},
{
"docid": "450842d87097d457c94ec6f5729b547d",
"text": "Web crawlers are program, designed to fetch web pages for information retrieval system. Crawlers facilitate this process by following hyperlinks in web pages to automatically download new or update existing web pages in the repository. A web crawler interacts with millions of hosts, fetches millions of page per second and updates these pages into a database, creating a need for maintaining I/O performance, network resources within OS limit, which are essential in order to achieve high performance at a reasonable cost. This paper aims to showcase efficient techniques to develop a scalable web crawling system, addressing challenges which deals with issues related to the structure of the web, distributed computing, job scheduling, spider traps, canonicalizing URLs and inconsistent data formats on the web. A brief discussion on new web crawler architecture is done in this paper.",
"title": ""
},
{
"docid": "31abfd6e4f6d9e56bc134ffd7c7b7ffc",
"text": "Online social networks like Facebook recommend new friends to users based on an explicit social network that users build by adding each other as friends. The majority of earlier work in link prediction infers new interactions between users by mainly focusing on a single network type. However, users also form several implicit social networks through their daily interactions like commenting on people’s posts or rating similarly the same products. Prior work primarily exploited both explicit and implicit social networks to tackle the group/item recommendation problem that recommends to users groups to join or items to buy. In this paper, we show that auxiliary information from the useritem network fruitfully combines with the friendship network to enhance friend recommendations. We transform the well-known Katz algorithm to utilize a multi-modal network and provide friend recommendations. We experimentally show that the proposed method is more accurate in recommending friends when compared with two single source path-based algorithms using both synthetic and real data sets.",
"title": ""
},
{
"docid": "e84887f436757b80db5712372428492f",
"text": "Reinforcement Learning agents often need to solve not a single task, but several tasks pertaining to a same domain; in particular, each task corresponds to an MDP drawn from a family of related MDPs (a domain). An agent learning in this setting should be able exploit policies it has learned in the past, for a given set of sample tasks, in order to more rapidly acquire policies for novel tasks. Consider, for instance, a navigation problem where an agent may have to learn to navigate different (but related) mazes. Even though these correspond to distinct tasks (since the goal and starting locations of the agent may change, as well as the maze configuration itself), their solutions do share common properties—e.g. in order to reach distant areas of the maze, an agent should not move in circles. After an agent has learned to solve a few sample tasks, it may be possible to leverage the acquired experience to facilitate solving novel tasks from the same domain. Our work is motivated by the observation that trajectory samples from optimal policies for tasks belonging to a common domain, often reveal underlying useful patterns for solving novel tasks. We propose an optimization algorithm that characterizes the problem of learning reusable temporally extended actions (macros). We introduce a computationally tractable surrogate objective that is equivalent to finding macros that allow for maximal compression of a given set of sampled trajectories. We develop a compression-based approach for obtaining such macros and propose an exploration strategy that takes advantage of them. We show that meaningful behavioral patterns can be identified from sample policies over discrete and continuous action spaces, and present evidence that the proposed exploration strategy improves learning time on novel tasks.",
"title": ""
},
{
"docid": "7869c35cce4577add7de8912f2f9188e",
"text": "Ant Colony Optimization (ACO) is a new population oriented search metaphor that has been successfully applied toNP-hard combinatorial optimization problems. In this paper we discuss parallelization strategies for Ant Colony Optimization algorithms. We empirically test the most simple strategy, that of executing parallel independent runs of an algorithm. The empirical tests are performed applyingMAX–MIN Ant System, one of the most efficient ACO algorithms, to the Traveling Salesman Problem and show that using parallel independent runs is very effective.",
"title": ""
},
{
"docid": "17253a37e4f26cb6dabf1e1eb4e9a878",
"text": "The recent development of Bayesian phylogenetic inference using Markov chain Monte Carlo (MCMC) techniques has facilitated the exploration of parameter-rich evolutionary models. At the same time, stochastic models have become more realistic (and complex) and have been extended to new types of data, such as morphology. Based on this foundation, we developed a Bayesian MCMC approach to the analysis of combined data sets and explored its utility in inferring relationships among gall wasps based on data from morphology and four genes (nuclear and mitochondrial, ribosomal and protein coding). Examined models range in complexity from those recognizing only a morphological and a molecular partition to those having complex substitution models with independent parameters for each gene. Bayesian MCMC analysis deals efficiently with complex models: convergence occurs faster and more predictably for complex models, mixing is adequate for all parameters even under very complex models, and the parameter update cycle is virtually unaffected by model partitioning across sites. Morphology contributed only 5% of the characters in the data set but nevertheless influenced the combined-data tree, supporting the utility of morphological data in multigene analyses. We used Bayesian criteria (Bayes factors) to show that process heterogeneity across data partitions is a significant model component, although not as important as among-site rate variation. More complex evolutionary models are associated with more topological uncertainty and less conflict between morphology and molecules. Bayes factors sometimes favor simpler models over considerably more parameter-rich models, but the best model overall is also the most complex and Bayes factors do not support exclusion of apparently weak parameters from this model. Thus, Bayes factors appear to be useful for selecting among complex models, but it is still unclear whether their use strikes a reasonable balance between model complexity and error in parameter estimates.",
"title": ""
},
{
"docid": "ef9235285ebbef109254bfb5968d2d6b",
"text": "This paper proposes Dyadic Memory Networks (DyMemNN), a novel extension of end-to-end memory networks (memNN) for aspect-based sentiment analysis (ABSA). Originally designed for question answering tasks, memNN operates via a memory selection operation in which relevant memory pieces are adaptively selected based on the input query. In the problem of ABSA, this is analogous to aspects and documents in which the relationship between each word in the document is compared with the aspect vector. In the standard memory networks, simple dot products or feed forward neural networks are used to model the relationship between aspect and words which lacks representation learning capability. As such, our dyadic memory networks ameliorates this weakness by enabling rich dyadic interactions between aspect and word embeddings by integrating either parameterized neural tensor compositions or holographic compositions into the memory selection operation. To this end, we propose two variations of our dyadic memory networks, namely the Tensor DyMemNN and Holo DyMemNN. Overall, our two models are end-to-end neural architectures that enable rich dyadic interaction between aspect and document which intuitively leads to better performance. Via extensive experiments, we show that our proposed models achieve the state-of-the-art performance and outperform many neural architectures across six benchmark datasets.",
"title": ""
},
{
"docid": "8a31704d12d042618dd9e69f0aebd813",
"text": "a r t i c l e i n f o Keywords: Antisocial personality disorder Psychopathy Amygdala Orbitofrontal cortex Monoamine oxidase SNAP proteins Psychopathy is perhaps one of the most misused terms in the American public, which is in no small part due to our obsession with those who have no conscience, and our boldness to try and profile others with this disorder. Here, I present how psychopathy is seen today, before discussing the classification of psychopathy. I also explore the neurological differences in the brains of those with psychopathy, before finally taking a look at genetic risk factors. I conclude by raising some questions about potential treatment.",
"title": ""
},
{
"docid": "10a6bccb77b6b94149c54c9e343ceb6c",
"text": "Clone detectors find similar code fragments (i.e., instances of code clones) and report large numbers of them for industrial systems. To maintain or manage code clones, developers often have to investigate differences of multiple cloned code fragments. However,existing program differencing techniques compare only two code fragments at a time. Developers then have to manually combine several pairwise differencing results. In this paper, we present an approach to automatically detecting differences across multiple clone instances. We have implemented our approach as an Eclipse plugin and evaluated its accuracy with three Java software systems. Our evaluation shows that our algorithm has precision over 97.66% and recall over 95.63% in three open source Java projects. We also conducted a user study of 18 developers to evaluate the usefulness of our approach for eight clone-related refactoring tasks. Our study shows that our approach can significantly improve developers’performance in refactoring decisions, refactoring details, and task completion time on clone-related refactoring tasks. Automatically detecting differences across multiple clone instances also opens opportunities for building practical applications of code clones in software maintenance, such as auto-generation of application skeleton, intelligent simultaneous code editing.",
"title": ""
},
{
"docid": "7363b433f17e1f3dfecc805b58a8706b",
"text": "Mobile Edge Computing (MEC) consists of deploying computing resources (CPU, storage) at the edge of mobile networks; typically near or with eNodeBs. Besides easing the deployment of applications and services requiring low access to the remote server, such as Virtual Reality and Vehicular IoT, MEC will enable the development of context-aware and context-optimized applications, thanks to the Radio API (e.g. information on user channel quality) exposed by eNodeBs. Although ETSI is defining the architecture specifications, solutions to integrate MEC to the current 3GPP architecture are still open. In this paper, we fill this gap by proposing and implementing a Software Defined Networking (SDN)-based MEC framework, compliant with both ETSI and 3GPP architectures. It provides the required data-plane flexibility and programmability, which can on-the-fly improve the latency as a function of the network deployment and conditions. To illustrate the benefit of using SDN concept for the MEC framework, we present the details of software architecture as well as performance evaluations.",
"title": ""
}
] |
scidocsrr
|